Sample records for critical benchmark experiments

  1. Benchmark experiments at ASTRA facility on definition of space distribution of 235U fission reaction rate

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Bobrov, A. A.; Boyarinov, V. F.; Glushkov, A. E.

    2012-07-01

    Results of critical experiments performed at five ASTRA facility configurations modeling high-temperature helium-cooled graphite-moderated reactors are presented. Results of experiments on the spatial distribution of the 235U fission reaction rate, performed at four of these five configurations, are presented in more detail. Analysis of the available information showed that the criticality experiments at all five configurations are acceptable for use as critical benchmark experiments, and that all of the fission-reaction-rate distribution experiments are acceptable for use as physics benchmark experiments. (authors)

  2. Benchmarking study of the MCNP code against cold critical experiments

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Sitaraman, S.

    1991-01-01

    The purpose of this study was to benchmark the widely used Monte Carlo code MCNP against a set of cold critical experiments with a view to using the code as a means of independently verifying the performance of faster but less accurate Monte Carlo and deterministic codes. The experiments simulated consisted of both fast and thermal criticals as well as fuel in a variety of chemical forms. A standard set of benchmark cold critical experiments was modeled. These included the two fast experiments, GODIVA and JEZEBEL, the TRX metallic uranium thermal experiments, the Babcock and Wilcox oxide and mixed oxide experiments, and the Oak Ridge National Laboratory (ORNL) and Pacific Northwest Laboratory (PNL) nitrate solution experiments. The principal case studied was a small critical experiment that was performed with boiling water reactor bundles.

  3. GROWTH OF THE INTERNATIONAL CRITICALITY SAFETY AND REACTOR PHYSICS EXPERIMENT EVALUATION PROJECTS

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    J. Blair Briggs; John D. Bess; Jim Gulliford

    2011-09-01

    Since the International Conference on Nuclear Criticality Safety (ICNC) 2007, the International Criticality Safety Benchmark Evaluation Project (ICSBEP) and the International Reactor Physics Experiment Evaluation Project (IRPhEP) have continued to expand their efforts and broaden their scope. Eighteen countries participated in the ICSBEP in 2007. Now there are 20, with recent contributions from Sweden and Argentina. The IRPhEP has also expanded from eight contributing countries in 2007 to 16 in 2011. Since ICNC 2007, the contents of the 'International Handbook of Evaluated Criticality Safety Benchmark Experiments' [1] have increased from 442 evaluations (38,000 pages) containing benchmark specifications for 3,955 critical or subcritical configurations to 516 evaluations (nearly 55,000 pages) containing benchmark specifications for 4,405 critical or subcritical configurations in the 2010 Edition of the ICSBEP Handbook. The contents of the Handbook have also increased from 21 to 24 criticality-alarm-placement/shielding configurations with multiple dose points for each, and from 20 to 200 configurations categorized as fundamental physics measurements relevant to criticality safety applications. Approximately 25 new evaluations and 150 additional configurations are expected to be added to the 2011 edition of the Handbook. Since ICNC 2007, the contents of the 'International Handbook of Evaluated Reactor Physics Benchmark Experiments' [2] have increased from 16 experimental series performed at 12 reactor facilities to 53 experimental series performed at 30 reactor facilities in the 2011 edition of the Handbook. Considerable effort has also been made to improve the functionality of the searchable database, DICE (Database for the International Criticality Benchmark Evaluation Project), and to verify the accuracy of the data contained therein. DICE will be discussed in separate papers at ICNC 2011. The status of the ICSBEP and the IRPhEP will be discussed in the full paper, selected benchmarks that have been added to the ICSBEP Handbook will be highlighted, and a preview of the new benchmarks that will appear in the September 2011 edition of the Handbook will be provided. Accomplishments of the IRPhEP will also be highlighted, and the future of both projects will be discussed. References: (1) International Handbook of Evaluated Criticality Safety Benchmark Experiments, NEA/NSC/DOC(95)03/I-IX, Organisation for Economic Co-operation and Development-Nuclear Energy Agency (OECD-NEA), September 2010 Edition, ISBN 978-92-64-99140-8. (2) International Handbook of Evaluated Reactor Physics Benchmark Experiments, NEA/NSC/DOC(2006)1, Organisation for Economic Co-operation and Development-Nuclear Energy Agency (OECD-NEA), March 2011 Edition, ISBN 978-92-64-99141-5.

  4. Providing Nuclear Criticality Safety Analysis Education through Benchmark Experiment Evaluation

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    John D. Bess; J. Blair Briggs; David W. Nigg

    2009-11-01

    One of the challenges facing today's new workforce of nuclear criticality safety engineers is the expectation that they assess nuclear systems and establish safety guidelines without having received significant experience or hands-on training prior to graduation. Participation in the International Criticality Safety Benchmark Evaluation Project (ICSBEP) and/or the International Reactor Physics Experiment Evaluation Project (IRPhEP) provides students and young professionals the opportunity to gain experience and enhance critical engineering skills.

  5. Benchmarking of HEU Metal Annuli Critical Assemblies with Internally Reflected Graphite Cylinder

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Xiaobo, Liu; Bess, John D.; Marshall, Margaret A.

    Three critical-assembly configurations, assembled in 1963 at the Oak Ridge Critical Experiments Facility using HEU metal annuli of three different diameters (15-9 inches, 15-7 inches, and 13-7 inches) with an internally reflecting graphite cylinder, are evaluated and benchmarked. The experimental uncertainty for each configuration is 0.00055, and the biases to the detailed benchmark models are -0.00179, -0.00189, and -0.00114, respectively; experimental benchmark keff results were obtained for both detailed and simplified models. Calculations for both models using MCNP6-1.0 and ENDF/B-VII.1 agree with the benchmark experimental results to within 0.2%. These are acceptable benchmark experiments for inclusion in the ICSBEP Handbook.
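
    The arithmetic connecting these quoted quantities can be sketched in a few lines. This is illustrative only, assuming the common ICSBEP convention that the benchmark keff is the experimental value adjusted by the model bias; the biases and uncertainties are the ones quoted above, and the experimental keff of 1.0 is an assumption for delayed-critical assemblies.

    ```python
    # Illustrative arithmetic only, assuming the usual ICSBEP convention that
    # the benchmark keff is the experimental value adjusted by the model bias.
    # Biases and 1-sigma uncertainties are quoted from the abstract; the
    # experimental keff of 1.0 is an assumption for delayed-critical assemblies.
    experimental_keff = 1.0000
    configs = {
        "15-9 in. annuli": (-0.00179, 0.00055),
        "15-7 in. annuli": (-0.00189, 0.00055),
        "13-7 in. annuli": (-0.00114, 0.00055),
    }
    for name, (bias, sigma) in configs.items():
        k_bench = experimental_keff + bias   # assumed sign convention
        print(f"{name}: benchmark keff = {k_bench:.5f} +/- {sigma:.5f}")
    ```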

  6. Summary of ORSphere critical and reactor physics measurements

    NASA Astrophysics Data System (ADS)

    Marshall, Margaret A.; Bess, John D.

    2017-09-01

    In the early 1970s Dr. John T. Mihalczo (team leader), J.J. Lynn, and J.R. Taylor performed experiments at the Oak Ridge Critical Experiments Facility (ORCEF) with highly enriched uranium (HEU) metal (called Oak Ridge Alloy or ORALLOY) to recreate GODIVA I results with greater accuracy than those performed at Los Alamos National Laboratory in the 1950s. The purpose of the Oak Ridge ORALLOY Sphere (ORSphere) experiments was to estimate the unreflected and unmoderated critical mass of an idealized sphere of uranium metal corrected to a density, purity, and enrichment such that it could be compared with the GODIVA I experiments. This critical configuration has been evaluated. Preliminary results were presented at ND2013. Since then, the evaluation was finalized and judged to be an acceptable benchmark experiment for the International Criticality Safety Benchmark Experiment Project (ICSBEP). Additionally, reactor physics measurements were performed to determine surface button worths, central void worth, delayed neutron fraction, prompt neutron decay constant, fission density and neutron importance. These measurements have been evaluated and found to be acceptable experiments and are discussed in full detail in the International Handbook of Evaluated Reactor Physics Benchmark Experiments. The purpose of this paper is to summarize all of the evaluated critical and reactor physics measurements.

  7. Summary of ORSphere Critical and Reactor Physics Measurements

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Marshall, Margaret A.; Bess, John D.

    In the early 1970s Dr. John T. Mihalczo (team leader), J. J. Lynn, and J. R. Taylor performed experiments at the Oak Ridge Critical Experiments Facility (ORCEF) with highly enriched uranium (HEU) metal (called Oak Ridge Alloy or ORALLOY) to recreate GODIVA I results with greater accuracy than those performed at Los Alamos National Laboratory in the 1950s. The purpose of the Oak Ridge ORALLOY Sphere (ORSphere) experiments was to estimate the unreflected and unmoderated critical mass of an idealized sphere of uranium metal corrected to a density, purity, and enrichment such that it could be compared with the GODIVA I experiments. This critical configuration has been evaluated. Preliminary results were presented at ND2013. Since then, the evaluation was finalized and judged to be an acceptable benchmark experiment for the International Criticality Safety Benchmark Experiment Project (ICSBEP). Additionally, reactor physics measurements were performed to determine surface button worths, central void worth, delayed neutron fraction, prompt neutron decay constant, fission density and neutron importance. These measurements have been evaluated and found to be acceptable experiments and are discussed in full detail in the International Handbook of Evaluated Reactor Physics Benchmark Experiments. The purpose of this paper is to summarize all of the critical and reactor physics measurement evaluations and, when possible, to compare them to GODIVA experiment results.

  8. Highly Enriched Uranium Metal Cylinders Surrounded by Various Reflector Materials

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Bernard Jones; J. Blair Briggs; Leland Montierth

    A series of experiments was performed at Los Alamos Scientific Laboratory in 1958 to determine critical masses of cylinders of Oralloy (Oy) reflected by a number of materials. The experiments were all performed on the Comet Universal Critical Assembly Machine and consisted of discs of highly enriched uranium (93.3 wt.% 235U) reflected by half-inch- and one-inch-thick cylindrical shells of various reflector materials. The experiments were performed by members of Group N-2, particularly K. W. Gallup, G. E. Hansen, H. C. Paxton, and R. H. White. This experiment was intended to ascertain critical masses for criticality safety purposes, as well as to compare neutron transport cross sections to those obtained from danger coefficient measurements with the Topsy Oralloy-Tuballoy reflected and Godiva unreflected critical assemblies. The reflector materials examined in this series of experiments are as follows: magnesium, titanium, aluminum, graphite, mild steel, nickel, copper, cobalt, molybdenum, natural uranium, tungsten, beryllium, aluminum oxide, molybdenum carbide, and polythene (polyethylene). Also included are two special configurations of composite beryllium and iron reflectors. Analyses were performed in which the uncertainty associated with six different parameters was evaluated, namely: extrapolation to the uranium critical mass, uranium density, 235U enrichment, reflector density, reflector thickness, and reflector impurities. In addition to the idealizations made by the experimenters (removal of the platen and diaphragm), two simplifications were also made to the benchmark models that resulted in a small bias and additional uncertainty. First, since impurities in core and reflector materials are only estimated, they are not included in the benchmark models. Second, the room, support structure, and other possible surrounding equipment were not included in the model. Bias values that result from these two simplifications were determined, and the associated uncertainty in the bias values was included in the overall uncertainty in benchmark keff values. Bias values were very small, ranging from 0.0004 Δk low to 0.0007 Δk low. Overall uncertainties range from ± 0.0018 to ± 0.0030. Major contributors to the overall uncertainty include uncertainty in the extrapolation to the uranium critical mass and the uranium density. Results are summarized in Figure 1 (Experimental, Benchmark-Model, and MCNP/KENO Calculated Results). The 32 configurations described and evaluated under ICSBEP Identifier HEU-MET-FAST-084 are judged to be acceptable for use as criticality safety benchmark experiments and should be valuable integral benchmarks for nuclear data testing of the various reflector materials. Details of the benchmark models, uncertainty analyses, and final results are given in this paper.
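
    Since the abstract names six independent uncertainty parameters and a combined range of ± 0.0018 to ± 0.0030, a minimal sketch of the standard quadrature combination is given below. The six component values are hypothetical placeholders, chosen only so the total lands in the reported range; the evaluated per-parameter values are in HEU-MET-FAST-084.

    ```python
    import math

    # A minimal sketch of combining independent 1-sigma uncertainty components
    # in quadrature. The six parameters are the ones named in the abstract;
    # the individual values are hypothetical placeholders.
    components = {
        "critical-mass extrapolation": 0.0015,
        "uranium density":             0.0010,
        "235U enrichment":             0.0005,
        "reflector density":           0.0006,
        "reflector thickness":         0.0004,
        "reflector impurities":        0.0003,
    }
    total = math.sqrt(sum(v**2 for v in components.values()))
    print(f"total 1-sigma uncertainty in keff: {total:.4f}")  # ~0.0020 here
    ```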

  9. Benchmark Evaluation of Start-Up and Zero-Power Measurements at the High-Temperature Engineering Test Reactor

    DOE PAGES

    Bess, John D.; Fujimoto, Nozomu

    2014-10-09

    Benchmark models were developed to evaluate six cold-critical and two warm-critical, zero-power measurements of the HTTR. Additional measurements of a fully-loaded subcritical configuration, core excess reactivity, shutdown margins, six isothermal temperature coefficients, and axial reaction-rate distributions were also evaluated as acceptable benchmark experiments. Insufficient information is publicly available to develop finely-detailed models of the HTTR, as much of the design information is still proprietary. However, the uncertainties in the benchmark models are judged to be of sufficient magnitude to encompass any biases and bias uncertainties incurred through the simplification process used to develop the benchmark models. Dominant uncertainties in the experimental keff for all core configurations come from uncertainties in the impurity content of the various graphite blocks that comprise the HTTR. Monte Carlo calculations of keff are between approximately 0.9% and 2.7% greater than the benchmark values. Reevaluation of the HTTR models as additional information becomes available could improve the quality of this benchmark and possibly reduce the computational biases. High-quality characterization of graphite impurities would significantly improve the quality of the HTTR benchmark assessment. Simulations of the other reactor physics measurements are in good agreement with the benchmark experiment values. The complete benchmark evaluation details are available in the 2014 edition of the International Handbook of Evaluated Reactor Physics Benchmark Experiments.

  10. Reactor Physics Measurements and Benchmark Specifications for Oak Ridge Highly Enriched Uranium Sphere (ORSphere)

    DOE PAGES

    Marshall, Margaret A.

    2014-11-04

    In the early 1970s Dr. John T. Mihalczo (team leader), J.J. Lynn, and J.R. Taylor performed experiments at the Oak Ridge Critical Experiments Facility (ORCEF) with highly enriched uranium (HEU) metal (called Oak Ridge Alloy or ORALLOY) in an effort to recreate GODIVA I results with greater accuracy than those performed at Los Alamos National Laboratory in the 1950s. The purpose of the Oak Ridge ORALLOY Sphere (ORSphere) experiments was to estimate the unreflected and unmoderated critical mass of an idealized sphere of uranium metal corrected to a density, purity, and enrichment such that it could be compared with the GODIVA I experiments. Additionally, various material reactivity worths, the surface material worth coefficient, the delayed neutron fraction, the prompt neutron decay constant, relative fission density, and relative neutron importance were all measured. The critical assembly, material reactivity worths, the surface material worth coefficient, and the delayed neutron fraction were all evaluated as benchmark experiment measurements. The reactor physics measurements are the focus of this paper, although for clarity the critical assembly benchmark specifications are briefly discussed.
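
    Two of the measured quantities named here, the delayed neutron fraction and the prompt neutron decay constant, are connected at delayed critical by a standard point-kinetics relation; the hedged sketch below illustrates that relation with values typical of a bare HEU metal sphere, not the evaluated ORSphere results.

    ```python
    # A hedged sketch of the standard point-kinetics relation linking two of
    # the measured quantities: at delayed critical the magnitude of the prompt
    # neutron decay constant equals beta_eff / Lambda, where Lambda is the
    # neutron generation time. Values are typical of a bare HEU metal sphere
    # (GODIVA-like), not the evaluated ORSphere results.
    beta_eff = 0.0065          # effective delayed neutron fraction (illustrative)
    gen_time = 6.0e-9          # neutron generation time in seconds (illustrative)
    alpha_dc = beta_eff / gen_time
    print(f"prompt neutron decay constant at delayed critical: {alpha_dc:.2e} /s")
    # ~1e6 /s, the right order of magnitude for a fast, unreflected HEU assembly
    ```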

  11. Educating Next Generation Nuclear Criticality Safety Engineers at the Idaho National Laboratory

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    J. D. Bess; J. B. Briggs; A. S. Garcia

    2011-09-01

    One of the challenges in educating our next generation of nuclear safety engineers is the limitation of opportunities to receive significant experience or hands-on training prior to graduation. Such training is generally restricted to on-the-job training before this new engineering workforce can adequately provide assessment of nuclear systems and establish safety guidelines. Participation in the International Criticality Safety Benchmark Evaluation Project (ICSBEP) and the International Reactor Physics Experiment Evaluation Project (IRPhEP) can provide students and young professionals the opportunity to gain experience and enhance critical engineering skills. The ICSBEP and IRPhEP publish annual handbooks that contain evaluations of experiments along with summarized experimental data and peer-reviewed benchmark specifications to support the validation of neutronics codes, nuclear cross-section data, and reactor designs. Participation in the benchmark process not only benefits those who use these handbooks within the international community, but also provides the individual with opportunities for professional development, networking with an international community of experts, and valuable experience to be used in future employment. Traditionally, students have participated in benchmarking activities via internships at national laboratories, universities, or companies involved with the ICSBEP and IRPhEP programs. Additional programs have been developed to facilitate the nuclear education of students while participating in the benchmark projects. These programs include coordination with the Center for Space Nuclear Research (CSNR) Next Degree Program, collaboration with the Department of Energy Idaho Operations Office to train nuclear and criticality safety engineers, and student evaluations that serve as the basis for Master's theses in nuclear engineering.

  12. Fast Neutron Spectrum Potassium Worth for Space Power Reactor Design Validation

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Bess, John D.; Marshall, Margaret A.; Briggs, J. Blair

    2015-03-01

    A variety of critical experiments were constructed of enriched uranium metal (Oralloy) during the 1960s and 1970s at the Oak Ridge Critical Experiments Facility (ORCEF) in support of criticality safety operations at the Y-12 Plant. The purposes of these experiments included the evaluation of storage, casting, and handling limits for the Y-12 Plant and providing data for verification of calculation methods and cross sections for nuclear criticality safety applications. They included solid cylinders of various diameters, annuli of various inner and outer diameters, two and three interacting cylinders of various diameters, and graphite- and polyethylene-reflected cylinders and annuli. Of the hundreds of delayed-critical experiments, one consisted of uranium metal annuli surrounding a potassium-filled stainless steel can. The outer diameter of the annuli was approximately 13 inches (33.02 cm) with an inner diameter of 7 inches (17.78 cm). The diameter of the stainless steel can was 7 inches (17.78 cm). The critical height of the configurations was approximately 5.6 inches (14.224 cm). The uranium annulus consisted of multiple stacked rings, each with a radial thickness of 1 inch (2.54 cm) and varying heights. A companion measurement was performed using empty stainless steel cans; the primary purpose of these experiments was to test the fast neutron cross sections of potassium, as it was a candidate coolant in some early space power reactor designs. The experimental measurements were performed on July 11, 1963, by J. T. Mihalczo and M. S. Wyatt (Ref. 1), with additional information in the corresponding logbook. Unreflected and unmoderated experiments with the same set of highly enriched uranium metal parts were performed at the Oak Ridge Critical Experiments Facility in the 1960s and are evaluated in the International Handbook of Evaluated Criticality Safety Benchmark Experiments (ICSBEP Handbook) with the identifier HEU-MET-FAST-051. Thin graphite-reflected (2 inches or less) experiments also using the same set of highly enriched uranium metal parts are evaluated in HEU-MET-FAST-071. Polyethylene-reflected configurations are evaluated in HEU-MET-FAST-076. A stack of highly enriched metal discs with a thick beryllium top reflector is evaluated in HEU-MET-FAST-069, and two additional highly enriched uranium annuli with beryllium cores are evaluated in HEU-MET-FAST-059. Both detailed and simplified model specifications are provided in this evaluation. Both of these fast neutron spectrum assemblies were determined to be acceptable benchmark experiments. The calculated eigenvalues for both the detailed and the simple benchmark models are within ~0.26% of the benchmark values for Configuration 1 (calculations performed using MCNP6 with ENDF/B-VII.1 neutron cross section data), but under-calculate the benchmark values by ~7σ because the uncertainty in the benchmark is very small, ~0.0004 (1σ); for Configuration 2, the under-calculation is ~0.31% and ~8σ. Comparisons of detailed and simple model calculations for the potassium worth measurement and potassium mass coefficient yield results approximately 70-80% lower (~6σ to 10σ) than the benchmark values for the various nuclear data libraries utilized. Both the potassium worth and mass coefficient are also deemed to be acceptable benchmark experiment measurements.
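
    The quoted combination of a small percentage deviation and a large σ deviation follows directly from the tiny benchmark uncertainty; a quick check of that arithmetic is sketched below, with the benchmark keff of 1.0 assumed only for illustration.

    ```python
    # Quick check of the arithmetic behind "~0.26% low but ~7 sigma": with a
    # benchmark uncertainty of only 0.0004 (1 sigma), a small percentage
    # deviation is many standard deviations. The benchmark keff of 1.0 is
    # assumed purely for illustration.
    k_bench = 1.0000
    sigma = 0.0004
    for label, frac_low in [("Configuration 1", 0.0026), ("Configuration 2", 0.0031)]:
        k_calc = k_bench * (1.0 - frac_low)
        n_sigma = (k_calc - k_bench) / sigma
        print(f"{label}: {frac_low*100:.2f}% low -> {n_sigma:.1f} sigma")
    ```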

  13. INTEGRAL BENCHMARK DATA FOR NUCLEAR DATA TESTING THROUGH THE ICSBEP AND THE NEWLY ORGANIZED IRPHEP

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    J. Blair Briggs; Lori Scott; Yolanda Rugama

    The status of the International Criticality Safety Benchmark Evaluation Project (ICSBEP) was last reported to the nuclear data community at the International Conference on Nuclear Data for Science and Technology, ND-2004, in Santa Fe, New Mexico. Since that time the number and type of integral benchmarks have increased significantly. Included in the ICSBEP Handbook are criticality-alarm/shielding and fundamental physics benchmarks in addition to the traditional critical/subcritical benchmark data. Since ND-2004, a reactor physics counterpart to the ICSBEP, the International Reactor Physics Experiment Evaluation Project (IRPhEP), was initiated. The IRPhEP is patterned after the ICSBEP, but focuses on other integral measurements, such as buckling, spectral characteristics, reactivity effects, reactivity coefficients, kinetics measurements, reaction-rate and power distributions, nuclide compositions, and other miscellaneous-type measurements in addition to the critical configuration. The status of these two projects is discussed and selected benchmarks are highlighted in this paper.

  14. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Marck, Steven C. van der, E-mail: vandermarck@nrg.eu

    Recent releases of three major world nuclear reaction data libraries, ENDF/B-VII.1, JENDL-4.0, and JEFF-3.1.1, have been tested extensively using benchmark calculations. The calculations were performed with the latest release of the continuous-energy Monte Carlo neutronics code MCNP, i.e. MCNP6. Three types of benchmarks were used, viz. criticality safety benchmarks, (fusion) shielding benchmarks, and reference systems for which the effective delayed neutron fraction is reported. For criticality safety, more than 2000 benchmarks from the International Handbook of Criticality Safety Benchmark Experiments were used. Benchmarks from all categories were used, ranging from low-enriched uranium, compound fuel, thermal spectrum ones (LEU-COMP-THERM) to mixed uranium-plutonium, metallic fuel, fast spectrum ones (MIX-MET-FAST). For fusion shielding, many benchmarks were based on IAEA specifications for the Oktavian experiments (for Al, Co, Cr, Cu, LiF, Mn, Mo, Si, Ti, W, Zr), the Fusion Neutronics Source in Japan (for Be, C, N, O, Fe, Pb), and Pulsed Sphere experiments at Lawrence Livermore National Laboratory (for 6Li, 7Li, Be, C, N, O, Mg, Al, Ti, Fe, Pb, D2O, H2O, concrete, polyethylene, and Teflon). The new functionality in MCNP6 to calculate the effective delayed neutron fraction was tested by comparison with more than thirty measurements in widely varying systems. Among these were measurements in the Tank Critical Assembly (TCA, Japan) and IPEN/MB-01 (Brazil), both with a thermal spectrum, and two cores in Masurca (France) and three cores in the Fast Critical Assembly (FCA, Japan), all with fast spectra. The performance of the three libraries, in combination with MCNP6, is shown to be good. The results for the LEU-COMP-THERM category are on average very close to the benchmark value. Also for most other categories the results are satisfactory. Deviations from the benchmark values do occur in certain benchmark series, or in isolated cases within benchmark series. Such instances can often be related to nuclear data for specific non-fissile elements, such as C, Fe, or Gd. Indications are that the intermediate and mixed spectrum cases are less well described. The results for the shielding benchmarks are generally good, with very similar results for the three libraries in the majority of cases. Nevertheless there are, in certain cases, strong deviations between calculated and benchmark values, such as for Co and Mg. Also, the results show discrepancies at certain energies or angles for e.g. C, N, O, Mo, and W. The functionality of MCNP6 to calculate the effective delayed neutron fraction yields very good results for all three libraries.

  15. Evaluation of Neutron Radiography Reactor LEU-Core Start-Up Measurements

    DOE PAGES

    Bess, John D.; Maddock, Thomas L.; Smolinski, Andrew T.; ...

    2014-11-04

    Benchmark models were developed to evaluate the cold-critical start-up measurements performed during the fresh core reload of the Neutron Radiography (NRAD) reactor with Low Enriched Uranium (LEU) fuel. Experiments include criticality, control-rod worth measurements, shutdown margin, and excess reactivity for four core loadings with 56, 60, 62, and 64 fuel elements. The worth of four graphite reflector block assemblies and an empty dry tube used for experiment irradiations were also measured and evaluated for the 60-fuel-element core configuration. Dominant uncertainties in the experimental keff come from uncertainties in the manganese content and impurities in the stainless steel fuel cladding, as well as the 236U and erbium poison content in the fuel matrix. Calculations with MCNP5 and ENDF/B-VII.0 neutron nuclear data are approximately 1.4% (9σ) greater than the benchmark model eigenvalues, which is commonly seen in Monte Carlo simulations of other TRIGA reactors. Simulations of the worth measurements are within the 2σ uncertainty for most of the benchmark experiment worth values. The complete benchmark evaluation details are available in the 2014 edition of the International Handbook of Evaluated Reactor Physics Benchmark Experiments.

  16. Evaluation of Neutron Radiography Reactor LEU-Core Start-Up Measurements

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Bess, John D.; Maddock, Thomas L.; Smolinski, Andrew T.

    Benchmark models were developed to evaluate the cold-critical start-up measurements performed during the fresh core reload of the Neutron Radiography (NRAD) reactor with Low Enriched Uranium (LEU) fuel. Experiments include criticality, control-rod worth measurements, shutdown margin, and excess reactivity for four core loadings with 56, 60, 62, and 64 fuel elements. The worth of four graphite reflector block assemblies and an empty dry tube used for experiment irradiations were also measured and evaluated for the 60-fuel-element core configuration. Dominant uncertainties in the experimental keff come from uncertainties in the manganese content and impurities in the stainless steel fuel cladding, as well as the 236U and erbium poison content in the fuel matrix. Calculations with MCNP5 and ENDF/B-VII.0 neutron nuclear data are approximately 1.4% (9σ) greater than the benchmark model eigenvalues, which is commonly seen in Monte Carlo simulations of other TRIGA reactors. Simulations of the worth measurements are within the 2σ uncertainty for most of the benchmark experiment worth values. The complete benchmark evaluation details are available in the 2014 edition of the International Handbook of Evaluated Reactor Physics Benchmark Experiments.

  17. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Van Der Marck, S. C.

    Three nuclear data libraries have been tested extensively using criticality safety benchmark calculations. The three libraries are the new release of the US library ENDF/B-VII.1 (2011), the new release of the Japanese library JENDL-4.0 (2011), and the OECD/NEA library JEFF-3.1 (2006). All calculations were performed with the continuous-energy Monte Carlo code MCNP (version 4C3, as well as version 6-beta1). Around 2000 benchmark cases from the International Handbook of Criticality Safety Benchmark Experiments (ICSBEP) were used. The results were analyzed per ICSBEP category and per element. Overall, the three libraries show similar performance on most criticality safety benchmarks. The largest differences are probably caused by elements such as Be, C, Fe, Zr, and W. (authors)

  18. Scale/TSUNAMI Sensitivity Data for ICSBEP Evaluations

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Rearden, Bradley T; Reed, Davis Allan; Lefebvre, Robert A

    2011-01-01

    The Tools for Sensitivity and Uncertainty Analysis Methodology Implementation (TSUNAMI) software developed at Oak Ridge National Laboratory (ORNL) as part of the Scale code system provides unique methods for code validation, gap analysis, and experiment design. For TSUNAMI analysis, sensitivity data are generated for each application and each existing or proposed experiment used in the assessment. The validation of diverse sets of applications requires potentially thousands of data files to be maintained and organized by the user, and a growing number of these files are available through the International Handbook of Evaluated Criticality Safety Benchmark Experiments (IHECSBE) distributed through the International Criticality Safety Benchmark Evaluation Project (ICSBEP). To facilitate the use of the IHECSBE benchmarks in rigorous TSUNAMI validation and gap analysis techniques, ORNL generated Scale/TSUNAMI sensitivity data files (SDFs) for several hundred benchmarks for distribution with the IHECSBE. For the 2010 edition of the IHECSBE, the sensitivity data were generated using 238-group cross-section data based on ENDF/B-VII.0 for 494 benchmark experiments. Additionally, ORNL has developed a quality assurance procedure to guide the generation of Scale inputs and sensitivity data, as well as a graphical user interface to facilitate the use of sensitivity data in identifying experiments and applying them in validation studies.
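
    As a rough illustration of how such sensitivity data are used, the sketch below computes a covariance-weighted similarity coefficient (ck) between an application's and an experiment's keff sensitivity vectors, which is the general idea behind TSUNAMI-style experiment selection. The vectors and covariance matrix are toy values; real SDFs span all nuclides, reactions, and 238 energy groups.

    ```python
    import numpy as np

    # A minimal sketch of sensitivity-based similarity assessment: ck is the
    # covariance-weighted correlation of an application's and an experiment's
    # keff sensitivity vectors. All numbers here are hypothetical toy values.
    s_app = np.array([0.30, 0.10, 0.05])     # application sensitivities (toy)
    s_exp = np.array([0.28, 0.12, 0.04])     # experiment sensitivities (toy)
    cov = np.diag([0.02, 0.01, 0.03]) ** 2   # hypothetical nuclear-data covariance

    num = s_app @ cov @ s_exp
    den = np.sqrt((s_app @ cov @ s_app) * (s_exp @ cov @ s_exp))
    print(f"ck similarity: {num / den:.4f}")  # ck near 1.0 suggests high similarity
    ```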

  19. ARRAYS OF BOTTLES OF PLUTONIUM NITRATE SOLUTION

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Margaret A. Marshall

    2012-09-01

    In October and November of 1981, thirteen approaches to critical were performed on a remote split table machine (RSTM) in the Critical Mass Laboratory of Pacific Northwest Laboratory (PNL) in Richland, Washington, using planar arrays of polyethylene bottles filled with plutonium (Pu) nitrate solution. Arrays of up to sixteen bottles were used to measure the critical number of bottles and critical array spacing with a tight-fitting Plexiglas® reflector on all sides of the arrays except the top. Some experiments used Plexiglas shells fitted around each bottle to determine the effect of moderation on criticality. Each bottle contained approximately 2.4 L of Pu(NO3)4 solution with a Pu content of 105 g Pu/L and a free acid molarity (H+) of 5.1. The plutonium was of low 240Pu content (2.9 wt.%). These experiments were sponsored by Rockwell Hanford Operations because of the lack of experimental data on the criticality of arrays of bottles of Pu solution such as might be found in storage and handling at the Purex Facility at Hanford. The results of these experiments were used “to provide benchmark data to validate calculational codes used in criticality safety assessments of [the] plant configurations” (Ref. 1). Data for this evaluation were collected from the published report (Ref. 1), the approach-to-critical logbook, the experimenter’s logbook, and communication with the primary experimenter, B. Michael Durst. Of the 13 experiments performed, 10 were evaluated. One experiment was not evaluated because it had been thrown out by the experimenter, one because it was a repeat of another experiment, and the third because it reported the critical number of bottles as being greater than 25. Seven of the ten evaluated experiments were determined to be acceptable benchmark experiments. A similar experiment using uranyl nitrate was benchmarked as U233-SOL-THERM-014.
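
    An approach to critical of this kind is conventionally driven by the 1/M (inverse multiplication) technique: count rates are recorded as fuel units are added, and the critical loading is estimated by extrapolating 1/M to zero. The sketch below is a hedged illustration of that standard method with fabricated count rates, not data from Ref. 1.

    ```python
    import numpy as np

    # A hedged sketch of the standard 1/M (inverse multiplication) technique:
    # normalized 1/M is tracked as bottles are added, and the critical number
    # of bottles is estimated by extrapolating 1/M to zero. Count rates are
    # fabricated for illustration only.
    bottles = np.array([4, 6, 8, 10, 12])
    count_rates = np.array([120.0, 180.0, 310.0, 700.0, 2600.0])  # illustrative
    inv_m = count_rates[0] / count_rates   # normalized inverse multiplication

    # linear extrapolation through the last two points to 1/M = 0
    slope = (inv_m[-1] - inv_m[-2]) / (bottles[-1] - bottles[-2])
    n_crit = bottles[-1] - inv_m[-1] / slope
    print(f"estimated critical number of bottles: {n_crit:.1f}")
    ```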

  20. ICSBEP Benchmarks For Nuclear Data Applications

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Briggs, J. Blair

    2005-05-24

    The International Criticality Safety Benchmark Evaluation Project (ICSBEP) was initiated in 1992 by the United States Department of Energy. The ICSBEP became an official activity of the Organization for Economic Cooperation and Development (OECD) - Nuclear Energy Agency (NEA) in 1995. Representatives from the United States, United Kingdom, France, Japan, the Russian Federation, Hungary, Republic of Korea, Slovenia, Serbia and Montenegro (formerly Yugoslavia), Kazakhstan, Spain, Israel, Brazil, Poland, and the Czech Republic are now participating. South Africa, India, China, and Germany are considering participation. The purpose of the ICSBEP is to identify, evaluate, verify, and formally document a comprehensive and internationally peer-reviewed set of criticality safety benchmark data. The work of the ICSBEP is published as an OECD handbook entitled 'International Handbook of Evaluated Criticality Safety Benchmark Experiments.' The 2004 Edition of the Handbook contains benchmark specifications for 3331 critical or subcritical configurations that are intended for use in validation efforts and for testing basic nuclear data. New to the 2004 Edition of the Handbook is a draft criticality-alarm/shielding-type benchmark that should be finalized in 2005, along with two other similar benchmarks. The Handbook is being used extensively for nuclear data testing and is expected to be a valuable resource for code and data validation and improvement efforts for decades to come. Specific benchmarks that are useful for testing structural materials such as iron, chromium, nickel, and manganese; beryllium; lead; thorium; and 238U are highlighted.

  1. Neutron Activation and Thermoluminescent Detector Responses to a Bare Pulse of the CEA Valduc SILENE Critical Assembly

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Miller, Thomas Martin; Celik, Cihangir; McMahan, Kimberly L.

    This benchmark experiment was conducted as a joint venture between the US Department of Energy (DOE) and the French Commissariat à l'Energie Atomique (CEA). Staff at the Oak Ridge National Laboratory (ORNL) in the US and the Centre de Valduc in France planned this experiment. The experiment was conducted on October 11, 2010 in the SILENE critical assembly facility at Valduc. Several other organizations contributed to this experiment and the subsequent evaluation, including CEA Saclay, Lawrence Livermore National Laboratory (LLNL), the Y-12 National Security Complex (NSC), Babcock International Group in the United Kingdom, and Los Alamos National Laboratory (LANL). The goal of this experiment was to measure neutron activation and thermoluminescent dosimeter (TLD) doses from a source similar to a fissile solution critical excursion. The resulting benchmark can be used for validation of computer codes and nuclear data libraries as required when performing analysis of criticality accident alarm systems (CAASs). A secondary goal of this experiment was to qualitatively test the performance of two CAAS detectors similar to those currently and formerly in use in some US DOE facilities. The detectors tested were the CIDAS MkX and the Rocky Flats NCD-91. These detectors were being evaluated to determine whether they would alarm, so they were not expected to generate benchmark quality data.

  2. The MCNP6 Analytic Criticality Benchmark Suite

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Brown, Forrest B.

    2016-06-16

    Analytical benchmarks provide an invaluable tool for verifying computer codes used to simulate neutron transport. Several collections of analytical benchmark problems [1-4] are used routinely in the verification of production Monte Carlo codes such as MCNP® [5,6]. Verification of a computer code is a necessary prerequisite to the more complex validation process. The verification process confirms that a code performs its intended functions correctly. The validation process involves determining the absolute accuracy of code results vs. nature. In typical validations, results are computed for a set of benchmark experiments using a particular methodology (code, cross-section data with uncertainties, and modeling) and compared to the measured results from the set of benchmark experiments. The validation process determines bias, bias uncertainty, and possibly additional margins. Verification is generally performed by the code developers, while validation is generally performed by code users for a particular application space. The VERIFICATION_KEFF suite of criticality problems [1,2] was originally a set of 75 criticality problems found in the literature for which exact analytical solutions are available. Even though the spatial and energy detail is necessarily limited in analytical benchmarks, typically to a few regions or energy groups, the exact solutions obtained can be used to verify that the basic algorithms, mathematics, and methods used in complex production codes perform correctly. The present work has focused on revisiting this benchmark suite. A thorough review of the problems resulted in discarding some of them as not suitable for MCNP benchmarking. For the remaining problems, many were reformulated to permit execution in either multigroup mode or in the normal continuous-energy mode for MCNP. Execution of the benchmarks in continuous-energy mode provides a significant advance to MCNP verification methods.
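
    To make the idea of an analytic criticality benchmark concrete, the toy example below evaluates the exact one-group diffusion eigenvalue of a bare homogeneous sphere. It is only in the spirit of such problems; the cross sections are invented and are not taken from the VERIFICATION_KEFF suite.

    ```python
    import math

    # A toy analytic criticality example: in one-group diffusion theory a bare
    # homogeneous sphere has the exact eigenvalue
    #   k = nu*Sigma_f / (Sigma_a + D*B^2),  with buckling B = pi / (R + d).
    # All cross sections below are invented for illustration.
    nu_sigma_f = 0.110         # nu * Sigma_f (1/cm), illustrative
    sigma_a = 0.064            # absorption cross section (1/cm), illustrative
    diff_coef = 0.9            # diffusion coefficient D (cm), illustrative
    radius = 12.0              # sphere radius (cm)
    extrap = 2.13 * diff_coef  # extrapolation distance (cm), illustrative

    buckling = math.pi / (radius + extrap)
    k = nu_sigma_f / (sigma_a + diff_coef * buckling**2)
    print(f"analytic one-group keff: {k:.4f}")   # ~1.00 for these numbers
    ```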

  3. ZPR-3 Assembly 11: A cylindrical assembly of highly enriched uranium and depleted uranium with an average 235U enrichment of 12 atom % and a depleted uranium reflector

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Lell, R. M.; McKnight, R. D.; Tsiboulia, A.

    2010-09-30

    Over a period of 30 years, more than a hundred Zero Power Reactor (ZPR) critical assemblies were constructed at Argonne National Laboratory. The ZPR facilities, ZPR-3, ZPR-6, ZPR-9 and ZPPR, were all fast critical assembly facilities. The ZPR critical assemblies were constructed to support fast reactor development, but data from some of these assemblies are also well suited for nuclear data validation and to form the basis for criticality safety benchmarks. A number of the Argonne ZPR/ZPPR critical assemblies have been evaluated as ICSBEP and IRPhEP benchmarks. Of the three classes of ZPR assemblies, engineering mockups, engineering benchmarks and physics benchmarks, the last group tends to be most useful for criticality safety. Because physics benchmarks were designed to test fast reactor physics data and methods, they were as simple as possible in geometry and composition. The principal fissile species was 235U or 239Pu. Fuel enrichments ranged from 9% to 95%. Often there were only one or two main core diluent materials, such as aluminum, graphite, iron, sodium or stainless steel. The cores were reflected (and insulated from room return effects) by one or two layers of materials such as depleted uranium, lead or stainless steel. Despite their more complex nature, a small number of assemblies from the other two classes would make useful criticality safety benchmarks because they have features related to criticality safety issues, such as reflection by soil-like material. ZPR-3 Assembly 11 (ZPR-3/11) was designed as a fast reactor physics benchmark experiment with an average core 235U enrichment of approximately 12 at.% and a depleted uranium reflector. Approximately 79.7% of the total fissions in this assembly occur above 100 keV, approximately 20.3% occur below 100 keV, and essentially none below 0.625 eV - thus the classification as a 'fast' assembly. This assembly is Fast Reactor Benchmark No. 8 in the Cross Section Evaluation Working Group (CSEWG) Benchmark Specifications and has historically been used as a data validation benchmark assembly. Loading of ZPR-3 Assembly 11 began in early January 1958, and the Assembly 11 program ended in late January 1958. The core consisted of highly enriched uranium (HEU) plates and depleted uranium plates loaded into stainless steel drawers, which were inserted into the central square stainless steel tubes of a 31 x 31 matrix on a split table machine. The core unit cell consisted of two columns of 0.125 in.-wide (3.175 mm) HEU plates, six columns of 0.125 in.-wide (3.175 mm) depleted uranium plates and one column of 1.0 in.-wide (25.4 mm) depleted uranium plates. The length of each column was 10 in. (254.0 mm) in each half of the core. The axial blanket consisted of 12 in. (304.8 mm) of depleted uranium behind the core. The thickness of the depleted uranium radial blanket was approximately 14 in. (355.6 mm), and the length of the radial blanket in each half of the matrix was 22 in. (558.8 mm). The assembly geometry approximated a right circular cylinder as closely as the square matrix tubes allowed. According to the logbook and loading records for ZPR-3/11, the reference critical configuration was loading 10, which was critical on January 21, 1958. Subsequent loadings were very similar but less clean for criticality because modifications were made to accommodate reactor physics measurements other than criticality. Accordingly, ZPR-3/11 loading 10 was selected as the only configuration for this benchmark.
    As documented below, it was determined to be acceptable as a criticality safety benchmark experiment. A very accurate transformation to a simplified model is needed to make any ZPR assembly a practical criticality-safety benchmark. There is simply too much geometric detail in an exact (as-built) model of a ZPR assembly, even a clean core such as ZPR-3/11 loading 10. The transformation must reduce the detail to a practical level without masking any of the important features of the critical experiment, and it must do this without increasing the total uncertainty far beyond that of the original experiment. Such a transformation is described in Section 3. It was obtained using a pair of continuous-energy Monte Carlo calculations. First, the critical configuration was modeled in full detail - every plate, drawer, matrix tube, and air gap was modeled explicitly. Then the regionwise compositions and volumes from the detailed as-built model were used to construct a homogeneous, two-dimensional (RZ) model of ZPR-3/11 that conserved the mass of each nuclide and the volume of each region. The simple cylindrical model is the criticality-safety benchmark model. The difference in the calculated keff values between the as-built three-dimensional model and the homogeneous two-dimensional benchmark model was used to adjust the measured excess reactivity of ZPR-3/11 loading 10 to obtain the keff for the benchmark model.
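
    The adjustment step described in the last sentence reduces to simple arithmetic, sketched below. All three keff values are hypothetical placeholders, and the sign convention is an assumption; the evaluated numbers are in the ICSBEP Handbook entry for ZPR-3/11.

    ```python
    # Illustrative arithmetic for the model transformation described above: the
    # benchmark-model keff is the measured value adjusted by the calculated
    # difference between the homogeneous RZ model and the as-built 3-D model.
    # All three keff values below are hypothetical.
    k_measured = 1.0009   # from measured excess reactivity (hypothetical)
    k_3d = 1.0012         # detailed as-built Monte Carlo model (hypothetical)
    k_2d = 1.0004         # homogeneous RZ benchmark model (hypothetical)

    k_benchmark = k_measured + (k_2d - k_3d)   # assumed sign convention
    print(f"benchmark-model keff: {k_benchmark:.4f}")
    ```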

  4. Benchmark Evaluation of the HTR-PROTEUS Absorber Rod Worths (Core 4)

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    John D. Bess; Leland M. Montierth

    2014-06-01

    PROTEUS was a zero-power research reactor at the Paul Scherrer Institute (PSI) in Switzerland. The critical assembly was constructed from a large graphite annulus surrounding a central cylindrical cavity. Various experimental programs were investigated in PROTEUS; during the years 1992 through 1996, it was configured as a pebble-bed reactor and designated HTR-PROTEUS. Various critical configurations were assembled, each accompanied by an assortment of reactor physics experiments including differential and integral absorber rod measurements, kinetics, reaction rate distributions, water ingress effects, and small sample reactivity effects [1]. Four benchmark reports were previously prepared and included in the March 2013 edition of the International Handbook of Evaluated Reactor Physics Benchmark Experiments (IRPhEP Handbook) [2], evaluating eleven critical configurations. A summary of that effort was previously provided [3], and an analysis of absorber rod worth measurements for Cores 9 and 10 was performed prior to this analysis and included in PROTEUS-GCR-EXP-004 [4]. In the current benchmark effort, absorber rod worths measured for Core Configuration 4, which was the only core with a randomly packed pebble loading, have been evaluated for inclusion as a revision to the HTR-PROTEUS benchmark report PROTEUS-GCR-EXP-002.

  5. Contributions to Integral Nuclear Data in ICSBEP and IRPhEP since ND 2013

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Bess, John D.; Briggs, J. Blair; Gulliford, Jim

    2016-09-01

    The status of the International Criticality Safety Benchmark Evaluation Project (ICSBEP) and the International Reactor Physics Experiment Evaluation Project (IRPhEP) was last discussed directly with the international nuclear data community at ND2013. Since ND2013, the integral benchmark data available for nuclear data testing have continued to increase. The status of the international benchmark efforts and the latest contributions to integral nuclear data for testing are discussed. Select benchmark configurations that have been added to the ICSBEP and IRPhEP Handbooks since ND2013 are highlighted. The 2015 edition of the ICSBEP Handbook now contains 567 evaluations with benchmark specifications for 4,874 critical, near-critical, or subcritical configurations, 31 criticality alarm placement/shielding configurations with multiple dose points apiece, and 207 configurations that have been categorized as fundamental physics measurements that are relevant to criticality safety applications. The 2015 edition of the IRPhEP Handbook contains data from 143 different experimental series that were performed at 50 different nuclear facilities. Currently 139 of the 143 evaluations are published as approved benchmarks, with the remaining four evaluations published in draft format only. Measurements found in the IRPhEP Handbook include criticality, buckling and extrapolation length, spectral characteristics, reactivity effects, reactivity coefficients, kinetics, reaction-rate distributions, power distributions, isotopic compositions, and/or other miscellaneous types of measurements for various types of reactor systems. Annual technical review meetings for both projects were held in April 2016; additional approved benchmark evaluations will be included in the 2016 editions of these handbooks.

  6. Evaluation of the concrete shield compositions from the 2010 criticality accident alarm system benchmark experiments at the CEA Valduc SILENE facility

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Miller, Thomas Martin; Celik, Cihangir; Dunn, Michael E

    In October 2010, a series of benchmark experiments was conducted at the French Commissariat à l'Energie Atomique et aux Energies Alternatives (CEA) Valduc SILENE facility. These experiments were a joint effort between the United States Department of Energy Nuclear Criticality Safety Program and the CEA. The purpose of these experiments was to create three benchmarks for the verification and validation of radiation transport codes and evaluated nuclear data used in the analysis of criticality accident alarm systems. This series of experiments consisted of three single-pulsed experiments with the SILENE reactor. For the first experiment, the reactor was bare (unshielded), whereas in the second and third experiments it was shielded by lead and polyethylene, respectively. The polyethylene shield of the third experiment had a cadmium liner on its internal and external surfaces, which was located vertically near the fuel region of SILENE. During each experiment, several neutron activation foils and thermoluminescent dosimeters (TLDs) were placed around the reactor. Nearly half of the foils and TLDs had additional high-density magnetite concrete, high-density barite concrete, standard concrete, and/or BoroBond shields. CEA Saclay provided all the concrete, and the US Y-12 National Security Complex provided the BoroBond. Measurement data from the experiments were published at the 2011 International Conference on Nuclear Criticality (ICNC 2011) and the 2013 Nuclear Criticality Safety Division (NCSD 2013) topical meeting. Preliminary computational results for the first experiment were presented in the ICNC 2011 paper, which showed poor agreement between the computational results and the measured values of the foils shielded by concrete. Recently the hydrogen content, boron content, and density of these concrete shields were further investigated within the constraints of the previously available data. New computational results for the first experiment are now available that show much better agreement with the measured values.

  7. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Marshall, Margaret A.

    In the early 1970s Dr. John T. Mihalczo (team leader), J.J. Lynn, and J.R. Taylor performed experiments at the Oak Ridge Critical Experiments Facility (ORCEF) with highly enriched uranium (HEU) metal (called Oak Ridge Alloy or ORALLOY) in an effort to recreate GODIVA I results with greater accuracy than those performed at Los Alamos National Laboratory in the 1950s. The purpose of the Oak Ridge ORALLOY Sphere (ORSphere) experiments was to estimate the unreflected and unmoderated critical mass of an idealized sphere of uranium metal corrected to a density, purity, and enrichment such that it could be compared with the GODIVA I experiments. Additionally, various material reactivity worths, the surface material worth coefficient, the delayed neutron fraction, the prompt neutron decay constant, relative fission density, and relative neutron importance were all measured. The critical assembly, material reactivity worths, the surface material worth coefficient, and the delayed neutron fraction were all evaluated as benchmark experiment measurements. The reactor physics measurements are the focus of this paper, although for clarity the critical assembly benchmark specifications are briefly discussed.

  8. ZPR-6 Assembly 7 High 240Pu Core: A cylindrical assembly with mixed (Pu,U)-oxide fuel and a central high 240Pu zone

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Lell, R. M.; Schaefer, R. W.; McKnight, R. D.

    Over a period of 30 years more than a hundred Zero Power Reactor (ZPR) critical assemblies were constructed at Argonne National Laboratory. The ZPR facilities, ZPR-3, ZPR-6, ZPR-9 and ZPPR, were all fast critical assembly facilities. The ZPR critical assemblies were constructed to support fast reactor development, but data from some of these assemblies are also well suited to form the basis for criticality safety benchmarks. Of the three classes of ZPR assemblies, engineering mockups, engineering benchmarks and physics benchmarks, the last group tends to be most useful for criticality safety. Because physics benchmarks were designed to test fast reactor physics data and methods, they were as simple as possible in geometry and composition. The principal fissile species was 235U or 239Pu. Fuel enrichments ranged from 9% to 95%. Often there were only one or two main core diluent materials, such as aluminum, graphite, iron, sodium or stainless steel. The cores were reflected (and insulated from room return effects) by one or two layers of materials such as depleted uranium, lead or stainless steel. Despite their more complex nature, a small number of assemblies from the other two classes would make useful criticality safety benchmarks because they have features related to criticality safety issues, such as reflection by soil-like material. The term 'benchmark' in a ZPR program connotes a particularly simple loading aimed at gaining basic reactor physics insight, as opposed to studying a reactor design. In fact, the ZPR-6/7 Benchmark Assembly (Reference 1) had a very simple core unit cell assembled from plates of depleted uranium, sodium, iron oxide, U3O8, and plutonium. The ZPR-6/7 core cell-average composition is typical of the interior region of liquid-metal fast breeder reactors (LMFBRs) of the era. It was one part of the Demonstration Reactor Benchmark Program, which provided integral experiments characterizing the important features of demonstration-size LMFBRs. As a benchmark, ZPR-6/7 was devoid of many 'real' reactor features, such as simulated control rods and multiple enrichment zones, in its reference form. Those kinds of features were investigated experimentally in variants of the reference ZPR-6/7 or in other critical assemblies in the Demonstration Reactor Benchmark Program.

  9. Neutron Activation Foil and Thermoluminescent Dosimeter Responses to a Polyethylene Reflected Pulse of the CEA Valduc SILENE Critical Assembly

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Miller, Thomas Martin; Celik, Cihangir; McMahan, Kimberly L.

    This benchmark experiment was conducted as a joint venture between the US Department of Energy (DOE) and the French Commissariat à l'Energie Atomique (CEA). Staff at the Oak Ridge National Laboratory (ORNL) in the US and the Centre de Valduc in France planned this experiment. The experiment was conducted on October 19, 2010 in the SILENE critical assembly facility at Valduc. Several other organizations contributed to this experiment and the subsequent evaluation, including CEA Saclay, Lawrence Livermore National Laboratory (LLNL), the Y-12 National Security Complex (NSC), Babcock International Group in the United Kingdom, and Los Alamos National Laboratory (LANL). The goal of this experiment was to measure neutron activation and thermoluminescent dosimeter (TLD) doses from a source similar to a fissile solution critical excursion. The resulting benchmark can be used for validation of computer codes and nuclear data libraries as required when performing analysis of criticality accident alarm systems (CAASs). A secondary goal of this experiment was to qualitatively test the performance of two CAAS detectors similar to those currently and formerly in use in some US DOE facilities. The detectors tested were the CIDAS MkX and the Rocky Flats NCD-91. The CIDAS detects gammas with a Geiger-Muller tube, and the Rocky Flats detects neutrons via charged particles produced in a thin 6LiF disc depositing energy in a Si solid-state detector. These detectors were being evaluated to determine whether they would alarm, so they were not expected to generate benchmark quality data.

  10. Neutron Activation Foil and Thermoluminescent Dosimeter Responses to a Lead Reflected Pulse of the CEA Valduc SILENE Critical Assembly

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Miller, Thomas Martin; Celik, Cihangir; Isbell, Kimberly McMahan

    This benchmark experiment was conducted as a joint venture between the US Department of Energy (DOE) and the French Commissariat à l'Energie Atomique (CEA). Staff at the Oak Ridge National Laboratory (ORNL) in the US and the Centre de Valduc in France planned this experiment. The experiment was conducted on October 13, 2010 in the SILENE critical assembly facility at Valduc. Several other organizations contributed to this experiment and the subsequent evaluation, including CEA Saclay, Lawrence Livermore National Laboratory (LLNL), the Y-12 National Security Complex (NSC), Babcock International Group in the United Kingdom, and Los Alamos National Laboratory (LANL). The goal of this experiment was to measure neutron activation and thermoluminescent dosimeter (TLD) doses from a source similar to a fissile solution critical excursion. The resulting benchmark can be used for validation of computer codes and nuclear data libraries as required when performing analysis of criticality accident alarm systems (CAASs). A secondary goal of this experiment was to qualitatively test the performance of two CAAS detectors similar to those currently and formerly in use in some US DOE facilities. The detectors tested were the CIDAS MkX and the Rocky Flats NCD-91. The CIDAS detects gammas with a Geiger-Müller tube, and the Rocky Flats detects neutrons via charged particles produced in a thin {sup 6}LiF disc depositing energy in a Si solid-state detector. These detectors were being evaluated to determine whether they would alarm, so they were not expected to generate benchmark-quality data.

  11. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Bess, John D.; Sterbentz, James W.; Snoj, Luka

    PROTEUS is a zero-power research reactor based on a cylindrical graphite annulus with a central cylindrical cavity. The graphite annulus remains basically the same for all experimental programs, but the contents of the central cavity are changed according to the type of reactor being investigated. Through most of its service history, PROTEUS has represented light-water reactors, but from 1992 to 1996 PROTEUS was configured as a pebble-bed reactor (PBR) critical facility and designated as HTR-PROTEUS. The nomenclature was used to indicate that this series consisted of High Temperature Reactor experiments performed in the PROTEUS assembly. During this period, seventeen critical configurations were assembled and various reactor physics experiments were conducted. These experiments included measurements of criticality, differential and integral control rod and safety rod worths, kinetics, reaction rates, water ingress effects, and small sample reactivity effects (Ref. 3). HTR-PROTEUS was constructed, and the experimental program was conducted, for the purpose of providing experimental benchmark data for assessment of reactor physics computer codes. Considerable effort was devoted to benchmark calculations as a part of the HTR-PROTEUS program. References 1 and 2 provide detailed data for use in constructing models for codes to be assessed. Reference 3 is a comprehensive summary of the HTR-PROTEUS experiments and the associated benchmark program. This document draws freely from these references. Only Cores 9 and 10 are evaluated in this benchmark report due to similarities in their construction. The other core configurations of the HTR-PROTEUS program are evaluated in their respective reports as outlined in Section 1.0. Cores 9 and 10 were evaluated and determined to be acceptable benchmark experiments.

  12. Benchmarking and Threshold Standards in Higher Education. Staff and Educational Development Series.

    ERIC Educational Resources Information Center

    Smith, Helen, Ed.; Armstrong, Michael, Ed.; Brown, Sally, Ed.

    This book explores the issues involved in developing standards in higher education, examining the practical issues involved in benchmarking and offering a critical analysis of the problems associated with this developmental tool. The book focuses primarily on experience in the United Kingdom (UK), but looks also at international activity in this…

  13. Benchmark Evaluation of HTR-PROTEUS Pebble Bed Experimental Program

    DOE PAGES

    Bess, John D.; Montierth, Leland; Köberl, Oliver; ...

    2014-10-09

    Benchmark models were developed to evaluate 11 critical core configurations of the HTR-PROTEUS pebble bed experimental program. Various additional reactor physics measurements were performed as part of this program; currently only a total of 37 absorber rod worth measurements have been evaluated as acceptable benchmark experiments for Cores 4, 9, and 10. Dominant uncertainties in the experimental keff for all core configurations come from uncertainties in the ²³⁵U enrichment of the fuel, impurities in the moderator pebbles, and the density and impurity content of the radial reflector. Calculations of keff with MCNP5 and ENDF/B-VII.0 neutron nuclear data are greater than the benchmark values but within 1% and also within the 3σ uncertainty, except for Core 4, which is the only randomly packed pebble configuration. Repeated calculations of keff with MCNP6.1 and ENDF/B-VII.1 are lower than the benchmark values and within 1% (~3σ), except for Cores 5 and 9, which calculate lower than the benchmark eigenvalues but within 4σ. The primary difference between the two nuclear data libraries is the adjustment of the absorption cross section of graphite. Simulations of the absorber rod worth measurements are within 3σ of the benchmark experiment values. The complete benchmark evaluation details are available in the 2014 edition of the International Handbook of Evaluated Reactor Physics Benchmark Experiments.
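
    Benchmark evaluations of this kind are typically consumed by comparing a calculated keff (C) against the benchmark value (E) in units of the combined one-sigma uncertainty. A minimal sketch, with placeholder numbers rather than the HTR-PROTEUS results:

```python
# Sketch of the usual benchmark comparison: (C - E) expressed in units of the
# combined 1-sigma uncertainty. Values below are placeholders, not HTR-PROTEUS data.
def sigma_deviation(k_calc, u_calc, k_bench, u_bench):
    """Return (C-E) and its magnitude in combined standard deviations."""
    diff = k_calc - k_bench
    u_comb = (u_calc**2 + u_bench**2) ** 0.5
    return diff, abs(diff) / u_comb

diff, n_sigma = sigma_deviation(k_calc=1.0065, u_calc=0.0001,
                                k_bench=1.0000, u_bench=0.0025)
print(f"C-E = {diff*1e5:.0f} pcm ({n_sigma:.1f} sigma)")
```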

  14. HTR-PROTEUS pebble bed experimental program cores 9 & 10: columnar hexagonal point-on-point packing with a 1:1 moderator-to-fuel pebble ratio

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Bess, John D.

    2014-03-01

    PROTEUS is a zero-power research reactor based on a cylindrical graphite annulus with a central cylindrical cavity. The graphite annulus remains basically the same for all experimental programs, but the contents of the central cavity are changed according to the type of reactor being investigated. Through most of its service history, PROTEUS has represented light-water reactors, but from 1992 to 1996 PROTEUS was configured as a pebble-bed reactor (PBR) critical facility and designated as HTR-PROTEUS. The nomenclature was used to indicate that this series consisted of High Temperature Reactor experiments performed in the PROTEUS assembly. During this period, seventeen critical configurations were assembled and various reactor physics experiments were conducted. These experiments included measurements of criticality, differential and integral control rod and safety rod worths, kinetics, reaction rates, water ingress effects, and small sample reactivity effects (Ref. 3). HTR-PROTEUS was constructed, and the experimental program was conducted, for the purpose of providing experimental benchmark data for assessment of reactor physics computer codes. Considerable effort was devoted to benchmark calculations as a part of the HTR-PROTEUS program. References 1 and 2 provide detailed data for use in constructing models for codes to be assessed. Reference 3 is a comprehensive summary of the HTR-PROTEUS experiments and the associated benchmark program. This document draws freely from these references. Only Cores 9 and 10 are evaluated in this benchmark report due to similarities in their construction. The other core configurations of the HTR-PROTEUS program are evaluated in their respective reports as outlined in Section 1.0. Cores 9 and 10 were evaluated and determined to be acceptable benchmark experiments.

  15. HTR-PROTEUS PEBBLE BED EXPERIMENTAL PROGRAM CORES 5, 6, 7, & 8: COLUMNAR HEXAGONAL POINT-ON-POINT PACKING WITH A 1:2 MODERATOR-TO-FUEL PEBBLE RATIO

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    John D. Bess

    2013-03-01

    PROTEUS is a zero-power research reactor based on a cylindrical graphite annulus with a central cylindrical cavity. The graphite annulus remains basically the same for all experimental programs, but the contents of the central cavity are changed according to the type of reactor being investigated. Through most of its service history, PROTEUS has represented light-water reactors, but from 1992 to 1996 PROTEUS was configured as a pebble-bed reactor (PBR) critical facility and designated as HTR-PROTEUS. The nomenclature was used to indicate that this series consisted of High Temperature Reactor experiments performed in the PROTEUS assembly. During this period, seventeen critical configurations were assembled and various reactor physics experiments were conducted. These experiments included measurements of criticality, differential and integral control rod and safety rod worths, kinetics, reaction rates, water ingress effects, and small sample reactivity effects (Ref. 3). HTR-PROTEUS was constructed, and the experimental program was conducted, for the purpose of providing experimental benchmark data for assessment of reactor physics computer codes. Considerable effort was devoted to benchmark calculations as a part of the HTR-PROTEUS program. References 1 and 2 provide detailed data for use in constructing models for codes to be assessed. Reference 3 is a comprehensive summary of the HTR-PROTEUS experiments and the associated benchmark program. This document draws freely from these references. Only Cores 5, 6, 7, and 8 are evaluated in this benchmark report due to similarities in their construction. The other core configurations of the HTR-PROTEUS program are evaluated in their respective reports as outlined in Section 1.0. Cores 5, 6, 7, and 8 were evaluated and determined to be acceptable benchmark experiments.

  16. HTR-PROTEUS PEBBLE BED EXPERIMENTAL PROGRAM CORES 9 & 10: COLUMNAR HEXAGONAL POINT-ON-POINT PACKING WITH A 1:1 MODERATOR-TO-FUEL PEBBLE RATIO

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    John D. Bess

    2013-03-01

    PROTEUS is a zero-power research reactor based on a cylindrical graphite annulus with a central cylindrical cavity. The graphite annulus remains basically the same for all experimental programs, but the contents of the central cavity are changed according to the type of reactor being investigated. Through most of its service history, PROTEUS has represented light-water reactors, but from 1992 to 1996 PROTEUS was configured as a pebble-bed reactor (PBR) critical facility and designated as HTR-PROTEUS. The nomenclature was used to indicate that this series consisted of High Temperature Reactor experiments performed in the PROTEUS assembly. During this period, seventeen critical configurations were assembled and various reactor physics experiments were conducted. These experiments included measurements of criticality, differential and integral control rod and safety rod worths, kinetics, reaction rates, water ingress effects, and small sample reactivity effects (Ref. 3). HTR-PROTEUS was constructed, and the experimental program was conducted, for the purpose of providing experimental benchmark data for assessment of reactor physics computer codes. Considerable effort was devoted to benchmark calculations as a part of the HTR-PROTEUS program. References 1 and 2 provide detailed data for use in constructing models for codes to be assessed. Reference 3 is a comprehensive summary of the HTR-PROTEUS experiments and the associated benchmark program. This document draws freely from these references. Only Cores 9 and 10 are evaluated in this benchmark report due to similarities in their construction. The other core configurations of the HTR-PROTEUS program are evaluated in their respective reports as outlined in Section 1.0. Cores 9 and 10 were evaluated and determined to be acceptable benchmark experiments.

  17. Benchmark gas core critical experiment.

    NASA Technical Reports Server (NTRS)

    Kunze, J. F.; Lofthouse, J. H.; Cooper, C. G.; Hyland, R. E.

    1972-01-01

    A critical experiment with spherical symmetry has been conducted on the gas core nuclear reactor concept. The nonspherical perturbations in the experiment were evaluated experimentally and produce corrections to the observed eigenvalue of approximately 1% delta k. The reactor consisted of a low density, central uranium hexafluoride gaseous core, surrounded by an annulus of void or low density hydrocarbon, which in turn was surrounded with a 97-cm-thick heavy water reflector.

  18. Criticality calculations of the Very High Temperature Reactor Critical Assembly benchmark with Serpent and SCALE/KENO-VI

    DOE PAGES

    Bostelmann, Friederike; Hammer, Hans R.; Ortensi, Javier; ...

    2015-12-30

    Within the framework of the IAEA Coordinated Research Project on HTGR Uncertainty Analysis in Modeling, criticality calculations of the Very High Temperature Reactor Critical Assembly (VHTRC) experiment were performed as a validation reference for the prismatic MHTGR-350 lattice calculations. Criticality measurements performed at several temperature points at this Japanese graphite-moderated facility were recently included in the International Handbook of Evaluated Reactor Physics Benchmark Experiments, and represent one of the few data sets available for the validation of HTGR lattice physics. This work compares VHTRC criticality simulations performed with the Monte Carlo codes Serpent and SCALE/KENO-VI. Reasonable agreement was found between Serpent and KENO-VI, but only the use of the latest ENDF cross section library release, namely the ENDF/B-VII.1 library, led to an improved match with the measured data. Furthermore, the fourth beta release of SCALE 6.2/KENO-VI showed significant improvements over the current SCALE 6.1.2 version when compared to the experimental values and Serpent.

  19. Availability of Neutronics Benchmarks in the ICSBEP and IRPhEP Handbooks for Computational Tools Testing

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Bess, John D.; Briggs, J. Blair; Ivanova, Tatiana

    2017-02-01

    In the past several decades, numerous experiments have been performed worldwide to support reactor operations, measurements, design, and nuclear safety. Those experiments represent an extensive international investment in infrastructure, expertise, and cost, and they constitute significantly valuable resources of data supporting past, current, and future research activities. Those valuable assets form the basis for the recording, development, and validation of our nuclear methods and integral nuclear data [1]. The loss of these experimental data, which has occurred all too often in recent years, is tragic. The high cost of repeating many of these measurements can be prohibitive, if not impossible, to surmount. Two international projects were developed, and are under the direction of the Organisation for Economic Co-operation and Development Nuclear Energy Agency (OECD NEA), to address the challenges of not just data preservation, but evaluation of the data to determine its merit for modern and future use. The International Criticality Safety Benchmark Evaluation Project (ICSBEP) was established to identify and verify comprehensive critical benchmark data sets; evaluate the data, including quantification of biases and uncertainties; compile the data and calculations in a standardized format; and formally document the effort into a single source of verified benchmark data [2]. Similarly, the International Reactor Physics Experiment Evaluation Project (IRPhEP) was established to preserve integral reactor physics experimental data, including separate or special effects data for nuclear energy and technology applications [3]. Annually, contributors from around the world continue to collaborate in the evaluation and review of select benchmark experiments for preservation and dissemination. The extensively peer-reviewed integral benchmark data can then be utilized by nuclear design and safety analysts to validate the analytical tools, methods, and data needed for next-generation reactor design, safety analysis requirements, and all other front- and back-end activities contributing to the overall nuclear fuel cycle where quality neutronics calculations are paramount.

  20. Benchmark Evaluation of Fuel Effect and Material Worth Measurements for a Beryllium-Reflected Space Reactor Mockup

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Marshall, Margaret A.; Bess, John D.

    2015-02-01

    The critical configurations of the small, compact critical assembly (SCCA) experiments performed at the Oak Ridge Critical Experiments Facility (ORCEF) in 1962-1965 have been evaluated as acceptable benchmark experiments for inclusion in the International Handbook of Evaluated Criticality Safety Benchmark Experiments. The initial intent of these experiments was to support the design of the Medium Power Reactor Experiment (MPRE) program, whose purpose was to study “power plants for the production of electrical power in space vehicles.” The third configuration in this series of experiments was a beryllium-reflected assembly of stainless-steel-clad, highly enriched uranium (HEU) oxide fuel, a mockup of a potassium-cooled space power reactor. Reactivity measurements, cadmium ratio spectral measurements, and fission rate measurements were performed through the core and top reflector. Fuel effect worth measurements and neutron moderating and absorbing material worths were also measured in the assembly fuel region. The cadmium ratios, fission rate, and worth measurements were evaluated for inclusion in the International Handbook of Evaluated Criticality Safety Benchmark Experiments. The fuel tube effect and neutron moderating and absorbing material worth measurements are the focus of this paper. Additionally, a measurement of the worth of potassium filling the core region was performed but has not yet been evaluated. Pellets of 93.15 wt.% enriched uranium dioxide (UO2) were stacked in 30.48 cm tall stainless steel fuel tubes (0.3 cm tall end caps). Each fuel tube had 26 pellets with a total mass of 295.8 g UO2 per tube. 253 tubes were arranged in a 1.506-cm triangular lattice. An additional 7-tube cluster critical configuration was also measured but not used for any physics measurements. The core was surrounded on all sides by a beryllium reflector. The fuel effect worths were measured by removing fuel tubes at various radii. An accident scenario was also simulated by moving twenty fuel rods outward from the periphery of the core so that they were touching the core tank. The change in the system reactivity when the fuel tube(s) were removed or moved, compared with the base configuration, was the worth of the fuel tubes or of the accident scenario. The worth of neutron absorbing and moderating materials was measured by inserting material rods into the core at regular intervals or placing lids at the top of the core tank. Stainless steel 347, tungsten, niobium, polyethylene, graphite, boron carbide, aluminum and cadmium rod and/or lid worths were all measured. The change in the system reactivity when a material was inserted into the core was the worth of the material.
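
    Worth measurements of this type are usually expressed as the reactivity difference between the perturbed and base configurations, often quoted in dollars by dividing by the effective delayed neutron fraction. A minimal sketch follows; the keff values and the beta-eff of 0.0070 are assumed placeholders, not SCCA data.

```python
# Sketch: converting two keff values (base and perturbed configuration) into a
# material or fuel-tube worth. beta_eff is an assumed placeholder value.
def reactivity(k):
    return (k - 1.0) / k            # rho = (k - 1) / k

def worth_dollars(k_base, k_perturbed, beta_eff=0.0070):
    drho = reactivity(k_perturbed) - reactivity(k_base)
    return drho / beta_eff          # worth in dollars

print(f"{worth_dollars(1.0000, 0.9985):+.2f} $")   # e.g. a negative worth
```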

  1. Little Boy replication: justification and construction

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Malenfant, R.E.

    A reconstruction of the Little Boy weapon allowed experiments to evaluate yield, leakage measurements for comparison with calculations, and phenomenological measurements to evaluate various in-situ dosimeters. The reconstructed weapon was operated at sustained delayed critical at the Los Alamos Critical Assembly Facility. The present experiments provide a wealth of information to benchmark calculations and demonstrate that the 1965 measurements on the Ichiban assembly (a spherical mockup of Little Boy) were in error.

  2. Overview of the 2014 Edition of the International Handbook of Evaluated Reactor Physics Benchmark Experiments (IRPhEP Handbook)

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    John D. Bess; J. Blair Briggs; Jim Gulliford

    2014-10-01

    The International Reactor Physics Experiment Evaluation Project (IRPhEP) is a widely recognized, world-class program. The work of the IRPhEP is documented in the International Handbook of Evaluated Reactor Physics Benchmark Experiments (IRPhEP Handbook). Integral data from the IRPhEP Handbook are used by reactor safety and design, nuclear data, criticality safety, and analytical methods development specialists worldwide to perform necessary validations of their calculational techniques. The IRPhEP Handbook is among the most frequently quoted references in the nuclear industry and is expected to be a valuable resource for future decades.

  3. Aluminum Data Measurements and Evaluation for Criticality Safety Applications

    NASA Astrophysics Data System (ADS)

    Leal, L. C.; Guber, K. H.; Spencer, R. R.; Derrien, H.; Wright, R. Q.

    2002-12-01

    The Defense Nuclear Facility Safety Board (DNFSB) Recommendation 93-2 motivated the US Department of Energy (DOE) to develop a comprehensive criticality safety program to maintain and to predict the criticality of systems throughout the DOE complex. To implement the response to the DNFSB Recommendation 93-2, a Nuclear Criticality Safety Program (NCSP) was created including the following tasks: Critical Experiments, Criticality Benchmarks, Training, Analytical Methods, and Nuclear Data. The Nuclear Data portion of the NCSP consists of a variety of differential measurements performed at the Oak Ridge Electron Linear Accelerator (ORELA) at the Oak Ridge National Laboratory (ORNL), data analysis and evaluation using the generalized least-squares fitting code SAMMY in the resolved, unresolved, and high energy ranges, and the development and benchmark testing of complete evaluations for a nuclide for inclusion into the Evaluated Nuclear Data File (ENDF/B). This paper outlines the work performed at ORNL to measure, evaluate, and test the nuclear data for aluminum for applications in criticality safety problems.

  4. Methodology and issues of integral experiments selection for nuclear data validation

    NASA Astrophysics Data System (ADS)

    Tatiana, Ivanova; Ivanov, Evgeny; Hill, Ian

    2017-09-01

    Nuclear data validation involves a large suite of Integral Experiments (IEs) for criticality, reactor physics and dosimetry applications [1]. Often benchmarks are taken from international handbooks [2, 3]. Depending on the application, IEs have different degrees of usefulness in validation, and usually the use of a single benchmark is not advised; indeed, it may lead to erroneous interpretation and results [1]. This work aims at quantifying the importance of benchmarks used in application-dependent cross section validation. The approach is based on the well-known Generalized Linear Least-Squares Method (GLLSM), extended to establish biases and uncertainties for given cross sections (within a given energy interval). The statistical treatment results in a vector of weighting factors for the integral benchmarks. These factors characterize the value added by a benchmark for nuclear data validation for the given application. The methodology is illustrated by one example, selecting benchmarks for 239Pu cross section validation. The studies were performed in the framework of Subgroup 39 (Methods and approaches to provide feedback from nuclear and covariance data adjustment for improvement of nuclear data files) established at the Working Party on International Nuclear Data Evaluation Cooperation (WPEC) of the Nuclear Science Committee under the Nuclear Energy Agency (NEA/OECD).
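
    A toy illustration of the GLLSM machinery described above may help: a prior cross-section covariance M is combined with benchmark sensitivities S, benchmark covariance V, and calculation-to-experiment discrepancies d, and one crude indicator of a benchmark's weight is how much it alone reduces the prior uncertainty of an application response. All matrices below are invented placeholders, not Subgroup 39 data.

```python
import numpy as np

# Minimal GLLSM sketch (toy dimensions; every array is a made-up placeholder).
# M : prior cross-section covariance (n_par x n_par), relative
# S : benchmark sensitivities dk/k per dsigma/sigma (n_exp x n_par)
# V : experimental+calculational covariance of the benchmarks (n_exp x n_exp)
# d : relative discrepancies (E - C)/C for each benchmark
rng = np.random.default_rng(1)
n_par, n_exp = 4, 3
M = np.diag([0.02, 0.05, 0.03, 0.04]) ** 2
S = rng.normal(0.0, 0.3, size=(n_exp, n_par))
V = np.diag([0.002, 0.003, 0.002]) ** 2
d = np.array([0.004, -0.002, 0.001])

G = S @ M @ S.T + V                     # "innovation" covariance
K = M @ S.T @ np.linalg.inv(G)          # gain matrix
dsigma = K @ d                          # relative data adjustments
M_post = M - K @ S @ M                  # posterior data covariance
print("relative data adjustments:", np.round(dsigma, 4))

# Crude weight of each benchmark: how much it alone reduces the uncertainty of
# an application response with sensitivity vector s_app.
s_app = np.array([0.5, -0.2, 0.1, 0.3])
var_prior = s_app @ M @ s_app
for i in range(n_exp):
    Si, Vi = S[i:i+1], V[i:i+1, i:i+1]
    Ki = M @ Si.T @ np.linalg.inv(Si @ M @ Si.T + Vi)
    var_i = s_app @ (M - Ki @ Si @ M) @ s_app
    print(f"benchmark {i}: uncertainty reduction {1 - var_i/var_prior:.1%}")
```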

  5. Little Boy replication: justification and construction

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Malenfant, R.E.

    A reconstruction of the Little Boy weapon allowed experiments to evaluate yield, leakage measurements for comparison with calculations, and phenomenological measurements to evaluate various in-situ dosimeters. The reconstructed weapon was operated at sustained delayed critical at the Los Alamos Critical Assembly Facility. The present experiments provide a wealth of information to benchmark calculations and demonstrate that the 1965 measurements on the Ichiban assembly (a spherical mockup of Little Boy) were in error. 5 references, 2 figures.

  6. Evaluation of the Pool Critical Assembly Benchmark with Explicitly-Modeled Geometry using MCNP6

    DOE PAGES

    Kulesza, Joel A.; Martz, Roger Lee

    2017-03-01

    Although the Oak Ridge National Laboratory (ORNL) Pool Critical Assembly (PCA) pressure vessel wall benchmark facility (PVWBF) is one of the most widely used benchmarks for qualifying light water reactor (LWR) radiation transport methods and data, no benchmark calculation of it using MCNP6 with explicitly modeled core geometry exists. As such, this paper provides results for such an analysis. First, a criticality calculation is used to construct the fixed source term. Next, ADVANTG-generated variance reduction parameters are used within the final MCNP6 fixed source calculations. These calculations provide unadjusted dosimetry results using three sets of dosimetry reaction cross sections of varying ages (those packaged with MCNP6, from the IRDF-2002 multi-group library, and from the ACE-formatted IRDFF v1.05 library). These results are then compared to two different sets of measured reaction rates. The comparison agrees in an overall sense within 2% and on a specific reaction and dosimetry-location basis within 5%. Except for the neptunium dosimetry, the individual foil raw calculation-to-experiment comparisons usually agree within 10% but are typically greater than unity. Finally, in the course of developing these calculations, geometry that has previously not been completely specified is provided herein for the convenience of future analysts.

  7. Orsphere: Physics Measurments For Bare, HEU(93.2)-Metal Sphere

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Marshall, Margaret A.; Bess, John D.; Briggs, J. Blair

    In the early 1970s Dr. John T. Mihalczo (team leader), J.J. Lynn, and J.R. Taylor performed experiments at the Oak Ridge Critical Experiments Facility (ORCEF) with highly enriched uranium (HEU) metal (called Oak Ridge Alloy or ORALLOY) in an attempt to recreate GODIVA I results with greater accuracy than those performed at Los Alamos National Laboratory in the 1950s (HEU-MET-FAST-001). The purpose of the Oak Ridge ORALLOY Sphere (ORSphere) experiments was to estimate the unreflected and unmoderated critical mass of an idealized sphere of uranium metal corrected to a density, purity, and enrichment such that it could be compared with the GODIVA I experiments. “The very accurate description of this sphere, as assembled, establishes it as an ideal benchmark for calculational methods and cross-section data files” (Reference 1). While performing the ORSphere experiments, care was taken to accurately document component dimensions (±0.0001 inches), masses (±0.01 g), and material data. The experiment was also set up to minimize the amount of structural material in the sphere proximity. Two correlated spheres were evaluated and judged to be acceptable as criticality benchmark experiments. This evaluation is given in HEU-MET-FAST-100. The second, smaller sphere was used for additional reactor physics measurements. Worth measurements (References 1-4), the delayed neutron fraction (References 3-5), and the surface material worth coefficient (References 1 and 2) were all measured and judged to be acceptable as benchmark data. The prompt neutron decay (Reference 6), relative fission density (Reference 7), and relative neutron importance (Reference 7) were measured but are not evaluated. Information for the evaluation was compiled from References 1 through 7, the experimental logbooks (References 8 and 9), additional drawings and notes provided by the experimenter, and communication with the lead experimenter, John T. Mihalczo.
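
    Several of the quoted physics measurements (e.g., worths inferred from stable periods) rest on the inhour equation, which maps a measured asymptotic period to reactivity through the delayed-neutron group constants. A hedged sketch follows; the six-group constants are the classic Keepin thermal-fission values for 235U, used here only as generic placeholders, and the generation time is an assumed value for a fast metal system.

```python
import numpy as np

# Sketch of the inhour relation: turn a measured asymptotic period T into
# reactivity. Six-group constants are Keepin's U-235 thermal-fission values,
# used as generic placeholders, not the ORSphere evaluation data.
beta = np.array([0.000215, 0.001424, 0.001274, 0.002568, 0.000748, 0.000273])
lam  = np.array([0.0124, 0.0305, 0.111, 0.301, 1.14, 3.01])   # decay constants [1/s]
Lambda = 1.0e-8   # prompt neutron generation time [s] (assumed, fast system)

def rho_from_period(T):
    """Inhour equation: rho = Lambda/T + sum_i beta_i / (1 + lam_i * T)."""
    return Lambda / T + np.sum(beta / (1.0 + lam * T))

T = 100.0   # measured stable period [s] (hypothetical)
rho = rho_from_period(T)
print(f"rho = {rho:.5f} ({rho / beta.sum():.3f} $)")
```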

  8. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Marshall, Margaret A.

    In the early 1970s Dr. John T. Mihalczo (team leader), J.J. Lynn, and J.R. Taylor performed experiments at the Oak Ridge Critical Experiments Facility (ORCEF) with highly enriched uranium (HEU) metal (called Oak Ridge Alloy or ORALLOY) in an attempt to recreate GODIVA I results with greater accuracy than those performed at Los Alamos National Laboratory in the 1950s (HEU-MET-FAST-001). The purpose of the Oak Ridge ORALLOY Sphere (ORSphere) experiments was to estimate the unreflected and unmoderated critical mass of an idealized sphere of uranium metal corrected to a density, purity, and enrichment such that it could be compared with the GODIVA I experiments. “The very accurate description of this sphere, as assembled, establishes it as an ideal benchmark for calculational methods and cross-section data files” (Reference 1). While performing the ORSphere experiments, care was taken to accurately document component dimensions (±0.0001 inches), masses (±0.01 g), and material data. The experiment was also set up to minimize the amount of structural material in the sphere proximity. Two correlated spheres were evaluated and judged to be acceptable as criticality benchmark experiments. This evaluation is given in HEU-MET-FAST-100. The second, smaller sphere was used for additional reactor physics measurements. Worth measurements (References 1-4), the delayed neutron fraction (References 3-5), and the surface material worth coefficient (References 1 and 2) were all measured and judged to be acceptable as benchmark data. The prompt neutron decay (Reference 6), relative fission density (Reference 7), and relative neutron importance (Reference 7) were measured but are not evaluated. Information for the evaluation was compiled from References 1 through 7, the experimental logbooks (References 8 and 9), additional drawings and notes provided by the experimenter, and communication with the lead experimenter, John T. Mihalczo.

  9. ENDF/B-VII.1 Neutron Cross Section Data Testing with Critical Assembly Benchmarks and Reactor Experiments

    NASA Astrophysics Data System (ADS)

    Kahler, A. C.; MacFarlane, R. E.; Mosteller, R. D.; Kiedrowski, B. C.; Frankle, S. C.; Chadwick, M. B.; McKnight, R. D.; Lell, R. M.; Palmiotti, G.; Hiruta, H.; Herman, M.; Arcilla, R.; Mughabghab, S. F.; Sublet, J. C.; Trkov, A.; Trumbull, T. H.; Dunn, M.

    2011-12-01

    The ENDF/B-VII.1 library is the latest revision to the United States' Evaluated Nuclear Data File (ENDF). The ENDF library is currently in its seventh generation, with ENDF/B-VII.0 being released in 2006. This revision expands upon that library, including the addition of new evaluated files (previously 393 neutron files, now 423, including replacement of the elemental vanadium and zinc evaluations with isotopic evaluations) and extension or updating of many existing neutron data files. Complete details are provided in the companion paper [M. B. Chadwick et al., "ENDF/B-VII.1 Nuclear Data for Science and Technology: Cross Sections, Covariances, Fission Product Yields and Decay Data," Nuclear Data Sheets, 112, 2887 (2011)]. This paper focuses on how accurately application libraries may be expected to perform in criticality calculations with these data. Continuous energy cross section libraries, suitable for use with the MCNP Monte Carlo transport code, have been generated and applied to a suite of nearly one thousand critical benchmark assemblies defined in the International Criticality Safety Benchmark Evaluation Project's International Handbook of Evaluated Criticality Safety Benchmark Experiments. This suite covers uranium and plutonium fuel systems in a variety of forms such as metallic, oxide or solution, and under a variety of spectral conditions, including unmoderated (i.e., bare), metal-reflected and water- or other light-element-reflected. Assembly eigenvalues that were accurately predicted with ENDF/B-VII.0 cross sections, such as unmoderated and uranium-reflected 235U and 239Pu assemblies, HEU solution systems and LEU oxide lattice systems that mimic commercial PWR configurations, continue to be accurately calculated with ENDF/B-VII.1 cross sections, and deficiencies in predicted eigenvalues for assemblies containing selected materials, including titanium, manganese, cadmium and tungsten, are greatly reduced. Improvements are also confirmed for selected actinide reaction rates such as 236U, 238,242Pu and 241,243Am capture in fast systems. Other deficiencies, such as the overprediction of Pu solution system critical eigenvalues and a decreasing trend in calculated eigenvalue for 233U-fueled systems as a function of above-thermal fission fraction, remain. The comprehensive nature of this critical benchmark suite and the generally accurate calculated eigenvalues obtained with ENDF/B-VII.1 neutron cross sections support the conclusion that this is the most accurate general-purpose ENDF/B cross section library yet released to the technical community.
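
    The "decreasing trend in calculated eigenvalue as a function of above-thermal fission fraction" mentioned above is the kind of diagnostic that falls out of a simple regression across a benchmark suite. A sketch with invented data points:

```python
import numpy as np

# Sketch of a suite-wide trend check: regress calculated eigenvalues for one
# fuel class against a spectral index such as the above-thermal fission
# fraction (ATFF). The data points below are invented, not ICSBEP results.
atff = np.array([0.05, 0.15, 0.30, 0.50, 0.70])
keff = np.array([1.0012, 1.0005, 0.9996, 0.9988, 0.9979])

slope, intercept = np.polyfit(atff, keff, 1)   # degree-1 least-squares fit
print(f"trend: {slope*1e5:+.0f} pcm per unit ATFF, k(ATFF=0) = {intercept:.5f}")
```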

  10. Overview of Experiments for Physics of Fast Reactors from the International Handbooks of Evaluated Criticality Safety Benchmark Experiments and Evaluated Reactor Physics Benchmark Experiments

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Bess, J. D.; Briggs, J. B.; Gulliford, J.

    The study of the physics of fast reactors has traditionally used the experiments presented in the handbook of the Cross Section Evaluation Working Group (CSEWG), ENDF-202, issued by Brookhaven National Laboratory in 1974. That handbook presents simplified homogeneous models of the experiments with the relevant experimental data, as amended. The Nuclear Energy Agency of the Organisation for Economic Co-operation and Development coordinates the activities of two international projects on the collection, evaluation and documentation of experimental data: the International Criticality Safety Benchmark Evaluation Project (since 1994) and the International Reactor Physics Experiment Evaluation Project (since 2005). These projects produce international handbooks of critical (ICSBEP Handbook) and reactor (IRPhEP Handbook) experiments, which are replenished every year. The handbooks present detailed models of the experiments with minimal amendments; such models are of particular interest for calculations with modern codes. The handbooks contain a large number of experiments suitable for the study of the physics of fast reactors. Many of these experiments were performed at specialized critical facilities, such as BFS (Russia), ZPR and ZPPR (USA), and ZEBRA (UK), and at the experimental reactors JOYO (Japan) and FFTF (USA). Other experiments, such as compact metal assemblies, are also of interest in terms of fast reactor physics; they were carried out at general-purpose critical facilities at Russian institutes (VNIITF and VNIIEF) and US laboratories (LANL, LLNL, and others). Also worth mentioning are the critical experiments with fast reactor fuel rods in water, which are of interest for the justification of nuclear safety during transportation and storage of fresh and spent fuel. This report provides a detailed review of these experiments, designates their areas of application, and includes results of calculations with modern nuclear data libraries in comparison with the evaluated experimental data.

  11. Proceedings of the Nuclear Criticality Technology Safety Workshop

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Rene G. Sanchez

    1998-04-01

    This document contains summaries of most of the papers presented at the 1995 Nuclear Criticality Technology Safety Project (NCTSP) meeting, which was held May 16 and 17 in San Diego, CA. The meeting was broken into seven sessions, which covered the following topics: (1) Criticality Safety of Project Sapphire; (2) Relevant Experiments For Criticality Safety; (3) Interactions with the Former Soviet Union; (4) Misapplications and Limitations of Monte Carlo Methods Directed Toward Criticality Safety Analyses; (5) Monte Carlo Vulnerabilities of Execution and Interpretation; (6) Monte Carlo Vulnerabilities of Representation; and (7) Benchmark Comparisons.

  12. Gadolinia depletion analysis by CASMO-4

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Kobayashi, Y.; Saji, E.; Toba, A.

    1993-01-01

    CASMO-4 is the most recent version of the lattice physics code CASMO introduced by Studsvik. The principal aspects of the CASMO-4 model that differ from the models in previous CASMO versions are as follows: (1) heterogeneous model for two-dimensional transport theory calculations; and (2) microregion depletion model for burnable absorbers, such as gadolinia. Of these aspects, the first has previously been benchmarked against measured data of critical experiments and Monte Carlo calculations, verifying the high degree of accuracy. To proceed with CASMO-4 benchmarking, it is desirable to benchmark the microregion depletion model, which enables CASMO-4 to calculate gadolinium depletion directly without the need for precalculated MICBURN cross-section data. This paper presents the benchmarking results for the microregion depletion model in CASMO-4 using the measured data of depleted gadolinium rods.
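
    The motivation for a micro-region model is that Gd-157 is such a strong absorber that it burns out from the pellet surface inward, so a single pellet-averaged depletion rate misses the moving burnout front. The sketch below is not the CASMO-4 algorithm; it is a one-nuclide caricature with invented fluxes and a crude ring-to-ring shielding factor, showing how different radial rings deplete at very different rates.

```python
import numpy as np

# Toy illustration of gadolinia burnout by radial ring (ring 0 = outermost).
# Fluxes and the shielding factor are invented; this is not CASMO-4's model.
sigma_a = 2.54e5 * 1e-24      # Gd-157 thermal absorption cross section [cm^2]
phi_outer = 3.0e13            # local flux in the outermost ring [n/cm^2/s] (assumed)
n_rings = 5
# Inner rings see a strongly depressed thermal flux while outer rings hold Gd:
ring_flux = phi_outer * np.exp(-1.2 * np.arange(n_rings))  # crude shielding model

N0 = 1.0                      # normalized initial Gd-157 number density
t = 30 * 86400                # 30 days of irradiation [s]
N = N0 * np.exp(-sigma_a * ring_flux * t)   # one-nuclide burnout per ring
for i, frac in enumerate(N):
    print(f"ring {i}: Gd-157 remaining {frac:.2%}")
```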

  13. Quality management benchmarking: FDA compliance in pharmaceutical industry.

    PubMed

    Jochem, Roland; Landgraf, Katja

    2010-01-01

    By analyzing and comparing industry and business best practice, processes can be optimized and become more successful, mainly because efficiency and competitiveness increase. This paper focuses on some examples: case studies are used to show knowledge exchange in the pharmaceutical industry. Best practice solutions were identified in two companies using a benchmarking method and a five-stage model. Despite large administrations, there is much potential regarding business process organization. This project makes it possible for participants to fully understand their business processes. The benchmarking method gives an opportunity to critically analyze value chains (a string of companies or players working together to satisfy market demands for a special product). Knowledge exchange is interesting for companies that want to be global players. Benchmarking supports information exchange and improves competitive ability between different enterprises. Findings suggest that the five-stage model improves efficiency and effectiveness, and increases the chances of reaching targets. The method gives security to partners that did not have benchmarking experience. The study identifies new quality management procedures. Process management, and especially benchmarking, is shown to support pharmaceutical industry improvements.

  14. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Margaret A. Marshall

    In the early 1970s Dr. John T. Mihalczo (team leader), J.J. Lynn, and J.R. Taylor performed experiments at the Oak Ridge Critical Experiments Facility (ORCEF) with highly enriched uranium (HEU) metal (called Oak Ridge Alloy or ORALLOY) in an attempt to recreate GODIVA I results with greater accuracy than those performed at Los Alamos National Laboratory in the 1950s (HEU-MET-FAST-001). The purpose of the Oak Ridge ORALLOY Sphere (ORSphere) experiments was to estimate the unreflected and unmoderated critical mass of an idealized sphere of uranium metal corrected to a density, purity, and enrichment such that it could be compared with the GODIVA I experiments. “The very accurate description of this sphere, as assembled, establishes it as an ideal benchmark for calculational methods and cross-section data files” (Reference 1). While performing the ORSphere experiments, care was taken to accurately document component dimensions (±0.0001 in. for non-spherical parts), masses (±0.01 g), and material data. The experiment was also set up to minimize the amount of structural material in the sphere proximity. A three-part sphere was initially assembled with an average radius of 3.4665 in. and was then machined down to an average radius of 3.4420 in. (3.4425 in. nominal). These two spherical configurations were evaluated and judged to be acceptable benchmark experiments; however, the two experiments are highly correlated.

  15. The Paucity Problem: Where Have All the Space Reactor Experiments Gone?

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Bess, John D.; Marshall, Margaret A.

    2016-10-01

    The Handbooks of the International Criticality Safety Benchmark Evaluation Project (ICSBEP) and the International Reactor Physics Experiment Evaluation Project (IRPhEP) together contain a plethora of documented and evaluated experiments essential in the validation of nuclear data, neutronics codes, and modeling of various nuclear systems. Unfortunately, only a minute selection of handbook data (twelve evaluations) are of actual experimental facilities and mockups designed specifically for space nuclear research. There is a paucity problem: the multitude of space nuclear experimental activities performed in the past several decades have yet to be recovered and made available in such detail that the international community could benefit from these valuable historical research efforts. Those experiments represent extensive investments in infrastructure, expertise, and cost, and they constitute significantly valuable resources of data supporting past, present, and future research activities. The ICSBEP and IRPhEP were established to identify and verify comprehensive sets of benchmark data; evaluate the data, including quantification of biases and uncertainties; compile the data and calculations in a standardized format; and formally document the effort into a single source of verified benchmark data.

  16. Covariance Applications in Criticality Safety, Light Water Reactor Analysis, and Spent Fuel Characterization

    DOE PAGES

    Williams, M. L.; Wiarda, D.; Ilas, G.; ...

    2014-06-15

    Recently, we processed a new covariance data library based on ENDF/B-VII.1 for the SCALE nuclear analysis code system. The multigroup covariance data are discussed here, along with testing and application results for critical benchmark experiments. Moreover, the cross section covariance library, along with covariances for fission product yields and decay data, is used to compute uncertainties in the decay heat produced by a burned reactor fuel assembly.
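
    The standard way such a covariance library is exercised is the first-order "sandwich rule," which folds response sensitivities with the covariance matrix to produce a response uncertainty. A minimal sketch with invented sensitivities and covariances:

```python
import numpy as np

# Sketch of the "sandwich rule": var(R) = S^T C S, where S holds relative
# sensitivities of a response R to each nuclear-data parameter and C is their
# relative covariance matrix. Both arrays below are invented placeholders.
S = np.array([0.8, -0.3, 0.1])            # dR/R per dsigma/sigma
C = np.array([[0.0004, 0.0001, 0.0   ],
              [0.0001, 0.0009, 0.0002],
              [0.0   , 0.0002, 0.0016]])  # relative covariance matrix

rel_var = S @ C @ S
print(f"relative uncertainty in R: {np.sqrt(rel_var):.2%}")
```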

  17. The challenges of numerically simulating analogue brittle thrust wedges

    NASA Astrophysics Data System (ADS)

    Buiter, Susanne; Ellis, Susan

    2017-04-01

    Fold-and-thrust belts and accretionary wedges form when sedimentary and crustal rocks are compressed into thrusts and folds in the foreland of an orogen or at a subduction trench. For over a century, analogue models have been used to investigate the deformation characteristics of such brittle wedges. These models predict wedge shapes that agree with analytical critical taper theory and internal deformation structures that closely resemble natural observations. In a series of comparison experiments for thrust wedges, called the GeoMod2004 (1,2) and GeoMod2008 (3,4) experiments, it was shown that different numerical solution methods successfully reproduce sandbox thrust wedges. However, the GeoMod2008 benchmark also pointed to the difficulties of representing frictional boundary conditions and sharp velocity discontinuities with continuum numerical methods, in addition to the well-known challenges of numerical plasticity. Here we show how details in the numerical implementation of boundary conditions can substantially impact numerical wedge deformation. We consider experiment 1 of the GeoMod2008 brittle thrust wedge benchmarks. This experiment examines a triangular thrust wedge in the stable field of critical taper theory, which should remain stable, that is, without internal deformation, while sliding over a basal frictional surface. The thrust wedge is translated by lateral displacement of a rigid mobile wall. The corner between the mobile wall and the subsurface is a velocity discontinuity. Using our finite-element code SULEC, we show how different approaches to implementing boundary friction (boundary layer or contact elements) and the velocity discontinuity (various smoothing schemes) can cause the wedge either to translate in a stable manner, as it should, or to undergo internal deformation (a failed benchmark). We recommend that numerical studies of sandbox setups not only report the details of their implementation of boundary conditions, but also document the modelling attempts that failed. References: 1. Buiter and the GeoMod2004 Team, 2006. The numerical sandbox: comparison of model results for a shortening and an extension experiment. Geol. Soc. Lond. Spec. Publ. 253, 29-64. 2. Schreurs and the GeoMod2004 Team, 2006. Analogue benchmarks of shortening and extension experiments. Geol. Soc. Lond. Spec. Publ. 253, 1-27. 3. Buiter, Schreurs and the GeoMod2008 Team, 2016. Benchmarking numerical models of brittle thrust wedges. J. Struct. Geol. 92, 140-177. 4. Schreurs, Buiter and the GeoMod2008 Team, 2016. Benchmarking analogue models of brittle thrust wedges. J. Struct. Geol. 92, 116-139.

  18. Revisiting the PLUMBER Experiments from a Process-Diagnostics Perspective

    NASA Astrophysics Data System (ADS)

    Nearing, G. S.; Ruddell, B. L.; Clark, M. P.; Nijssen, B.; Peters-Lidard, C. D.

    2017-12-01

    The PLUMBER benchmarking experiments [1] showed that some of the most sophisticated land models (CABLE, CH-TESSEL, COLA-SSiB, ISBA-SURFEX, JULES, Mosaic, Noah, ORCHIDEE) were outperformed - in simulations of half-hourly surface energy fluxes - by instantaneous, out-of-sample, and globally stationary regressions with no state memory. One criticism of PLUMBER is that the benchmarking methodology was not derived formally, so that applying a similar methodology with different performance metrics can result in qualitatively different results. Another common criticism of model intercomparison projects in general is that they offer little insight into process-level deficiencies in the models, and therefore are of marginal value for helping to improve the models. We address both of these issues by proposing a formal benchmarking methodology that also yields a formal and quantitative method for process-level diagnostics. We apply this to the PLUMBER experiments to show that (1) the PLUMBER conclusions were generally correct - the models use only a fraction of the information available to them from met forcing data (<50% by our analysis), and (2) all of the land models investigated by PLUMBER have similar process-level error structures, and therefore together do not represent a meaningful sample of structural or epistemic uncertainty. We conclude by suggesting two ways to improve the experimental design of model intercomparison and/or model benchmarking studies like PLUMBER. First, PLUMBER did not report model parameter values, and it is necessary to know these values to separate parameter uncertainty from structural uncertainty. This is a first-order requirement if we want to use intercomparison studies to provide feedback for model development. Second, technical documentation of land models is inadequate. Future model intercomparison projects should begin with a collaborative effort by model developers to document specific differences between model structures. This could be done in a reproducible way using a unified, process-flexible system like SUMMA [2]. [1] Best, M.J. et al. (2015) 'The plumbing of land surface models: benchmarking model performance', J. Hydrometeor. [2] Clark, M.P. et al. (2015) 'A unified approach for process-based hydrologic modeling: 1. Modeling concept', Water Resour. Res.
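
    The essence of the PLUMBER comparison is easy to reproduce in miniature: train an out-of-sample regression on meteorological forcing alone and ask whether a land model's flux simulation beats it. The sketch below uses synthetic data in place of tower observations and a real model.

```python
import numpy as np

# Sketch of a PLUMBER-style benchmark: compare a model's flux error against an
# out-of-sample linear regression trained only on met forcing. All arrays are
# synthetic stand-ins for tower forcing/flux data and a real land model.
rng = np.random.default_rng(0)
n = 1000
forcing = rng.normal(size=(n, 3))                  # e.g. SWdown, Tair, RH
flux = 2.0*forcing[:, 0] - 0.5*forcing[:, 1] + rng.normal(0.5, 1.0, n)
model_sim = flux + rng.normal(0.0, 1.5, n)         # pretend land-model output

half = n // 2                                      # train/test split
X_train = np.c_[np.ones(half), forcing[:half]]
coef, *_ = np.linalg.lstsq(X_train, flux[:half], rcond=None)
X_test = np.c_[np.ones(n - half), forcing[half:]]
bench_pred = X_test @ coef                         # regression benchmark

rmse = lambda a, b: np.sqrt(np.mean((a - b) ** 2))
print(f"regression benchmark RMSE: {rmse(bench_pred, flux[half:]):.2f}")
print(f"land model RMSE:           {rmse(model_sim[half:], flux[half:]):.2f}")
```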

  19. Development and Testing of Neutron Cross Section Covariance Data for SCALE 6.2

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Marshall, William BJ J; Williams, Mark L; Wiarda, Dorothea

    2015-01-01

    Neutron cross-section covariance data are essential for many sensitivity/uncertainty and uncertainty quantification assessments performed both within the TSUNAMI suite and more broadly throughout the SCALE code system. The release of ENDF/B-VII.1 included a more complete set of neutron cross-section covariance data: these data form the basis for a new cross-section covariance library to be released in SCALE 6.2. A range of testing is conducted to investigate the properties of these covariance data and ensure that the data are reasonable. These tests include examination of the uncertainty in critical experiment benchmark model keff values due to nuclear data uncertainties, as well as similarity assessments of irradiated pressurized water reactor (PWR) and boiling water reactor (BWR) fuel with suites of critical experiments. The contents of the new covariance library, the testing performed, and the behavior of the new covariance data are described in this paper. The neutron cross-section covariances can be combined with a sensitivity data file generated using the TSUNAMI suite of codes within SCALE to determine the uncertainty in system keff caused by nuclear data uncertainties. The Verified, Archived Library of Inputs and Data (VALID) maintained at Oak Ridge National Laboratory (ORNL) contains over 400 critical experiment benchmark models, and sensitivity data are generated for each of these models. The nuclear data uncertainty in keff is generated for each experiment, and the resulting uncertainties are tabulated and compared to the differences in measured and calculated results. The magnitude of the uncertainty for categories of nuclides (such as actinides, fission products, and structural materials) is calculated for irradiated PWR and BWR fuel to quantify the effect of covariance library changes between the SCALE 6.1 and 6.2 libraries. One of the primary applications of sensitivity/uncertainty methods within SCALE is the assessment of similarities between benchmark experiments and safety applications. This is described by a ck value for each experiment with each application. Several studies have analyzed typical ck values for a range of critical experiments compared with hypothetical irradiated fuel applications. The ck value is sensitive to the cross-section covariance data because the contribution of each nuclide is influenced by its uncertainty; large uncertainties indicate more likely bias sources and are thus given more weight. Changes in ck values resulting from different covariance data can be used to examine and assess underlying data changes. These comparisons are performed for PWR and BWR fuel in storage and transportation systems.
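
    The ck index mentioned above is the nuclear-data-induced correlation coefficient between an application response and an experiment, ck = Sa^T C Se / (sigma_a sigma_e). A minimal sketch with invented sensitivity vectors and a toy diagonal covariance:

```python
import numpy as np

# Sketch of the c_k similarity index: the nuclear-data-induced correlation
# between an application and an experiment. All values are invented placeholders.
C   = np.diag([0.0004, 0.0009, 0.0016])   # relative covariance (toy, diagonal)
S_a = np.array([0.7, -0.2, 0.2])          # application sensitivities
S_e = np.array([0.6, -0.1, 0.3])          # experiment sensitivities

cov_ae  = S_a @ C @ S_e                   # shared data-induced covariance
sigma_a = np.sqrt(S_a @ C @ S_a)
sigma_e = np.sqrt(S_e @ C @ S_e)
print(f"c_k = {cov_ae / (sigma_a * sigma_e):.3f}")   # 1.0 = fully similar
```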

  20. Implementation of Benchmarking Transportation Logistics Practices and Future Benchmarking Organizations

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Thrower, A.W.; Patric, J.; Keister, M.

    2008-07-01

    The purpose of the Office of Civilian Radioactive Waste Management's (OCRWM) Logistics Benchmarking Project is to identify established government and industry practices for the safe transportation of hazardous materials which can serve as a yardstick for design and operation of OCRWM's national transportation system for shipping spent nuclear fuel and high-level radioactive waste to the proposed repository at Yucca Mountain, Nevada. The project will present logistics and transportation practices and develop implementation recommendations for adaptation by the national transportation system. This paper will describe the process used to perform the initial benchmarking study, highlight interim findings, and explain how these findings are being implemented. It will also provide an overview of the next phase of benchmarking studies. The benchmarking effort will remain a high-priority activity throughout the planning and operational phases of the transportation system. The initial phase of the project focused on government transportation programs to identify those practices which are most clearly applicable to OCRWM. These Federal programs have decades of safe transportation experience, strive for excellence in operations, and implement effective stakeholder involvement, all of which parallel OCRWM's transportation mission and vision. The initial benchmarking project focused on four business processes that are critical to OCRWM's mission success, and can be incorporated into OCRWM planning and preparation in the near term. The processes examined were: transportation business model, contract management/out-sourcing, stakeholder relations, and contingency planning. More recently, OCRWM examined logistics operations of AREVA NC's Business Unit Logistics in France. The next phase of benchmarking will focus on integrated domestic and international commercial radioactive logistic operations. The prospective companies represent large-scale shippers and have vast experience in safely and efficiently shipping spent nuclear fuel and other radioactive materials. Additional business processes may be examined in this phase. The findings of these benchmarking efforts will help determine the organizational structure and requirements of the national transportation system. (authors)

  1. [The OPTIMISE study (Optimal Type 2 Diabetes Management Including Benchmarking and Standard Treatment). Results for Luxembourg].

    PubMed

    Michel, G

    2012-01-01

    The OPTIMISE study (NCT00681850) was run in six European countries, including Luxembourg, to prospectively assess the effect of benchmarking on the quality of primary care in patients with type 2 diabetes, using major modifiable vascular risk factors as critical quality indicators. Primary care centers treating type 2 diabetic patients were randomized to give standard care (control group) or standard care with feedback benchmarked against other centers in each country (benchmarking group). The primary endpoint was the percentage of patients in the benchmarking group achieving pre-set targets of the critical quality indicators, glycated hemoglobin (HbA1c), systolic blood pressure (SBP) and low-density lipoprotein (LDL) cholesterol, after 12 months of follow-up. In Luxembourg, more patients in the benchmarking group achieved the target for SBP (40.2% vs. 20%) and for LDL-cholesterol (50.4% vs. 44.2%). 12.9% of patients in the benchmarking group met all three targets, compared with 8.3% in the control group. In this randomized, controlled study, benchmarking was shown to be an effective tool for improving critical quality indicator targets, which are the principal modifiable vascular risk factors in type 2 diabetes.

  2. Leniency and Halo Bias in Industry-Based Assessments of Student Competencies: A Critical, Sector-Based Analysis

    ERIC Educational Resources Information Center

    Wolf, Katharina

    2015-01-01

    Industry placements are popular means to provide students with an opportunity to apply their skills, knowledge and experience in a "real world" setting. Within this context, supervisor feedback allows educators to measure students' performance beyond academic objectives, by benchmarking it against industry expectations. However, industry…

  3. Evaluating real-time Java for mission-critical large-scale embedded systems

    NASA Technical Reports Server (NTRS)

    Sharp, D. C.; Pla, E.; Luecke, K. R.; Hassan, R. J.

    2003-01-01

    This paper describes benchmarking results on an RT JVM. This paper extends previously published results by including additional tests, by being run on a recently available pre-release version of the first commercially supported RTSJ implementation, and by assessing results based on our experience with avionics systems in other languages.

  4. Shift Verification and Validation

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Pandya, Tara M.; Evans, Thomas M.; Davidson, Gregory G

    2016-09-07

    This documentation outlines the verification and validation of Shift for the Consortium for Advanced Simulation of Light Water Reactors (CASL). Five main types of problems were used for validation: small criticality benchmark problems; full-core reactor benchmarks for light water reactors; fixed-source coupled neutron-photon dosimetry benchmarks; depletion/burnup benchmarks; and full-core reactor performance benchmarks. We compared Shift results to measured data and to results from other Monte Carlo radiation transport codes, and found very good agreement across a variety of comparison measures. These include prediction of the critical eigenvalue, radial and axial pin power distributions, rod worth, leakage spectra, and nuclide inventories over a burn cycle. Based on this validation, we are confident that Shift can provide reference results for CASL benchmarking.

  5. Benchmarking the Physical Therapist Academic Environment to Understand the Student Experience.

    PubMed

    Shields, Richard K; Dudley-Javoroski, Shauna; Sass, Kelly J; Becker, Marcie

    2018-04-19

    Identifying excellence in physical therapist academic environments is complicated by the lack of nationally available benchmarking data. The objective of this study was to compare a physical therapist academic environment to another health care profession (medicine) academic environment using the Association of American Medical Colleges Graduation Questionnaire (GQ) survey. The design consisted of longitudinal benchmarking. Between 2009 and 2017, the GQ was administered to graduates of a physical therapist education program (Department of Physical Therapy and Rehabilitation Science, Carver College of Medicine, The University of Iowa [PTRS]). Their ratings of the educational environment were compared to nationwide data for a peer health care profession (medicine) educational environment. Benchmarking to the GQ capitalizes on a large, psychometrically validated database of academic domains that may be broadly applicable to health care education. The GQ captures critical information about the student experience (eg, faculty professionalism, burnout, student mistreatment) that can be used to characterize the educational environment. This study hypothesized that the ratings provided by 9 consecutive cohorts of PTRS students (n = 316) would reveal educational environment differences from academic medical education. PTRS students reported significantly higher ratings of the educational emotional climate and student-faculty interactions than medical students. PTRS and medical students did not differ on ratings of empathy and tolerance for ambiguity. PTRS students reported significantly lower ratings of burnout than medical students. PTRS students descriptively reported observing greater faculty professionalism and experiencing less mistreatment than medical students. The generalizability of these findings to other physical therapist education environments has not been established. Selected elements of the GQ survey revealed differences in the educational environments experienced by physical therapist students and medical students. All physical therapist academic programs should adopt a universal method to benchmark the educational environment to understand the student experience.

  6. A Critical Thinking Benchmark for a Department of Agricultural Education and Studies

    ERIC Educational Resources Information Center

    Perry, Dustin K.; Retallick, Michael S.; Paulsen, Thomas H.

    2014-01-01

    Due to an ever changing world where technology seemingly provides endless answers, today's higher education students must master a new skill set reflecting an emphasis on critical thinking, problem solving, and communications. The purpose of this study was to establish a departmental benchmark for critical thinking abilities of students majoring…

  7. Laboratory Instruction in the Service of Science Teaching and Learning: Reinventing and Reinvigorating the Laboratory Experience

    ERIC Educational Resources Information Center

    McComas, William

    2005-01-01

    The Benchmarks for Science Literacy and the National Science Education Standards strongly suggest that students should be engaged in hands-on learning. However, from many corners, the original "mental training" rationale for school labs has been criticized, the "cookbook" nature of laboratory exercises condemned, and the prevalence of using…

  8. Summer 2016

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Mendoza, Paul Michael

    2016-08-31

    The project aims to develop applications that automate MCNP criticality benchmark execution, create a dataset containing static benchmark information, combine MCNP output with that benchmark information, and fit and visually represent the data.
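
    A minimal sketch of the kind of pipeline this abstract describes, written in Python with pandas; the column names and example cases are hypothetical, and synthetic values stand in for real MCNP output:

        # Join static benchmark metadata with parsed MCNP k-eff results,
        # then derive a simple bias column for later fitting/plotting.
        # All names and values here are illustrative, not from the project.
        import pandas as pd

        benchmarks = pd.DataFrame({
            "case": ["heu-met-fast-001", "pu-sol-therm-011"],
            "benchmark_keff": [1.0000, 1.0000],   # evaluated benchmark values
        })
        results = pd.DataFrame({
            "case": ["heu-met-fast-001", "pu-sol-therm-011"],
            "mcnp_keff": [1.0003, 0.9985],        # parsed from MCNP output
            "mcnp_sigma": [0.0001, 0.0002],
        })
        merged = benchmarks.merge(results, on="case")
        merged["bias_pcm"] = (merged["mcnp_keff"] - merged["benchmark_keff"]) * 1e5
        print(merged)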

  9. Benchmarking of venous thromboembolism prophylaxis practice with ENT.UK guidelines.

    PubMed

    Al-Qahtani, Ali S

    2017-05-01

    The aim of this study was to benchmark our guidelines for prevention of venous thromboembolism (VTE) in the ENT surgical population against ENT.UK guidelines, and also to encourage healthcare providers to utilize benchmarking as an effective method of improving performance. The study design is a prospective descriptive analysis. The setting is a tertiary referral centre (Assir Central Hospital, Abha, Saudi Arabia). We benchmarked our practice guidelines for the prevention of VTE in the ENT surgical population against ENT.UK guidelines in order to identify and close any gaps. The ENT.UK guidelines of 2010 were downloaded from the ENT.UK website, and our guidelines were compared against them to determine whether our performance met or fell short of the ENT.UK standard, with immediate corrective action to be taken wherever a quality gap appeared. ENT.UK guidelines are evidence based and regularly updated, and may therefore serve as a role model for adoption and benchmarking. Our guidelines were accordingly amended to contain all factors required to provide a quality service to ENT surgical patients. Although not always given appropriate attention, benchmarking is a useful tool for improving the quality of health care. It allows learning from others' practices and experiences, and works toward closing any quality gaps. In addition, benchmarking clinical outcomes is critical for quality improvement and for informing decisions concerning service provision. It is recommended that benchmarking be included in the list of quality improvement methods for healthcare services.

  10. Validation of the WIMSD4M cross-section generation code with benchmark results

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Deen, J.R.; Woodruff, W.L.; Leal, L.E.

    1995-01-01

    The WIMSD4 code has been adopted for cross-section generation in support of the Reduced Enrichment Research and Test Reactor (RERTR) program at Argonne National Laboratory (ANL). Subsequently, the code has undergone several updates, and significant improvements have been achieved. The capability of generating group-collapsed micro- or macroscopic cross sections from the ENDF/B-V library and the more recent evaluation, ENDF/B-VI, in the ISOTXS format makes the modified version of the WIMSD4 code, WIMSD4M, very attractive, not only for the RERTR program, but also for the reactor physics community. The intent of the present paper is to validate the WIMSD4M cross-section libraries for reactor modeling of fresh water moderated cores. The results of calculations performed with multigroup cross-section data generated with the WIMSD4M code will be compared against experimental results. These results correspond to calculations carried out with thermal reactor benchmarks of the Oak Ridge National Laboratory (ORNL) unreflected HEU critical spheres, the TRX LEU critical experiments, and calculations of a modified Los Alamos HEU D{sub 2}O moderated benchmark critical system. The benchmark calculations were performed with the discrete-ordinates transport code, TWODANT, using WIMSD4M cross-section data. Transport calculations using the XSDRNPM module of the SCALE code system are also included. In addition to transport calculations, diffusion calculations with the DIF3D code were also carried out, since the DIF3D code is used in the RERTR program for reactor analysis and design. For completeness, Monte Carlo results of calculations performed with the VIM and MCNP codes are also presented.

  11. Validation of the WIMSD4M cross-section generation code with benchmark results

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Leal, L.C.; Deen, J.R.; Woodruff, W.L.

    1995-02-01

    The WIMSD4 code has been adopted for cross-section generation in support of the Reduced Enrichment for Research and Test (RERTR) program at Argonne National Laboratory (ANL). Subsequently, the code has undergone several updates, and significant improvements have been achieved. The capability of generating group-collapsed micro- or macroscopic cross sections from the ENDF/B-V library and the more recent evaluation, ENDF/B-VI, in the ISOTXS format makes the modified version of the WIMSD4 code, WIMSD4M, very attractive, not only for the RERTR program, but also for the reactor physics community. The intent of the present paper is to validate the procedure to generate cross-section libraries for reactor analyses and calculations utilizing the WIMSD4M code. To do so, the results of calculations performed with group cross-section data generated with the WIMSD4M code will be compared against experimental results. These results correspond to calculations carried out with thermal reactor benchmarks of the Oak Ridge National Laboratory (ORNL) unreflected critical spheres, the TRX critical experiments, and calculations of a modified Los Alamos highly-enriched heavy-water moderated benchmark critical system. The benchmark calculations were performed with the discrete-ordinates transport code, TWODANT, using WIMSD4M cross-section data. Transport calculations using the XSDRNPM module of the SCALE code system are also included. In addition to transport calculations, diffusion calculations with the DIF3D code were also carried out, since the DIF3D code is used in the RERTR program for reactor analysis and design. For completeness, Monte Carlo results of calculations performed with the VIM and MCNP codes are also presented.

  12. Ensuring the validity of calculated subcritical limits

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Clark, H.K.

    1977-01-01

    The care taken at the Savannah River Laboratory and Plant to ensure the validity of calculated subcritical limits is described. Close attention is given to ANSI N16.1-1975, "Validation of Calculational Methods for Nuclear Criticality Safety." The computer codes used for criticality safety computations, which are listed and briefly described, have been placed in the SRL JOSHUA system to facilitate calculation and to reduce input errors. A driver module, KOKO, simplifies and standardizes input and links the codes together in various ways. For any criticality safety evaluation, correlations of the calculational methods are made with experiment to establish bias. Occasionally subcritical experiments are performed expressly to provide benchmarks. Calculated subcritical limits contain an adequate but not excessive margin to allow for uncertainty in the bias. The final step in any criticality safety evaluation is the writing of a report describing the calculations and justifying the margin.

  13. Resonance Parameter Adjustment Based on Integral Experiments

    DOE PAGES

    Sobes, Vladimir; Leal, Luiz; Arbanas, Goran; ...

    2016-06-02

    Our project seeks to allow coupling of differential and integral data evaluation in a continuous-energy framework and to use the generalized linear least-squares (GLLS) methodology in the TSURFER module of the SCALE code package to update the parameters of a resolved resonance region evaluation. Recognizing that the GLLS methodology in TSURFER is identical to the mathematical description of a Bayesian update in SAMMY, the SAMINT code was created to use the mathematical machinery of SAMMY to update resolved resonance parameters based on integral data. Traditionally, SAMMY used differential experimental data to adjust nuclear data parameters. Integral experimental data, such as in the International Criticality Safety Benchmark Experiments Project, remain a tool for validation of completed nuclear data evaluations. SAMINT extracts information from integral benchmarks to aid the nuclear data evaluation process. Integral data can then be used to resolve any remaining ambiguity between differential data sets, highlight troublesome energy regions, determine key nuclear data parameters for integral benchmark calculations, and improve the nuclear data covariance matrix evaluation. SAMINT is not intended to bias nuclear data toward specific integral experiments but should be used to supplement the evaluation of differential experimental data. Using GLLS ensures that proper weight is given to the differential data.
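
    For orientation, the GLLS update invoked above has a standard closed form (general background, not reproduced from the paper): with prior parameters $x_0$ and covariance $C_0$, sensitivity matrix $S$, measured integral responses $m$ with covariance $V$, and calculated responses $t(x_0)$,

        $$x' = x_0 + C_0 S^{\top}\bigl(S C_0 S^{\top} + V\bigr)^{-1}\bigl(m - t(x_0)\bigr), \qquad C' = C_0 - C_0 S^{\top}\bigl(S C_0 S^{\top} + V\bigr)^{-1} S C_0,$$

    which is algebraically the linear Bayesian update, consistent with the TSURFER/SAMMY equivalence the abstract notes.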

  14. Benchmarking reference services: step by step.

    PubMed

    Buchanan, H S; Marshall, J G

    1996-01-01

    This article is a companion to an introductory article on benchmarking published in an earlier issue of Medical Reference Services Quarterly. Librarians interested in benchmarking often ask the following questions: How do I determine what to benchmark; how do I form a benchmarking team; how do I identify benchmarking partners; what's the best way to collect and analyze benchmarking information; and what will I do with the data? Careful planning is a critical success factor of any benchmarking project, and these questions must be answered before embarking on a benchmarking study. This article summarizes the steps necessary to conduct benchmarking research. Relevant examples of each benchmarking step are provided.

  15. A comprehensive benchmarking study of protocols and sequencing platforms for 16S rRNA community profiling

    DOE PAGES

    Podar, Mircea; Shakya, Migun; D'Amore, Rosalinda; ...

    2016-01-14

    In the last 5 years, the rapid pace of innovations and improvements in sequencing technologies has completely changed the landscape of metagenomic and metagenetic experiments. Therefore, it is critical to benchmark the various methodologies for interrogating the composition of microbial communities, so that we can assess their strengths and limitations. Here, the most common phylogenetic marker for microbial community diversity studies is the 16S ribosomal RNA gene, and in the last 10 years the field has moved from sequencing a small number of amplicons and samples to more complex studies where thousands of samples and multiple different gene regions are interrogated.

  16. Thought Experiment to Examine Benchmark Performance for Fusion Nuclear Data

    NASA Astrophysics Data System (ADS)

    Murata, Isao; Ohta, Masayuki; Kusaka, Sachie; Sato, Fuminobu; Miyamaru, Hiroyuki

    2017-09-01

    There are many benchmark experiments carried out so far with DT neutrons, especially aiming at fusion reactor development. These integral experiments have seemed, loosely, to validate the nuclear data below 14 MeV, but no precise study of that benchmarking power exists. The author's group therefore started to examine how well benchmark experiments with DT neutrons can play a benchmarking role for energies below 14 MeV. Recently, as a next phase, the energy range was expanded to the entire region in order to generalize the discussion. In this study, thought experiments with finer energy bins have thus been conducted to discuss how to estimate the performance of benchmark experiments in general. As a result of thought experiments with a point detector, the sensitivity to a discrepancy appearing in the benchmark analysis is due not only to the contribution conveyed directly to the detector, but equally to the indirect contribution of the neutrons (A) that produce the neutrons carrying the direct contribution, the indirect contribution of the neutrons (B) that produce the neutrons (A), and so on. From this concept, a sensitivity analysis performed in advance would make clear how well, and at which energies, nuclear data could be benchmarked with a given benchmark experiment.

  17. Criticality experiments and benchmarks for cross section evaluation: the neptunium case

    NASA Astrophysics Data System (ADS)

    Leong, L. S.; Tassan-Got, L.; Audouin, L.; Paradela, C.; Wilson, J. N.; Tarrio, D.; Berthier, B.; Duran, I.; Le Naour, C.; Stéphan, C.

    2013-03-01

    The 237Np neutron-induced fission cross section has recently been measured over a large energy range (from eV to GeV) at the n_TOF facility at CERN. Compared to previous measurements, the n_TOF fission cross section appears to be higher by 5-7% beyond the fission threshold. To check the relevance of the n_TOF data, we analyzed a criticality experiment performed at Los Alamos with a 6 kg sphere of 237Np, surrounded by enriched uranium (235U) so as to approach criticality with fast neutrons. The calculated multiplication factor keff is in better agreement with the experiment (the deviation of 750 pcm is reduced to 250 pcm) when the ENDF/B-VII.0 evaluation of the 237Np fission cross section is replaced by the n_TOF data. We also explored the hypothesis, invoked by some authors to explain the 750 pcm deviation, of deficiencies in the inelastic cross section of 235U; the large distortion of the inelastic cross section that this would require is incompatible with existing measurements. We likewise show that the nu-bar of 237Np can hardly be incriminated, because of the high accuracy of the existing data. Fission rate ratios and averaged fission cross sections measured in several fast neutron fields seem to give contradictory results on the validation of the 237Np cross section, but at least one of the benchmark experiments, in which the active deposits were well calibrated for the number of atoms, favors the n_TOF data set. These outcomes support the hypothesis of a higher fission cross section of 237Np.
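
    As a unit note (not part of the paper itself): 1 pcm (per cent mille) is $10^{-5}$ in $k_{\mathrm{eff}}$, so the 750 pcm deviation quoted above corresponds to $\Delta k \approx 0.0075$, and its reduction to 250 pcm to $\Delta k \approx 0.0025$.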

  18. The Department of Energy Nuclear Criticality Safety Program

    NASA Astrophysics Data System (ADS)

    Felty, James R.

    2005-05-01

    This paper broadly covers key events and activities from which the Department of Energy Nuclear Criticality Safety Program (NCSP) evolved. The NCSP maintains fundamental infrastructure that supports operational criticality safety programs. This infrastructure includes continued development and maintenance of key calculational tools, differential and integral data measurements, benchmark compilation, development of training resources, hands-on training, and web-based systems to enhance information preservation and dissemination. The NCSP was initiated in response to Defense Nuclear Facilities Safety Board Recommendation 97-2, Criticality Safety, and evolved from a predecessor program, the Nuclear Criticality Predictability Program, that was initiated in response to Defense Nuclear Facilities Safety Board Recommendation 93-2, The Need for Critical Experiment Capability. This paper also discusses the role Dr. Sol Pearlstein played in helping the Department of Energy lay the foundation for a robust and enduring criticality safety infrastructure.

  19. The MIRTE Experimental Program: An Opportunity to Test Structural Materials in Various Configurations in Thermal Energy Spectrum

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Leclaire, Nicolas; Le Dauphin, Francois-Xavier; Duhamel, Isabelle

    2014-11-04

    The MIRTE (Materials in Interacting and Reflecting configurations, all Thicknesses) program was established to answer the needs of criticality safety practitioners for experimental validation of structural materials and possibly to contribute to nuclear data improvement, which ultimately supports reactor safety analysis as well. MIRTE took the shape of a collaboration between the French industrial partners AREVA and ANDRA and a noncommercial international funding partner, the U.S. Department of Energy. The aim of this paper is to present the configurations of the MIRTE 1 and MIRTE 2 programs and to highlight the results of the titanium experiments recently published in the International Handbook of Evaluated Criticality Safety Benchmark Experiments.

  20. A review of the current state-of-the-art methodology for handling bias and uncertainty in performing criticality safety evaluations. Final report

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Disney, R.K.

    1994-10-01

    The methodology for handling bias and uncertainty when calculational methods are used in criticality safety evaluations (CSEs) is a rapidly evolving technology. The changes in the methodology are driven by a number of factors. One factor responsible for changes in the methodology for handling bias and uncertainty in CSEs within the overview of the US Department of Energy (DOE) is a shift in the overview function from a "site" perception to a more uniform or "national" perception. Other causes for change or improvement in the methodology for handling calculational bias and uncertainty are: (1) an increased demand for benchmark criticals data to expand the area (range) of applicability of existing data, (2) a demand for new data to supplement existing benchmark criticals data, (3) the increased reliance on (or need for) computational benchmarks which supplement (or replace) experimental measurements in critical assemblies, and (4) an increased demand for benchmark data applicable to the expanded range of conditions and configurations encountered in DOE site restoration and remediation.

  1. Modeling of turbulent separated flows for aerodynamic applications

    NASA Technical Reports Server (NTRS)

    Marvin, J. G.

    1983-01-01

    Steady, high speed, compressible separated flows modeled through numerical simulations resulting from solutions of the mass-averaged Navier-Stokes equations are reviewed. Emphasis is placed on benchmark flows that represent simplified (but realistic) aerodynamic phenomena. These include impinging shock waves, compression corners, glancing shock waves, trailing edge regions, and supersonic high angle of attack flows. A critical assessment of modeling capabilities is provided by comparing the numerical simulations with experiment. The importance of combining experiment, numerical algorithm, grid, and turbulence model to effectively develop this potentially powerful simulation technique is stressed.

  2. Validation of light water reactor calculation methods and JEF-1-based data libraries by TRX and BAPL critical experiments

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Paratte, J.M.; Pelloni, S.; Grimm, P.

    1991-04-01

    This paper analyzes the capability of various code systems and JEF-1-based nuclear data libraries to compute light water reactor lattices by comparing calculations with results from thermal reactor benchmark experiments TRX and BAPL and with previously published values. With the JEF-1 evaluation, eigenvalues are generally well predicted within 8 mk (1 mk = 0.001) or less by all code systems, and all methods give reasonable results for the measured reaction rate ratios within, or not too far from, the experimental uncertainty.

  3. Benchmark for Strategic Performance Improvement.

    ERIC Educational Resources Information Center

    Gohlke, Annette

    1997-01-01

    Explains benchmarking, a total quality management tool used to measure and compare the work processes in a library with those in other libraries to increase library performance. Topics include the main groups of upper management, clients, and staff; critical success factors for each group; and benefits of benchmarking. (Author/LRW)

  4. Assessing the critical thinking skills of faculty: What do the findings mean for nursing education?

    PubMed

    Zygmont, Dolores M; Schaefer, Karen Moore

    2006-01-01

    The purpose of this study was twofold: to determine the critical thinking skills of nurse faculty and to examine the relationship between epistemological position and critical thinking. Most participants reported having no education on critical thinking. Data were collected using the California Critical Thinking Skills Test (CCTST) and the Learning Environment Preferences (LEP). Findings from the CCTST indicated that faculty varied considerably in their ability to think critically; LEP findings suggested that participants had not reached the intellectual level needed for critical thinking. In addition, 12 faculty participated in one-hour telephone interviews in which they described experiences in which students demonstrated critical thinking. Despite a lack of clarity on the definition of critical thinking, faculty described clinical examples where students engaged in analysis, inference, and evaluation. Based on these findings, it is recommended that faculty transfer their ability to engage students in critical thinking in the clinical setting to the classroom setting. Benchmarks can be established based on the ability of faculty to engage in critical thinking.

  5. Use of the results of measurements at the KBR facility for testing neutron data of the main structural materials for fast reactors

    NASA Astrophysics Data System (ADS)

    Koscheev, Vladimir; Manturov, Gennady; Pronyaev, Vladimir; Rozhikhin, Evgeny; Semenov, Mikhail; Tsibulya, Anatoly

    2017-09-01

    Several k∞ experiments were performed at the KBR critical facility at the Institute of Physics and Power Engineering (IPPE), Obninsk, Russia, during the 1970s and 80s to study the neutron absorption properties of Cr, Mn, Fe, Ni, Zr, and Mo. Calculations of these benchmarks with almost any modern evaluated nuclear data library demonstrate poor agreement with experiment. Neutron capture cross sections of the odd isotopes of Cr, Mn, Fe, and Ni in the ROSFOND-2010 library have been reevaluated, and another evaluation of the Zr nuclear data has been adopted. Use of the modified nuclear data for Cr, Mn, Fe, Ni, and Zr leads to significant improvement of the C/E ratio for the KBR assemblies. A significant improvement in agreement between calculated and evaluated values for benchmarks with Fe reflectors was also observed. C/E results obtained with the modified ROSFOND library for complex benchmark models that are highly sensitive to the cross sections of structural materials are no worse than results obtained with other major evaluated data libraries. A possible further improvement, by decreasing the capture cross sections of Zr and Mo at energies above 1 keV, is indicated.
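
    For readers outside the field (a standard convention, not specific to this paper): C/E is the calculated-to-experimental ratio, e.g. $C/E = k_{\infty}^{\mathrm{calc}}/k_{\infty}^{\mathrm{exp}}$; C/E = 1 indicates perfect agreement, and $(C/E - 1)\times 10^{5}$ expresses the discrepancy in pcm.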

  6. Benchmarking is associated with improved quality of care in type 2 diabetes: the OPTIMISE randomized, controlled trial.

    PubMed

    Hermans, Michel P; Elisaf, Moses; Michel, Georges; Muls, Erik; Nobels, Frank; Vandenberghe, Hans; Brotons, Carlos

    2013-11-01

    To assess prospectively the effect of benchmarking on quality of primary care for patients with type 2 diabetes by using three major modifiable cardiovascular risk factors as critical quality indicators. Primary care physicians treating patients with type 2 diabetes in six European countries were randomized to give standard care (control group) or standard care with feedback benchmarked against other centers in each country (benchmarking group). In both groups, laboratory tests were performed every 4 months. The primary end point was the percentage of patients achieving preset targets of the critical quality indicators HbA1c, LDL cholesterol, and systolic blood pressure (SBP) after 12 months of follow-up. Of 4,027 patients enrolled, 3,996 patients were evaluable and 3,487 completed 12 months of follow-up. Primary end point of HbA1c target was achieved in the benchmarking group by 58.9 vs. 62.1% in the control group (P = 0.398) after 12 months; 40.0 vs. 30.1% patients met the SBP target (P < 0.001); 54.3 vs. 49.7% met the LDL cholesterol target (P = 0.006). Percentages of patients meeting all three targets increased during the study in both groups, with a statistically significant increase observed in the benchmarking group. The percentage of patients achieving all three targets at month 12 was significantly larger in the benchmarking group than in the control group (12.5 vs. 8.1%; P < 0.001). In this prospective, randomized, controlled study, benchmarking was shown to be an effective tool for increasing achievement of critical quality indicators and potentially reducing patient cardiovascular residual risk profile.

  7. Benchmarking Is Associated With Improved Quality of Care in Type 2 Diabetes

    PubMed Central

    Hermans, Michel P.; Elisaf, Moses; Michel, Georges; Muls, Erik; Nobels, Frank; Vandenberghe, Hans; Brotons, Carlos

    2013-01-01

    OBJECTIVE To assess prospectively the effect of benchmarking on quality of primary care for patients with type 2 diabetes by using three major modifiable cardiovascular risk factors as critical quality indicators. RESEARCH DESIGN AND METHODS Primary care physicians treating patients with type 2 diabetes in six European countries were randomized to give standard care (control group) or standard care with feedback benchmarked against other centers in each country (benchmarking group). In both groups, laboratory tests were performed every 4 months. The primary end point was the percentage of patients achieving preset targets of the critical quality indicators HbA1c, LDL cholesterol, and systolic blood pressure (SBP) after 12 months of follow-up. RESULTS Of 4,027 patients enrolled, 3,996 patients were evaluable and 3,487 completed 12 months of follow-up. Primary end point of HbA1c target was achieved in the benchmarking group by 58.9 vs. 62.1% in the control group (P = 0.398) after 12 months; 40.0 vs. 30.1% patients met the SBP target (P < 0.001); 54.3 vs. 49.7% met the LDL cholesterol target (P = 0.006). Percentages of patients meeting all three targets increased during the study in both groups, with a statistically significant increase observed in the benchmarking group. The percentage of patients achieving all three targets at month 12 was significantly larger in the benchmarking group than in the control group (12.5 vs. 8.1%; P < 0.001). CONCLUSIONS In this prospective, randomized, controlled study, benchmarking was shown to be an effective tool for increasing achievement of critical quality indicators and potentially reducing patient cardiovascular residual risk profile. PMID:23846810

  8. Testing variations of the GW approximation on strongly correlated transition metal oxides: hematite (α-Fe2O3) as a benchmark.

    PubMed

    Liao, Peilin; Carter, Emily A

    2011-09-07

    Quantitative characterization of low-lying excited electronic states in materials is critical for the development of solar energy conversion materials. The many-body Green's function method known as the GW approximation (GWA) directly probes states corresponding to photoemission and inverse photoemission experiments, thereby determining the associated band structure. Several versions of the GW approximation with different levels of self-consistency exist in the field. While the GWA based on density functional theory (DFT) works well for conventional semiconductors, less is known about its reliability for strongly correlated semiconducting materials. Here we present a systematic study of the GWA using hematite (α-Fe2O3) as the benchmark material. We analyze its performance in terms of the calculated photoemission/inverse photoemission band gaps, densities of states, and dielectric functions. Overall, a non-self-consistent G0W0 using input from DFT+U theory produces physical observables in best agreement with experiments.
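
    For orientation (standard GW background, not specific to this study): the GWA corrects Kohn-Sham eigenvalues through the quasiparticle equation

        $$E_n^{\mathrm{QP}} = \varepsilon_n^{\mathrm{KS}} + \bigl\langle \psi_n \bigr| \Sigma\!\bigl(E_n^{\mathrm{QP}}\bigr) - v_{xc} \bigl| \psi_n \bigr\rangle,$$

    where the self-energy $\Sigma = iGW$ is built from the one-particle Green's function $G$ and the screened Coulomb interaction $W$. In the non-self-consistent $G_0W_0$ variant that performed best here, both $G$ and $W$ are constructed once from the DFT+U ground state and never updated.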

  9. EBR-II Reactor Physics Benchmark Evaluation Report

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Pope, Chad L.; Lum, Edward S; Stewart, Ryan

    This report provides a reactor physics benchmark evaluation, with associated uncertainty quantification, for the critical configuration of the April 1986 Experimental Breeder Reactor II (EBR-II) Run 138B core.

  10. Benchmarks and Quality Assurance for Online Course Development in Higher Education

    ERIC Educational Resources Information Center

    Wang, Hong

    2008-01-01

    As online education has entered the mainstream of U.S. higher education, quality assurance in online course development has become a critical topic in distance education. This short article summarizes the major benchmarks related to online course development, listing and comparing the benchmarks of the National Education Association (NEA),…

  11. Experimental physics characteristics of a heavy-metal-reflected fast-spectrum critical assembly

    NASA Technical Reports Server (NTRS)

    Heneveld, W. H.; Paschall, R. K.; Springer, T. H.; Swanson, V. A.; Thiele, A. W.; Tuttle, R. J.

    1971-01-01

    A zero-power critical assembly was designed, constructed, and operated for the purpose of conducting a series of benchmark experiments dealing with the physics characteristics of a UN-fueled, Li-7 cooled, Mo-reflected, drum-controlled compact fast reactor for use with a space-power electric conversion system. The experimental program consisted basically of measuring the differential neutron spectra and the changes in critical mass that accompanied the stepwise addition of (Li-7)3N, Hf, Ta, and W to a basic core fueled with U metal in a pin-type Ta honeycomb structure. In addition, experimental results were obtained on power distributions, control characteristics, neutron lifetime, and reactivity worths of numerous absorber, structural, and scattering materials.

  12. Optimal type 2 diabetes mellitus management: the randomised controlled OPTIMISE benchmarking study: baseline results from six European countries.

    PubMed

    Hermans, Michel P; Brotons, Carlos; Elisaf, Moses; Michel, Georges; Muls, Erik; Nobels, Frank

    2013-12-01

    Micro- and macrovascular complications of type 2 diabetes have an adverse impact on survival, quality of life and healthcare costs. The OPTIMISE (OPtimal Type 2 dIabetes Management Including benchmarking and Standard trEatment) trial comparing physicians' individual performances with a peer group evaluates the hypothesis that benchmarking, using assessments of change in three critical quality indicators of vascular risk: glycated haemoglobin (HbA1c), low-density lipoprotein-cholesterol (LDL-C) and systolic blood pressure (SBP), may improve quality of care in type 2 diabetes in the primary care setting. This was a randomised, controlled study of 3980 patients with type 2 diabetes. Six European countries participated in the OPTIMISE study (NCT00681850). Quality of care was assessed by the percentage of patients achieving pre-set targets for the three critical quality indicators over 12 months. Physicians were randomly assigned to receive either benchmarked or non-benchmarked feedback. All physicians received feedback on six of their patients' modifiable outcome indicators (HbA1c, fasting glycaemia, total cholesterol, high-density lipoprotein-cholesterol (HDL-C), LDL-C and triglycerides). Physicians in the benchmarking group additionally received information on levels of control achieved for the three critical quality indicators compared with colleagues. At baseline, the percentage of evaluable patients (N = 3980) achieving pre-set targets was 51.2% (HbA1c; n = 2028/3964); 34.9% (LDL-C; n = 1350/3865); 27.3% (systolic blood pressure; n = 911/3337). OPTIMISE confirms that target achievement in the primary care setting is suboptimal for all three critical quality indicators. This represents an unmet but modifiable need to revisit the mechanisms and management of improving care in type 2 diabetes. OPTIMISE will help to assess whether benchmarking is a useful clinical tool for improving outcomes in type 2 diabetes.

  13. HTR-PROTEUS Pebble Bed Experimental Program Cores 1, 1A, 2, and 3: Hexagonal Close Packing with a 1:2 Moderator-to-Fuel Pebble Ratio

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    John D. Bess; Barbara H. Dolphin; James W. Sterbentz

    2013-03-01

    In its deployment as a pebble bed reactor (PBR) critical facility from 1992 to 1996, the PROTEUS facility was designated as HTR-PROTEUS. This experimental program was performed as part of an International Atomic Energy Agency (IAEA) Coordinated Research Project (CRP) on the Validation of Safety Related Physics Calculations for Low Enriched HTGRs. Within this project, critical experiments were conducted for graphite moderated LEU systems to determine core reactivity, flux and power profiles, reaction-rate ratios, the worth of control rods, both in-core and reflector based, the worth of burnable poisons, kinetic parameters, and the effects of moisture ingress on these parameters. Four benchmark experiments were evaluated in this report: Cores 1, 1A, 2, and 3. These core configurations represent the hexagonal close packing (HCP) configurations of the HTR-PROTEUS experiment with a moderator-to-fuel pebble ratio of 1:2. Core 1 represents the only configuration utilizing ZEBRA control rods. Cores 1A, 2, and 3 use withdrawable, hollow, stainless steel control rods. Cores 1 and 1A are similar except for the use of different control rods; Core 1A also has one less layer of pebbles (21 layers instead of 22). Core 2 retains the first 16 layers of pebbles from Cores 1 and 1A and has 16 layers of moderator pebbles stacked above the fueled layers. Core 3 retains the first 17 layers of pebbles but has polyethylene rods inserted between pebbles to simulate water ingress. The additional partial pebble layer (layer 18) for Core 3 was not included as it was used for core operations and not the reported critical configuration. Cores 1, 1A, 2, and 3 were determined to be acceptable benchmark experiments.

  14. HTR-PROTEUS Pebble Bed Experimental Program Cores 1, 1A, 2, and 3: Hexagonal Close Packing with a 1:2 Moderator-to-Fuel Pebble Ratio

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    John D. Bess; Barbara H. Dolphin; James W. Sterbentz

    2012-03-01

    In its deployment as a pebble bed reactor (PBR) critical facility from 1992 to 1996, the PROTEUS facility was designated as HTR-PROTEUS. This experimental program was performed as part of an International Atomic Energy Agency (IAEA) Coordinated Research Project (CRP) on the Validation of Safety Related Physics Calculations for Low Enriched HTGRs. Within this project, critical experiments were conducted for graphite moderated LEU systems to determine core reactivity, flux and power profiles, reaction-rate ratios, the worth of control rods, both in-core and reflector based, the worth of burnable poisons, kinetic parameters, and the effects of moisture ingress on these parameters. Four benchmark experiments were evaluated in this report: Cores 1, 1A, 2, and 3. These core configurations represent the hexagonal close packing (HCP) configurations of the HTR-PROTEUS experiment with a moderator-to-fuel pebble ratio of 1:2. Core 1 represents the only configuration utilizing ZEBRA control rods. Cores 1A, 2, and 3 use withdrawable, hollow, stainless steel control rods. Cores 1 and 1A are similar except for the use of different control rods; Core 1A also has one less layer of pebbles (21 layers instead of 22). Core 2 retains the first 16 layers of pebbles from Cores 1 and 1A and has 16 layers of moderator pebbles stacked above the fueled layers. Core 3 retains the first 17 layers of pebbles but has polyethylene rods inserted between pebbles to simulate water ingress. The additional partial pebble layer (layer 18) for Core 3 was not included as it was used for core operations and not the reported critical configuration. Cores 1, 1A, 2, and 3 were determined to be acceptable benchmark experiments.

  15. 2 x 2 Polyethylene Reflected and Moderated Highly Enriched Uranium System with Rhenium

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    A. Nichole Ellis; Jesson Hutchinson; John D. Bess

    2010-09-01

    The 2 × 2 array HEU-Re experiment was performed on the Planet universal critical assembly machine on November 4, 2003, at the Los Alamos Critical Experiments Facility (LACEF) at Los Alamos National Laboratory (LANL). For this experiment there were 10½ units, each full unit containing four HEU foils and two rhenium foils; the top unit contained only two HEU foils and two rhenium foils. A total of 42 HEU foils were used. Rhenium is a desirable cladding material for space nuclear power applications. The experiment consisted of HEU foils interleaved with rhenium foils, moderated and reflected by polyethylene plates. A unit consisted of a polyethylene plate, which has a recess for rhenium foils, and four HEU foils in a single layer in the top recess of each polyethylene plate. The Planet universal critical assembly machine has previously been used in experiments containing HEU foils interspersed with SiO2 (HEU-MET-THERM-001), Al (HEU-MET-THERM-008), MgO (HEU-MET-THERM-009), Gd foils (HEU-MET-THERM-010), 2 × 2 × 26 Al (HEU-MET-THERM-012), Fe (HEU-MET-THERM-013 and HEU-MET-THERM-015), 2 × 2 × 23 SiO2 (HEU-MET-THERM-014), 2 × 2 × 11 Hastelloy plates (HEU-MET-THERM-016), and concrete (HEU-MET-THERM-018). The 2 × 2 array of HEU-Re is considered acceptable for use as a benchmark critical experiment.

  16. Higher Education Ranking and Leagues Tables: Lessons Learned from Benchmarking

    ERIC Educational Resources Information Center

    Proulx, Roland

    2007-01-01

    The paper intends to contribute to the debate on ranking and league tables by adopting a critical approach to ranking methodologies from the point of view of a university benchmarking exercise. The absence of a strict benchmarking exercise in the ranking process has been, in the opinion of the author, one of the major problems encountered in the…

  17. Integral Full Core Multi-Physics PWR Benchmark with Measured Data

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Forget, Benoit; Smith, Kord; Kumar, Shikhar

    In recent years, the importance of modeling and simulation has been highlighted extensively in the DOE research portfolio, with concrete examples in nuclear engineering in the CASL and NEAMS programs. These research efforts and similar efforts worldwide aim at the development of high-fidelity multi-physics analysis tools for the simulation of current and next-generation nuclear power reactors. As with all analysis tools, verification and validation are essential to guarantee proper functioning of the software and methods employed. The current approach relies mainly on the validation of single-physics phenomena (e.g., critical experiments, flow loops, etc.), and there is a lack of the relevant multiphysics benchmark measurements that are necessary to validate the high-fidelity methods being developed today. This work introduces a new multi-cycle full-core Pressurized Water Reactor (PWR) depletion benchmark based on two operational cycles of a commercial nuclear power plant that provides a detailed description of fuel assemblies, burnable absorbers, in-core fission detectors, core loading and re-loading patterns. This benchmark enables analysts to develop extremely detailed reactor core models that can be used for testing and validation of coupled neutron transport, thermal-hydraulics, and fuel isotopic depletion. The benchmark also provides measured reactor data for Hot Zero Power (HZP) physics tests, boron letdown curves, and three-dimensional in-core flux maps from 58 instrumented assemblies. The benchmark description is now available online and has been used by many groups. However, much work remains to be done on the quantification of uncertainties and modeling sensitivities. This work aims to address these deficiencies and make this benchmark a true non-proprietary international benchmark for the validation of high-fidelity tools. This report details the BEAVRS uncertainty quantification for the first two cycles of operation and serves as the final report of the project.

  18. Scale-4 Analysis of Pressurized Water Reactor Critical Configurations: Volume 2-Sequoyah Unit 2 Cycle 3

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Bowman, S.M.

    1995-01-01

    The requirements of ANSI/ANS 8.1 specify that calculational methods for away-from-reactor criticality safety analyses be validated against experimental measurements. If credit for the negative reactivity of the depleted (or spent) fuel isotopics is desired, it is necessary to benchmark computational methods against spent fuel critical configurations. This report summarizes a portion of the ongoing effort to benchmark away-from-reactor criticality analysis methods using critical configurations from commercial pressurized-water reactors. The analysis methodology selected for all the calculations reported herein is based on the codes and data provided in the SCALE-4 code system. The isotopic densities for the spent fuel assemblies in the critical configurations were calculated using the SAS2H analytical sequence of the SCALE-4 system. The sources of data and the procedures for deriving SAS2H input parameters are described in detail. The SNIKR code module was used to extract the necessary isotopic densities from the SAS2H results and provide the data in the format required by the SCALE criticality analysis modules. The CSASN analytical sequence in SCALE-4 was used to perform resonance processing of the cross sections. The KENO V.a module of SCALE-4 was used to calculate the effective multiplication factor (k{sub eff}) of each case. The SCALE-4 27-group burnup library containing ENDF/B-IV (actinides) and ENDF/B-V (fission products) data was used for all the calculations. This volume of the report documents the SCALE system analysis of three reactor critical configurations for Sequoyah Unit 2 Cycle 3. This unit and cycle were chosen because of their relevance to spent fuel benchmark applications: (1) the unit had a significantly long downtime of 2.7 years during the middle of cycle (MOC) 3, and (2) the core consisted entirely of burned fuel at the MOC restart. The first benchmark critical calculation was the MOC restart at hot, full-power (HFP) critical conditions. The other two benchmark critical calculations were the beginning-of-cycle (BOC) startup at both hot, zero-power (HZP) and HFP critical conditions. These latter calculations were used to check for consistency in the calculated results for different burnups and downtimes. The k{sub eff} results were in the range of 1.00014 to 1.00259 with a standard deviation of less than 0.001.
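
    Condensed into one line, the analysis sequence described above (a reading aid drawn from the abstract, not additional methodology) is:

        SAS2H (assembly depletion) -> SNIKR (extract/format spent-fuel isotopics)
            -> CSASN (resonance processing) -> KENO V.a (Monte Carlo k{sub eff}),

    all built on the SCALE-4 27-group ENDF/B-IV/V burnup library.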

  19. DOE Office of Scientific and Technical Information (OSTI.GOV)

    MOSTELLER, RUSSELL D.

    Previous studies have indicated that ENDF/B-VII preliminary releases {beta}-2 and {beta}-3, predecessors to the recent initial release of ENDF/B-VII.0, produce significantly better overall agreement with criticality benchmarks than does ENDF/B-VI. However, one of those studies also suggests that improvements still may be needed for thermal plutonium cross sections. The current study substantiates that concern by examining criticality benchmarks for unreflected spheres of plutonium-nitrate solutions and for slightly and heavily borated mixed-oxide (MOX) lattices. Results are presented for the JEFF-3.1 and JENDL-3.3 nuclear data libraries as well as ENDF/B-VII.0 and ENDF/B-VI. It is shown that ENDF/B-VII.0 tends to overpredict reactivity for thermal plutonium benchmarks over at least a portion of the thermal range. In addition, it is found that additional benchmark data are needed for the deep thermal range.

  20. Seismo-acoustic ray model benchmarking against experimental tank data.

    PubMed

    Camargo Rodríguez, Orlando; Collis, Jon M; Simpson, Harry J; Ey, Emanuel; Schneiderwind, Joseph; Felisberto, Paulo

    2012-08-01

    Acoustic predictions of the recently developed TRACEO ray model, which accounts for bottom shear properties, are benchmarked against tank data from the EPEE-1 and EPEE-2 (Elastic Parabolic Equation Experiment) experiments. Both experiments are representative of signal propagation in a Pekeris-like shallow-water waveguide over a non-flat isotropic elastic bottom, where significant interaction of the signal with the bottom can be expected. The benchmarks show, in particular, that the ray model can be as accurate as a parabolic equation model benchmarked under similar conditions. The results of the benchmarking are important both as a preliminary experimental validation of the model and as a demonstration of the reliability of the ray approach for seismo-acoustic applications.

  1. Measuring Distribution Performance? Benchmarking Warrants Your Attention

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Ericson, Sean J; Alvarez, Paul

    Identifying, designing, and measuring performance metrics is critical to securing customer value, but can be a difficult task. This article examines the use of benchmarks based on publicly available performance data to set challenging, yet fair, metrics and targets.

  2. Analysis of benchmark critical experiments with ENDF/B-VI data sets

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Hardy, J. Jr.; Kahler, A.C.

    1991-12-31

    Several clean critical experiments were analyzed with ENDF/B-VI data to assess the adequacy of the data for U{sup 235}, U{sup 238} and oxygen. These experiments were (1) a set of homogeneous U{sup 235}-H{sub 2}O assemblies spanning a wide range of hydrogen/uranium ratio, and (2) TRX-1, a simple, H{sub 2}O-moderated Bettis lattice of slightly-enriched uranium metal rods. The analyses used the Monte Carlo program RCP01, with explicit three-dimensional geometry and detailed representation of cross sections. For the homogeneous criticals, calculated k{sub crit} values for large, thermal assemblies show good agreement with experiment. This supports the evaluated thermal criticality parameters for U{sup 235}. However, for assemblies with smaller H/U ratios, k{sub crit} values increase significantly with increasing leakage and flux-spectrum hardness. These trends suggest that leakage is underpredicted and that the resonance eta of the ENDF/B-VI U{sup 235} is too large. For TRX-1, reasonably good agreement is found with measured lattice parameters (reaction-rate ratios). Of primary interest is rho28, the ratio of above-thermal to thermal U{sup 238} capture. Calculated rho28 is 2.3 ({+-} 1.7) % above measurement, suggesting that U{sup 238} resonance capture remains slightly overpredicted with ENDF/B-VI. However, agreement is better than observed with earlier versions of ENDF/B.

  3. Uncertainty Quantification Techniques of SCALE/TSUNAMI

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Rearden, Bradley T; Mueller, Don

    2011-01-01

    The Standardized Computer Analysis for Licensing Evaluation (SCALE) code system developed at Oak Ridge National Laboratory (ORNL) includes Tools for Sensitivity and Uncertainty Analysis Methodology Implementation (TSUNAMI). The TSUNAMI code suite can quantify the predicted change in system responses, such as k{sub eff}, reactivity differences, or ratios of fluxes or reaction rates, due to changes in the energy-dependent, nuclide-reaction-specific cross-section data. Where uncertainties in the neutron cross-section data are available, the sensitivity of the system to the cross-section data can be applied to propagate the uncertainties in the cross-section data to an uncertainty in the system response. Uncertainty quantification is useful for identifying potential sources of computational biases and highlighting parameters important to code validation. Traditional validation techniques often examine one or more average physical parameters to characterize a system and identify applicable benchmark experiments. However, with TSUNAMI, correlation coefficients are developed by propagating the uncertainties in neutron cross-section data to uncertainties in the computed responses for experiments and safety applications through sensitivity coefficients. The bias in the experiments, as a function of their correlation coefficient with the intended application, is extrapolated to predict the bias and bias uncertainty in the application through trending analysis or generalized linear least squares techniques, often referred to as 'data adjustment.' Even with advanced tools to identify benchmark experiments, analysts occasionally find that the application models include some feature or material for which adequately similar benchmark experiments do not exist to support validation. For example, a criticality safety analyst may want to take credit for the presence of fission products in spent nuclear fuel. In such cases, analysts sometimes rely on 'expert judgment' to select an additional administrative margin to account for a gap in the validation data or to conclude that the impact on the calculated bias and bias uncertainty is negligible. As a result of advances in computer programs and the evolution of cross-section covariance data, analysts can use the sensitivity and uncertainty analysis tools in the TSUNAMI codes to estimate the potential impact on the application-specific bias and bias uncertainty resulting from nuclides not represented in available benchmark experiments. This paper presents the application of the methods described in a companion paper.
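
    The propagation step described above is commonly written as the "sandwich rule" (standard sensitivity/uncertainty background, not quoted from the paper): for a response $R$ with relative sensitivity vector $S_R$ and relative cross-section covariance matrix $C_{\alpha\alpha}$,

        $$\sigma_R^2 = S_R \, C_{\alpha\alpha} \, S_R^{\top},$$

    and the correlation between an application $a$ and an experiment $e$ follows as $c_k = S_a C_{\alpha\alpha} S_e^{\top}/(\sigma_a \sigma_e)$, the coefficient used in the trending analyses the abstract mentions.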

  4. Dynamic Positioning at Sea Using the Global Positioning System.

    DTIC Science & Technology

    1987-06-01

    Dynamic positioning at sea was investigated using data from the Global Positioning System (GPS) acquired in Phase II of the Seafloor Benchmark Experiment on R/V Point Sur in August 1986. The Seafloor Benchmark Experiment is a project of the Hydrographic Sciences Group of the Oceanography Department at the Naval Postgraduate School (NPS).

  5. Flowing gas, non-nuclear experiments on the gas core reactor

    NASA Technical Reports Server (NTRS)

    Kunze, J. F.; Suckling, D. H.; Copper, C. G.

    1972-01-01

    Flow tests were conducted on models of the gas core (cavity) reactor. Variations in cavity wall and injection configurations were aimed at establishing flow patterns that give a maximum of the nuclear criticality eigenvalue. Correlation with the nuclear effect was made using multigroup diffusion theory normalized by previous benchmark critical experiments. Air was used to simulate the hydrogen propellant in the flow tests, and smoked air, argon, or freon to simulate the central nuclear fuel gas. All tests were run in the down-firing direction so that gravitational effects simulated the acceleration effect of a rocket. Results show that acceptable flow patterns with high volume fraction for the simulated nuclear fuel gas and high flow rate ratios of propellant to fuel can be obtained. Using a point injector for the fuel, good flow patterns are obtained by directing the outer gas at high velocity along the cavity wall, using louvered or oblique-angle-honeycomb injection schemes.

  6. Nuclear Data Activities in Support of the DOE Nuclear Criticality Safety Program

    NASA Astrophysics Data System (ADS)

    Westfall, R. M.; McKnight, R. D.

    2005-05-01

    The DOE Nuclear Criticality Safety Program (NCSP) provides the technical infrastructure maintenance for those technologies applied in the evaluation and performance of safe fissionable-material operations in the DOE complex. These technologies include an Analytical Methods element for neutron transport as well as the development of sensitivity/uncertainty methods, the performance of Critical Experiments, evaluation and qualification of experiments as Benchmarks, and a comprehensive Nuclear Data program coordinated by the NCSP Nuclear Data Advisory Group (NDAG). The NDAG gathers and evaluates differential and integral nuclear data, identifies deficiencies, and recommends priorities on meeting DOE criticality safety needs to the NCSP Criticality Safety Support Group (CSSG). Then the NDAG identifies the required resources and unique capabilities for meeting these needs, not only for performing measurements but also for data evaluation with nuclear model codes as well as for data processing for criticality safety applications. The NDAG coordinates effort with the leadership of the National Nuclear Data Center, the Cross Section Evaluation Working Group (CSEWG), and the Working Party on International Evaluation Cooperation (WPEC) of the OECD/NEA Nuclear Science Committee. The overall objective is to expedite the issuance of new data and methods to the DOE criticality safety user. This paper describes these activities in detail, with examples based upon special studies being performed in support of criticality safety for a variety of DOE operations.

  7. Using Machine Learning to Predict MCNP Bias

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Grechanuk, Pavel Aleksandrovi

    For many real-world applications in radiation transport where simulations are compared to experimental measurements, as in nuclear criticality safety, the bias (simulated minus experimental k{sub eff}) in the calculation is an extremely important quantity used for code validation. The objective of this project is to accurately predict the bias of MCNP6 [1] criticality calculations using machine learning (ML) algorithms, with the intention of creating a tool that can complement current nuclear criticality safety methods. In the latest release of MCNP6, the Whisper tool is available to criticality safety analysts and includes a large catalogue of experimental benchmarks, sensitivity profiles, and nuclear data covariance matrices. This data, coming from 1100+ benchmark cases, is used in this study of ML algorithms for criticality safety bias predictions.
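
    The abstract does not specify the ML model; the sketch below illustrates the general recipe with a random-forest regressor and synthetic stand-ins for the sensitivity-profile features and k{sub eff} biases (all names and data here are hypothetical, not the Whisper catalogue):

        import numpy as np
        from sklearn.ensemble import RandomForestRegressor
        from sklearn.model_selection import train_test_split

        rng = np.random.default_rng(0)

        # Hypothetical stand-ins: one row per benchmark case; columns mimic
        # energy/nuclide-binned sensitivity features; target is the k_eff bias.
        X = rng.normal(size=(1100, 50))
        bias = X[:, :5].sum(axis=1) * 1e-3 + rng.normal(scale=2e-4, size=1100)

        X_tr, X_te, y_tr, y_te = train_test_split(X, bias, random_state=0)
        model = RandomForestRegressor(n_estimators=200, random_state=0)
        model.fit(X_tr, y_tr)
        print("R^2 on held-out benchmarks:", round(model.score(X_te, y_te), 3))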

  8. Experimental Data from the Benchmark SuperCritical Wing Wind Tunnel Test on an Oscillating Turntable

    NASA Technical Reports Server (NTRS)

    Heeg, Jennifer; Piatak, David J.

    2013-01-01

    The Benchmark SuperCritical Wing (BSCW) wind tunnel model served as a semi-blind test case for the 2012 AIAA Aeroelastic Prediction Workshop (AePW). The BSCW was chosen as a test case due to its geometric simplicity and flow physics complexity. The data sets examined include unforced system information and forced pitching oscillations. The aerodynamic challenges presented by this AePW test case include a strong shock that was observed to be unsteady for even the unforced system cases, shock-induced separation, and trailing edge separation. The current paper quantifies these characteristics at the AePW test condition and at a suggested benchmarking test condition. General characteristics of the model's behavior are examined for the entire available data set.

  9. Development and testing of the VITAMIN-B7/BUGLE-B7 coupled neutron-gamma multigroup cross-section libraries

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Risner, J.M.; Wiarda, D.; Miller, T.M.

    2011-07-01

    The U.S. Nuclear Regulatory Commission's Regulatory Guide 1.190 states that calculational methods used to estimate reactor pressure vessel (RPV) fluence should use the latest version of the evaluated nuclear data file (ENDF). The VITAMIN-B6 fine-group library and BUGLE-96 broad-group library, which are widely used for RPV fluence calculations, were generated using ENDF/B-VI.3 data, which was the most current data when Regulatory Guide 1.190 was issued. We have developed new fine-group (VITAMIN-B7) and broad-group (BUGLE-B7) libraries based on ENDF/B-VII.0. These new libraries, which were processed using the AMPX code system, maintain the same group structures as the VITAMIN-B6 and BUGLE-96 libraries. Verification and validation of the new libraries were accomplished using diagnostic checks in AMPX, 'unit tests' for each element in VITAMIN-B7, and a diverse set of benchmark experiments including critical evaluations for fast and thermal systems, a set of experimental benchmarks that are used for SCALE regression tests, and three RPV fluence benchmarks. The benchmark evaluation results demonstrate that VITAMIN-B7 and BUGLE-B7 are appropriate for use in RPV fluence calculations and meet the calculational uncertainty criterion in Regulatory Guide 1.190. (authors)

  10. Development and Testing of the VITAMIN-B7/BUGLE-B7 Coupled Neutron-Gamma Multigroup Cross-Section Libraries

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Risner, Joel M; Wiarda, Dorothea; Miller, Thomas Martin

    2011-01-01

    The U.S. Nuclear Regulatory Commission's Regulatory Guide 1.190 states that calculational methods used to estimate reactor pressure vessel (RPV) fluence should use the latest version of the Evaluated Nuclear Data File (ENDF). The VITAMIN-B6 fine-group library and BUGLE-96 broad-group library, which are widely used for RPV fluence calculations, were generated using ENDF/B-VI data, which was the most current data when Regulatory Guide 1.190 was issued. We have developed new fine-group (VITAMIN-B7) and broad-group (BUGLE-B7) libraries based on ENDF/B-VII. These new libraries, which were processed using the AMPX code system, maintain the same group structures as the VITAMIN-B6 and BUGLE-96 libraries. Verification and validation of the new libraries were accomplished using diagnostic checks in AMPX, unit tests for each element in VITAMIN-B7, and a diverse set of benchmark experiments including critical evaluations for fast and thermal systems, a set of experimental benchmarks that are used for SCALE regression tests, and three RPV fluence benchmarks. The benchmark evaluation results demonstrate that VITAMIN-B7 and BUGLE-B7 are appropriate for use in LWR shielding applications, and meet the calculational uncertainty criterion in Regulatory Guide 1.190.

  11. Test Facilities and Experience on Space Nuclear System Developments at the Kurchatov Institute

    NASA Astrophysics Data System (ADS)

    Ponomarev-Stepnoi, Nikolai N.; Garin, Vladimir P.; Glushkov, Evgeny S.; Kompaniets, George V.; Kukharkin, Nikolai E.; Madeev, Vicktor G.; Papin, Vladimir K.; Polyakov, Dmitry N.; Stepennov, Boris S.; Tchuniyaev, Yevgeny I.; Tikhonov, Lev Ya.; Uksusov, Yevgeny I.

    2004-02-01

    The complexity of space fission systems and the stringent requirements on minimizing their weight and dimensions, along with the desire to reduce development expenditures, demand experimental work whose results can be used in design, safety substantiation, and licensing procedures. Experimental facilities are intended to solve the following tasks: obtaining benchmark data for computer code validation, substantiating design solutions when computational efforts are too expensive, quality control in the production process, and 'iron-clad' substantiation of criticality safety design solutions for licensing and public relations. The NARCISS and ISKRA critical facilities, and the unique ORM facility for shielding investigations at the operating OR nuclear research reactor, were created at the Kurchatov Institute to solve these tasks. The range of activities performed at these facilities within the previous Russian nuclear power system programs is briefly described in the paper. This experience should be analyzed in terms of methodological approaches to the development of future space nuclear systems (such analysis is beyond the scope of this paper). Because these facilities remain available for experiments, a brief description of their critical assemblies and characteristics is given in this paper.

  12. Design and Implementation of a Web-Based Reporting and Benchmarking Center for Inpatient Glucometrics

    PubMed Central

    Schnipper, Jeffrey Lawrence; Messler, Jordan; Ramos, Pedro; Kulasa, Kristen; Nolan, Ann; Rogers, Kendall

    2014-01-01

    Background: Insulin is a top source of adverse drug events in the hospital, and glycemic control is a focus of improvement efforts across the country. Yet, the majority of hospitals have no data to gauge their performance on glycemic control, hypoglycemia rates, or hypoglycemic management. Current tools to outsource glucometrics reports are limited in availability or function. Methods: Society of Hospital Medicine (SHM) faculty designed and implemented a web-based data and reporting center that calculates glucometrics on blood glucose data files securely uploaded by users. Unit labels, care type (critical care, non–critical care), and unit type (eg, medical, surgical, mixed, pediatrics) are defined on upload, allowing for robust, flexible reporting. Reports for any date range, care type, unit type, or any combination of units are available on demand for review or downloading into a variety of file formats. Four reports with supporting graphics depict glycemic control, hypoglycemia, and hypoglycemia management by patient day or patient stay. Benchmarking and performance ranking reports are generated periodically for all hospitals in the database. Results: In all, 76 hospitals have uploaded at least 12 months of data for non–critical care areas and 67 sites have uploaded critical care data. Critical care benchmarking reveals wide variability in performance. Some hospitals achieve top quartile performance in both glycemic control and hypoglycemia parameters. Conclusions: This new web-based glucometrics data and reporting tool allows hospitals to track their performance with a flexible reporting system, and provides them with external benchmarking. Tools like this help to establish standardized glucometrics and performance standards. PMID:24876426
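
    A minimal sketch of the kind of patient-day glucometrics such a reporting center computes, assuming a simple upload format and illustrative glycemic thresholds (the SHM center's actual schema and cutoffs are not specified in the abstract):

        import pandas as pd

        # Hypothetical upload format: one row per blood glucose measurement.
        df = pd.DataFrame({
            "patient_id": [1, 1, 1, 2, 2],
            "date": pd.to_datetime(["2014-01-01", "2014-01-01", "2014-01-02",
                                    "2014-01-01", "2014-01-01"]),
            "glucose_mg_dl": [180, 65, 210, 95, 310],
        })

        # Patient-day glucometrics: a day counts as hypoglycemic if any reading
        # is below 70 mg/dL, hyperglycemic if any exceeds 299 mg/dL
        # (illustrative thresholds, not the SHM definitions).
        days = df.groupby(["patient_id", "date"])["glucose_mg_dl"].agg(["min", "max", "mean"])
        print(f"hypoglycemic patient-days:  {(days['min'] < 70).mean():.0%}")
        print(f"hyperglycemic patient-days: {(days['max'] > 299).mean():.0%}")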

  13. Design and implementation of a web-based reporting and benchmarking center for inpatient glucometrics.

    PubMed

    Maynard, Greg; Schnipper, Jeffrey Lawrence; Messler, Jordan; Ramos, Pedro; Kulasa, Kristen; Nolan, Ann; Rogers, Kendall

    2014-07-01

    Insulin is a top source of adverse drug events in the hospital, and glycemic control is a focus of improvement efforts across the country. Yet, the majority of hospitals have no data to gauge their performance on glycemic control, hypoglycemia rates, or hypoglycemic management. Current tools to outsource glucometrics reports are limited in availability or function. Society of Hospital Medicine (SHM) faculty designed and implemented a web-based data and reporting center that calculates glucometrics on blood glucose data files securely uploaded by users. Unit labels, care type (critical care, non-critical care), and unit type (eg, medical, surgical, mixed, pediatrics) are defined on upload, allowing for robust, flexible reporting. Reports for any date range, care type, unit type, or any combination of units are available on demand for review or downloading into a variety of file formats. Four reports with supporting graphics depict glycemic control, hypoglycemia, and hypoglycemia management by patient day or patient stay. Benchmarking and performance ranking reports are generated periodically for all hospitals in the database. In all, 76 hospitals have uploaded at least 12 months of data for non-critical care areas and 67 sites have uploaded critical care data. Critical care benchmarking reveals wide variability in performance. Some hospitals achieve top quartile performance in both glycemic control and hypoglycemia parameters. This new web-based glucometrics data and reporting tool allows hospitals to track their performance with a flexible reporting system, and provides them with external benchmarking. Tools like this help to establish standardized glucometrics and performance standards. © 2014 Diabetes Technology Society.

  14. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Cohen, J; Dossa, D; Gokhale, M

    Critical data science applications requiring frequent access to storage perform poorly on today's computing architectures. This project addresses efficient computation of data-intensive problems in national security and basic science by exploring, advancing, and applying a new form of computing called storage-intensive supercomputing (SISC). Our goal is to enable applications that simply cannot run on current systems, and, for a broad range of data-intensive problems, to deliver an order of magnitude improvement in price/performance over today's data-intensive architectures. This technical report documents much of the work done under LDRD 07-ERD-063 Storage Intensive Supercomputing during the period 05/07-09/07. The following chapters describe: (1) a new file I/O monitoring tool iotrace developed to capture the dynamic I/O profiles of Linux processes; (2) an out-of-core graph benchmark for level-set expansion of scale-free graphs; (3) an entity extraction benchmark consisting of a pipeline of eight components; and (4) an image resampling benchmark drawn from the SWarp program in the LSST data processing pipeline. The performance of the graph and entity extraction benchmarks was measured in three different scenarios: data sets residing on the NFS file server and accessed over the network; data sets stored on local disk; and data sets stored on the Fusion I/O parallel NAND Flash array. The image resampling benchmark compared the performance of software-only and GPU-accelerated implementations. In addition to the work reported here, an additional text processing application was developed that used an FPGA to accelerate n-gram profiling for language classification. The n-gram application will be presented at SC07 at the High Performance Reconfigurable Computing Technologies and Applications Workshop. The graph and entity extraction benchmarks were run on a Supermicro server housing the NAND Flash 40GB parallel disk array, the Fusion-io. The Fusion system specs are as follows: SuperMicro X7DBE Xeon Dual Socket Blackford Server Motherboard; 2 Intel Xeon Dual-Core 2.66 GHz processors; 1 GB DDR2 PC2-5300 RAM (2 x 512); 80GB Hard Drive (Seagate SATA II Barracuda). The Fusion board is presently capable of 4X in a PCIe slot. The image resampling benchmark was run on a dual Xeon workstation with NVIDIA graphics card (see Chapter 5 for full specification). An XtremeData Opteron+FPGA was used for the language classification application. We observed that these benchmarks are not uniformly I/O intensive. The only benchmark that showed greater than 50% of the time in I/O was the graph algorithm when it accessed data files over NFS. When local disk was used, the graph benchmark spent at most 40% of its time in I/O. The other benchmarks were CPU dominated. The image resampling benchmark and language classification showed order of magnitude speedup over software by using co-processor technology to offload the CPU-intensive kernels. Our experiments to date suggest that emerging hardware technologies offer significant benefit to boosting the performance of data-intensive algorithms. Using GPU and FPGA co-processors, we were able to improve performance by more than an order of magnitude on the benchmark algorithms, eliminating the processor bottleneck of CPU-bound tasks. Experiments with a prototype solid-state nonvolatile memory available today show 10X better throughput on random reads than disk, with a 2X speedup on a graph processing benchmark when compared to the use of local SATA disk.
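
    As a rough illustration of the I/O-fraction profiling idea behind iotrace, the sketch below times the read() portion of a file scan against total wall time; it is a self-contained stand-in, not the LLNL tool:

        import os
        import tempfile
        import time

        def scan_file(path, chunk=1 << 20):
            """Read a file in chunks and report the fraction of wall time
            spent inside read() calls versus total (read + CPU work)."""
            t_start = time.perf_counter()
            t_io = 0.0
            checksum = 0
            with open(path, "rb") as f:
                while True:
                    t0 = time.perf_counter()
                    data = f.read(chunk)              # I/O time
                    t_io += time.perf_counter() - t0
                    if not data:
                        break
                    checksum ^= sum(data) & 0xFF      # stand-in CPU work
            return checksum, t_io / (time.perf_counter() - t_start)

        # Scratch file so the example is self-contained.
        with tempfile.NamedTemporaryFile(delete=False) as f:
            f.write(os.urandom(8 << 20))              # 8 MiB of test data
        _, io_fraction = scan_file(f.name)
        os.remove(f.name)
        print(f"fraction of wall time spent in I/O: {io_fraction:.0%}")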

  15. Toward Establishing a Realistic Benchmark for Airframe Noise Research: Issues and Challenges

    NASA Technical Reports Server (NTRS)

    Khorrami, Mehdi R.

    2010-01-01

    The availability of realistic benchmark configurations is essential to enable the validation of current Computational Aeroacoustic (CAA) methodologies and to further the development of new ideas and concepts that will foster the technologies of the next generation of CAA tools. The selection of a real-world configuration, the subsequent design and fabrication of an appropriate model for testing, and the acquisition of the necessarily comprehensive aeroacoustic data base are critical steps that demand great care and attention. In this paper, a brief account of the nose landing-gear configuration, being proposed jointly by NASA and the Gulfstream Aerospace Company as an airframe noise benchmark, is provided. The underlying thought processes and the resulting building block steps that were taken during the development of this benchmark case are given. Resolution of critical, yet conflicting issues is discussed - the desire to maintain geometric fidelity versus model modifications required to accommodate instrumentation; balancing model scale size versus Reynolds number effects; and time, cost, and facility availability versus important parameters like surface finish and installation effects. The decisions taken during the experimental phase of a study can significantly affect the ability of a CAA calculation to reproduce the prevalent flow conditions and associated measurements. For the nose landing gear, the most critical of such issues are highlighted and the compromises made to resolve them are discussed. The results of these compromises will be summarized by examining the positive attributes and shortcomings of this particular benchmark case.

  16. Critical experiments at Sandia National Laboratories : technical meeting on low-power critical facilities and small reactors.

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Harms, Gary A.; Ford, John T.; Barber, Allison Delo

    2010-11-01

    Sandia National Laboratories (SNL) has conducted radiation effects testing for the Department of Energy (DOE) and other contractors supporting the DOE since the 1960s. Over this period, the research reactor facilities at Sandia have had a primary mission to provide appropriate nuclear radiation environments for radiation testing and qualification of electronic components and other devices. The current generation of reactors includes the Annular Core Research Reactor (ACRR), a water-moderated pool-type reactor fueled by elements constructed from UO2-BeO ceramic fuel pellets, and the Sandia Pulse Reactor III (SPR-III), a bare metal fast burst reactor utilizing a uranium-molybdenum alloy fuel. The SPR-III is currently defueled. The SPR Facility (SPRF) has hosted a series of critical experiments. A purpose-built critical experiment was first operated at the SPRF in the late 1980s. This experiment, called the Space Nuclear Thermal Propulsion Critical Experiment (CX), was designed to explore the reactor physics of a nuclear thermal rocket motor. This experiment was fueled with highly-enriched uranium carbide fuel in annular water-moderated fuel elements. The experiment program was completed and the fuel for the experiment was moved off-site. A second critical experiment, the Burnup Credit Critical Experiment (BUCCX), was operated at Sandia in 2002. The critical assembly for this experiment was based on the assembly used in the CX, modified to accommodate low-enriched pin-type fuel in water moderator. This experiment was designed as a platform in which the reactivity effects of specific fission product poisons could be measured. Experiments were carried out on rhodium, an important fission product poison. The fuel and assembly hardware for the BUCCX remain at Sandia and are available for future experimentation. The critical experiment currently in operation at the SPRF is the Seven Percent Critical Experiment (7uPCX). This experiment is designed to provide benchmark reactor physics data to support validation of the reactor physics codes used to design commercial reactor fuel elements in an enrichment range above the current 5% enrichment cap. A first set of critical experiments in the 7uPCX has been completed. More experiments are planned in the 7uPCX series. The critical experiments at Sandia National Laboratories are currently funded by the US Department of Energy Nuclear Criticality Safety Program (NCSP). The NCSP has committed to maintain the critical experiment capability at Sandia and to support the development of a critical experiments training course at the facility. The training course is intended to provide hands-on experimental experience for the training of new and the re-training of practicing Nuclear Criticality Safety Engineers. The current plans are for the development of the course to continue through the first part of fiscal year 2011, culminating in the delivery of a prototype of the course in the latter part of the fiscal year. The course will be available in fiscal year 2012.

  17. Opto-Electronic and Interconnects Hierarchical Design Automation System (OE-IDEAS)

    DTIC Science & Technology

    2004-05-01

    In May 2002, CFDRC downloaded all the materials from the DaVinci Netbook website containing the benchmark problems. The report also covers simulation of the critical path from the Mayo "10G" system MCM board.

  18. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Burke, Timothy P.; Martz, Roger L.; Kiedrowski, Brian C.

    New unstructured mesh capabilities in MCNP6 (developmental version during summer 2012) show potential for conducting multi-physics analyses by coupling MCNP to a finite element solver such as Abaqus/CAE[2]. Before these new capabilities can be utilized, the ability of MCNP to accurately estimate eigenvalues and pin powers using an unstructured mesh must first be verified. Previous work to verify the unstructured mesh capabilities in MCNP was accomplished using the Godiva sphere [1], and this work attempts to build on that. To accomplish this, a criticality benchmark and a fuel assembly benchmark were used for calculations in MCNP using both the Constructive Solid Geometry (CSG) native to MCNP and the unstructured mesh geometry generated using Abaqus/CAE. The Big Ten criticality benchmark [3] was modeled due to its geometry being similar to that of a reactor fuel pin. The C5G7 3-D Mixed Oxide (MOX) Fuel Assembly Benchmark [4] was modeled to test the unstructured mesh capabilities on a reactor-type problem.

  19. Nutrient cycle benchmarks for earth system land model

    NASA Astrophysics Data System (ADS)

    Zhu, Q.; Riley, W. J.; Tang, J.; Zhao, L.

    2017-12-01

    Projecting future biosphere-climate feedbacks using Earth system models (ESMs) relies heavily on robust modeling of land surface carbon dynamics. More importantly, soil nutrient (particularly, nitrogen (N) and phosphorus (P)) dynamics strongly modulate carbon dynamics, such as plant sequestration of atmospheric CO2. Prevailing ESM land models all consider nitrogen as a potentially limiting nutrient, and several consider phosphorus. However, including nutrient cycle processes in ESM land models potentially introduces large uncertainties that could be identified and addressed by improved observational constraints. We describe the development of two nutrient cycle benchmarks for ESM land models: (1) nutrient partitioning between plants and soil microbes inferred from 15N and 33P tracer studies and (2) nutrient limitation effects on the carbon cycle informed by long-term fertilization experiments. We used these benchmarks to evaluate critical hypotheses regarding nutrient cycling and their representation in ESMs. We found that a mechanistic representation of plant-microbe nutrient competition based on relevant functional traits best reproduced observed plant-microbe nutrient partitioning. We also found that for multiple-nutrient models (i.e., N and P), application of Liebig's law of the minimum is often inaccurate. Rather, the Multiple Nutrient Limitation (MNL) concept better reproduces observed carbon-nutrient interactions.
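
    The contrast between the two limitation hypotheses can be stated in a few lines. A sketch, assuming dimensionless nutrient-sufficiency factors in [0, 1] and a multiplicative form for MNL (one common representation; the paper's exact formulation is not given in the abstract):

        import numpy as np

        def liebig_growth(f_n, f_p):
            # Liebig's law of the minimum: the scarcest nutrient alone limits.
            return np.minimum(f_n, f_p)

        def mnl_growth(f_n, f_p):
            # Multiple Nutrient Limitation: both nutrients co-limit growth
            # (multiplicative form, assumed here for illustration).
            return f_n * f_p

        f_n, f_p = 0.8, 0.7                          # hypothetical sufficiency factors
        print("Liebig:", liebig_growth(f_n, f_p))    # 0.7  -- P alone limits
        print("MNL:   ", mnl_growth(f_n, f_p))       # 0.56 -- N and P co-limit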

  20. Looking Past Primary Productivity: Benchmarking System Processes that Drive Ecosystem Level Responses in Models

    NASA Astrophysics Data System (ADS)

    Cowdery, E.; Dietze, M.

    2017-12-01

    As atmospheric levels of carbon dioxide continue to increase, it is critical that terrestrial ecosystem models can accurately predict ecological responses to the changing environment. Current predictions of net primary productivity (NPP) in response to elevated atmospheric CO2 concentration are highly variable and contain a considerable amount of uncertainty. Benchmarking model predictions against data is necessary to assess their ability to replicate observed patterns, but also to identify and evaluate the assumptions causing inter-model differences. We have implemented a novel benchmarking workflow as part of the Predictive Ecosystem Analyzer (PEcAn) that is automated, repeatable, and generalized to incorporate different sites and ecological models. Building on the recent Free-Air CO2 Enrichment Model Data Synthesis (FACE-MDS) project, we used observational data from the FACE experiments to test this flexible, extensible benchmarking approach aimed at providing repeatable tests of model process representation that can be performed quickly and frequently. Model performance assessments are often limited to traditional residual error analysis; however, this can result in a loss of critical information. Models that fail tests of relative measures of fit may still perform well under measures of absolute fit and mathematical similarity. This implies that models that are discounted as poor predictors of ecological productivity may still be capturing important patterns. Conversely, models that have been found to be good predictors of productivity may be hiding error in their sub-processes that produces the right answers for the wrong reasons. Our suite of tests has not only highlighted process-based sources of uncertainty in model productivity calculations, it has also quantified the patterns and scale of this error. Combining these findings with PEcAn's model sensitivity analysis and variance decomposition strengthens our ability to identify which processes need further study and additional data constraints. This can be used to inform future experimental design and in turn can provide an informative starting point for data assimilation.
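
    The point about relative versus absolute measures of fit can be made concrete with a small set of complementary metrics; the sketch below is illustrative and is not the PEcAn benchmarking code:

        import numpy as np

        def fit_metrics(obs, pred):
            """Complementary measures of model-data agreement; relying on a
            single residual statistic can hide the patterns discussed above."""
            resid = pred - obs
            return {
                "RMSE": float(np.sqrt(np.mean(resid ** 2))),   # absolute fit
                "bias": float(np.mean(resid)),                 # systematic offset
                "r": float(np.corrcoef(obs, pred)[0, 1]),      # pattern similarity
            }

        obs = np.array([5.1, 6.0, 7.2, 8.4])    # hypothetical observed NPP
        pred = np.array([4.0, 4.9, 6.1, 7.3])   # hypothetical modeled NPP
        print(fit_metrics(obs, pred))           # high r despite a clear bias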

  1. OECD/NEA expert group on uncertainty analysis for criticality safety assessment: Results of benchmark on sensitivity calculation (phase III)

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Ivanova, T.; Laville, C.; Dyrda, J.

    2012-07-01

    The sensitivities of the k{sub eff} eigenvalue to neutron cross sections have become commonly used in similarity studies and as part of the validation algorithm for criticality safety assessments. To test calculations of the sensitivity coefficients, a benchmark study (Phase III) has been established by the OECD-NEA/WPNCS/EG UACSA (Expert Group on Uncertainty Analysis for Criticality Safety Assessment). This paper presents some sensitivity results generated by the benchmark participants using various computational tools based upon different computational methods: SCALE/TSUNAMI-3D and -1D, MONK, APOLLO2-MORET 5, DRAGON-SUSD3D and MMKKENO. The study demonstrates the performance of the tools. It also illustrates how model simplifications impact the sensitivity results and demonstrates the importance of 'implicit' (self-shielding) sensitivities. This work has been a useful step towards verification of the existing and developed sensitivity analysis methods. (authors)
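
    For orientation, the quantity being benchmarked is the relative sensitivity coefficient S = (dk/k)/(dsigma/sigma). A brute-force central-difference check of such a coefficient, with a toy stand-in for the transport solve (not any participant's code), looks like:

        def keff_sensitivity(keff_fn, sigma, rel_step=0.01):
            """Direct-perturbation estimate of S = (dk/k)/(dsigma/sigma) via
            central difference, as a cross-check of adjoint-based coefficients."""
            k0 = keff_fn(sigma)
            k_plus = keff_fn(sigma * (1 + rel_step))
            k_minus = keff_fn(sigma * (1 - rel_step))
            return (k_plus - k_minus) / (2 * rel_step * k0)

        # Toy stand-in for a transport solve: k_eff as a smooth function of
        # a single cross section (hypothetical).
        toy_keff = lambda sigma: 1.0 + 0.3 * sigma / (1.0 + sigma)
        print(keff_sensitivity(toy_keff, sigma=2.0))    # ~0.056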

  2. Quality in E-Learning--A Conceptual Framework Based on Experiences from Three International Benchmarking Projects

    ERIC Educational Resources Information Center

    Ossiannilsson, E.; Landgren, L.

    2012-01-01

    Between 2008 and 2010, Lund University took part in three international benchmarking projects, "E-xcellence+," the "eLearning Benchmarking Exercise 2009," and the "First Dual-Mode Distance Learning Benchmarking Club." A comparison of these models revealed a rather high level of correspondence. From this finding and…

  3. DE-NE0008277_PROTEUS final technical report 2018

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Enqvist, Andreas

    This project details re-evaluations of experiments on gas-cooled fast reactor (GCFR) core designs performed in the 1970s at the PROTEUS reactor and creates a series of International Reactor Physics Experiment Evaluation Project (IRPhEP) benchmarks. Currently there are no GCFR experiments available in the International Handbook of Evaluated Reactor Physics Benchmark Experiments (IRPhEP Handbook). These experiments are excellent candidates for reanalysis and the development of multiple benchmarks because they provide high-quality integral nuclear data relevant to the validation and refinement of thorium, neptunium, uranium, plutonium, iron, and graphite cross sections. It would be cost prohibitive to reproduce such a comprehensive suite of experimental data to support any future GCFR endeavors.

  4. Benchmarking Outcomes in the Critically Injured Burn Patient

    PubMed Central

    Klein, Matthew B.; Goverman, Jeremy; Hayden, Douglas L.; Fagan, Shawn P.; McDonald-Smith, Grace P.; Alexander, Andrew K.; Gamelli, Richard L.; Gibran, Nicole S.; Finnerty, Celeste C.; Jeschke, Marc G.; Arnoldo, Brett; Wispelwey, Bram; Mindrinos, Michael N.; Xiao, Wenzhong; Honari, Shari E.; Mason, Philip H.; Schoenfeld, David A.; Herndon, David N.; Tompkins, Ronald G.

    2014-01-01

    Objective To determine and compare outcomes with accepted benchmarks in burn care at six academic burn centers. Background Since the 1960s, U.S. morbidity and mortality rates have declined tremendously for burn patients, likely related to improvements in surgical and critical care treatment. We describe the baseline patient characteristics and well-defined outcomes for major burn injuries. Methods We followed 300 adults and 241 children from 2003–2009 through hospitalization using standard operating procedures developed at study onset. We created an extensive database on patient and injury characteristics, anatomic and physiological derangement, clinical treatment, and outcomes. These data were compared with existing benchmarks in burn care. Results Study patients were critically injured, as demonstrated by mean %TBSA (41.2±18.3 for adults and 57.8±18.2 for children) and the presence of inhalation injury in 38% of the adults and 54.8% of the children. Mortality in adults was 14.1% for those less than 55 years old and 38.5% for those aged ≥55 years. Mortality in patients less than 17 years old was 7.9%. Overall, the multiple organ failure rate was 27%. When controlling for age and %TBSA, presence of inhalation injury was not significant. Conclusions This study provides the current benchmark for major burn patients. Mortality rates, notwithstanding significant %TBSA and presence of inhalation injury, have significantly declined compared to previous benchmarks. Modern-day surgical and medically intensive management has markedly improved to the point where we can expect patients less than 55 years old with severe burn injuries and inhalation injury to survive these devastating conditions. PMID:24722222

  5. Measurement, Standards, and Peer Benchmarking: One Hospital's Journey.

    PubMed

    Martin, Brian S; Arbore, Mark

    2016-04-01

    Peer-to-peer benchmarking is an important component of rapid-cycle performance improvement in patient safety and quality-improvement efforts. Institutions should carefully examine critical success factors before engagement in peer-to-peer benchmarking in order to maximize growth and change opportunities. Solutions for Patient Safety has proven to be a high-yield engagement for Children's Hospital of Pittsburgh of University of Pittsburgh Medical Center, with measurable improvement in both organizational process and culture. Copyright © 2016 Elsevier Inc. All rights reserved.

  6. Deterministic Modeling of the High Temperature Test Reactor

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Ortensi, J.; Cogliati, J. J.; Pope, M. A.

    2010-06-01

    Idaho National Laboratory (INL) is tasked with the development of reactor physics analysis capability for the Next Generation Nuclear Plant (NGNP) project. In order to examine INL's current prismatic reactor deterministic analysis tools, the project is conducting a benchmark exercise based on modeling the High Temperature Test Reactor (HTTR). This exercise entails the development of a model for the initial criticality, a 19-column thin annular core, and the fully loaded core critical condition with 30 columns. Special emphasis is devoted to the annular core modeling, which shares more characteristics with the NGNP base design. The DRAGON code is used in this study because it offers significant ease and versatility in modeling prismatic designs. Despite some geometric limitations, the code performs quite well compared to other lattice physics codes. DRAGON can generate transport solutions via collision probability (CP), method of characteristics (MOC), and discrete ordinates (Sn). A fine-group cross-section library based on the SHEM 281 energy structure is used in the DRAGON calculations. HEXPEDITE is the hexagonal-z full-core solver used in this study and is based on the Green's function solution of the transverse integrated equations. In addition, two Monte Carlo (MC) based codes, MCNP5 and PSG2/SERPENT, provide benchmarking capability for the DRAGON and the nodal diffusion solver codes. The results from this study show a consistent bias of 2–3% for the core multiplication factor. This systematic error has also been observed in other HTTR benchmark efforts and is well documented in the literature. The ENDF/B-VII graphite and U235 cross sections appear to be the main source of the error. The isothermal temperature coefficients calculated with the fully loaded core configuration agree well with other benchmark participants but are 40% higher than the experimental values. This discrepancy with the measurement stems from the fact that during the experiments the control rods were adjusted to maintain criticality, whereas in the model, the rod positions were fixed. In addition, this work includes a brief study of a cross-section generation approach that seeks to decouple the domain in order to account for neighbor effects. This spectral interpenetration is a dominant effect in annular HTR physics. This analysis methodology should be further explored in order to reduce the error that is systematically propagated in the traditional generation of cross sections.

  7. JEFF-3.1, ENDF/B-VII and JENDL-3.3 Critical Assemblies Benchmarking With the Monte Carlo Code TRIPOLI

    NASA Astrophysics Data System (ADS)

    Sublet, Jean-Christophe

    2008-02-01

    ENDF/B-VII.0, the first release of the ENDF/B-VII nuclear data library, was formally released in December 2006. Prior to this event, the European JEFF-3.1 nuclear data library was distributed in April 2005, while the Japanese JENDL-3.3 library has been available since 2002. The recent releases of these neutron transport libraries and special-purpose files, the updates of the processing tools, and the significant progress in computer power today allow far tighter integration of lean Monte Carlo codes with pointwise libraries, leading to enhanced benchmarking studies. A TRIPOLI-4.4 critical assembly suite has been set up as a collection of 86 benchmarks taken principally from the International Handbook of Evaluated Criticality Benchmark Experiments (2006 Edition). It contains cases for a variety of U and Pu fuels and systems, ranging from fast to deep thermal solutions and assemblies. It covers cases with a variety of moderators, reflectors, absorbers, spectra and geometries. The results presented show that while the most recent library, ENDF/B-VII.0, which benefited from the timely development of JENDL-3.3 and JEFF-3.1, produces the best overall results, improvements are still clearly needed. This is true in particular in Light Water Reactor applications for thermal and epithermal plutonium data for all libraries, and for fast uranium data for JEFF-3.1 and JENDL-3.3. Other domains in which Monte Carlo codes are used, such as astrophysics, fusion, high-energy physics, medicine and radiation transport in general, also benefit notably from such enhanced libraries. This is particularly noticeable in terms of the number of isotopes and materials available, the overall quality of the data, and the much broader energy range for which evaluated (as opposed to modeled) data are available, spanning from meV to hundreds of MeV. In examining the impact of the different nuclear data at both the library and isotopic levels, one cannot help noticing the importance and variability of the compensating effects that appear when each data set is used on its own. Library differences are still important but tend to diminish thanks to the ever-increasing and beneficial worldwide collaboration in the field of nuclear data measurement and evaluation.
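
    Benchmarking suites of this kind are typically summarized with C/E (calculated-over-experimental) statistics. A minimal sketch with hypothetical k{sub eff} values, not the TRIPOLI-4.4 suite results:

        import numpy as np

        # Hypothetical calculated/experimental k_eff values and experimental
        # uncertainties for a small suite (illustrative only).
        k_calc = np.array([1.0012, 0.9987, 1.0043, 0.9995])
        k_exp  = np.array([1.0000, 1.0000, 1.0000, 1.0000])
        u_exp  = np.array([0.0010, 0.0012, 0.0009, 0.0011])

        ce = k_calc / k_exp                    # C/E ratio per benchmark
        dev_pcm = (ce - 1.0) * 1e5             # deviation in pcm
        n_sigma = (k_calc - k_exp) / u_exp     # deviation in experimental sigmas
        for i in range(len(ce)):
            print(f"case {i}: C/E = {ce[i]:.4f} "
                  f"({dev_pcm[i]:+.0f} pcm, {n_sigma[i]:+.1f} sigma)")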

  8. Reactivity impact of {sup 16}O thermal elastic-scattering nuclear data for some numerical and critical benchmark systems

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Kozier, K. S.; Roubtsov, D.; Plompen, A. J. M.

    2012-07-01

    The thermal neutron-elastic-scattering cross-section data for {sup 16}O used in various modern evaluated-nuclear-data libraries were reviewed and found to be generally too high compared with the best available experimental measurements. Some of the proposed revisions to the ENDF/B-VII.0 {sup 16}O data library and recent results from the TENDL system increase this discrepancy further. The reactivity impact of revising the {sup 16}O data downward to be consistent with the best measurements was tested using the JENDL-3.3 {sup 16}O cross-section values and was found to be very small in MCNP5 simulations of the UO{sub 2} and reactor-recycle MOX-fuel cases of the ANS Doppler-defect numerical benchmark. However, large reactivity differences of up to about 14 mk (1400 pcm) were observed using {sup 16}O data files from several evaluated-nuclear-data libraries in MCNP5 simulations of the Los Alamos National Laboratory HEU heavy-water solution thermal critical experiments, which were performed in the 1950s. The latter result suggests that new measurements using HEU in a heavy-water-moderated critical facility, such as the ZED-2 zero-power reactor at the Chalk River Laboratories, might help to resolve the discrepancy between the {sup 16}O thermal elastic-scattering cross-section values and thereby reduce or better define its uncertainty, although additional assessment work would be needed to confirm this. (authors)
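
    The units quoted above relate through the standard definition of reactivity, rho = (k - 1)/k, with 1 mk = 10{sup -3} and 1 pcm = 10{sup -5} in absolute reactivity units. A quick check that 14 mk corresponds to 1400 pcm:

        def reactivity(k):
            # rho = (k - 1) / k, dimensionless.
            return (k - 1.0) / k

        # Hypothetical pair of eigenvalues differing by about 14 mk.
        drho = reactivity(1.0142) - reactivity(1.0)
        print(f"{drho * 1e3:.1f} mk = {drho * 1e5:.0f} pcm")   # 14.0 mk = 1400 pcm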

  9. Detecting many-body-localization lengths with cold atoms

    NASA Astrophysics Data System (ADS)

    Guo, Xuefei; Li, Xiaopeng

    2018-03-01

    Considering ultracold atoms in optical lattices, we propose experimental protocols to study the many-body-localization (MBL) length and criticality in quench dynamics. Through numerical simulations with exact diagonalization, we show that in the MBL phase the perturbed density profile following a local quench remains exponentially localized in postquench dynamics. The size of this density profile after long-time dynamics defines a localization length, which tends to diverge at the MBL-to-ergodic transition as we increase the system size. The determined localization transition point agrees with previous exact diagonalization calculations using other diagnostics. Our numerical results provide evidence for violation of the Harris-Chayes bound for the MBL criticality. The critical exponent ν can be extracted from our proposed dynamical procedure, which can then be used directly in experiments to determine whether the Harris-Chayes bound holds for the MBL transition. These proposed protocols to detect localization criticality are justified by benchmarking against the well-established results for noninteracting three-dimensional Anderson localization.
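
    The localization length described above can be extracted by fitting the long-time perturbed density profile to an exponential, |delta n(x)| ~ exp(-|x|/xi). A sketch with synthetic data (not the paper's simulations):

        import numpy as np
        from scipy.optimize import curve_fit

        # Synthetic long-time density profile with xi = 3 sites plus noise.
        x = np.arange(-20, 21)
        rng = np.random.default_rng(1)
        profile = np.exp(-np.abs(x) / 3.0) * (1 + 0.05 * rng.normal(size=x.size))

        def model(x, a, xi):
            # Exponentially localized profile centered on the quench site.
            return a * np.exp(-np.abs(x) / xi)

        (a_fit, xi_fit), _ = curve_fit(model, x, profile, p0=(1.0, 1.0))
        print(f"fitted localization length: xi = {xi_fit:.2f} sites")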

  10. Monte Carlo analyses of TRX slightly enriched uranium-H{sub 2}O critical experiments with ENDF/B-IV and related data sets (AWBA Development Program)

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Hardy, J. Jr.

    1977-12-01

    Four H{sub 2}O-moderated, slightly-enriched-uranium critical experiments were analyzed by Monte Carlo methods with ENDF/B-IV data. These were simple metal-rod lattices comprising Cross Section Evaluation Working Group thermal reactor benchmarks TRX-1 through TRX-4. Generally good agreement with experiment was obtained for the calculated integral parameters: the epithermal/thermal ratio of {sup 238}U capture (rho{sup 28}) and of {sup 235}U fission (delta{sup 25}), the ratio of {sup 238}U capture to {sup 235}U fission (CR*), and the ratio of {sup 238}U fission to {sup 235}U fission (delta{sup 28}). Full-core Monte Carlo calculations for two lattices showed good agreement with cell Monte Carlo plus multigroup P{sub l} leakage corrections. Newly measured parameters for the low-energy resonances of {sup 238}U significantly improved rho{sup 28}. In comparison with other CSEWG analyses, the strong correlation between k{sub eff} and rho{sup 28} suggests that {sup 238}U resonance capture is the major problem encountered in analyzing these lattices.

  11. Assessment of Static Delamination Propagation Capabilities in Commercial Finite Element Codes Using Benchmark Analysis

    NASA Technical Reports Server (NTRS)

    Orifici, Adrian C.; Krueger, Ronald

    2010-01-01

    With capabilities for simulating delamination growth in composite materials becoming available, the need for benchmarking and assessing these capabilities is critical. In this study, benchmark analyses were performed to assess the delamination propagation simulation capabilities of the VCCT implementations in Marc(TM) and MD Nastran(TM). Benchmark delamination growth results for Double Cantilever Beam, Single Leg Bending and End Notched Flexure specimens were generated using a numerical approach. This numerical approach was developed previously, and involves comparing results from a series of analyses at different delamination lengths to a single analysis with automatic crack propagation. Specimens were analyzed with three-dimensional and two-dimensional models, and compared with previous analyses using Abaqus. The results demonstrated that the VCCT implementation in Marc(TM) and MD Nastran(TM) was capable of accurately replicating the benchmark delamination growth results and that the use of the numerical benchmarks offers advantages over benchmarking using experimental and analytical results.

  12. Preliminary Results for the OECD/NEA Time Dependent Benchmark using Rattlesnake, Rattlesnake-IQS and TDKENO

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    DeHart, Mark D.; Mausolff, Zander; Weems, Zach

    2016-08-01

    One goal of the MAMMOTH M&S project is to validate the analysis capabilities within MAMMOTH. Historical data has shown limited value for validation of full three-dimensional (3D) multi-physics methods. Initial analysis considered the TREAT startup minimum critical core and one of the startup transient tests. At present, validation is focusing on measurements taken during the M8CAL test calibration series. These exercises will be valuable in a preliminary assessment of the ability of MAMMOTH to perform coupled multi-physics calculations; calculations performed to date are being used to validate the neutron transport solver Rattlesnake and the fuels performance code BISON. Other validation projects outside of TREAT are available for single-physics benchmarking. Because the transient solution capability of Rattlesnake is one of the key attributes that makes it unique for TREAT transient simulations, validation of the transient solution of Rattlesnake using other time-dependent kinetics benchmarks has considerable value. The Nuclear Energy Agency (NEA) of the Organization for Economic Cooperation and Development (OECD) has recently developed a computational benchmark for transient simulations. This benchmark considered both two-dimensional (2D) and 3D configurations for a total of 26 different transients. All are negative reactivity insertions, typically returning to the critical state after some time.

  13. Development of Benchmark Examples for Delamination Onset and Fatigue Growth Prediction

    NASA Technical Reports Server (NTRS)

    Krueger, Ronald

    2011-01-01

    An approach for assessing the delamination propagation and growth capabilities in commercial finite element codes was developed and demonstrated for the Virtual Crack Closure Technique (VCCT) implementations in ABAQUS. The Double Cantilever Beam (DCB) specimen was chosen as an example. First, benchmark results to assess delamination propagation capabilities under static loading were created using models simulating specimens with different delamination lengths. For each delamination length modeled, the load and displacement at the load point were monitored. The mixed-mode strain energy release rate components were calculated along the delamination front across the width of the specimen. A failure index was calculated by correlating the results with the mixed-mode failure criterion of the graphite/epoxy material. The calculated critical loads and critical displacements for delamination onset for each delamination length modeled were used as a benchmark. The load/displacement relationship computed during automatic propagation should closely match the benchmark case. Second, starting from an initially straight front, the delamination was allowed to propagate based on the algorithms implemented in the commercial finite element software. The load-displacement relationship obtained from the propagation analysis results and the benchmark results were compared. Good agreement could be achieved by selecting the appropriate input parameters, which were determined in an iterative procedure.
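
    The failure index described above correlates the computed mixed-mode energy release rates with a mixed-mode toughness criterion. The sketch below uses the Benzeggagh-Kenane (B-K) form, a common choice for graphite/epoxy; treating it as this paper's exact criterion is an assumption, and the material values are illustrative:

        def bk_failure_index(g_i, g_ii, g_ic, g_iic, eta):
            """Failure index G_T / G_c with the Benzeggagh-Kenane criterion:
            G_c = G_Ic + (G_IIc - G_Ic) * (G_II / G_T)**eta."""
            g_t = g_i + g_ii                                    # total energy release rate
            g_c = g_ic + (g_iic - g_ic) * (g_ii / g_t) ** eta   # mixed-mode toughness
            return g_t / g_c                                    # >= 1 implies onset/growth

        # Illustrative graphite/epoxy-like values in kJ/m^2 (hypothetical).
        print(bk_failure_index(g_i=0.15, g_ii=0.10, g_ic=0.21, g_iic=0.77, eta=2.1))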

  14. Benchmarking Equity in Transfer Policies for Career and Technical Associate's Degrees

    ERIC Educational Resources Information Center

    Chase, Megan M.

    2011-01-01

    Using critical policy analysis, this study considers state policies that impede technical credit transfer from public 2-year colleges to 4-year institutions of higher education. The states of Ohio, Texas, Washington, and Wisconsin are considered, and seven policy benchmarks for facilitating the transfer of technical credits are proposed.

  15. Quality Enhancement on E-Learning

    ERIC Educational Resources Information Center

    Ossiannilsson, E. S. I.

    2012-01-01

    Purpose: Benchmarking, a method for quality assurance has not been very commonly used in higher education with regard to e-learning. Today, e-learning is an integral part of higher education, and so should also be an integral part of quality assurance systems. However, quality indicators, benchmarks and critical success factors on e-learning have…

  16. Critical Assessment of Metagenome Interpretation – a benchmark of computational metagenomics software

    PubMed Central

    Sczyrba, Alexander; Hofmann, Peter; Belmann, Peter; Koslicki, David; Janssen, Stefan; Dröge, Johannes; Gregor, Ivan; Majda, Stephan; Fiedler, Jessika; Dahms, Eik; Bremges, Andreas; Fritz, Adrian; Garrido-Oter, Ruben; Jørgensen, Tue Sparholt; Shapiro, Nicole; Blood, Philip D.; Gurevich, Alexey; Bai, Yang; Turaev, Dmitrij; DeMaere, Matthew Z.; Chikhi, Rayan; Nagarajan, Niranjan; Quince, Christopher; Meyer, Fernando; Balvočiūtė, Monika; Hansen, Lars Hestbjerg; Sørensen, Søren J.; Chia, Burton K. H.; Denis, Bertrand; Froula, Jeff L.; Wang, Zhong; Egan, Robert; Kang, Dongwan Don; Cook, Jeffrey J.; Deltel, Charles; Beckstette, Michael; Lemaitre, Claire; Peterlongo, Pierre; Rizk, Guillaume; Lavenier, Dominique; Wu, Yu-Wei; Singer, Steven W.; Jain, Chirag; Strous, Marc; Klingenberg, Heiner; Meinicke, Peter; Barton, Michael; Lingner, Thomas; Lin, Hsin-Hung; Liao, Yu-Chieh; Silva, Genivaldo Gueiros Z.; Cuevas, Daniel A.; Edwards, Robert A.; Saha, Surya; Piro, Vitor C.; Renard, Bernhard Y.; Pop, Mihai; Klenk, Hans-Peter; Göker, Markus; Kyrpides, Nikos C.; Woyke, Tanja; Vorholt, Julia A.; Schulze-Lefert, Paul; Rubin, Edward M.; Darling, Aaron E.; Rattei, Thomas; McHardy, Alice C.

    2018-01-01

    In metagenome analysis, computational methods for assembly, taxonomic profiling and binning are key components facilitating downstream biological data interpretation. However, a lack of consensus about benchmarking datasets and evaluation metrics complicates proper performance assessment. The Critical Assessment of Metagenome Interpretation (CAMI) challenge has engaged the global developer community to benchmark their programs on datasets of unprecedented complexity and realism. Benchmark metagenomes were generated from ~700 newly sequenced microorganisms and ~600 novel viruses and plasmids, including genomes with varying degrees of relatedness to each other and to publicly available ones and representing common experimental setups. Across all datasets, assembly and genome binning programs performed well for species represented by individual genomes, while performance was substantially affected by the presence of related strains. Taxonomic profiling and binning programs were proficient at high taxonomic ranks, with a notable performance decrease below the family level. Parameter settings substantially impacted performances, underscoring the importance of program reproducibility. While highlighting current challenges in computational metagenomics, the CAMI results provide a roadmap for software selection to answer specific research questions. PMID:28967888

  17. Benchmarking Multilayer-HySEA model for landslide generated tsunami. HTHMP validation process.

    NASA Astrophysics Data System (ADS)

    Macias, J.; Escalante, C.; Castro, M. J.

    2017-12-01

    Landslide tsunami hazard may be dominant along significant parts of the coastline around the world, in particular in the USA, as compared to hazards from other tsunamigenic sources. This fact convinced NTHMP of the need to benchmark models for landslide-generated tsunamis, following the same methodology already used for standard tsunami models when the source is seismic. To perform the above-mentioned validation process, a set of candidate benchmarks was proposed. These benchmarks are based on a subset of available laboratory data sets for solid-slide and deformable-slide experiments, and include both submarine and subaerial slides. A benchmark based on a historic field event (Valdez, AK, 1964) closes the list, for a total of seven benchmarks. The Multilayer-HySEA model, which includes non-hydrostatic effects, has been used to perform all the benchmark problems dealing with laboratory experiments proposed at the workshop organized by NTHMP at Texas A&M University - Galveston on January 9-11, 2017. The aim of this presentation is to show some of the latest numerical results obtained with the Multilayer-HySEA (non-hydrostatic) model in the framework of this validation effort. Acknowledgements: This research has been partially supported by the Spanish Government Research project SIMURISK (MTM2015-70490-C02-01-R) and University of Malaga, Campus de Excelencia Internacional Andalucía Tech. The GPU computations were performed at the Unit of Numerical Methods (University of Malaga).

  18. The art and science of using routine outcome measurement in mental health benchmarking.

    PubMed

    McKay, Roderick; Coombs, Tim; Duerden, David

    2014-02-01

    To report on and critique the application of routine outcome measurement data when benchmarking Australian mental health services. The experience of the authors as participants in and facilitators of benchmarking activities is augmented by a review of the literature regarding mental health benchmarking in Australia. Although the published literature is limited, in practice routine outcome measures, in particular the Health of the Nation Outcome Scales (HoNOS) family of measures, are used in a variety of benchmarking activities. Their use in exploring similarities and differences in consumers between services, and in the outcomes of care, is illustrated. This requires the rigour of science in data management and interpretation, supplemented by the art that comes from clinical experience, a desire to reflect on clinical practice, and the flexibility to use incomplete data to explore clinical practice. Routine outcome measurement data can be used in a variety of ways to support mental health benchmarking. With the increasing sophistication of information development in mental health, the opportunity to become involved in benchmarking will continue to increase. The techniques used during benchmarking and the insights gathered may prove useful to support reflection on practice by psychiatrists and other senior mental health clinicians.

  19. Targeting the affordability of cigarettes: a new benchmark for taxation policy in low- and middle-income countries.

    PubMed

    Blecher, Evan

    2010-08-01

    To investigate the appropriateness of tax incidence (the percentage of the retail price occupied by taxes) benchmarking in low- and middle-income countries (LMICs) with rapidly growing economies, and to explore the viability of an alternative tax policy rule based on the affordability of cigarettes. The paper outlines criticisms of tax incidence benchmarking, particularly in the context of LMICs. It then considers an affordability-based benchmark using relative income price (RIP) as a measure of affordability. The RIP measures the percentage of annual per capita GDP required to purchase 100 packs of cigarettes. Using South Africa as a case study of an LMIC, future consumption is simulated using both tax incidence benchmarks and affordability benchmarks. I show that a tax incidence benchmark is not an optimal policy tool in South Africa and that an affordability benchmark could be a more effective means of reducing tobacco consumption in the future. Although a tax incidence benchmark was successful in increasing prices and reducing tobacco consumption in South Africa in the past, this approach has drawbacks, particularly in the context of a rapidly growing LMIC economy. An affordability benchmark represents an appropriate alternative that would be more effective in reducing future cigarette consumption.
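
    The RIP measure defined above is straightforward to compute. A sketch with hypothetical price and income figures:

        def relative_income_price(pack_price, gdp_per_capita):
            """RIP: percentage of annual per-capita GDP required to purchase
            100 packs of cigarettes, per the definition above."""
            return 100.0 * (100 * pack_price) / gdp_per_capita

        # Hypothetical figures: $2.50 per pack, $8,000 per-capita GDP.
        print(f"RIP = {relative_income_price(2.50, 8000.0):.1f} %")   # 3.1 %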

  20. Benchmark On Sensitivity Calculation (Phase III)

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Ivanova, Tatiana; Laville, Cedric; Dyrda, James

    2012-01-01

    The sensitivities of the k{sub eff} eigenvalue to neutron cross sections have become commonly used in similarity studies and as part of the validation algorithm for criticality safety assessments. To test calculations of the sensitivity coefficients, a benchmark study (Phase III) has been established by the OECD-NEA/WPNCS/EG UACSA (Expert Group on Uncertainty Analysis for Criticality Safety Assessment). This paper presents some sensitivity results generated by the benchmark participants using various computational tools based upon different computational methods: SCALE/TSUNAMI-3D and -1D, MONK, APOLLO2-MORET 5, DRAGON-SUSD3D and MMKKENO. The study demonstrates the performance of the tools. It also illustrates how model simplifications impact the sensitivity results and demonstrates the importance of 'implicit' (self-shielding) sensitivities. This work has been a useful step towards verification of the existing and developed sensitivity analysis methods.

  1. Benchmark results and theoretical treatments for valence-to-core x-ray emission spectroscopy in transition metal compounds

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Mortensen, D. R.; Seidler, G. T.; Kas, Joshua J.

    We report measurements of the valence-to-core (VTC) region of the K-shell x-ray emission spectra from several Zn and Fe inorganic compounds, and their critical comparison with several existing theoretical treatments. We find generally good agreement between the respective theories and experiment, and in particular find an important admixture of dipole and quadrupole character for Zn materials that is much weaker in Fe-based systems. These results on materials whose simple crystal structures should not, a priori, pose deep challenges to theory will prove useful in guiding the further development of DFT and time-dependent DFT methods for VTC-XES predictions and their comparison to experiment.

  2. Evaluating Productivity Predictions Under Elevated CO2 Conditions: Multi-Model Benchmarking Across FACE Experiments

    NASA Astrophysics Data System (ADS)

    Cowdery, E.; Dietze, M.

    2016-12-01

    As atmospheric levels of carbon dioxide continue to increase, it is critical that terrestrial ecosystem models can accurately predict ecological responses to the changing environment. Current predictions of net primary productivity (NPP) in response to elevated atmospheric CO2 concentration are highly variable and contain a considerable amount of uncertainty. The Predictive Ecosystem Analyzer (PEcAn) is an informatics toolbox that wraps around an ecosystem model and can be used to help identify which factors drive uncertainty. We tested a suite of models (LPJ-GUESS, MAESPA, GDAY, CLM5, DALEC, ED2), which represent a range from low to high structural complexity, across a range of Free-Air CO2 Enrichment (FACE) experiments: the Kennedy Space Center Open Top Chamber Experiment, the Rhinelander FACE experiment, the Duke Forest FACE experiment and the Oak Ridge Experiment on CO2 Enrichment. These tests were implemented in a novel benchmarking workflow that is automated, repeatable, and generalized to incorporate different sites and ecological models. Observational data from the FACE experiments represent a first test of this flexible, extensible approach aimed at providing repeatable tests of model process representation. To identify and evaluate the assumptions causing inter-model differences we used PEcAn to perform model sensitivity and uncertainty analysis, not only to assess the components of NPP, but also to examine system processes such as nutrient uptake and water use. Combining the observed patterns of uncertainty between multiple models with results of the recent FACE-model data synthesis project (FACE-MDS) can help identify which processes need further study and additional data constraints. These findings can be used to inform future experimental design and in turn can provide an informative starting point for data assimilation.

  3. 75 FR 75628 - Final Guidance for Federal Departments and Agencies on Establishing, Applying, and Revising...

    Federal Register 2010, 2011, 2012, 2013, 2014

    2010-12-06

    ... and facilitate the use of documentation in future evaluations and benchmarking. Extraordinary.... Benchmarking Other Agencies' Experiences A Federal agency cannot rely on another agency's categorical exclusion... was established. Federal agencies can also substantiate categorical exclusions by benchmarking, or...

  4. Performance analysis of fusion nuclear-data benchmark experiments for light to heavy materials in MeV energy region with a neutron spectrum shifter

    NASA Astrophysics Data System (ADS)

    Murata, Isao; Ohta, Masayuki; Miyamaru, Hiroyuki; Kondo, Keitaro; Yoshida, Shigeo; Iida, Toshiyuki; Ochiai, Kentaro; Konno, Chikara

    2011-10-01

    Nuclear data are indispensable for the development of fusion reactor candidate materials. However, benchmarking of the nuclear data in the MeV energy region is not yet adequate. In the present study, benchmark performance in the MeV energy region was investigated theoretically for experiments using a 14 MeV neutron source. We carried out a systematic analysis for light to heavy materials. As a result, the benchmark performance for the neutron spectrum was confirmed to be acceptable, while for gamma-rays it was not sufficiently accurate. Consequently, a spectrum shifter has to be applied. Beryllium had the best performance as a shifter. Moreover, a preliminary examination was made of whether it is really acceptable that only the spectrum before the last collision is considered in the benchmark performance analysis. It was pointed out that not only the last collision but also earlier collisions should be considered equally in the benchmark performance analysis.

  5. Exploring tropical forest vegetation dynamics using the FATES model

    NASA Astrophysics Data System (ADS)

    Koven, C. D.; Fisher, R.; Knox, R. G.; Chambers, J.; Kueppers, L. M.; Christoffersen, B. O.; Davies, S. J.; Dietze, M.; Holm, J.; Massoud, E. C.; Muller-Landau, H. C.; Powell, T.; Serbin, S.; Shuman, J. K.; Walker, A. P.; Wright, S. J.; Xu, C.

    2017-12-01

    Tropical forest vegetation dynamics represent a critical climate feedback in the Earth system, which is poorly represented in current global modeling approaches. We discuss recent progress on exploring these dynamics using the Functionally Assembled Terrestrial Ecosystem Simulator (FATES), a demographic vegetation model for the CESM and ACME ESMs. We will discuss benchmarks of FATES predictions for forest structure against inventory sites, sensitivity of FATES predictions of size and age structure to model parameter uncertainty, and experiments using the FATES model to explore PFT competitive dynamics and the dynamics of size and age distributions in responses to changing climate and CO2.

  6. Benchmarking on Tsunami Currents with ComMIT

    NASA Astrophysics Data System (ADS)

    Sharghi vand, N.; Kanoglu, U.

    2015-12-01

    There were no standards for the validation and verification of tsunami numerical models before the 2004 Indian Ocean tsunami. Even so, a number of numerical models had been used for inundation mapping efforts, evaluation of critical structures, etc. without validation and verification. After 2004, the NOAA Center for Tsunami Research (NCTR) established standards for the validation and verification of tsunami numerical models (Synolakis et al. 2008 Pure Appl. Geophys. 165, 2197-2228), which will be used in the evaluation of critical structures such as nuclear power plants against tsunami attack. NCTR presented analytical, experimental and field benchmark problems aimed at estimating maximum runup, which are widely accepted by the community. Recently, benchmark problems were suggested by the US National Tsunami Hazard Mitigation Program Mapping & Modeling Benchmarking Workshop: Tsunami Currents, held on February 9-10, 2015 at Portland, Oregon, USA (http://nws.weather.gov/nthmp/index.html). These benchmark problems concentrate on validation and verification of tsunami numerical models for tsunami currents. Three of the benchmark problems were: current measurements of the Japan 2011 tsunami in Hilo Harbor, Hawaii, USA and in Tauranga Harbor, New Zealand, and a single long-period wave propagating onto a small-scale experimental model of the town of Seaside, Oregon, USA. These benchmark problems were implemented in the Community Modeling Interface for Tsunamis (ComMIT) (Titov et al. 2011 Pure Appl. Geophys. 168, 2121-2131), a user-friendly interface to the validated and verified Method of Splitting Tsunami (MOST) model (Titov and Synolakis 1995 J. Waterw. Port Coastal Ocean Eng. 121, 308-316) developed by NCTR. The modeling results are compared with the required benchmark data, showing good agreement, and the results are discussed. Acknowledgment: The research leading to these results has received funding from the European Union's Seventh Framework Programme (FP7/2007-2013) under grant agreement no 603839 (Project ASTARTE - Assessment, Strategy and Risk Reduction for Tsunamis in Europe)

  7. Experimental Criticality Benchmarks for SNAP 10A/2 Reactor Cores

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Krass, A.W.

    2005-12-19

    This report describes computational benchmark models for nuclear criticality derived from descriptions of the Systems for Nuclear Auxiliary Power (SNAP) Critical Assembly (SCA)-4B experimental criticality program conducted by Atomics International during the early 1960s. The selected experimental configurations consist of fueled SNAP 10A/2-type reactor cores subject to varied conditions of water immersion and reflection under experimental control to measure neutron multiplication. SNAP 10A/2-type reactor cores are compact volumes fueled and moderated with the hydride of highly enriched uranium-zirconium alloy. Specifications for the materials and geometry needed to describe a given experimental configuration for a model using MCNP5 are provided. The material and geometry specifications are adequate to permit user development of input for alternative nuclear safety codes, such as KENO. A total of 73 distinct experimental configurations are described.

  8. Multi-beam effects on backscatter and its saturation in experiments with conditions relevant to ignition

    DOE PAGES

    Kirkwood, R. K.; Michel, P.; London, R.; ...

    2011-05-26

    To optimize the coupling to indirect drive targets in the National Ignition Campaign (NIC) at the National Ignition Facility, a model of stimulated scattering produced by multiple laser beams is used. The model has shown that scatter of the 351 nm beams can be significantly enhanced over single beam predictions in ignition relevant targets by the interaction of the multiple crossing beams with a millimeter scale length, 2.5 keV, 0.02-0.05 × critical density, plasma. The model uses a suite of simulation capabilities and its key aspects are benchmarked with experiments at smaller laser facilities. The model has also influenced the design of the initial targets used for NIC by showing that both the stimulated Brillouin scattering (SBS) and stimulated Raman scattering (SRS) can be reduced by the reduction of the plasma density in the beam intersection volume that is caused by an increase in the diameter of the laser entrance hole (LEH). In this model, a linear wave response leads to a small gain exponent produced by each crossing quad of beams (≲1 per quad), which amplifies the scattering that originates in the target interior where the individual beams are separated and crosses many or all other beams near the LEH as it exits the target. As a result, all 23 crossing quads of beams produce a total gain exponent of several or greater for seeds of light with wavelengths in the range that is expected for scattering from the interior (480 to 580 nm for SRS). This means that in the absence of wave saturation, the overall multi-beam scatter will be significantly larger than the expectations for single beams. The potential for non-linear saturation of the Langmuir waves amplifying SRS light is also analyzed with a two dimensional, vectorized, particle in cell code (2D VPIC) that is benchmarked by amplification experiments in a plasma with normalized parameters similar to ignition targets. The physics of cumulative scattering by multiple crossing beams that simultaneously amplify the same SBS light wave is further demonstrated in experiments that benchmark the linear models for the ion waves amplifying SBS. Here, the expectation from this model and its experimental benchmarks is shown to be consistent with observations of stimulated Raman scatter in the first series of energetic experiments with ignition targets, confirming the importance of the multi-beam scattering model for optimizing coupling.
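
    The multi-beam scaling quoted above can be made explicit: in the linear regime, the gain exponents of the individual crossing quads add, so even a per-quad gain below one produces a large total amplification. In LaTeX notation:

      G_{\mathrm{tot}} \;=\; \sum_{i=1}^{N} g_i, \qquad \frac{I_{\mathrm{scat}}}{I_{\mathrm{seed}}} \;=\; e^{G_{\mathrm{tot}}}

    With N = 23 quads and, purely for illustration, g_i of 0.2-0.5 (consistent with the quoted ≲1 per quad), G_tot reaches several and the exponential amplification is an order of magnitude or more above any single-beam expectation.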

  9. Convection Effects During Bulk Transparent Alloy Solidification in DECLIC-DSI and Phase-Field Simulations in Diffusive Conditions

    NASA Astrophysics Data System (ADS)

    Mota, F. L.; Song, Y.; Pereda, J.; Billia, B.; Tourret, D.; Debierre, J.-M.; Trivedi, R.; Karma, A.; Bergeon, N.

    2017-08-01

    To study the dynamical formation and evolution of cellular and dendritic arrays under diffusive growth conditions, three-dimensional (3D) directional solidification experiments were conducted in microgravity on a model transparent alloy onboard the International Space Station using the Directional Solidification Insert in the DEvice for the study of Critical LIquids and Crystallization. Selected experiments were repeated on Earth under gravity-driven fluid flow to evidence convection effects. Both radial and axial macrosegregation resulting from convection are observed in ground experiments, and primary spacings measured on Earth and microgravity experiments are noticeably different. The microgravity experiments provide unique benchmark data for numerical simulations of spatially extended pattern formation under diffusive growth conditions. The results of 3D phase-field simulations highlight the importance of accurately modeling thermal conditions that strongly influence the front recoil of the interface and the selection of the primary spacing. The modeling predictions are in good quantitative agreement with the microgravity experiments.

  10. Study rationale and design of OPTIMISE, a randomised controlled trial on the effect of benchmarking on quality of care in type 2 diabetes mellitus.

    PubMed

    Nobels, Frank; Debacker, Noëmi; Brotons, Carlos; Elisaf, Moses; Hermans, Michel P; Michel, Georges; Muls, Erik

    2011-09-22

    To investigate the effect of physician- and patient-specific feedback with benchmarking on the quality of care in adults with type 2 diabetes mellitus (T2DM). Study centres in six European countries were randomised to either a benchmarking or control group. Physicians in both groups received feedback on modifiable outcome indicators (glycated haemoglobin [HbA1c], glycaemia, total cholesterol, high density lipoprotein-cholesterol, low density lipoprotein [LDL]-cholesterol and triglycerides) for each patient at 0, 4, 8 and 12 months, based on the four times yearly control visits recommended by international guidelines. The benchmarking group also received comparative results on three critical quality indicators of vascular risk (HbA1c, LDL-cholesterol and systolic blood pressure [SBP]), checked against the results of their colleagues from the same country, and versus pre-set targets. After 12 months of follow-up, the percentage of patients achieving the pre-determined targets for the three critical quality indicators will be assessed in the two groups. Recruitment was completed in December 2008 with 3994 evaluable patients. This paper discusses the study rationale and design of OPTIMISE, a randomised controlled study that will help assess whether benchmarking is a useful clinical tool for improving outcomes in T2DM in primary care. NCT00681850.

  11. Study rationale and design of OPTIMISE, a randomised controlled trial on the effect of benchmarking on quality of care in type 2 diabetes mellitus

    PubMed Central

    2011-01-01

    Background To investigate the effect of physician- and patient-specific feedback with benchmarking on the quality of care in adults with type 2 diabetes mellitus (T2DM). Methods Study centres in six European countries were randomised to either a benchmarking or control group. Physicians in both groups received feedback on modifiable outcome indicators (glycated haemoglobin [HbA1c], glycaemia, total cholesterol, high density lipoprotein-cholesterol, low density lipoprotein [LDL]-cholesterol and triglycerides) for each patient at 0, 4, 8 and 12 months, based on the four times yearly control visits recommended by international guidelines. The benchmarking group also received comparative results on three critical quality indicators of vascular risk (HbA1c, LDL-cholesterol and systolic blood pressure [SBP]), checked against the results of their colleagues from the same country, and versus pre-set targets. After 12 months of follow-up, the percentage of patients achieving the pre-determined targets for the three critical quality indicators will be assessed in the two groups. Results Recruitment was completed in December 2008 with 3994 evaluable patients. Conclusions This paper discusses the study rationale and design of OPTIMISE, a randomised controlled study that will help assess whether benchmarking is a useful clinical tool for improving outcomes in T2DM in primary care. Trial registration NCT00681850 PMID:21939502

  12. All inclusive benchmarking.

    PubMed

    Ellis, Judith

    2006-07-01

    The aim of this article is to review published descriptions of benchmarking activity and synthesize benchmarking principles to encourage the acceptance and use of Essence of Care as a new benchmarking approach to continuous quality improvement, and to promote its acceptance as an integral and effective part of benchmarking activity in health services. The Essence of Care was launched by the Department of Health in England in 2001 to provide a benchmarking tool kit to support continuous improvement in the quality of fundamental aspects of health care, for example, privacy and dignity, nutrition and hygiene. The tool kit is now being effectively used by some frontline staff. However, use is inconsistent, and the value of the tool kit, or the support that clinical practice benchmarking requires to be effective, is not always recognized or provided by National Health Service managers, who are absorbed with the use of quantitative benchmarking approaches and the measurability of comparative performance data. This review of the published benchmarking literature was obtained through an ever-narrowing search strategy, commencing from benchmarking within the quality improvement literature, through to benchmarking activity in health services, and including consideration of web-based benchmarking data as well as published examples of benchmarking approaches and models. This supported identification of how benchmarking approaches have developed and been used while remaining true to the basic benchmarking principles of continuous improvement through comparison and sharing (Camp 1989). Descriptions of models and exemplars of quantitative, and specifically performance, benchmarking activity in industry abound (Camp 1998), with far fewer examples of more qualitative and process benchmarking approaches in use in the public services and then applied to the health service (Bullivant 1998). The literature is also mainly descriptive in its support of the effectiveness of benchmarking activity, and although this does not seem to have restricted its popularity in quantitative activity, reticence about the value of the more qualitative approaches, for example Essence of Care, needs to be overcome in order to improve the quality of patient care and experiences. The perceived immeasurability and subjectivity of Essence of Care and clinical practice benchmarks mean that these benchmarking approaches are not always accepted or supported by health service organizations as valid benchmarking activity. In conclusion, Essence of Care benchmarking is a sophisticated clinical practice benchmarking approach which needs to be accepted as an integral part of health service benchmarking activity to support improvement in the quality of patient care and experiences.

  13. A Quantitative Methodology for Determining the Critical Benchmarks for Project 2061 Strand Maps

    ERIC Educational Resources Information Center

    Kuhn, G.

    2008-01-01

    The American Association for the Advancement of Science (AAAS) was tasked with identifying the key science concepts for science literacy in K-12 students in America (AAAS, 1990, 1993). The AAAS Atlas of Science Literacy (2001) has organized roughly half of these science concepts or benchmarks into fifty flow charts. Each flow chart or strand map…

  14. Academic Productivity in Psychiatry: Benchmarks for the H-Index.

    PubMed

    MacMaster, Frank P; Swansburg, Rose; Rittenbach, Katherine

    2017-08-01

    Bibliometrics play an increasingly critical role in the assessment of faculty for promotion and merit increases. Bibliometrics is the statistical analysis of publications, aimed at evaluating their impact. The objective of this study is to describe h-index and citation benchmarks in academic psychiatry. Faculty lists were acquired from online resources for all academic departments of psychiatry listed as having residency training programs in Canada (as of June 2016). Potential authors were then searched on Web of Science (Thomson Reuters) for their corresponding h-index and total number of citations. The sample included 1683 faculty members in academic psychiatry departments. Restricting the sample to those with a rank of assistant, associate, or full professor resulted in 1601 faculty members (assistant = 911, associate = 387, full = 303). The h-index and total citations differed significantly by academic rank. Both were highest in the full professor rank, followed by associate, then assistant. The range in each, however, was large. This study provides the initial benchmarks for the h-index and total citations in academic psychiatry. Regardless of any controversies or criticisms of bibliometrics, they are increasingly influencing promotion, merit increases, and grant support. As such, benchmarking by specialties is needed in order to provide needed context.
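
    For reference, the h-index underlying these benchmarks is straightforward to compute; a minimal sketch (the citation counts in the example are invented):

      def h_index(citations):
          """Largest h such that at least h papers have at least h citations each."""
          h = 0
          for i, c in enumerate(sorted(citations, reverse=True), start=1):
              if c >= i:
                  h = i
              else:
                  break
          return h

      print(h_index([10, 8, 5, 2, 1]))  # -> 3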

  15. [Benchmark experiment to verify radiation transport calculations for dosimetry in radiation therapy].

    PubMed

    Renner, Franziska

    2016-09-01

    Monte Carlo simulations are regarded as the most accurate method of solving complex problems in the field of dosimetry and radiation transport. In (external) radiation therapy they are increasingly used for the calculation of dose distributions during treatment planning. In comparison to other algorithms for the calculation of dose distributions, Monte Carlo methods have the capability of improving the accuracy of dose calculations - especially under complex circumstances (e.g. consideration of inhomogeneities). However, there is a lack of knowledge of how accurate the results of Monte Carlo calculations are on an absolute basis. A practical verification of the calculations can be performed by direct comparison with the results of a benchmark experiment. This work presents such a benchmark experiment and compares its results (with detailed consideration of measurement uncertainty) with the results of Monte Carlo calculations using the well-established Monte Carlo code EGSnrc. The experiment was designed to have parallels to external beam radiation therapy with respect to the type and energy of the radiation, the materials used and the kind of dose measurement. Because the properties of the beam have to be well known in order to compare the results of the experiment and the simulation on an absolute basis, the benchmark experiment was performed using the research electron accelerator of the Physikalisch-Technische Bundesanstalt (PTB), whose beam was accurately characterized in advance. The benchmark experiment and the corresponding Monte Carlo simulations were carried out for two different types of ionization chambers and the results were compared. Considering the uncertainty, which is about 0.7 % for the experimental values and about 1.0 % for the Monte Carlo simulation, the results of the simulation and the experiment coincide.
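
    The closing agreement statement can be read quantitatively: two values with independent uncertainties are consistent if their difference lies within the combined standard uncertainty, which for the figures quoted here is, in LaTeX notation,

      u_c \;=\; \sqrt{u_{\mathrm{exp}}^2 + u_{\mathrm{MC}}^2} \;=\; \sqrt{(0.7\,\%)^2 + (1.0\,\%)^2} \;\approx\; 1.2\,\%

    so simulated and measured doses agreeing to within about 1.2 % supports the stated conclusion.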

  16. ZPPR-20 phase D : a cylindrical assembly of polyethylene moderated U metal reflected by beryllium oxide and polyethylene.

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Lell, R.; Grimm, K.; McKnight, R.

    The Zero Power Physics Reactor (ZPPR) fast critical facility was built at the Argonne National Laboratory-West (ANL-W) site in Idaho in 1969 to obtain neutron physics information necessary for the design of fast breeder reactors. The ZPPR-20D Benchmark Assembly was part of a series of cores built in Assembly 20 (References 1 through 3) of the ZPPR facility to provide data for developing a nuclear power source for space applications (SP-100). The assemblies were beryllium oxide reflected and had core fuel compositions containing enriched uranium fuel, niobium and rhenium. ZPPR-20 Phase C (HEU-MET-FAST-075) was built as the reference flight configuration. Two other configurations, Phases D and E, simulated accident scenarios. Phase D modeled the water immersion scenario during a launch accident, and Phase E (SUB-HEU-MET-FAST-001) modeled the earth burial scenario during a launch accident. Two configurations were recorded for the simulated water immersion accident scenario (Phase D): the critical configuration, documented here, and the subcritical configuration (SUB-HEU-MET-MIXED-001). Experiments in Assembly 20 Phases 20A through 20F were performed in 1988. The reference water immersion configuration for the ZPPR-20D assembly was obtained as reactor loading 129 on October 7, 1988 with a fissile mass of 167.477 kg and a reactivity of -4.626 ± 0.044 ¢ (k ≈ 0.9997). The SP-100 core was to be constructed of highly enriched uranium nitride, niobium, rhenium and depleted lithium. The core design called for two enrichment zones with niobium-1% zirconium alloy fuel cladding and core structure. Rhenium was to be used as a fuel pin liner to provide shutdown in the event of water immersion and flooding. The core coolant was to be depleted lithium metal (7Li). The core was to be surrounded radially by a niobium reactor vessel and bypass which would carry the lithium coolant to the forward inlet plenum. Immediately inside the reactor vessel was a rhenium baffle which would act as a neutron curtain in the event of water immersion. A fission gas plenum and coolant inlet plenum were located axially forward of the core. Some material substitutions had to be made in mocking up the SP-100 design. The ZPPR-20 critical assemblies were fueled by 93% enriched uranium metal because uranium nitride, which was the SP-100 fuel type, was not available. ZPPR Assembly 20D was designed to simulate a water immersion accident. The water was simulated by polyethylene (CH2), which contains a similar amount of hydrogen and has a similar density. A very accurate transformation to a simplified model is needed to make any of the ZPPR assemblies a practical criticality-safety benchmark. There is simply too much geometric detail in an exact model of a ZPPR assembly, particularly as complicated an assembly as ZPPR-20D. The transformation must reduce the detail to a practical level without masking any of the important features of the critical experiment, and it must do this without increasing the total uncertainty far beyond that of the original experiment. Such a transformation is described in a later section. First, Assembly 20D was modeled in full detail--every plate, drawer, matrix tube, and air gap was modeled explicitly. Then the regionwise compositions and volumes from this model were converted to an RZ model. ZPPR Assembly 20D has been determined to be an acceptable criticality-safety benchmark experiment.
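
    The quoted reactivity and multiplication factor are mutually consistent under the usual definitions (the delayed neutron fraction below is a typical HEU value assumed for illustration, not a number from the evaluation). In LaTeX notation:

      \rho \;=\; \frac{k_{\mathrm{eff}} - 1}{k_{\mathrm{eff}}}, \qquad \rho\,[\$] \;=\; \frac{\rho}{\beta_{\mathrm{eff}}}

    With rho = -4.626 ¢ = -0.04626 $ and beta_eff ≈ 0.0065, rho ≈ -3.0e-4, giving k_eff ≈ 0.9997 as quoted.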

  17. An Approach for Validating Actinide and Fission Product Burnup Credit Criticality Safety Analyses: Criticality (k eff) Predictions

    DOE PAGES

    Scaglione, John M.; Mueller, Don E.; Wagner, John C.

    2014-12-01

    One of the most important remaining challenges associated with expanded implementation of burnup credit in the United States is the validation of depletion and criticality calculations used in the safety evaluation--in particular, the availability and use of applicable measured data to support validation, especially for fission products (FPs). Applicants and regulatory reviewers have been constrained by both a scarcity of data and a lack of a clear technical basis or approach for use of the data. This paper describes a validation approach for commercial spent nuclear fuel (SNF) criticality safety (k eff) evaluations based on best-available data and methods and applies the approach for representative SNF storage and transport configurations/conditions to demonstrate its usage and applicability, as well as to provide reference bias results. The criticality validation approach utilizes not only available laboratory critical experiment (LCE) data from the International Handbook of Evaluated Criticality Safety Benchmark Experiments and the French Haut Taux de Combustion program to support validation of the principal actinides but also calculated sensitivities, nuclear data uncertainties, and limited available FP LCE data to predict and verify individual biases for relevant minor actinides and FPs. The results demonstrate that (a) sufficient critical experiment data exist to adequately validate k eff calculations via conventional validation approaches for the primary actinides, (b) sensitivity-based critical experiment selection is more appropriate for generating accurate application model bias and uncertainty, and (c) calculated sensitivities and nuclear data uncertainties can be used for generating conservative estimates of bias for minor actinides and FPs. Results based on SCALE 6.1 and the ENDF/B-VII.0 cross-section libraries indicate that a conservative estimate of the bias for the minor actinides and FPs is 1.5% of their worth within the application model. Finally, this paper provides a detailed description of the approach and its technical bases, describes the application of the approach for representative pressurized water reactor and boiling water reactor safety analysis models, and provides reference bias results based on the prerelease SCALE 6.1 code package and ENDF/B-VII nuclear cross-section data.
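
    The conservative bias estimate quoted above has a compact form: if Delta k_i denotes the reactivity worth of minor actinide or fission product i in the application model, the penalty is (our notation, not the paper's), in LaTeX notation,

      \Delta k_{\mathrm{bias}} \;=\; 0.015 \times \sum_{i} \left|\Delta k_i\right|

    i.e. 1.5% of the summed worths of the nuclides that cannot be validated directly against critical experiments.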

  18. Object-Oriented Implementation of the NAS Parallel Benchmarks using Charm++

    NASA Technical Reports Server (NTRS)

    Krishnan, Sanjeev; Bhandarkar, Milind; Kale, Laxmikant V.

    1996-01-01

    This report describes experiences with implementing the NAS Computational Fluid Dynamics benchmarks using a parallel object-oriented language, Charm++. Our main objective in implementing the NAS CFD kernel benchmarks was to develop a code that could be used to easily experiment with different domain decomposition strategies and dynamic load balancing. We also wished to leverage the object-orientation provided by the Charm++ parallel object-oriented language, to develop reusable abstractions that would simplify the process of developing parallel applications. We first describe the Charm++ parallel programming model and the parallel object array abstraction, then go into detail about each of the Scalar Pentadiagonal (SP) and Lower/Upper Triangular (LU) benchmarks, along with performance results. Finally we conclude with an evaluation of the methodology used.

  19. Deterministic Modeling of the High Temperature Test Reactor with DRAGON-HEXPEDITE

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    J. Ortensi; M.A. Pope; R.M. Ferrer

    2010-10-01

    The Idaho National Laboratory (INL) is tasked with the development of reactor physics analysis capability for the Next Generation Nuclear Plant (NGNP) project. In order to examine the INL's current prismatic reactor analysis tools, the project is conducting a benchmark exercise based on modeling the High Temperature Test Reactor (HTTR). This exercise entails the development of a model for the initial criticality, a 19 fuel column thin annular core, and the fully loaded core critical condition with 30 fuel columns. Special emphasis is devoted to physical phenomena and artifacts in HTTR that are similar to phenomena and artifacts in the NGNP base design. The DRAGON code is used in this study since it offers significant ease and versatility in modeling prismatic designs. DRAGON can generate transport solutions via Collision Probability (CP), Method of Characteristics (MOC) and Discrete Ordinates (Sn) methods. A fine group cross-section library based on the SHEM 281 energy structure is used in the DRAGON calculations. The results from this study show reasonable agreement in the calculation of the core multiplication factor with the MC methods, but a consistent bias of 2-3% with the experimental values is obtained. This systematic error has also been observed in other HTTR benchmark efforts and is well documented in the literature. The ENDF/B-VII graphite and 235U cross sections appear to be the main source of the error. The isothermal temperature coefficients calculated with the fully loaded core configuration agree well with other benchmark participants but are 40% higher than the experimental values. This discrepancy with the measurement partially stems from the fact that during the experiments the control rods were adjusted to maintain criticality, whereas in the model, the rod positions were fixed. In addition, this work includes a brief study of a cross section generation approach that seeks to decouple the domain in order to account for neighbor effects. This spectral interpenetration is a dominant effect in annular HTR physics. This analysis methodology should be further explored in order to reduce the error that is systematically propagated in the traditional generation of cross sections.

  20. Is It Time To Consider Global Sharing of Integral Physics Data?

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Harold F. McFarlane

    The innocent days of the Atoms for Peace program vanished with the suicide attack on the World Trade Center in New York City that occurred while the GLOBAL 2001 international nuclear fuel cycle conference was convened in Paris. Today's reality is that maintaining an inventory of unirradiated highly enriched uranium or plutonium for critical experiments requires a facility to accept substantial security cost and intrusion. In the context of a large collection of benchmark integral experiments collected over several decades and the ongoing rapid advances in computer modeling and simulation, there seems to be ample incentive to reduce both the number of facilities and material inventory quantities worldwide. As a result of ongoing nonproliferation initiatives, there are viable programs that will accept highly enriched uranium for down blending into commercial fuel. Nevertheless, there are formidable hurdles to overcome before national institutions will voluntarily give up existing nuclear research capabilities. GLOBAL 2005 was the appropriate forum to begin fostering a new spirit of cooperation that could lead to improved international security and better use of precious research and development resources, while ensuring access to existing and future critical experiment data.

  1. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Marshall, William BJ J; Rearden, Bradley T

    The validation of neutron transport methods used in nuclear criticality safety analyses is required by consensus American National Standards Institute/American Nuclear Society (ANSI/ANS) standards. In the last decade, there has been an increased interest in correlations among critical experiments used in validation that have shared physical attributes and which impact the independence of each measurement. The statistical methods included in many of the frequently cited guidance documents on performing validation calculations incorporate the assumption that all individual measurements are independent, so little guidance is available to practitioners on the topic. Typical guidance includes recommendations to select experiments from multiple facilities and experiment series in an attempt to minimize the impact of correlations or common-cause errors in experiments. Recent efforts have been made both to determine the magnitude of such correlations between experiments and to develop and apply methods for adjusting the bias and bias uncertainty to account for the correlations. This paper describes recent work performed at Oak Ridge National Laboratory using the Sampler sequence from the SCALE code system to develop experimental correlations using a Monte Carlo sampling technique. Sampler will be available for the first time with the release of SCALE 6.2, and a brief introduction to the methods used to calculate experiment correlations within this new sequence is presented in this paper. Techniques to utilize these correlations in the establishment of upper subcritical limits are the subject of a companion paper and will not be discussed here. Example experimental uncertainties and correlation coefficients are presented for a variety of low-enriched uranium water-moderated lattice experiments selected for use in a benchmark exercise by the Working Party on Nuclear Criticality Safety Subgroup on Uncertainty Analysis in Criticality Safety Analyses. The results include studies on the effect of fuel rod pitch on the correlations, and some observations are also made regarding difficulties in determining experimental correlations using the Monte Carlo sampling technique.
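
    The Monte Carlo sampling idea can be illustrated with a toy model: when two experiments share a common-cause uncertainty (for example, a fuel pellet tolerance common to rods used in both), sampling that shared parameter induces a correlation between the sampled keff populations. All numbers below are invented for illustration and bear no relation to Sampler's actual inputs:

      import numpy as np

      rng = np.random.default_rng(42)
      n = 10_000  # Monte Carlo samples

      shared = rng.normal(0.0, 0.0008, n)   # common-cause keff perturbation
      indep_a = rng.normal(0.0, 0.0005, n)  # experiment A only
      indep_b = rng.normal(0.0, 0.0005, n)  # experiment B only

      keff_a = 1.0000 + shared + indep_a
      keff_b = 0.9995 + shared + indep_b

      # Pearson correlation between the two sampled keff populations;
      # expected value is sigma_shared^2 / (sigma_shared^2 + sigma_indep^2) ~ 0.72.
      print(np.corrcoef(keff_a, keff_b)[0, 1])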

  2. Benchmarking in Czech Higher Education: The Case of Schools of Economics

    ERIC Educational Resources Information Center

    Placek, Michal; Ochrana, František; Pucek, Milan

    2015-01-01

    This article describes the use of benchmarking in universities in the Czech Republic and academics' experiences with it. It is based on research conducted among academics from economics schools in Czech public and private universities. The results identified several issues regarding the utilisation and understanding of benchmarking in the Czech…

  3. Particle Engulfment and Pushing By Solidifying Interfaces - Recent Theoretical and Experimental Developments

    NASA Technical Reports Server (NTRS)

    Stefanescu, D. M.; Catalina, A. V.; Juretzko, Frank R.; Sen, Subhayu; Curreri, P. A.

    2003-01-01

    The objectives of the work on Particle Engulfment and Pushing by Solidifying Interfaces (PEP) include: 1) to obtain fundamental understanding of the physics of particle pushing and engulfment, 2) to develop mathematical models to describe the phenomenon, and 3) to perform critical experiments in the microgravity environment of space to provide benchmark data for model validation. Successful completion of this project will yield vital information relevant to a diverse area of terrestrial applications. With PEP being a long-term research effort, this report will focus on advances in the theoretical treatment of the solid/liquid interface interaction with an approaching particle, experimental validation of some aspects of the developed models, and the experimental design aspects of future experiments to be performed on board the International Space Station.

  4. The Schultz MIDI Benchmarking Toolbox for MIDI interfaces, percussion pads, and sound cards.

    PubMed

    Schultz, Benjamin G

    2018-04-17

    The Musical Instrument Digital Interface (MIDI) was readily adopted for auditory sensorimotor synchronization experiments. These experiments typically use MIDI percussion pads to collect responses, a MIDI-USB converter (or MIDI-PCI interface) to record responses on a PC and manipulate feedback, and an external MIDI sound module to generate auditory feedback. Previous studies have suggested that auditory feedback latencies can be introduced by these devices. The Schultz MIDI Benchmarking Toolbox (SMIDIBT) is an open-source, Arduino-based package designed to measure the point-to-point latencies incurred by several devices used in the generation of response-triggered auditory feedback. Experiment 1 showed that MIDI messages are sent and received within 1 ms (on average) in the absence of any external MIDI device. Latencies decreased when the baud rate increased above the MIDI protocol default (31,250 bps). Experiment 2 benchmarked the latencies introduced by different MIDI-USB and MIDI-PCI interfaces. MIDI-PCI was superior to MIDI-USB, primarily because MIDI-USB is subject to USB polling. Experiment 3 tested three MIDI percussion pads. Both the audio and MIDI message latencies were significantly greater than 1 ms for all devices, and there were significant differences between percussion pads and instrument patches. Experiment 4 benchmarked four MIDI sound modules. Audio latencies were significantly greater than 1 ms, and there were significant differences between sound modules and instrument patches. These experiments suggest that millisecond accuracy might not be achievable with MIDI devices. The SMIDIBT can be used to benchmark a range of MIDI devices, thus allowing researchers to make informed decisions when choosing testing materials and to arrive at an acceptable latency at their discretion.
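
    The latency bookkeeping itself is conceptually simple; a minimal sketch of the statistics computed from paired send/receive timestamps (the timestamps below are invented; the SMIDIBT performs the actual timing on an Arduino):

      import statistics

      def latency_stats(sent_ms, received_ms):
          """Point-to-point latencies (ms) from paired send/receive timestamps."""
          lat = [r - s for s, r in zip(sent_ms, received_ms)]
          return {"mean": statistics.mean(lat),
                  "sd": statistics.stdev(lat),
                  "max": max(lat)}

      sent = [0.0, 10.0, 20.0, 30.0]
      received = [1.9, 12.2, 21.8, 32.4]
      print(latency_stats(sent, received))  # mean ~2.1 ms, above the 1 ms target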

  5. From the ideal market to the ideal clinic: constructing a normative standard of fairness for human subjects research.

    PubMed

    Phillips, Trisha

    2011-02-01

    Preventing exploitation in human subjects research requires a benchmark of fairness against which to judge the distribution of the benefits and burdens of a trial. This paper proposes the ideal market and its fair market price as a criterion of fairness. The ideal market approach is not new to discussions about exploitation, so this paper reviews Wertheimer's inchoate presentation of the ideal market as a principle of fairness, the attempt of Emanuel and colleagues to apply the ideal market to human subjects research, and Ballantyne's criticisms of both the ideal market and the resulting benchmark of fairness. It argues that the criticism of this particular benchmark is on point, but that the rejection of the ideal market is mistaken. After presenting a complete account of the ideal market, this paper proposes a new method for applying the ideal market to human subjects research and illustrates the proposal by considering a sample case.

  6. A suite of benchmark and challenge problems for enhanced geothermal systems

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    White, Mark; Fu, Pengcheng; McClure, Mark

    A diverse suite of numerical simulators is currently being applied to predict or understand the performance of enhanced geothermal systems (EGS). To build confidence and identify critical development needs for these analytical tools, the United States Department of Energy, Geothermal Technologies Office sponsored a Code Comparison Study (GTO-CCS), with participants from universities, industry, and national laboratories. A principal objective for the study was to create a community forum for improvement and verification of numerical simulators for EGS modeling. Teams participating in the study represented U.S. national laboratories, universities, and industries, and each team brought unique numerical simulation capabilities to bear on the problems. Two classes of problems were developed during the study: benchmark problems and challenge problems. The benchmark problems were structured to test the ability of the collection of numerical simulators to solve various combinations of coupled thermal, hydrologic, geomechanical, and geochemical processes. This class of problems was strictly defined in terms of properties, driving forces, initial conditions, and boundary conditions. The challenge problems were based on the enhanced geothermal systems research conducted at Fenton Hill, near Los Alamos, New Mexico, between 1974 and 1995. The problems involved two phases of research: stimulation, development, and circulation in two separate reservoirs. The challenge problems had specific questions to be answered via numerical simulation in three topical areas: 1) reservoir creation/stimulation, 2) reactive and passive transport, and 3) thermal recovery. Whereas the benchmark class of problems was designed to test capabilities for modeling coupled processes under strictly specified conditions, the stated objective for the challenge class of problems was to demonstrate what new understanding of the Fenton Hill experiments could be realized via the application of modern numerical simulation tools by recognized expert practitioners. We present the suite of benchmark and challenge problems developed for the GTO-CCS, providing problem descriptions and sample solutions.

  7. New features and improved uncertainty analysis in the NEA nuclear data sensitivity tool (NDaST)

    NASA Astrophysics Data System (ADS)

    Dyrda, J.; Soppera, N.; Hill, I.; Bossant, M.; Gulliford, J.

    2017-09-01

    Following the release and initial testing period of the NEA's Nuclear Data Sensitivity Tool [1], new features have been designed and implemented in order to expand its uncertainty analysis capabilities. The aim is to provide a free online tool for integral benchmark testing that is both efficient and comprehensive, meeting the needs of the nuclear data and benchmark testing communities. New features include access to P1 sensitivities for the neutron scattering angular distribution [2] and constrained Chi sensitivities for the prompt fission neutron energy sampling. Both of these are compatible with covariance data accessed via the JANIS nuclear data software, enabling propagation of the resultant uncertainties in keff to a large series of integral experiment benchmarks. These capabilities are available using a number of different covariance libraries, e.g., ENDF/B, JEFF, JENDL and TENDL, allowing comparison of the broad range of results it is possible to obtain. The IRPhE database of reactor physics measurements is now also accessible within the tool, in addition to the criticality benchmarks from ICSBEP. Other improvements include the ability to determine and visualise the energy dependence of a given calculated result in order to better identify specific regions of importance or high uncertainty contribution. Sorting and statistical analysis of the selected benchmark suite are now also provided. Examples of the plots generated by the software are included to illustrate such capabilities. Finally, a number of analytical expressions, for example Maxwellian and Watt fission spectra, will be included. This will allow the analyst to determine the impact of varying such distributions within the data evaluation, either through adjustment of parameters within the expressions, or by comparison to a more general probability distribution fitted to measured data. The impact of such changes is verified through calculations which are compared to a 'direct' measurement found by adjustment of the original ENDF format file.
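
    The propagation step behind these uncertainty capabilities is the standard first-order "sandwich rule": with S the vector of energy-dependent sensitivity coefficients for a benchmark and C the relative covariance matrix from the chosen library, the keff variance follows, in LaTeX notation, as

      \operatorname{var}\!\left(\frac{\Delta k}{k}\right) \;=\; \mathbf{S}^{\mathsf{T}}\,\mathbf{C}\,\mathbf{S}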

  8. High-energy neutron depth-dose distribution experiment.

    PubMed

    Ferenci, M S; Hertel, N E

    2003-01-01

    A unique set of high-energy neutron depth-dose benchmark experiments were performed at the Los Alamos Neutron Science Center/Weapons Neutron Research (LANSCE/WNR) complex. The experiments consisted of filtered neutron beams with energies up to 800 MeV impinging on a 30 x 30 x 30 cm3 liquid, tissue-equivalent phantom. The absorbed dose was measured in the phantom at various depths with tissue-equivalent ion chambers. This experiment is intended to serve as a benchmark experiment for the testing of high-energy radiation transport codes for the international radiation protection community.

  9. Near-critical density filling of the SF6 fluid cell for the ALI-R-DECLIC experiment in weightlessness

    NASA Astrophysics Data System (ADS)

    Lecoutre, C.; Marre, S.; Garrabos, Y.; Beysens, D.; Hahn, I.

    2018-05-01

    Analyses of ground-based experiments on near-critical fluids to precisely determine their density can be hampered by several effects, especially the density stratification of the sample, the liquid wetting behavior at the cell walls, and a possible singular curvature of the "rectilinear" diameter of the density coexistence curve. For the latter effect, theoretical efforts have been made to understand the amplitude and shape of the critical hook of the density diameter, which departs from predictions of the so-called ideal lattice-gas model of the uniaxial 3D-Ising universality class. In order to optimize the observation of these subtle effects on the position and shape of the liquid-vapor meniscus in the particular case of SF6, we have designed and filled a cell that is highly symmetrized with respect to any median plane of the total fluid volume. In such a quasi-perfect symmetrical fluid volume, the precise detection of the meniscus position and shape for different orientations of the cell with respect to the Earth's gravity field becomes a sensitive probe to estimate the mean density filling of the cell and to test the singular diameter effects. After integration of this cell in the ALI-R insert, we take advantage of the high optical and thermal performance of the DECLIC Engineering Model. Here we present the sensitive imaging method providing the precise ground-based SF6 benchmark data. From the analysis of these data it is found that the temperature dependence of the meniscus position does not reflect the expected critical hook in the rectilinear density diameter. The off-critical density of the cell is therefore accurately estimated ahead of near-future experiments using the same ALI-R insert in the DECLIC facility already on board the International Space Station.
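
    For context, the "critical hook" is the departure of the coexistence-curve diameter from the lattice-gas rectilinear law; in the complete-scaling description (the amplitudes D are system-dependent), the diameter behaves, in LaTeX notation, as

      \frac{\rho_{\mathrm{liq}} + \rho_{\mathrm{vap}}}{2\rho_c} - 1 \;=\; D_{2\beta}\,|t|^{2\beta} + D_{1-\alpha}\,|t|^{1-\alpha} + D_{1}\,|t|, \qquad t \;=\; \frac{T - T_c}{T_c}

    where the |t|^{2\beta} term dominates close to T_c and produces the hook, while the ideal lattice-gas model retains only the linear term.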

  10. [Customer and patient satisfaction. An appropriate management tool in hospitals?].

    PubMed

    Pawils, S; Trojan, A; Nickel, S; Bleich, C

    2012-09-01

    Recently, the concept of patient satisfaction has been established as an essential part of the quality management of hospitals. Despite the concept's lack of theoretical and methodological foundations, patient surveys on subjective hospital experiences contribute immensely to the improvement of hospitals. What needs to be considered critically in this context is the concept of customer satisfaction for patients, the theoretical integration of empirical results, the reduction of false satisfaction indications and the application of risk-adjusted versus naïve benchmarking of data. This paper aims to contribute to the theoretical discussion of the topic and to build a basis for planning methodologically sound patient surveys.

  11. Developing Evidence for Action on the Postgraduate Experience: An Effective Local Instrument to Move beyond Benchmarking

    ERIC Educational Resources Information Center

    Sampson, K. A.; Johnston, L.; Comer, K.; Brogt, E.

    2016-01-01

    Summative and benchmarking surveys to measure the postgraduate student research experience are well reported in the literature. While useful, we argue that local instruments that provide formative resources with an academic development focus are also required. If higher education institutions are to move beyond the identification of issues and…

  12. How to achieve and prove performance improvement - 15 years of experience in German wastewater benchmarking.

    PubMed

    Bertzbach, F; Franz, T; Möller, K

    2012-01-01

    This paper shows the results of performance improvement which have been achieved in benchmarking projects in the wastewater industry in Germany over the last 15 years. A huge number of changes in operational practice, and also in achieved annual savings, can be shown, induced in particular by benchmarking at process level. Investigation of this question produces some general findings for the inclusion of performance improvement in a benchmarking project and for the communication of its results. Thus, we elaborate on the concept of benchmarking at both utility and process level, which is still a necessary distinction for the integration of performance improvement into our benchmarking approach. To achieve performance improvement via benchmarking it should be made quite clear that this outcome depends, on the one hand, on a well-conducted benchmarking programme and, on the other, on the individual situation within each participating utility.

  13. BENCHMARK DOSE TECHNICAL GUIDANCE DOCUMENT ...

    EPA Pesticide Factsheets

    The purpose of this document is to provide guidance for the Agency on the application of the benchmark dose approach in determining the point of departure (POD) for health effects data, whether a linear or nonlinear low dose extrapolation is used. The guidance includes discussion on computation of benchmark doses and benchmark concentrations (BMDs and BMCs) and their lower confidence limits, data requirements, dose-response analysis, and reporting requirements. This guidance is based on today's knowledge and understanding, and on experience gained in using this approach.
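
    For quantal (incidence) data, the benchmark dose is the dose at which a specified benchmark response (BMR, e.g. 10 % extra risk) is reached, with the BMDL its lower confidence limit; in LaTeX notation the defining equation for extra risk is

      \frac{P(\mathrm{BMD}) - P(0)}{1 - P(0)} \;=\; \mathrm{BMR}

    where P(d) is the probability of response at dose d predicted by the fitted dose-response model.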

  14. HTR-PROTEUS PEBBLE BED EXPERIMENTAL PROGRAM CORE 4: RANDOM PACKING WITH A 1:1 MODERATOR-TO-FUEL PEBBLE RATIO

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    John D. Bess; Leland M. Montierth

    2013-03-01

    In its deployment as a pebble bed reactor (PBR) critical facility from 1992 to 1996, the PROTEUS facility was designated as HTR-PROTEUS. This experimental program was performed as part of an International Atomic Energy Agency (IAEA) Coordinated Research Project (CRP) on the Validation of Safety Related Physics Calculations for Low Enriched HTGRs. Within this project, critical experiments were conducted for graphite moderated LEU systems to determine core reactivity, flux and power profiles, reaction-rate ratios, the worth of control rods, both in-core and reflector based, the worth of burnable poisons, kinetic parameters, and the effects of moisture ingress on these parameters. One benchmark experiment was evaluated in this report: Core 4. Core 4 represents the only configuration with random pebble packing in the HTR-PROTEUS series of experiments, and has a moderator-to-fuel pebble ratio of 1:1. Three random configurations were performed. The initial configuration, Core 4.1, was rejected because the method for pebble loading, separate delivery tubes for the moderator and fuel pebbles, may not have been completely random; this core loading was rejected by the experimenters. Cores 4.2 and 4.3 were loaded using a single delivery tube, eliminating the possibility for systematic ordering effects. The second and third cores differed slightly in the quantity of pebbles loaded (40 each of moderator and fuel pebbles), stacked height of the pebbles in the core cavity (0.02 m), withdrawn distance of the stainless steel control rods (20 mm), and withdrawn distance of the autorod (30 mm). The 34 coolant channels in the upper axial reflector and the 33 coolant channels in the lower axial reflector were open. Additionally, the axial graphite fillers used in all other HTR-PROTEUS configurations to create a 12-sided core cavity were not used in the randomly packed cores. Instead, graphite fillers were placed on the cavity floor, creating a funnel-like base, to discourage ordering effects during pebble loading. Core 4 was determined to be an acceptable benchmark experiment.

  15. HTR-proteus pebble bed experimental program core 4: random packing with a 1:1 moderator-to-fuel pebble ratio

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Bess, John D.; Montierth, Leland M.; Sterbentz, James W.

    2014-03-01

    In its deployment as a pebble bed reactor (PBR) critical facility from 1992 to 1996, the PROTEUS facility was designated as HTR-PROTEUS. This experimental program was performed as part of an International Atomic Energy Agency (IAEA) Coordinated Research Project (CRP) on the Validation of Safety Related Physics Calculations for Low Enriched HTGRs. Within this project, critical experiments were conducted for graphite moderated LEU systems to determine core reactivity, flux and power profiles, reaction-rate ratios, the worth of control rods, both in-core and reflector based, the worth of burnable poisons, kinetic parameters, and the effects of moisture ingress on these parameters. One benchmark experiment was evaluated in this report: Core 4. Core 4 represents the only configuration with random pebble packing in the HTR-PROTEUS series of experiments, and has a moderator-to-fuel pebble ratio of 1:1. Three random configurations were performed. The initial configuration, Core 4.1, was rejected because the method for pebble loading, separate delivery tubes for the moderator and fuel pebbles, may not have been completely random; this core loading was rejected by the experimenters. Cores 4.2 and 4.3 were loaded using a single delivery tube, eliminating the possibility for systematic ordering effects. The second and third cores differed slightly in the quantity of pebbles loaded (40 each of moderator and fuel pebbles), stacked height of the pebbles in the core cavity (0.02 m), withdrawn distance of the stainless steel control rods (20 mm), and withdrawn distance of the autorod (30 mm). The 34 coolant channels in the upper axial reflector and the 33 coolant channels in the lower axial reflector were open. Additionally, the axial graphite fillers used in all other HTR-PROTEUS configurations to create a 12-sided core cavity were not used in the randomly packed cores. Instead, graphite fillers were placed on the cavity floor, creating a funnel-like base, to discourage ordering effects during pebble loading. Core 4 was determined to be an acceptable benchmark experiment.

  16. Benchmark Evaluation of Dounreay Prototype Fast Reactor Minor Actinide Depletion Measurements

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Hess, J. D.; Gauld, I. C.; Gulliford, J.

    2017-01-01

    Historic measurements of actinide samples in the Dounreay Prototype Fast Reactor (PFR) are of interest for modern nuclear data and simulation validation. Samples of various higher-actinide isotopes were irradiated for 492 effective full-power days and radiochemically assayed at Oak Ridge National Laboratory (ORNL) and the Japan Atomic Energy Research Institute (JAERI). Limited data were available regarding the PFR irradiation; a six-group neutron spectrum was available, with some power history data, to support a burnup depletion analysis validation study. Under the guidance of the Organisation for Economic Co-operation and Development Nuclear Energy Agency (OECD NEA), the International Reactor Physics Experiment Evaluation Project (IRPhEP) and the Spent Fuel Isotopic Composition (SFCOMPO) Project are collaborating to recover all measurement data pertaining to these measurements, including collaboration with the United Kingdom to obtain pertinent reactor physics design and operational history data. These activities will produce internationally peer-reviewed benchmark data to support validation of minor actinide cross section data and modern neutronic simulation of fast reactors, with accompanying fuel cycle activities such as transportation, recycling, storage, and criticality safety.

  17. Benchmark Testing of a New 56Fe Evaluation for Criticality Safety Applications

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Leal, Luiz C; Ivanov, E.

    2015-01-01

    The SAMMY code was used to evaluate resonance parameters of the 56Fe cross section in the resolved resonance energy range of 0–2 MeV using transmission data, capture, elastic, inelastic, and double differential elastic cross sections. SAMMY fits R-matrix resonance parameters using the generalized least-squares technique (Bayes' theory). The evaluation yielded a set of resonance parameters that reproduced the experimental data very well, along with a resonance parameter covariance matrix for data uncertainty calculations. Benchmark tests were conducted to assess the performance of the evaluation in criticality benchmark calculations.
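
    The generalized-least-squares (Bayes) update that SAMMY applies can be written in one common form (notation ours): with P the prior parameters, M their covariance, G the matrix of derivatives of the theoretical values T with respect to the parameters, and D the data with covariance V, in LaTeX notation,

      \mathbf{P}' \;=\; \mathbf{P} + \mathbf{M}\mathbf{G}^{\mathsf{T}}\left(\mathbf{V} + \mathbf{G}\mathbf{M}\mathbf{G}^{\mathsf{T}}\right)^{-1}\left(\mathbf{D} - \mathbf{T}\right), \qquad \mathbf{M}' \;=\; \mathbf{M} - \mathbf{M}\mathbf{G}^{\mathsf{T}}\left(\mathbf{V} + \mathbf{G}\mathbf{M}\mathbf{G}^{\mathsf{T}}\right)^{-1}\mathbf{G}\mathbf{M}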

  18. Interlaboratory Study Characterizing a Yeast Performance Standard for Benchmarking LC-MS Platform Performance*

    PubMed Central

    Paulovich, Amanda G.; Billheimer, Dean; Ham, Amy-Joan L.; Vega-Montoto, Lorenzo; Rudnick, Paul A.; Tabb, David L.; Wang, Pei; Blackman, Ronald K.; Bunk, David M.; Cardasis, Helene L.; Clauser, Karl R.; Kinsinger, Christopher R.; Schilling, Birgit; Tegeler, Tony J.; Variyath, Asokan Mulayath; Wang, Mu; Whiteaker, Jeffrey R.; Zimmerman, Lisa J.; Fenyo, David; Carr, Steven A.; Fisher, Susan J.; Gibson, Bradford W.; Mesri, Mehdi; Neubert, Thomas A.; Regnier, Fred E.; Rodriguez, Henry; Spiegelman, Cliff; Stein, Stephen E.; Tempst, Paul; Liebler, Daniel C.

    2010-01-01

    Optimal performance of LC-MS/MS platforms is critical to generating high quality proteomics data. Although individual laboratories have developed quality control samples, there is no widely available performance standard of biological complexity (and associated reference data sets) for benchmarking of platform performance for analysis of complex biological proteomes across different laboratories in the community. Individual preparations of the yeast Saccharomyces cerevisiae proteome have been used extensively by laboratories in the proteomics community to characterize LC-MS platform performance. The yeast proteome is uniquely attractive as a performance standard because it is the most extensively characterized complex biological proteome and the only one associated with several large scale studies estimating the abundance of all detectable proteins. In this study, we describe a standard operating protocol for large scale production of the yeast performance standard and offer aliquots to the community through the National Institute of Standards and Technology where the yeast proteome is under development as a certified reference material to meet the long term needs of the community. Using a series of metrics that characterize LC-MS performance, we provide a reference data set demonstrating typical performance of commonly used ion trap instrument platforms in expert laboratories; the results provide a basis for laboratories to benchmark their own performance, to improve upon current methods, and to evaluate new technologies. Additionally, we demonstrate how the yeast reference, spiked with human proteins, can be used to benchmark the power of proteomics platforms for detection of differentially expressed proteins at different levels of concentration in a complex matrix, thereby providing a metric to evaluate and minimize preanalytical and analytical variation in comparative proteomics experiments. PMID:19858499

  19. Photosynthetic productivity and its efficiencies in ISIMIP2a biome models: benchmarking for impact assessment studies

    NASA Astrophysics Data System (ADS)

    Ito, Akihiko; Nishina, Kazuya; Reyer, Christopher P. O.; François, Louis; Henrot, Alexandra-Jane; Munhoven, Guy; Jacquemin, Ingrid; Tian, Hanqin; Yang, Jia; Pan, Shufen; Morfopoulos, Catherine; Betts, Richard; Hickler, Thomas; Steinkamp, Jörg; Ostberg, Sebastian; Schaphoff, Sibyll; Ciais, Philippe; Chang, Jinfeng; Rafique, Rashid; Zeng, Ning; Zhao, Fang

    2017-08-01

    Simulating vegetation photosynthetic productivity (or gross primary production, GPP) is a critical feature of the biome models used for impact assessments of climate change. We conducted a benchmarking of global GPP simulated by eight biome models participating in the second phase of the Inter-Sectoral Impact Model Intercomparison Project (ISIMIP2a) with four meteorological forcing datasets (30 simulations), using independent GPP estimates and recent satellite data of solar-induced chlorophyll fluorescence as a proxy of GPP. The simulated global terrestrial GPP ranged from 98 to 141 Pg C yr-1 (1981-2000 mean); considerable inter-model and inter-data differences were found. Major features of the spatial distribution and seasonal change of GPP were captured by each model, showing good agreement with the benchmarking data. All simulations showed increasing trends of annual GPP, seasonal-cycle amplitude, radiation-use efficiency, and water-use efficiency, mainly caused by the CO2 fertilization effect. The slopes of these trends were higher than those obtained by remote sensing studies but comparable with those from recent atmospheric observations. Apparent differences were found in the relationship between GPP and incoming solar radiation, for which the forcing data differed considerably. The simulated GPP trends co-varied with a vegetation structural parameter, leaf area index, at model-dependent strengths, implying the importance of constraining canopy properties. In terms of extreme events, GPP anomalies associated with a historical El Niño event and a large volcanic eruption were not consistently simulated in the model experiments, due to deficiencies in both the forcing data and the parameterized environmental responsiveness. Although the benchmarking demonstrated the overall advancement of contemporary biome models, further refinements are required, for example, for solar radiation data and vegetation canopy schemes.
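
    The efficiency metrics tracked in such benchmarking reduce to simple ratios of annual totals. The sketch below shows one plausible way to compute them from model output; the arrays are placeholders, not ISIMIP2a results, and exact definitions (e.g., GPP per absorbed versus incoming radiation) vary between studies.

        import numpy as np

        # Placeholder annual global values, not ISIMIP2a output.
        gpp = np.array([110.0, 112.0, 115.0])  # gross primary production, Pg C / yr
        rad = np.array([2.10, 2.10, 2.11])     # incoming solar radiation (relative units)
        et  = np.array([66.0, 66.5, 67.0])     # evapotranspiration, 10^3 km^3 / yr

        rue = gpp / rad                        # radiation-use efficiency
        wue = gpp / et                         # water-use efficiency
        gpp_trend = np.polyfit(np.arange(gpp.size), gpp, 1)[0]
        print(rue, wue, f"GPP trend = {gpp_trend:.2f} Pg C / yr per yr")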

  20. Experimental physics characteristics of a heavy-metal-reflected fast-spectrum critical assembly

    NASA Technical Reports Server (NTRS)

    Heneveld, W. H.; Paschall, R. K.; Springer, T. H.; Swanson, V. A.; Thiele, A. W.; Tuttle, R. J.

    1972-01-01

    A zero-power critical assembly was designed, constructed, and operated for the purpose of conducting a series of benchmark experiments dealing with the physics characteristics of a UN-fueled, Li-cooled, Mo-reflected, drum-controlled compact fast reactor for use with a space-power electric conversion system. The range of the previous experimental investigations has been expanded to include the reactivity effects of: (1) surrounding the reactor with 15.24 cm (6 in.) of polyethylene, (2) reducing the heights of a portion of the upper and lower axial reflectors by factors of 2 and 4, (3) adding 45 kg of W to the core uniformly in two steps, (4) adding 9.54 kg of Ta to the core uniformly, and (5) inserting 2.3 kg of polyethylene into the core proper and determining the effect of a Ta addition on the polyethylene worth.

  1. Benchmark of ReaxFF force field for subcritical and supercritical water.

    PubMed

    Manzano, Hegoi; Zhang, Weiwei; Raju, Muralikrishna; Dolado, Jorge S; López-Arbeloa, Iñigo; van Duin, Adri C T

    2018-06-21

    Water in the subcritical and supercritical states has remarkable properties that make it an excellent solvent for oxidation of hazardous chemicals, waste separation, and green synthesis. Molecular simulations are a valuable complement to experiments in order to understand and improve the relevant sub- and super-critical reaction mechanisms. Since water molecules under these conditions can act not only as a solvent but also as a reactant, dissociative force fields are especially interesting to investigate these processes. In this work, we evaluate the capacity of the ReaxFF force field to reproduce the microstructure, hydrogen bonding, dielectric constant, diffusion, and proton transfer of sub- and super-critical water. Our results indicate that ReaxFF is able to simulate water properties in these states in very good quantitative agreement with the existing experimental data, with the exception of the static dielectric constant that is reproduced only qualitatively.

  2. SparseBeads data: benchmarking sparsity-regularized computed tomography

    NASA Astrophysics Data System (ADS)

    Jørgensen, Jakob S.; Coban, Sophia B.; Lionheart, William R. B.; McDonald, Samuel A.; Withers, Philip J.

    2017-12-01

    Sparsity regularization (SR) such as total variation (TV) minimization allows accurate image reconstruction in x-ray computed tomography (CT) from fewer projections than analytical methods. Exactly how few projections suffice, and how this number may depend on the image, remain poorly understood. Compressive sensing connects the critical number of projections to the image sparsity but does not cover CT; however, empirical results suggest a similar connection. The present work establishes, for real CT data, a connection between gradient sparsity and the number of projections sufficient for accurate TV-regularized reconstruction. A collection of 48 x-ray CT datasets called SparseBeads was designed for benchmarking SR reconstruction algorithms. Beadpacks comprising glass beads of five different sizes, as well as mixtures, were scanned in a micro-CT scanner to provide structured datasets with variable image sparsity levels, numbers of projections, and noise levels, allowing the systematic assessment of parameters affecting the performance of SR reconstruction algorithms. Using the SparseBeads data, TV-regularized reconstruction quality was assessed as a function of the number of projections and the gradient sparsity. The critical number of projections for satisfactory TV-regularized reconstruction increased almost linearly with the gradient sparsity. This establishes a quantitative guideline from which one may predict how few projections to acquire based on the expected sample sparsity level, as an aid in planning dose- or time-critical experiments. The results are expected to hold for samples of similar characteristics, i.e., consisting of few, distinct phases with relatively simple structure. Such cases are plentiful in porous media, composite materials, and foams, as well as in non-destructive testing and metrology. For samples of other characteristics, the proposed methodology may be used to investigate similar relations.
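
    As a concrete illustration, gradient sparsity can be measured directly from a reference image and plugged into the reported near-linear guideline. The sketch below is ours, with an assumed forward-difference gradient and hypothetical fitted coefficients a and b.

        import numpy as np

        def gradient_sparsity(img, tol=1e-6):
            """Number of pixels with nonzero spatial gradient magnitude."""
            gx = np.diff(img, axis=0, append=img[-1:, :])
            gy = np.diff(img, axis=1, append=img[:, -1:])
            return int(np.count_nonzero(np.hypot(gx, gy) > tol))

        def critical_projections(s, a, b):
            """Near-linear guideline N* ~ a*s + b; a and b must be fitted to
            benchmark reconstructions (the values used below are hypothetical)."""
            return a * s + b

        img = np.zeros((64, 64))
        img[20:40, 20:40] = 1.0  # toy piecewise-constant sample
        s = gradient_sparsity(img)
        print(s, critical_projections(s, a=0.05, b=10.0))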

  3. Modeling of the Edwards pipe experiment

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Tiselj, I.; Petelin, S.

    1995-12-31

    The Edwards pipe experiment is used as one of the basic benchmarks for two-phase flow codes due to its simple geometry and the wide range of phenomena it covers. Edwards and O'Brien filled a 4-m-long pipe with liquid water at 7 MPa and 502 K and ruptured one end of the tube. They measured pressure and void fraction during the blowdown. Important phenomena observed were the pressure rarefaction wave, flashing onset, critical two-phase flow, and a void fraction wave. The experimental data were used to analyze the capabilities of the RELAP5/MOD3.1 six-equation two-phase flow model and to examine two different numerical schemes: one from the RELAP5/MOD3.1 code and one from our own code, which is based on characteristic upwind discretization.

  4. Benchmarking gate-based quantum computers

    NASA Astrophysics Data System (ADS)

    Michielsen, Kristel; Nocon, Madita; Willsch, Dennis; Jin, Fengping; Lippert, Thomas; De Raedt, Hans

    2017-11-01

    With the advent of public access to small gate-based quantum processors, it becomes necessary to develop a benchmarking methodology such that independent researchers can validate the operation of these processors. We explore the usefulness of a number of simple quantum circuits as benchmarks for gate-based quantum computing devices and show that circuits performing identity operations are very simple, scalable and sensitive to gate errors and are therefore very well suited for this task. We illustrate the procedure by presenting benchmark results for the IBM Quantum Experience, a cloud-based platform for gate-based quantum computing.
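
    A minimal sketch of the identity-circuit idea, written against a Qiskit-style API (our choice, not the paper's code): compose gates with their inverses so the net operation is the identity, then measure how often the hardware actually returns the initial state. The backend execution step is left abstract.

        from qiskit import QuantumCircuit

        def identity_benchmark_circuit(n_qubits, n_pairs):
            """Logically an identity circuit: n_pairs of self-inverse X-gate
            pairs per qubit. Ideal hardware returns all zeros on every shot;
            the shortfall measures accumulated gate error."""
            qc = QuantumCircuit(n_qubits)
            for _ in range(n_pairs):
                for q in range(n_qubits):
                    qc.x(q)
                    qc.x(q)  # X followed by X is the identity
            qc.measure_all()
            return qc

        def success_probability(counts, n_qubits):
            """Fraction of shots returning the all-zeros bitstring, given a
            counts dict from whatever backend executed the circuit."""
            return counts.get("0" * n_qubits, 0) / sum(counts.values())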

  5. Benchmark Calibration Tests Completed for Stirling Convertor Heater Head Life Assessment

    NASA Technical Reports Server (NTRS)

    Krause, David L.; Halford, Gary R.; Bowman, Randy R.

    2005-01-01

    A major phase of benchmark testing has been completed at the NASA Glenn Research Center (http://www.nasa.gov/glenn/), where a critical component of the Stirling Radioisotope Generator (SRG) is undergoing extensive experimentation to aid the development of an analytical life-prediction methodology. Two special-purpose test rigs subjected SRG heater-head pressure-vessel test articles to accelerated creep conditions, using the standard design temperatures to stay within the wall material's operating creep-response regime but increasing wall stresses up to 7 times the design point. This resulted in well-controlled "ballooning" of the heater-head hot end. The test plan was developed to provide critical input to analytical parameters in a reasonable period of time.

  6. Combining Phase Identification and Statistic Modeling for Automated Parallel Benchmark Generation

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Jin, Ye; Ma, Xiaosong; Liu, Qing Gary

    2015-01-01

    Parallel application benchmarks are indispensable for evaluating and optimizing HPC software and hardware. However, it is very challenging and costly to obtain high-fidelity benchmarks reflecting the scale and complexity of state-of-the-art parallel applications. Hand-extracted synthetic benchmarks are time- and labor-intensive to create. Real applications themselves, while offering the most accurate performance evaluation, are expensive to compile, port, and reconfigure, and are often plainly inaccessible due to security or ownership concerns. This work contributes APPRIME, a novel tool for trace-based automatic parallel benchmark generation. Taking as input standard communication-I/O traces of an application's execution, it couples accurate automatic phase identification with statistical regeneration of event parameters to create compact, portable, and to some degree reconfigurable parallel application benchmarks. Experiments with four NAS Parallel Benchmarks (NPB) and three real scientific simulation codes confirm the fidelity of APPRIME benchmarks. They retain the original applications' performance characteristics, in particular the relative performance across platforms.

  7. Decoys Selection in Benchmarking Datasets: Overview and Perspectives

    PubMed Central

    Réau, Manon; Langenfeld, Florent; Zagury, Jean-François; Lagarde, Nathalie; Montes, Matthieu

    2018-01-01

    Virtual Screening (VS) is designed to prospectively help identify potential hits, i.e., compounds capable of interacting with a given target and potentially modulating its activity, out of large compound collections. Among the variety of methodologies, it is crucial to select the protocol that is best adapted to the query/target system under study and that yields the most reliable output. To this aim, the performance of VS methods is commonly evaluated and compared by computing their ability to retrieve active compounds in benchmarking datasets. The benchmarking datasets contain a subset of known active compounds together with a subset of decoys, i.e., assumed non-active molecules. The composition of both the active and the decoy compound subsets is critical to limiting the biases in the evaluation of the VS methods. In this review, we focus on the selection of decoy compounds, which has changed considerably over the years, from randomly selected compounds to highly customized or experimentally validated negative compounds. We first outline the evolution of decoy selection in benchmarking databases, as well as current benchmarking databases that tend to minimize the introduction of biases, and second, we propose recommendations for the selection and the design of benchmarking datasets. PMID:29416509

  8. Space Weather Action Plan Ionizing Radiation Benchmarks: Phase 1 update and plans for Phase 2

    NASA Astrophysics Data System (ADS)

    Talaat, E. R.; Kozyra, J.; Onsager, T. G.; Posner, A.; Allen, J. E., Jr.; Black, C.; Christian, E. R.; Copeland, K.; Fry, D. J.; Johnston, W. R.; Kanekal, S. G.; Mertens, C. J.; Minow, J. I.; Pierson, J.; Rutledge, R.; Semones, E.; Sibeck, D. G.; St Cyr, O. C.; Xapsos, M.

    2017-12-01

    Changes in the near-Earth radiation environment can affect satellite operations, astronauts in space, commercial space activities, and the radiation environment on aircraft at relevant latitudes or altitudes. Understanding the diverse effects of increased radiation is challenging, but producing ionizing radiation benchmarks will help address these effects. The following areas have been considered in addressing the near-Earth radiation environment: the Earth's trapped radiation belts, the galactic cosmic ray background, and solar energetic-particle events. The radiation benchmarks attempt to account for any change in the near-Earth radiation environment which, under extreme cases, could present a significant risk to critical infrastructure operations or human health. The goal is for these ionizing radiation benchmarks, with associated confidence levels, to define at least the radiation intensity as a function of time, particle type, and energy for an occurrence frequency of 1 in 100 years, together with an intensity level at the theoretical maximum for the event. In this paper, we present the benchmarks that address radiation levels at all applicable altitudes and latitudes in the near-Earth environment, the assumptions made and the associated uncertainties, and the next steps planned for updating the benchmarks.

  9. The Surge Capacity for People in Emergencies (SCOPE) study in Australasian hospitals.

    PubMed

    Traub, Matthias; Bradt, David A; Joseph, Anthony P

    2007-04-16

    Objective: To measure the physical assets in Australasian hospitals required for the management of mass casualties resulting from terrorism or natural disasters. Design: A cross-sectional survey of Australian and New Zealand hospitals. Participants: All emergency department directors of Australasian College for Emergency Medicine (ACEM)-accredited hospitals, as well as private and non-ACEM-accredited emergency departments staffed by ACEM Fellows in metropolitan Sydney. Main outcome measures: Numbers of operating theatres, intensive care unit (ICU) beds and x-ray machines; state of preparedness using benchmarks defined by the Centers for Disease Control and Prevention in the United States. Results: We found that 61%-82% of critically injured patients would not have immediate access to operative care, 34%-70% would have delayed access to an ICU bed, and 42% of the less critically injured would have delayed access to x-ray facilities. Conclusions: Our study demonstrates that physical assets in Australasian public hospitals do not meet US hospital preparedness benchmarks for mass casualty incidents. We recommend national agreement on disaster preparedness benchmarks and periodic publication of hospital performance indicators to enhance disaster preparedness.

  10. Experimental power density distribution benchmark in the TRIGA Mark II reactor

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Snoj, L.; Stancar, Z.; Radulovic, V.

    2012-07-01

    In order to improve the power calibration process and to benchmark the existing computational model of the TRIGA Mark II reactor at the Jožef Stefan Institute (JSI), a bilateral project was started as part of the agreement between the French Commissariat à l'énergie atomique et aux énergies alternatives (CEA) and the Ministry of Higher Education, Science and Technology of Slovenia. One of the objectives of the project was to analyze and improve the power calibration process of the JSI TRIGA reactor (procedural improvement and uncertainty reduction) by using absolutely calibrated CEA fission chambers (FCs). This is one of the few available power density distribution benchmarks for testing not only the fission rate distribution but also the absolute values of the fission rates. Our preliminary calculations indicate that the total experimental uncertainty of the measured reaction rates is sufficiently low that the experiments can be considered benchmark experiments. (authors)

  11. Implementation, capabilities, and benchmarking of Shift, a massively parallel Monte Carlo radiation transport code

    DOE PAGES

    Pandya, Tara M.; Johnson, Seth R.; Evans, Thomas M.; ...

    2015-12-21

    This paper discusses the implementation, capabilities, and validation of Shift, a massively parallel Monte Carlo radiation transport package developed and maintained at Oak Ridge National Laboratory. It has been developed to scale well from laptops to small computing clusters to advanced supercomputers. Special features of Shift include hybrid capabilities for variance reduction, such as CADIS and FW-CADIS, and advanced parallel decomposition and tally methods optimized for scalability on supercomputing architectures. Shift has been validated and verified against various reactor physics benchmarks and compares well to other state-of-the-art Monte Carlo radiation transport codes such as MCNP5, CE KENO-VI, and OpenMC. Specific benchmarks used for verification and validation include the CASL VERA criticality test suite and several Westinghouse AP1000® problems. These benchmark and scaling studies show promising results.

  12. Online Object Tracking, Learning and Parsing with And-Or Graphs.

    PubMed

    Wu, Tianfu; Lu, Yang; Zhu, Song-Chun

    2017-12-01

    This paper presents a method, called AOGTracker, for simultaneous tracking, learning, and parsing (TLP) of unknown objects in video sequences with a hierarchical and compositional And-Or graph (AOG) representation. The TLP method is formulated in a Bayesian framework, with spatial and temporal dynamic programming (DP) algorithms inferring object bounding boxes on the fly. During online learning, the AOG is discriminatively learned using latent SVM [1] to account for appearance variations (e.g., lighting and partial occlusion) and structural variations (e.g., different poses and viewpoints) of a tracked object, as well as distractors (e.g., similar objects) in the background. Three key issues in online inference and learning are addressed: (i) maintaining the purity of positive and negative examples collected online, (ii) controlling model complexity in latent structure learning, and (iii) identifying critical moments to re-learn the structure of the AOG based on its intrackability. The intrackability measures the uncertainty of an AOG based on its score maps in a frame. In experiments, AOGTracker is tested on two popular tracking benchmark suites with the same parameter setting: the TB-100/50/CVPR2013 benchmarks [2], [3], and the VOT benchmarks [4] (VOT2013, 2014, 2015, and TIR2015, thermal imagery tracking). On the former, AOGTracker outperforms state-of-the-art tracking algorithms, including two trackers based on deep convolutional networks [5], [6]. On the latter, AOGTracker outperforms all other trackers in VOT2013 and is comparable to the state-of-the-art methods in VOT2014, 2015, and TIR2015.

  13. Revisiting Yasinsky and Henry's benchmark using modern nodal codes

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Feltus, M.A.; Becker, M.W.

    1995-12-31

    The numerical experiments analyzed by Yasinsky and Henry are quite trivial by today's standards because they used the finite difference code WIGLE for their benchmark. Also, this problem is a simple slab (one-dimensional) case with no feedback mechanisms. This research attempts to obtain STAR (Ref. 2) and NEM (Ref. 3) code results in order to produce a more modern kinetics benchmark with results comparable to WIGLE's.

  14. Benchmarking criticality analysis of TRIGA fuel storage racks.

    PubMed

    Robinson, Matthew Loren; DeBey, Timothy M; Higginbotham, Jack F

    2017-01-01

    A criticality analysis was benchmarked to sub-criticality measurements of the hexagonal fuel storage racks at the United States Geological Survey TRIGA MARK I reactor in Denver. These racks, which hold up to 19 fuel elements each, are arranged at 0.61 m (2 ft) spacings around the outer edge of the reactor. A 3-dimensional model of the racks was created using MCNP5, and the model was verified experimentally by comparison to subcritical multiplication data collected during an approach-to-critical loading of two of the racks. The validated model was then used to show that, in the extreme condition where the entire circumference of the pool is lined with racks loaded with used fuel, the storage array remains subcritical, with a k value of about 0.71, well below the regulatory limit of 0.8. A model was also constructed of the rectangular 2×10 fuel storage array used in many other TRIGA reactors to validate the technique against the original TRIGA licensing subcritical analysis performed in 1966. The fuel used in this study was standard 20%-enriched (LEU) aluminum- or stainless-steel-clad TRIGA fuel.
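
    The experimental verification rests on the classic inverse-multiplication (1/M) technique: as fuel elements are loaded, 1/M falls roughly linearly, and extrapolating to 1/M = 0 predicts the critical loading. A minimal sketch with placeholder count rates (not the study's data):

        import numpy as np

        # Placeholder approach-to-critical data, not measurements from the study.
        elements = np.array([0, 4, 8, 12, 16, 19])               # fuel elements loaded
        counts = np.array([100., 118., 143., 180., 240., 330.])  # detector count rates

        inv_M = counts[0] / counts               # 1/M relative to the unloaded baseline
        a, b = np.polyfit(elements, inv_M, 1)    # linear extrapolation of 1/M
        critical_loading = -b / a                # loading where 1/M reaches zero
        print(f"extrapolated critical loading: {critical_loading:.1f} elements")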

  15. Characterization of the CALIBAN Critical Assembly Neutron Spectra using Several Adjustment Methods Based on Activation Foils Measurement

    NASA Astrophysics Data System (ADS)

    Casoli, Pierre; Grégoire, Gilles; Rousseau, Guillaume; Jacquet, Xavier; Authier, Nicolas

    2016-02-01

    CALIBAN is a metallic critical assembly operated by the Criticality, Neutron Science and Measurement Department at the French CEA Center of Valduc. The reactor is extensively used for benchmark experiments dedicated to the evaluation of nuclear data, for electronic hardening, and for studying the effects of neutrons on various materials. CALIBAN's irradiation characteristics, and especially its central cavity neutron spectrum, therefore have to be evaluated very accurately. In order to strengthen our knowledge of this spectrum, several adjustment methods based on activation foil measurements have been studied in the laboratory over the past several years. First, two codes included in the UMG package were tested and compared: MAXED and GRAVEL. More recently, the CALIBAN cavity spectrum has been studied using CALMAR, a new adjustment tool currently under development at the CEA Center of Cadarache. The article discusses and compares the results and the quality of the spectrum unfolding obtained with the UMG codes and with the CALMAR software, from a set of activation measurements carried out in the CALIBAN irradiation cavity.
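
    Such adjustment codes iterate a guess spectrum until the activities it predicts through the foils' response functions match the measured ones. Below is a minimal SAND-II/GRAVEL-style multiplicative update, our sketch rather than any of the cited codes; real implementations add measurement-uncertainty weighting and convergence tests.

        import numpy as np

        def gravel_like_adjust(phi0, sigma, A, iters=100):
            """Adjust a guess spectrum phi0 (per energy group) so predicted
            activities sigma @ phi approach the measured foil activities A.
            sigma has shape (n_foils, n_groups); groups with no foil response
            (zero column) would need masking in a real implementation."""
            phi = phi0.copy()
            for _ in range(iters):
                pred = sigma @ phi                    # predicted activities
                W = sigma * phi / pred[:, None]       # fractional response weights
                log_corr = (W * np.log(A / pred)[:, None]).sum(0) / W.sum(0)
                phi *= np.exp(log_corr)               # multiplicative update
            return phi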

  16. The Benchmarking Capacity of a General Outcome Measure of Academic Language in Science and Social Studies

    ERIC Educational Resources Information Center

    Mooney, Paul; Lastrapes, Renée E.

    2016-01-01

    The amount of research evaluating the technical merits of general outcome measures of science and social studies achievement is growing. This study targeted criterion validity for critical content monitoring. Questions addressed the concurrent criterion validity of alternate presentation formats of critical content monitoring and the measure's…

  17. Use of integral experiments in support to the validation of JEFF-3.2 nuclear data evaluation

    NASA Astrophysics Data System (ADS)

    Leclaire, Nicolas; Cochet, Bertrand; Jinaphanh, Alexis; Haeck, Wim

    2017-09-01

    For many years now, IRSN has developed its own continuous-energy Monte Carlo capability, which allows testing various nuclear data libraries. To that end, a validation database of 1136 experiments was built from cases used for the validation of the APOLLO2-MORET 5 multigroup route of the CRISTAL V2.0 package. In this paper, the keff values obtained for more than 200 benchmarks using the JEFF-3.1.1 and JEFF-3.2 libraries are compared to the benchmark keff values, and the main discrepancies are analyzed with respect to the neutron spectrum. Special attention is paid to benchmarks for which the results changed significantly between the two JEFF-3 versions.

  18. Validating vignette and conjoint survey experiments against real-world behavior

    PubMed Central

    Hainmueller, Jens; Hangartner, Dominik; Yamamoto, Teppei

    2015-01-01

    Survey experiments, like vignette and conjoint analyses, are widely used in the social sciences to elicit stated preferences and study how humans make multidimensional choices. However, there is a paucity of research on the external validity of these methods that examines whether the determinants that explain hypothetical choices made by survey respondents match the determinants that explain what subjects actually do when making similar choices in real-world situations. This study compares results from conjoint and vignette analyses on which immigrant attributes generate support for naturalization with closely corresponding behavioral data from a natural experiment in Switzerland, where some municipalities used referendums to decide on the citizenship applications of foreign residents. Using a representative sample from the same population and the official descriptions of applicant characteristics that voters received before each referendum as a behavioral benchmark, we find that the effects of the applicant attributes estimated from the survey experiments perform remarkably well in recovering the effects of the same attributes in the behavioral benchmark. We also find important differences in the relative performances of the different designs. Overall, the paired conjoint design, where respondents evaluate two immigrants side by side, comes closest to the behavioral benchmark; on average, its estimates are within 2 percentage points of the effects in the behavioral benchmark. PMID:25646415

  19. Toxicological Benchmarks for Screening Potential Contaminants of Concern for Effects on Soil and Litter Invertebrates and Heterotrophic Process

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Will, M.E.

    1994-01-01

    This report presents a standard method for deriving benchmarks for the purpose of "contaminant screening," performed by comparing measured ambient concentrations of chemicals to benchmark concentrations. The work was performed under Work Breakdown Structure 1.4.12.2.3.04.07.02 (Activity Data Sheet 8304). In addition, this report presents sets of data concerning the effects of chemicals in soil on invertebrates and soil microbial processes, benchmarks for chemicals potentially associated with United States Department of Energy sites, and a review of the literature describing the experiments from which data were drawn for benchmark derivation.

  20. International land Model Benchmarking (ILAMB) Package v002.00

    DOE Data Explorer

    Collier, Nathaniel [Oak Ridge National Laboratory; Hoffman, Forrest M. [Oak Ridge National Laboratory; Mu, Mingquan [University of California, Irvine; Randerson, James T. [University of California, Irvine; Riley, William J. [Lawrence Berkeley National Laboratory

    2016-05-09

    As a contribution to the International Land Model Benchmarking (ILAMB) Project, we are providing new analysis approaches, benchmarking tools, and science leadership. The goal of ILAMB is to assess and improve the performance of land models through international cooperation and to inform the design of new measurement campaigns and field studies to reduce uncertainties associated with key biogeochemical processes and feedbacks. ILAMB is expected to be a primary analysis tool for CMIP6 and future model-data intercomparison experiments. This team has developed initial prototype benchmarking systems for ILAMB, which will be improved and extended to include ocean model metrics and diagnostics.

  1. International land Model Benchmarking (ILAMB) Package v001.00

    DOE Data Explorer

    Mu, Mingquan [University of California, Irvine; Randerson, James T. [University of California, Irvine; Riley, William J. [Lawrence Berkeley National Laboratory; Hoffman, Forrest M. [Oak Ridge National Laboratory

    2016-05-02

    As a contribution to the International Land Model Benchmarking (ILAMB) Project, we are providing new analysis approaches, benchmarking tools, and science leadership. The goal of ILAMB is to assess and improve the performance of land models through international cooperation and to inform the design of new measurement campaigns and field studies to reduce uncertainties associated with key biogeochemical processes and feedbacks. ILAMB is expected to be a primary analysis tool for CMIP6 and future model-data intercomparison experiments. This team has developed initial prototype benchmarking systems for ILAMB, which will be improved and extended to include ocean model metrics and diagnostics.

  2. Neutronics Benchmarks for the Utilization of Mixed-Oxide Fuel: Joint U.S./ Russian Progress Report for Fiscal Year 1997, Volume 4, Part 8 - Neutron Poison Plates in Assemblies Containing Homogeneous Mixtures of Polystyrene-Moderated Plutonium and Uranium Oxides

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Yavuz, M.

    1999-05-01

    In the 1970s at the Battelle Pacific Northwest Laboratory (PNL), a series of critical experiments using a remotely operated Split-Table Machine was performed with homogeneous mixtures of (Pu-U)O{sub 2}-polystyrene fuels in the form of square compacts having different heights. The experiments determined the critical geometric configurations of MOX fuel assemblies with and without neutron poison plates. With respect to PuO{sub 2} content and moderation [H/(Pu+U) atomic] ratio (MR), two different homogeneous (Pu-U)O{sub 2}-polystyrene mixtures were considered: Mixture 1, 14.62 wt% PuO{sub 2} with 30.6 MR, and Mixture 2, 30.3 wt% PuO{sub 2} with 2.8 MR. In all mixtures, the uranium was depleted to about 0.151 wt% {sup 235}U. Assemblies contained copper, copper-cadmium, or aluminum neutron poison plates with thicknesses up to approximately 2.5 cm. This evaluation contains 22 experiments for Mixture 1 compacts and 10 for Mixture 2 compacts. For Mixture 1, there are 10 configurations with copper plates, 6 with aluminum, and 5 with copper-cadmium; one experiment contained no poison plate. For Mixture 2 compacts, there are 3 configurations with copper, 3 with aluminum, and 3 with copper-cadmium poison plates; one experiment contained no poison plate.

  3. Performance of Landslide-HySEA tsunami model for NTHMP benchmarking validation process

    NASA Astrophysics Data System (ADS)

    Macias, Jorge

    2017-04-01

    In its FY2009 Strategic Plan, the NTHMP required that all numerical tsunami inundation models be verified as accurate and consistent through a model benchmarking process. This was completed in 2011, but only for seismic tsunami sources and in a limited manner for idealized solid underwater landslides. Recent work by various NTHMP states, however, has shown that landslide tsunami hazard may be dominant along significant parts of the US coastline, compared with hazards from other tsunamigenic sources. To perform the above-mentioned validation process, a set of candidate benchmarks was proposed. These benchmarks are based on a subset of available laboratory data sets for solid slide experiments and deformable slide experiments, and include both submarine and subaerial slides. A benchmark based on a historic field event (Valdez, AK, 1964) closes the list of proposed benchmarks. The Landslide-HySEA model participated in the workshop organized at Texas A&M University - Galveston on January 9-11, 2017. The aim of this presentation is to show some of the numerical results obtained with Landslide-HySEA in the framework of this benchmarking validation/verification effort. Acknowledgements. This research has been partially supported by the Junta de Andalucía research project TESELA (P11-RNM7069), the Spanish Government research project SIMURISK (MTM2015-70490-C02-01-R), and Universidad de Málaga, Campus de Excelencia Internacional Andalucía Tech. The GPU computations were performed at the Unit of Numerical Methods (University of Malaga).

  4. Plutonium Critical Mass Curve Comparison to Mass at Upper Subcritical Limit (USL) Using Whisper

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Alwin, Jennifer Louise; Zhang, Ning

    Whisper is computational software designed to assist the nuclear criticality safety analyst with validation studies using the MCNP® Monte Carlo radiation transport package. Standard approaches to validation rely on the selection of benchmarks based upon expert judgment. Whisper uses sensitivity/uncertainty (S/U) methods to select benchmarks relevant to a particular application or set of applications being analyzed. Using these benchmarks, Whisper computes a calculational margin. Whisper also attempts to quantify the margin of subcriticality (MOS) arising from errors in software and uncertainties in nuclear data. The combination of the Whisper-derived calculational margin and MOS comprises the baseline upper subcritical limit (USL), to which an additional margin may be applied by the nuclear criticality safety analyst, as appropriate, to ensure subcriticality. A series of critical mass curves for plutonium, similar to those found in Figure 31 of LA-10860-MS, has been generated using MCNP6.1.1 and the iterative parameter study software WORM_Solver. The baseline USL for each of the data points of the curves was then computed using Whisper 1.1. The USL was then used to determine the equivalent mass for the plutonium metal-water system. ANSI/ANS-8.1 states that it is acceptable to use handbook data, such as the data directly from LA-10860-MS, as they are already considered validated (Section 4.3.4: "Use of subcritical limit data provided in ANSI/ANS standards or accepted reference publications does not require further validation."). This paper takes a novel approach to visualizing traditional critical mass curves and allows comparison with the mass at which keff equals the USL (calculational margin + margin of subcriticality). The intent is to plot the critical mass data along with the USL, not to suggest that already accepted handbook data should face new and more rigorous validation requirements.
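
    The USL arithmetic itself is simple once the two margins are in hand. A minimal illustration with hypothetical margin values (not Whisper output):

        # Hypothetical margins, not values produced by Whisper.
        calculational_margin = 0.015      # from S/U-weighted benchmark biases
        margin_of_subcriticality = 0.005  # software error + nuclear-data uncertainty
        analyst_margin = 0.010            # extra margin applied by the analyst

        baseline_usl = 1.0 - calculational_margin - margin_of_subcriticality
        applied_usl = baseline_usl - analyst_margin
        print(f"baseline USL = {baseline_usl:.3f}, applied USL = {applied_usl:.3f}")
        # A configuration is acceptable when its computed keff (plus its
        # statistical uncertainty) falls below the applied USL.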

  5. Disaster metrics: quantitative benchmarking of hospital surge capacity in trauma-related multiple casualty events.

    PubMed

    Bayram, Jamil D; Zuabi, Shawki; Subbarao, Italo

    2011-06-01

    Hospital surge capacity in multiple casualty events (MCE) is the core of hospital medical response and an integral part of the total medical capacity of the community affected. To date, however, there has been no consensus regarding the definition or quantification of hospital surge capacity. The first objective of this study was to quantitatively benchmark the various components of hospital surge capacity pertaining to the care of critically and moderately injured patients in trauma-related MCE. The second objective was to illustrate the applications of those quantitative parameters in local, regional, national, and international disaster planning; in the distribution of patients to various hospitals by prehospital medical services; and in the decision-making process for ambulance diversion. A two-step approach was adopted in the methodology of this study: an extensive literature search, followed by mathematical modeling. Quantitative studies on hospital surge capacity for trauma injuries were used as the framework for our model. The North Atlantic Treaty Organization triage categories (T1-T4) were used in the modeling process for simplicity. Hospital Acute Care Surge Capacity (HACSC) was defined as the maximum number of critical (T1) and moderate (T2) casualties a hospital can adequately care for per hour, after recruiting all possible additional medical assets. HACSC was modeled as the number of emergency department beds (#EDB) divided by the emergency department time (EDT): HACSC = #EDB/EDT. In trauma-related MCE, the EDT was quantitatively benchmarked to be 2.5 hours. Because most critical and moderate casualties arrive at hospitals within a 6-hour period and require admission (by definition), the hospital bed surge capacity must match the HACSC over 6 hours to ensure coordinated care; it was mathematically benchmarked to be 18% of the staffed hospital bed capacity. Defining and quantitatively benchmarking the different components of hospital surge capacity is vital to hospital preparedness in MCE. Prospective studies of our mathematical model are needed to verify its applicability, generalizability, and validity.
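
    A worked example of the model's arithmetic, using a hypothetical 30-bed emergency department in a 500-bed hospital (the numbers are ours, not the study's):

        # HACSC = #EDB / EDT, with EDT benchmarked at 2.5 hours; bed surge
        # capacity benchmarked at 18% of staffed beds. Inputs are hypothetical.
        ed_beds = 30            # number of emergency department beds (#EDB)
        ed_time_hours = 2.5     # benchmarked ED time per T1/T2 casualty (EDT)
        staffed_beds = 500

        hacsc = ed_beds / ed_time_hours     # T1/T2 casualties treatable per hour
        bed_surge = 0.18 * staffed_beds     # benchmarked bed surge capacity
        print(f"HACSC = {hacsc:.0f} T1/T2 casualties per hour")
        print(f"bed surge capacity = {bed_surge:.0f} beds (18% of staffed beds)")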

  6. Methodology of full-core Monte Carlo calculations with leakage parameter evaluations for benchmark critical experiment analysis

    NASA Astrophysics Data System (ADS)

    Sboev, A. G.; Ilyashenko, A. S.; Vetrova, O. A.

    1997-02-01

    The method of buckling evaluation implemented in the Monte Carlo code MCS is described. The method was applied to the calculational analysis of the well-known light-water experiments TRX-1 and TRX-2. The analysis shows no coincidence among Monte Carlo results obtained in different ways: MCS calculations with the given experimental bucklings; MCS calculations with bucklings evaluated from full-core MCS direct simulations; full-core MCNP and MCS direct simulations; and MCNP and MCS calculations in which the cell-calculation results are corrected by coefficients taking into account the leakage from the core. The buckling values evaluated by full-core MCS calculations also differed from the experimental ones, especially in the case of TRX-1, where the difference corresponded to a 0.5% increase in keff.

  7. Systematic Benchmarking of Diagnostic Technologies for an Electrical Power System

    NASA Technical Reports Server (NTRS)

    Kurtoglu, Tolga; Jensen, David; Poll, Scott

    2009-01-01

    Automated health management is a critical functionality for complex aerospace systems. A wide variety of diagnostic algorithms have been developed to address this technical challenge. Unfortunately, the lack of support to perform large-scale V&V (verification and validation) of diagnostic technologies continues to create barriers to effective development and deployment of such algorithms for aerospace vehicles. In this paper, we describe a formal framework developed for benchmarking of diagnostic technologies. The diagnosed system is the Advanced Diagnostics and Prognostics Testbed (ADAPT), a real-world electrical power system (EPS), developed and maintained at the NASA Ames Research Center. The benchmarking approach provides a systematic, empirical basis to the testing of diagnostic software and is used to provide performance assessment for different diagnostic algorithms.

  8. Toxicological benchmarks for screening potential contaminants of concern for effects on terrestrial plants: 1994 revision

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Will, M.E.; Suter, G.W. II

    1994-09-01

    One of the initial stages in ecological risk assessment for hazardous waste sites is screening contaminants to determine which of them are worthy of further consideration as contaminants of potential concern. This process is termed contaminant screening. It is performed by comparing measured ambient concentrations of chemicals to benchmark concentrations. Currently, no standard benchmark concentrations exist for assessing contaminants in soil with respect to their toxicity to plants. This report presents a standard method for deriving benchmarks for this purpose (phytotoxicity benchmarks), a set of data concerning effects of chemicals in soil or soil solution on plants, and a set of phytotoxicity benchmarks for 38 chemicals potentially associated with United States Department of Energy (DOE) sites. In addition, background information on the phytotoxicity and occurrence of the chemicals in soils is presented, and literature describing the experiments from which data were drawn for benchmark derivation is reviewed. Chemicals that are found in soil at concentrations exceeding both the phytotoxicity benchmark and the background concentration for the soil type should be considered contaminants of potential concern.

  9. Toxicological Benchmarks for Screening Potential Contaminants of Concern for Effects on Terrestrial Plants

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Suter, G.W. II

    1993-01-01

    One of the initial stages in ecological risk assessment for hazardous waste sites is screening contaminants to determine which of them are worthy of further consideration as contaminants of potential concern. This process is termed contaminant screening. It is performed by comparing measured ambient concentrations of chemicals to benchmark concentrations. Currently, no standard benchmark concentrations exist for assessing contaminants in soil with respect to their toxicity to plants. This report presents a standard method for deriving benchmarks for this purpose (phytotoxicity benchmarks), a set of data concerning effects of chemicals in soil or soil solution on plants, and a set of phytotoxicity benchmarks for 38 chemicals potentially associated with United States Department of Energy (DOE) sites. In addition, background information on the phytotoxicity and occurrence of the chemicals in soils is presented, and literature describing the experiments from which data were drawn for benchmark derivation is reviewed. Chemicals that are found in soil at concentrations exceeding both the phytotoxicity benchmark and the background concentration for the soil type should be considered contaminants of potential concern.

  10. Structural Benchmark Creep Testing for Microcast MarM-247 Advanced Stirling Convertor E2 Heater Head Test Article SN18

    NASA Technical Reports Server (NTRS)

    Krause, David L.; Brewer, Ethan J.; Pawlik, Ralph

    2013-01-01

    This report provides test methodology details and qualitative results for the first structural benchmark creep test of an Advanced Stirling Convertor (ASC) heater head of ASC-E2 design heritage. The test article was recovered from a flight-like Microcast MarM-247 heater head specimen previously used in helium permeability testing. The test article was utilized for benchmark creep test rig preparation, wall thickness and diametral laser scan hardware metrological developments, and induction heater custom coil experiments. In addition, a benchmark creep test was performed, terminated after one week when through-thickness cracks propagated at thermocouple weld locations. Following this, it was used to develop a unique temperature measurement methodology using contact thermocouples, thereby enabling future benchmark testing to be performed without the use of conventional welded thermocouples, proven problematic for the alloy. This report includes an overview of heater head structural benchmark creep testing, the origin of this particular test article, test configuration developments accomplished using the test article, creep predictions for its benchmark creep test, qualitative structural benchmark creep test results, and a short summary.

  11. A comparison of five benchmarks

    NASA Technical Reports Server (NTRS)

    Huss, Janice E.; Pennline, James A.

    1987-01-01

    Five benchmark programs were obtained and run on the NASA Lewis CRAY X-MP/24. A comparison was made between the programs' codes and between the methods for calculating performance figures. Several multitasking jobs were run to gain experience in how parallel performance is measured.

  12. GEN-IV Benchmarking of TRISO Fuel Performance Models under Accident Conditions: Modeling Input Data

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Collin, Blaise Paul

    This document presents the benchmark plan for the calculation of particle fuel performance on safety testing experiments that are representative of operational accident transients. The benchmark is dedicated to the modeling of fission product release under accident conditions by fuel performance codes from around the world, and the subsequent comparison to post-irradiation examination (PIE) data from the modeled heating tests. The accident condition benchmark is divided into three parts: • The modeling of a simplified benchmark problem to assess potential numerical calculation issues at low fission product release. • The modeling of the AGR-1 and HFR-EU1bis safety testing experiments. • The comparison of the AGR-1 and HFR-EU1bis modeling results with PIE data. The simplified benchmark case, hereafter named NCC (Numerical Calculation Case), is derived from "Case 5" of the International Atomic Energy Agency (IAEA) Coordinated Research Program (CRP) on coated particle fuel technology [IAEA 2012]. It is included so participants can evaluate their codes at low fission product release. "Case 5" of the IAEA CRP-6 showed large code-to-code discrepancies in the release of fission products, which were attributed to "effects of the numerical calculation method rather than the physical model" [IAEA 2012]. The NCC is therefore intended to check whether these numerical effects persist. The first two steps imply the involvement of the benchmark participants in a modeling effort following the guidelines and recommendations provided by this document. The third step involves the collection of the modeling results by Idaho National Laboratory (INL) and the comparison of these results with the available PIE data. The objective of this document is to provide all input data necessary to model the benchmark cases, and to give some methodology guidelines and recommendations in order to make all results suitable for comparison with each other. Participants should read this document thoroughly to make sure all the data needed for their calculations are provided. Missing data will be added in a revision of the document if necessary. 09/2016: Tables 6 and 8 updated; AGR-2 input data added.

  13. Generation IV benchmarking of TRISO fuel performance models under accident conditions: Modeling input data

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Collin, Blaise P.

    2014-09-01

    This document presents the benchmark plan for the calculation of particle fuel performance on safety testing experiments that are representative of operational accident transients. The benchmark is dedicated to the modeling of fission product release under accident conditions by fuel performance codes from around the world, and the subsequent comparison to post-irradiation examination (PIE) data from the modeled heating tests. The accident condition benchmark is divided into three parts: the modeling of a simplified benchmark problem to assess potential numerical calculation issues at low fission product release; the modeling of the AGR-1 and HFR-EU1bis safety testing experiments; and the comparison of the AGR-1 and HFR-EU1bis modeling results with PIE data. The simplified benchmark case, hereafter named NCC (Numerical Calculation Case), is derived from "Case 5" of the International Atomic Energy Agency (IAEA) Coordinated Research Program (CRP) on coated particle fuel technology [IAEA 2012]. It is included so participants can evaluate their codes at low fission product release. "Case 5" of the IAEA CRP-6 showed large code-to-code discrepancies in the release of fission products, which were attributed to "effects of the numerical calculation method rather than the physical model" [IAEA 2012]. The NCC is therefore intended to check whether these numerical effects persist. The first two steps imply the involvement of the benchmark participants in a modeling effort following the guidelines and recommendations provided by this document. The third step involves the collection of the modeling results by Idaho National Laboratory (INL) and the comparison of these results with the available PIE data. The objective of this document is to provide all input data necessary to model the benchmark cases, and to give some methodology guidelines and recommendations in order to make all results suitable for comparison with each other. Participants should read this document thoroughly to make sure all the data needed for their calculations are provided. Missing data will be added in a revision of the document if necessary.

  14. A new numerical benchmark of a freshwater lens

    NASA Astrophysics Data System (ADS)

    Stoeckl, L.; Walther, M.; Graf, T.

    2016-04-01

    A numerical benchmark for 2-D variable-density flow and solute transport in a freshwater lens is presented. The benchmark is based on results of laboratory experiments conducted by Stoeckl and Houben (2012) using a sand tank on the meter scale. The benchmark describes the formation and degradation of a freshwater lens over time, such as can be found under real-world islands. An error analysis gave the appropriate spatial and temporal discretization of 1 mm and 8.64 s, respectively. The calibrated parameter set was obtained using the parameter estimation tool PEST. Comparing density-coupled and density-uncoupled results showed that the freshwater-saltwater interface position is strongly dependent on density differences. A benchmark that adequately represents saltwater intrusion and includes realistic features of coastal aquifers or freshwater lenses had been lacking; this new benchmark was thus developed and is demonstrated to be suitable for testing variable-density groundwater models applied to saltwater intrusion investigations.
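
    For a quick physical check on such density-coupled results, the classic sharp-interface Ghyben-Herzberg relation gives the expected interface depth from the density contrast. The snippet below is our illustration (an approximation, not the benchmark's numerical model), with lab-scale placeholder values.

        # Ghyben-Herzberg sharp-interface estimate of freshwater lens depth.
        rho_f = 1000.0   # freshwater density, kg/m^3
        rho_s = 1025.0   # saltwater density, kg/m^3
        h = 0.05         # water-table elevation above sea level, m (lab scale)

        z = rho_f / (rho_s - rho_f) * h   # interface depth below sea level
        print(f"interface depth = {z:.2f} m (= 40 x h for these densities)")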

  15. Validation of Cross Sections with Criticality Experiment and Reaction Rates: the Neptunium Case

    NASA Astrophysics Data System (ADS)

    Leong, L. S.; Tassan-Got, L.; Audouin, L.; Berthier, B.; Le Naour, C.; Stéphan, C.; Paradela, C.; Tarrío, D.; Duran, I.

    2014-04-01

    The 237Np neutron-induced fission cross section has recently been measured over a large energy range (from eV to GeV) at the n_TOF facility at CERN. When compared to previous measurements, the n_TOF fission cross section appears to be higher by 5-7% beyond the fission threshold. To check the relevance of the n_TOF data, we considered a criticality experiment performed at Los Alamos with a 6 kg sphere of 237Np, surrounded by uranium highly enriched in 235U so as to approach criticality with fast neutrons. The calculated multiplication factor keff is in better agreement with the experiment when the ENDF/B-VII.0 evaluation of the 237Np fission cross section is replaced by the n_TOF data. We also explored the hypothesis of deficiencies in the 235U inelastic cross section, which has been invoked by some authors to explain the deviation of 750 pcm. The large modification needed to reduce the deviation seems to be incompatible with existing inelastic cross section measurements. We also show that the νbar of 237Np can hardly be incriminated, given the high accuracy of the existing data. Fission rate ratios or averaged fission cross sections measured in several fast neutron fields seem to give contradictory results on the validation of the 237Np cross section, but at least one of the benchmark experiments, for which the active deposits were well calibrated for the number of atoms, favors the n_TOF data set. These outcomes support the hypothesis of a higher fission cross section for 237Np.

  16. Characterization of addressability by simultaneous randomized benchmarking.

    PubMed

    Gambetta, Jay M; Córcoles, A D; Merkel, S T; Johnson, B R; Smolin, John A; Chow, Jerry M; Ryan, Colm A; Rigetti, Chad; Poletto, S; Ohki, Thomas A; Ketchen, Mark B; Steffen, M

    2012-12-14

    The control and handling of errors arising from cross talk and unwanted interactions in multiqubit systems is an important issue in quantum information processing architectures. We introduce a benchmarking protocol that provides information about the amount of addressability present in the system and implement it on coupled superconducting qubits. The protocol consists of randomized benchmarking experiments run both individually and simultaneously on pairs of qubits. A relevant figure of merit for the addressability is then related to the differences in the measured average gate fidelities in the two experiments. We present results from two similar samples with differing cross talk and unwanted qubit-qubit interactions. The results agree with predictions based on simple models of the classical cross talk and Stark shifts.
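
    The underlying analysis fits the standard randomized-benchmarking decay to the measured average sequence fidelities. A minimal sketch (our illustration, with placeholder data rather than the paper's measurements):

        import numpy as np
        from scipy.optimize import curve_fit

        def rb_decay(m, A, B, p):
            """Standard RB model: average sequence fidelity F(m) = A*p**m + B."""
            return A * p**m + B

        # Placeholder sequence lengths and fidelities, not data from the paper.
        m = np.array([1, 5, 10, 20, 50, 100])
        F = np.array([0.99, 0.97, 0.95, 0.91, 0.80, 0.68])

        (A, B, p), _ = curve_fit(rb_decay, m, F, p0=(0.5, 0.5, 0.99))
        d = 2                        # single-qubit Hilbert-space dimension
        r = (1 - p) * (d - 1) / d    # average error per gate
        print(f"p = {p:.4f}, error per gate r = {r:.2e}")
        # Addressability is then assessed by comparing p fitted from individual
        # RB runs with p fitted from simultaneous runs on pairs of qubits.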

  17. Neutronics Benchmarks for the Utilization of Mixed-Oxide Fuel: Joint U.S./Russian Progress Report for Fiscal Year 1997

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Akkurt, H

    2001-01-11

    In 1967, a series of critical experiments was conducted at the Westinghouse Reactor Evaluation Center (WREC) using mixed-oxide (MOX) PuO{sub 2}-UO{sub 2} and/or UO{sub 2} fuels in various lattices and configurations. These experiments were performed under the joint sponsorship of the Empire State Atomic Development Associates (ESADA) plutonium program and Westinghouse. The purpose of these experiments was to develop experimental data to validate analytical methods used in the design of plutonium-bearing replacement fuel for water reactors. Three different fuels were used during the experimental program: two MOX fuels and a low-enriched UO{sub 2} fuel. The MOX fuels were distinguished by their {sup 240}Pu content: 8 wt% {sup 240}Pu and 24 wt% {sup 240}Pu. Both MOX fuels contained 2.0 wt% PuO{sub 2} in natural UO{sub 2}. The UO{sub 2} fuel, with 2.72 wt% enrichment, was used for comparison with the plutonium data and for use in multiregion experiments.

  18. Impact of uncertainties in the uranium 235 cross section resonance structure on characteristics measured in the BFS-79 critical assemblies

    NASA Astrophysics Data System (ADS)

    Andrianova, Olga; Lomakov, Gleb; Manturov, Gennady

    2017-09-01

    The report presents the results of an analysis of benchmark experiments from the international ICSBEP Handbook (HEU-MET-INTER-005) carried out at the SSC RF - IPPE in cooperation with the Idaho National Laboratory (INL, USA), applicable to the verification of calculations for a wide range of tasks related to the safe storage of vitrified radioactive waste. Experiments on the BFS assemblies make it possible to perform a large series of studies needed for neutron data refinement, including measurements of reactivity effects that allow testing the neutron cross section resonance structure. This series of studies is considered a sample joint analysis framework for the differential and integral experiments required to correct nuclear data files of the ROSFOND evaluated neutron data library. It is shown that, despite the wide range of available experimental data, as far as refinement of the resonance region is concerned, the experiments on reactivity measurement make it possible to reflect the peculiarities of the resonance structure more subtly, complementing the time-of-flight measurement method.

  19. Parallelization of NAS Benchmarks for Shared Memory Multiprocessors

    NASA Technical Reports Server (NTRS)

    Waheed, Abdul; Yan, Jerry C.; Saini, Subhash (Technical Monitor)

    1998-01-01

    This paper presents our experiences of parallelizing the sequential implementation of NAS benchmarks using compiler directives on SGI Origin2000 distributed shared memory (DSM) system. Porting existing applications to new high performance parallel and distributed computing platforms is a challenging task. Ideally, a user develops a sequential version of the application, leaving the task of porting to new generations of high performance computing systems to parallelization tools and compilers. Due to the simplicity of programming shared-memory multiprocessors, compiler developers have provided various facilities to allow the users to exploit parallelism. Native compilers on SGI Origin2000 support multiprocessing directives to allow users to exploit loop-level parallelism in their programs. Additionally, supporting tools can accomplish this process automatically and present the results of parallelization to the users. We experimented with these compiler directives and supporting tools by parallelizing sequential implementation of NAS benchmarks. Results reported in this paper indicate that with minimal effort, the performance gain is comparable with the hand-parallelized, carefully optimized, message-passing implementations of the same benchmarks.

  20. Society of Critical Care Medicine

    MedlinePlus

  1. Evaluating virtual hosted desktops for graphics-intensive astronomy

    NASA Astrophysics Data System (ADS)

    Meade, B. F.; Fluke, C. J.

    2018-04-01

    Visualisation of data is critical to understanding astronomical phenomena. Today, many instruments produce datasets that are too big to be downloaded to a local computer, yet many of the visualisation tools used by astronomers are deployed only on desktop computers. Cloud computing is increasingly used to provide a computation and simulation platform in astronomy, but it also offers great potential as a visualisation platform. Virtual hosted desktops, with graphics processing unit (GPU) acceleration, allow interactive, graphics-intensive desktop applications to operate co-located with astronomy datasets stored in remote data centres. By combining benchmarking and user experience testing, with a cohort of 20 astronomers, we investigate the viability of replacing physical desktop computers with virtual hosted desktops. In our work, we compare two Apple MacBook computers (one old and one new, representing hardware at opposite ends of its useful lifetime) with two virtual hosted desktops: one commercial (Amazon Web Services) and one in a private research cloud (the Australian NeCTAR Research Cloud). For two-dimensional image-based tasks and graphics-intensive three-dimensional operations - typical of astronomy visualisation workflows - we found that benchmarks do not necessarily provide the best indication of performance. When compared to typical laptop computers, virtual hosted desktops can provide a better user experience, even with lower performing graphics cards. We also found that virtual hosted desktops are equally simple to use, provide greater flexibility in choice of configuration, and may actually be a more cost-effective option for typical usage profiles.

  2. TiD-Introducing and Benchmarking an Event-Delivery System for Brain-Computer Interfaces.

    PubMed

    Breitwieser, Christian; Tavella, Michele; Schreuder, Martijn; Cincotti, Febo; Leeb, Robert; Muller-Putz, Gernot R

    2017-12-01

    In this paper, we present and analyze an event distribution system for brain-computer interfaces. Events are commonly used to mark and describe incidents during an experiment and are therefore critical for later data analysis or immediate real-time processing. The presented approach, called TiD (Tools for Brain-Computer Interaction interface D), delivers messages in XML format via a bus-like system using transmission control protocol connections or shared memory. A dedicated server dispatches TiD messages to distributed or local clients. The TiD message is designed to be flexible and contains time stamps for event synchronization; the events themselves describe incidents that occur during an experiment. TiD was tested extensively for stability and latency. The effect of event jitter was analyzed and benchmarked on a reference implementation under different conditions, such as gigabit and 100-Mb Ethernet or Wi-Fi, with differing numbers of event receivers. A 3-dB signal attenuation, which occurs when averaging jitter-influenced trials aligned by events, starts to become visible at around 1-2 kHz in the case of a gigabit connection. Mean event distribution times across operating systems range from 0.3 to 0.5 ms for a gigabit network connection for 10^6 events. Results for other environmental conditions are available in this paper. References already using TiD for event distribution are provided, showing the applicability of TiD for event delivery with distributed or local clients.
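
    The quoted 3-dB point can be reproduced with a simple model; the Gaussian-jitter assumption and the numbers below are ours, not the paper's. Averaging event-aligned trials whose alignment carries timing jitter of standard deviation sigma low-pass-filters the average with amplitude response H(f) = exp(-2*pi^2*f^2*sigma^2):

      import numpy as np

      def attenuation_db(f_hz, sigma_s):
          # Amplitude attenuation (dB) of a trial average at frequency f_hz,
          # for Gaussian event jitter with standard deviation sigma_s (seconds).
          H = np.exp(-2.0 * np.pi**2 * f_hz**2 * sigma_s**2)
          return -20.0 * np.log10(H)

      def f_3db(sigma_s):
          # Frequency at which the averaged signal is attenuated by 3 dB,
          # from solving 20*log10(H(f)) = -3 for f.
          return np.sqrt(3.0 * np.log(10.0) / 20.0 / (2.0 * np.pi**2)) / sigma_s

      print(f_3db(0.13e-3))  # a jitter of ~0.13 ms puts the 3-dB point near 1 kHz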

  3. Processing and validation of JEFF-3.1.1 and ENDF/B-VII.0 group-wise cross section libraries for shielding calculations

    NASA Astrophysics Data System (ADS)

    Pescarini, M.; Sinitsa, V.; Orsi, R.; Frisoni, M.

    2013-03-01

    This paper presents a synthesis of the ENEA-Bologna Nuclear Data Group programme dedicated to generating and validating group-wise cross section libraries for shielding and radiation damage deterministic calculations in nuclear fission reactors, following the data processing methodology recommended in the ANSI/ANS-6.1.2-1999 (R2009) American Standard. The VITJEFF311.BOLIB and VITENDF70.BOLIB fine-group coupled n-γ (199 n + 42 γ - VITAMIN-B6 structure) multi-purpose cross section libraries, based on the Bondarenko method for neutron resonance self-shielding and on JEFF-3.1.1 and ENDF/B-VII.0 evaluated nuclear data, respectively, were produced in AMPX format using the NJOY-99.259 and the ENEA-Bologna 2007 Revision of the SCAMPI nuclear data processing systems. Two derived broad-group coupled n-γ (47 n + 20 γ - BUGLE-96 structure) working cross section libraries in FIDO-ANISN format for LWR shielding and pressure vessel dosimetry calculations, named BUGJEFF311.BOLIB and BUGENDF70.BOLIB, were generated by the revised version of SCAMPI, through problem-dependent cross section collapsing and self-shielding from the cited fine-group libraries. The validation results on the criticality safety benchmark experiments for the fine-group libraries and the preliminary validation results for the broad-group working libraries on the PCA-Replica and VENUS-3 engineering neutron shielding benchmark experiments are reported in synthesis.

  4. Benchmark matrix and guide: Part II.

    PubMed

    1991-01-01

    In the last issue of the Journal of Quality Assurance (September/October 1991, Volume 13, Number 5, pp. 14-19), the benchmark matrix developed by Headquarters Air Force Logistics Command was published. Five horizontal levels on the matrix delineate progress in TQM: business as usual, initiation, implementation, expansion, and integration. The six vertical categories that are critical to the success of TQM are leadership, structure, training, recognition, process improvement, and customer focus. In this issue, "Benchmark Matrix and Guide: Part II" will show specifically how to apply the categories of leadership, structure, and training to the benchmark matrix progress levels. At the intersection of each category and level, specific behavior objectives are listed with supporting behaviors and guidelines. Some categories will have objectives that are relatively easy to accomplish, allowing quick progress from one level to the next. Other categories will take considerable time and effort to complete. In the next issue, Part III of this series will focus on recognition, process improvement, and customer focus.

  5. Repulsive four-body interactions of α particles and quasistable nuclear α -particle condensates in heavy self-conjugate nuclei

    NASA Astrophysics Data System (ADS)

    Bai, Dong; Ren, Zhongzhou

    2018-05-01

    We study the effects of repulsive four-body interactions of α particles on nuclear α-particle condensates in heavy self-conjugate nuclei using a semianalytic approach, and find that the repulsive four-body interactions could decrease the critical number of α particles, beyond which quasistable α-particle condensate states can no longer exist, even if these four-body interactions make only tiny contributions to the total energy of the Hoyle-like state of 16O. Explicitly, we study eight benchmark parameter sets and find that the critical number N_cr decreases by |ΔN_cr| ~ 1-4 from the value N_cr ~ 11 obtained with vanishing four-body interactions. We also discuss the effects of four-body interactions on the energies and radii of α-particle condensates. Our study can be useful for future experiments on α-particle condensates in heavy self-conjugate nuclei. Also, the experimental determination of N_cr will eventually help establish a better understanding of the α-particle interactions, especially the four-body interactions.

  6. Benchmark Modeling of the Near-Field and Far-Field Wave Effects of Wave Energy Arrays

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Rhinefrank, Kenneth E; Haller, Merrick C; Ozkan-Haller, H Tuba

    2013-01-26

    This project is an industry-led partnership between Columbia Power Technologies and Oregon State University that will perform benchmark laboratory experiments and numerical modeling of the near-field and far-field impacts of wave scattering from an array of wave energy devices. These benchmark experimental observations will help to fill a gaping hole in our present knowledge of the near-field effects of multiple, floating wave energy converters and are a critical requirement for estimating the potential far-field environmental effects of wave energy arrays. The experiments will be performed at the Hinsdale Wave Research Laboratory (Oregon State University) and will utilize an array of newly developed 'Buoys' that are realistic, lab-scale floating power converters. The array of Buoys will be subjected to realistic, directional wave forcing (1:33 scale) that will approximate the expected conditions (waves and water depths) to be found off the Central Oregon Coast. Experimental observations will include comprehensive in-situ wave and current measurements as well as a suite of novel optical measurements. These new optical capabilities will include imaging of the 3D wave scattering using a binocular stereo camera system, as well as 3D device motion tracking using a newly acquired LED system. These observing systems will capture the 3D motion history of individual Buoys as well as resolve the 3D scattered wave field, thus resolving the constructive and destructive wave interference patterns produced by the array at high resolution. These data combined with the device motion tracking will provide necessary information for array design in order to balance array performance with the mitigation of far-field impacts. As a benchmark data set, these data will be an important resource for testing of models for wave/buoy interactions, buoy performance, and far-field effects on wave and current patterns due to the presence of arrays. Under the proposed project we will initiate high-resolution (fine scale, very near-field) fluid/structure interaction simulations of buoy motions, as well as array-scale, phase-resolving wave scattering simulations. These modeling efforts will utilize state-of-the-art research quality models, which have not yet been brought to bear on this complex large-array wave/structure interaction problem.

  7. USDA-ARS?s Scientific Manuscript database

    Ecosystems that maximize soil organic matter and good soil structure maintain high soil biological functioning, soil health and plant growth. Natural ecosystems such as prairies are valuable benchmarks for developing sustainable crop and soil management practices. Soil biological properties critical...

  8. A benchmark for subduction zone modeling

    NASA Astrophysics Data System (ADS)

    van Keken, P.; King, S.; Peacock, S.

    2003-04-01

    Our understanding of subduction zones hinges critically on the ability to discern their thermal structure and dynamics. Computational modeling has become an essential complementary approach to observational and experimental studies. The accurate modeling of subduction zones is challenging due to the unique geometry, complicated rheological description, and influence of fluid and melt formation. The complicated physics causes problems for the accurate numerical solution of the governing equations. As a consequence, it is essential for the subduction zone community to be able to evaluate the abilities and limitations of various modeling approaches. The participants of a workshop on the modeling of subduction zones, held at the University of Michigan at Ann Arbor, MI, USA in 2002, formulated a number of case studies to be developed into a benchmark similar to previous mantle convection benchmarks (Blankenbach et al., 1989; Busse et al., 1991; Van Keken et al., 1997). Our initial benchmark focuses on the dynamics of the mantle wedge and investigates three different rheologies: constant viscosity, diffusion creep, and dislocation creep. In addition, we investigate the ability of codes to accurately model dynamic pressure and advection-dominated flows. Proceedings of the workshop and the formulation of the benchmark are available at www.geo.lsa.umich.edu/~keken/subduction02.html. We strongly encourage interested research groups to participate in this benchmark. At Nice 2003 we will provide an update and a first set of benchmark results. Interested researchers are encouraged to contact one of the authors for further details.
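
    For orientation, the three wedge rheologies can be written in their commonly used forms (a sketch of the standard laws only; the benchmark's actual parameter values and nondimensionalization are set by the workshop specification, not reproduced here):

      \eta = \eta_0 \ \text{(constant viscosity)}, \qquad
      \eta_{\mathrm{diff}} = A_{\mathrm{diff}} \exp\!\left(\frac{E_{\mathrm{diff}}}{RT}\right), \qquad
      \eta_{\mathrm{disl}} = A_{\mathrm{disl}}\, \dot{\varepsilon}_{II}^{(1-n)/n} \exp\!\left(\frac{E_{\mathrm{disl}}}{nRT}\right)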

  9. Noninterceptive transverse emittance measurements using BPM for Chinese ADS R&D project

    NASA Astrophysics Data System (ADS)

    Wang, Zhi-Jun; Feng, Chi; He, Yuan; Dou, Weiping; Tao, Yue; Chen, Wei-long; Jia, Huan; Liu, Shu-hui; Wang, Wang-sheng; Zhang, Yong; Wu, Jian-qiang; Zhang, Sheng-hu; Zhang, X. L.

    2016-04-01

    Noninterceptive four-dimensional transverse emittance measurements are essential for commissioning high-power continuous-wave (CW) proton linacs as well as for their operation. Conventional emittance measuring devices such as slits and wire scanners are not well suited under these conditions because beam damage is certain. Therefore, a method using noninterceptive Beam Position Monitors (BPMs) was developed and demonstrated on Injector Scheme II of the Chinese Accelerator Driven Sub-critical System (China-ADS) proofing facility at the Institute of Modern Physics (IMP) [1]. The measurement results are in good agreement with wire scanners and slits for low duty-factor pulsed (LDFP) beam. In this paper, the detailed experiment design, data analysis, and result benchmarking are presented.

  10. Nuclear data correlation between different isotopes via integral information

    NASA Astrophysics Data System (ADS)

    Rochman, Dimitri A.; Bauge, Eric; Vasiliev, Alexander; Ferroukhi, Hakim; Perret, Gregory

    2018-05-01

    This paper presents a Bayesian approach based on integral experiments to create correlations between the nuclear data of different isotopes, correlations which do not appear with differential data alone. A simple Bayesian set of equations is presented with random nuclear data, similarly to the usual methods applied with differential data. As a consequence, updated nuclear data (cross sections, ν, fission neutron spectra, and covariance matrices) are obtained, leading to better integral results. An example for 235U and 238U is presented, taking into account the Bigten criticality benchmark.
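
    A minimal sketch of the idea, with all numbers hypothetical (this is not the authors' implementation): random perturbations of 235U and 238U data are weighted by their likelihood against a measured keff, and the weighted posterior exhibits a cross-isotope correlation that the independent priors did not have.

      import numpy as np

      rng = np.random.default_rng(0)
      n = 100_000

      # Prior: independent relative perturbations of a 235U and a 238U cross section.
      x235 = rng.normal(0.0, 0.02, n)  # 2% prior uncertainty (hypothetical)
      x238 = rng.normal(0.0, 0.03, n)  # 3% prior uncertainty (hypothetical)

      # Hypothetical linear sensitivities of keff to the two perturbations.
      keff = 1.0000 + 0.5 * x235 - 0.3 * x238

      # Integral constraint: a Bigten-like measured keff and its uncertainty.
      keff_exp, sigma_exp = 1.0000, 0.002
      w = np.exp(-0.5 * ((keff - keff_exp) / sigma_exp) ** 2)
      w /= w.sum()

      # Posterior cross-isotope correlation induced by the integral information.
      m235, m238 = np.sum(w * x235), np.sum(w * x238)
      cov = np.sum(w * (x235 - m235) * (x238 - m238))
      corr = cov / np.sqrt(np.sum(w * (x235 - m235)**2) * np.sum(w * (x238 - m238)**2))
      print(f"posterior correlation between 235U and 238U data: {corr:+.2f}")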

  11. Indulging anxiety: human enhancement from a Protestant perspective.

    PubMed

    Hanson, Mark J

    1999-08-01

    At the heart of any ethics of human enhancement must be some normative assumptions about human nature. The purpose of this essay is to draw on themes from a Protestant theological anthropology to provide a basis for understanding and evaluating the tension between maintaining our humanity and enhancing it. Drawing primarily on the work of theologian Reinhold Niebuhr, I interpret enhancement as proceeding from the anxiety that characterizes human experience at the juncture of freedom and finiteness. Religious and moral dimensions of human sinfulness are considered in relation to cultural values that motivate human enhancement generally. I employ these dimensions in a series of benchmarks to suggest a background of theological, anthropological, and moral considerations against which enhancement is not to be condemned but rather critically evaluated.

  12. Benchmarking, benchmarks, or best practices? Applying quality improvement principles to decrease surgical turnaround time.

    PubMed

    Mitchell, L

    1996-01-01

    The processes of benchmarking, benchmark data comparative analysis, and study of best practices are distinctly different. The study of best practices is explained with an example based on the Arthur Andersen & Co. 1992 "Study of Best Practices in Ambulatory Surgery". The results of a national best practices study in ambulatory surgery were used to provide our quality improvement team with the goal of improving the turnaround time between surgical cases. The team used a seven-step quality improvement problem-solving process to improve the surgical turnaround time. The national benchmark for turnaround times between surgical cases in 1992 was 13.5 minutes. The initial turnaround time at St. Joseph's Medical Center was 19.9 minutes. After the team implemented solutions, the time was reduced to an average of 16.3 minutes, an 18% improvement. Cost-benefit analysis showed a potential enhanced revenue of approximately $300,000, or a potential savings of $10,119. Applying quality improvement principles to benchmarking, benchmarks, or best practices can improve process performance. Understanding which form of benchmarking the institution wishes to embark on will help focus a team and use appropriate resources. Communicating with professional organizations that have experience in benchmarking will save time and money and help achieve the desired results.

  13. Clear, Complete, and Justified Problem Formulations for Aquatic Life Benchmark Values: Specifying the Dimensions

    EPA Science Inventory

    Nations that develop water quality benchmark values have relied primarily on standard data and methods. However, experience with chemicals such as Se, ammonia, and tributyltin has shown that standard methods do not adequately address some taxa, modes of exposure and effects. Deve...

  14. Benchmarking Academic Libraries: An Australian Case Study.

    ERIC Educational Resources Information Center

    Robertson, Margaret; Trahn, Isabella

    1997-01-01

    Discusses experiences and outcomes of benchmarking at the Queensland University of Technology (Australia) library that compared acquisitions, cataloging, document delivery, and research support services with those of the University of New South Wales. Highlights include results as a catalyst for change, and the use of common output and performance…

  15. International E-Benchmarking: Flexible Peer Development of Authentic Learning Principles in Higher Education

    ERIC Educational Resources Information Center

    Leppisaari, Irja; Vainio, Leena; Herrington, Jan; Im, Yeonwook

    2011-01-01

    More and more, social technologies and virtual work methods are facilitating new ways of crossing boundaries in professional development and international collaborations. This paper examines the peer development of higher education teachers through the experiences of the IVBM project (International Virtual Benchmarking, 2009-2010). The…

  16. CLEAR, COMPLETE, AND JUSTIFIED PROBLEM FORMULATIONS FOR AQUATIC LIFE BENCHMARK VALUES: SPECIFYING THE DIMENSIONS

    EPA Science Inventory

    Nations that develop water quality benchmark values have relied primarily on standard data and methods. However, experience with chemicals such as Se, ammonia, and tributyltin has shown that standard methods do not adequately address some taxa, modes of exposure and effects. Deve...

  17. Toxicological benchmarks for screening potential contaminants of concern for effects on soil and litter invertebrates and heterotrophic process

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Will, M.E.; Suter, G.W. II

    1994-09-01

    One of the initial stages in ecological risk assessments for hazardous waste sites is the screening of contaminants to determine which of them are worthy of further consideration as "contaminants of potential concern." This process is termed "contaminant screening." It is performed by comparing measured ambient concentrations of chemicals to benchmark concentrations. Currently, no standard benchmark concentrations exist for assessing contaminants in soil with respect to their toxicity to soil- and litter-dwelling invertebrates, including earthworms and other micro- and macroinvertebrates, or to heterotrophic bacteria and fungi. This report presents a standard method for deriving benchmarks for this purpose, sets of data concerning effects of chemicals in soil on invertebrates and soil microbial processes, and benchmarks for chemicals potentially associated with United States Department of Energy sites. In addition, the literature describing the experiments from which data were drawn for benchmark derivation is cited. Chemicals that are found in soil at concentrations exceeding both the benchmarks and the background concentration for the soil type should be considered contaminants of potential concern.

  18. Test One to Test Many: A Unified Approach to Quantum Benchmarks

    NASA Astrophysics Data System (ADS)

    Bai, Ge; Chiribella, Giulio

    2018-04-01

    Quantum benchmarks are routinely used to validate the experimental demonstration of quantum information protocols. Many relevant protocols, however, involve an infinite set of input states, of which only a finite subset can be used to test the quality of the implementation. This is a problem, because the benchmark for the finitely many states used in the test can be higher than the original benchmark calculated for infinitely many states. This situation arises in the teleportation and storage of coherent states, for which the benchmark of 50% fidelity is commonly used in experiments, although finite sets of coherent states normally lead to higher benchmarks. Here, we show that the average fidelity over all coherent states can be indirectly probed with a single setup, requiring only two-mode squeezing, a 50-50 beam splitter, and homodyne detection. Our setup enables a rigorous experimental validation of quantum teleportation, storage, amplification, attenuation, and purification of noisy coherent states. More generally, we prove that every quantum benchmark can be tested by preparing a single entangled state and measuring a single observable.

  19. Benchmark gamma-ray skyshine experiment

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Nason, R.R.; Shultis, J.K.; Faw, R.E.

    1982-01-01

    A benchmark gamma-ray skyshine experiment is described in which {sup 60}Co sources were either collimated into an upward 150-deg conical beam or shielded vertically by two different thicknesses of concrete. A NaI(Tl) spectrometer and a high-pressure ion chamber were used to measure, respectively, the energy spectrum and the 4π exposure rate of the air-reflected gamma photons up to 700 m from the source. Analyses of the data and comparison to DOT discrete ordinates calculations are presented.

  20. Material Activation Benchmark Experiments at the NuMI Hadron Absorber Hall in Fermilab

    NASA Astrophysics Data System (ADS)

    Matsumura, H.; Matsuda, N.; Kasugai, Y.; Toyoda, A.; Yashima, H.; Sekimoto, S.; Iwase, H.; Oishi, K.; Sakamoto, Y.; Nakashima, H.; Leveling, A.; Boehnlein, D.; Lauten, G.; Mokhov, N.; Vaziri, K.

    2014-06-01

    In our previous study, double and mirror-symmetric activation peaks found for Al and Au arranged spatially on the back of the hadron absorber of the NuMI beamline in Fermilab were considerably higher than those expected purely from muon-induced reactions. From material activation benchmark experiments, we conclude that this activation is due to hadrons with energy greater than 3 GeV that had passed downstream through small gaps in the hadron absorber.

  1. Revenue cycle management.

    PubMed

    Manley, Ray; Satiani, Bhagwan

    2009-11-01

    With the widening gap between overhead expenses and reimbursement, management of the revenue cycle is a critical part of a successful vascular surgery practice. It is important to review the data on all the components of the revenue cycle: payer contracting, appointment scheduling, preregistration, registration process, coding and capturing charges, proper billing of patients and insurers, follow-up of accounts receivable, and finally using appropriate benchmarking. The industry benchmarks used should be those of peers in identical groups. Warning signs of poor performance are discussed enabling the practice to formulate a performance improvement plan.

  2. An Approach for Validating Actinide and Fission Product Burnup Credit Criticality Safety Analyses--Criticality (keff) Predictions

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Scaglione, John M; Mueller, Don; Wagner, John C

    2011-01-01

    One of the most significant remaining challenges associated with expanded implementation of burnup credit in the United States is the validation of depletion and criticality calculations used in the safety evaluation - in particular, the availability and use of applicable measured data to support validation, especially for fission products. Applicants and regulatory reviewers have been constrained by both a scarcity of data and a lack of clear technical basis or approach for use of the data. U.S. Nuclear Regulatory Commission (NRC) staff have noted that the rationale for restricting their Interim Staff Guidance on burnup credit (ISG-8) to actinide-only is based largely on the lack of clear, definitive experiments that can be used to estimate the bias and uncertainty for computational analyses associated with using burnup credit. To address the issue of validation, the NRC initiated a project with the Oak Ridge National Laboratory to (1) develop and establish a technically sound validation approach (both depletion and criticality) for commercial spent nuclear fuel (SNF) criticality safety evaluations based on best-available data and methods and (2) apply the approach for representative SNF storage and transport configurations/conditions to demonstrate its usage and applicability, as well as to provide reference bias results. The purpose of this paper is to describe the criticality (k{sub eff}) validation approach, and resulting observations and recommendations. Validation of the isotopic composition (depletion) calculations is addressed in a companion paper at this conference. For criticality validation, the approach is to utilize (1) available laboratory critical experiment (LCE) data from the International Handbook of Evaluated Criticality Safety Benchmark Experiments and the French Haut Taux de Combustion (HTC) program to support validation of the principal actinides and (2) calculated sensitivities, nuclear data uncertainties, and the limited available fission product LCE data to predict and verify individual biases for relevant minor actinides and fission products. This paper (1) provides a detailed description of the approach and its technical bases, (2) describes the application of the approach for representative pressurized water reactor and boiling water reactor safety analysis models to demonstrate its usage and applicability, (3) provides reference bias results based on the prerelease SCALE 6.1 code package and ENDF/B-VII nuclear cross-section data, and (4) provides recommendations for application of the results and methods to other code and data packages.
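
    As a schematic of the bookkeeping this approach implies (all numbers hypothetical; the paper's actual biases come from SCALE 6.1 with ENDF/B-VII data), the keff bias combines a conventional result from applicable critical experiments for the major actinides with sensitivity-times-uncertainty terms for nuclides that lack applicable experiments:

      import numpy as np

      # C/E values from hypothetical applicable critical experiments (actinides):
      c_over_e = np.array([0.9992, 1.0004, 0.9987, 0.9995, 1.0001])
      bias_actinides = c_over_e.mean() - 1.0  # about -4e-4 in keff

      # Fission-product term: |keff sensitivity| x relative data uncertainty.
      s_fp = 0.004      # hypothetical sensitivity (fractional dk/k per dsigma/sigma)
      dsigma = 0.05     # hypothetical 5% cross-section uncertainty
      bias_fp = s_fp * dsigma

      print(f"combined bias allowance: {bias_actinides - bias_fp:+.5f} (delta-keff)")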

  3. Issues in Benchmarking Human Reliability Analysis Methods: A Literature Review

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Ronald L. Boring; Stacey M. L. Hendrickson; John A. Forester

    There is a diversity of human reliability analysis (HRA) methods available for use in assessing human performance within probabilistic risk assessments (PRA). Due to the significant differences in the methods, including the scope, approach, and underlying models, there is a need for an empirical comparison investigating the validity and reliability of the methods. To accomplish this empirical comparison, a benchmarking study comparing and evaluating HRA methods in assessing operator performance in simulator experiments is currently underway. In order to account for as many effects as possible in the construction of this benchmarking study, a literature review was conducted, reviewing past benchmarking studies in the areas of psychology and risk assessment. A number of lessons learned through these studies are presented in order to aid in the design of future HRA benchmarking endeavors.

  4. XWeB: The XML Warehouse Benchmark

    NASA Astrophysics Data System (ADS)

    Mahboubi, Hadj; Darmont, Jérôme

    With the emergence of XML as a standard for representing business data, new decision support applications are being developed. These XML data warehouses aim at supporting On-Line Analytical Processing (OLAP) operations that manipulate irregular XML data. To ensure feasibility of these new tools, important performance issues must be addressed. Performance is customarily assessed with the help of benchmarks. However, decision support benchmarks do not currently support XML features. In this paper, we introduce the XML Warehouse Benchmark (XWeB), which aims at filling this gap. XWeB derives from the relational decision support benchmark TPC-H. It is mainly composed of a test data warehouse that is based on a unified reference model for XML warehouses and that features XML-specific structures, and its associated XQuery decision support workload. XWeB's usage is illustrated by experiments on several XML database management systems.

  5. Statistical Analysis of NAS Parallel Benchmarks and LINPACK Results

    NASA Technical Reports Server (NTRS)

    Meuer, Hans-Werner; Simon, Horst D.; Strohmeier, Erich; Lasinski, T. A. (Technical Monitor)

    1994-01-01

    In the last three years extensive performance data have been reported for parallel machines, both based on the NAS Parallel Benchmarks and on LINPACK. In this study we have used the reported benchmark results and performed a number of statistical experiments using factor, cluster, and regression analyses. In addition to the performance results of LINPACK and the eight NAS parallel benchmarks, we have also included the peak performance of the machine and the LINPACK n and n_{1/2} values. Some of the results and observations can be summarized as follows: 1) All benchmarks are strongly correlated with peak performance. 2) LINPACK and EP each have a unique signature. 3) The remaining NPB can be grouped into three groups as follows: (CG and IS), (LU and SP), and (MG, FT, and BT). Hence three (or four with EP) benchmarks are sufficient to characterize the overall NPB performance. Our poster presentation will follow a standard poster format and will present the data of our statistical analysis in detail.
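
    A small sketch (hypothetical data; not the study's code) of the kind of analysis described: correlate benchmark results across machines, then cluster the benchmarks on the resulting correlation signatures.

      import numpy as np
      from scipy.cluster.hierarchy import linkage, fcluster
      from scipy.spatial.distance import pdist

      benchmarks = ["LINPACK", "EP", "CG", "IS", "LU", "SP", "MG", "FT", "BT"]
      rng = np.random.default_rng(1)
      # Stand-in performance results: rows = machines, columns = benchmarks.
      perf = rng.lognormal(mean=3.0, sigma=0.5, size=(12, len(benchmarks)))

      corr = np.corrcoef(perf.T)              # benchmark-by-benchmark correlations
      dist = pdist(corr)                      # distance between correlation signatures
      groups = fcluster(linkage(dist, method="average"), t=4, criterion="maxclust")
      for name, g in zip(benchmarks, groups):
          print(f"{name}: group {g}")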

  6. The introduction of Greek Central Health Fund: Has the reform met its goal in the sector of Primary Health Care or is there a new model needed?

    PubMed

    Polyzos, Nikos; Karakolias, Stefanos; Dikeos, Costas; Theodorou, Mamas; Kastanioti, Catherine; Mama, Kalomira; Polizoidis, Periklis; Skamnakis, Christoforos; Tsairidis, Charalampos; Thireos, Eleutherios

    2014-11-25

    The National Organization for Healthcare Provision (EOPYY) originates from the recent reform in Greek healthcare, aiming, amidst economic predicament, at the rationalization of health expenditure and the reactivation of the pivotal role of Primary Health Care (PHC). The health funding (public/private) mix is examined, alongside the role of pre-existing health insurance funds. The main pursuit of this paper is to evaluate whether EOPYY has met its goals. The article surveys best practices in advanced health systems and similar sickness funds. The main benchmarks focus on PHC provision and providers' reimbursement. It then turns to an analysis of EOPYY, focusing on specific questions and searching the relevant databases. It compares the best-practice examples to EOPYY (alongside further developments set by new legislation in L 4238/14), revealing weaknesses related to a non-integrated PHC network, unbalanced manpower, non-gatekeeping, under-financing, and other funding problems caused by the current crisis. Finally, a new model of medical procedures cost accounting was tested in health centers. An alternative operation of EOPYY, functioning primarily as an insurer while its proprietary units are integrated with those of the NHS, is proposed. The paper claims it is critical to revise the current reimbursement system, which favors induced demand, via per capita payments for physicians combined with extra pay-for-performance payments, while cost accounting corroborates a prospective system for the NHS's and EOPYY's units under a combination of global budgets and Ambulatory Patient Groups (APGs). Self-critical points on the limitations of the results, due to the lack of adequate data (not) provided by EOPYY, are initially raised. Then the debate between 'copying' benchmarks and selectively adopting and adapting best practices from wider experience is discussed, with preference for the latter; the idea of an 'a la carte' choice of international examples is proposed. The 'results' discussing EOPYY's dual function and the induced-demand-favoring reimbursement system are further critically examined. International experience shows evidence of effective alternatives, such as per capita and pay-for-performance payments for practicing doctors, as well as per-case reimbursement for health centers under global budget principles.

  7. Internal Quality Assurance Benchmarking. ENQA Workshop Report 20

    ERIC Educational Resources Information Center

    Blackstock, Douglas; Burquel, Nadine; Comet, Nuria; Kajaste, Matti; dos Santos, Sergio Machado; Marcos, Sandra; Moser, Marion; Ponds, Henri; Scheuthle, Harald; Sixto, Luis Carlos Velon

    2012-01-01

    The Internal Quality Assurance group of ENQA (IQA Group) has been organising a yearly seminar for its members since 2007. The main objective is to share experiences concerning the internal quality assurance of work processes in the participating agencies. The overarching theme of the 2011 seminar was how to use benchmarking as a tool for…

  8. Benchmarking Outcomes in the Critically Injured Trauma Patient and the Effect of Implementing Standard Operating Procedures

    PubMed Central

    Cuschieri, Joseph; Johnson, Jeffrey L.; Sperry, Jason; West, Michael A.; Moore, Ernest E.; Minei, Joseph P.; Bankey, Paul E.; Nathens, Avery B.; Cuenca, Alex G.; Efron, Philip A.; Hennessy, Laura; Xiao, Wenzhong; Mindrinos, Michael N.; McDonald-Smith, Grace P.; Mason, Philip H.; Billiar, Timothy R.; Schoenfeld, David A.; Warren, H. Shaw; Cobb, J. Perren; Moldawer, Lyle L.; Davis, Ronald W.; Maier, Ronald V.; Tompkins, Ronald G.

    2012-01-01

    Objective To determine and compare outcomes with accepted benchmarks in trauma care at seven academic Level I trauma centers in which patients were treated based on a series of standard operating procedures (SOPs). Background Injury remains the leading cause of death for those under 45 years of age. We describe the baseline patient characteristics and well-defined outcomes of persons hospitalized in the United States for severe blunt trauma. Methods We followed 1,637 trauma patients from 2003–2009 up to 28 hospital days using SOPs developed at the onset of the study. An extensive database on patient and injury characteristics, clinical treatment, and outcomes was created. These data were compared with existing trauma benchmarks. Results The study patients were critically injured and in shock. SOP compliance improved 10–40% during the study period. Multiple organ failure and mortality rates were 34.8% and 16.7% respectively. Time to recovery, defined as the time until the patient was free of organ failure for at least two consecutive days, was developed as a new outcome measure. There was a reduction in mortality rate in the cohort during the study that cannot be explained by changes in the patient population. Conclusions This study provides the current benchmark and the overall positive effect of implementing SOPs for severely injured patients. Over the course of the study, there were improvements in morbidity and mortality and increasing compliance with SOPs. Mortality was surprisingly low, given the degree of injury, and improved over the duration of the study, which correlated with improved SOP compliance. PMID:22470077

  9. Benchmarking outcomes in the critically injured trauma patient and the effect of implementing standard operating procedures.

    PubMed

    Cuschieri, Joseph; Johnson, Jeffrey L; Sperry, Jason; West, Michael A; Moore, Ernest E; Minei, Joseph P; Bankey, Paul E; Nathens, Avery B; Cuenca, Alex G; Efron, Philip A; Hennessy, Laura; Xiao, Wenzhong; Mindrinos, Michael N; McDonald-Smith, Grace P; Mason, Philip H; Billiar, Timothy R; Schoenfeld, David A; Warren, H Shaw; Cobb, J Perren; Moldawer, Lyle L; Davis, Ronald W; Maier, Ronald V; Tompkins, Ronald G

    2012-05-01

    To determine and compare outcomes with accepted benchmarks in trauma care at 7 academic level I trauma centers in which patients were treated on the basis of a series of standard operating procedures (SOPs). Injury remains the leading cause of death for those younger than 45 years. This study describes the baseline patient characteristics and well-defined outcomes of persons hospitalized in the United States for severe blunt trauma. We followed 1637 trauma patients from 2003 to 2009 up to 28 hospital days using SOPs developed at the onset of the study. An extensive database on patient and injury characteristics, clinical treatment, and outcomes was created. These data were compared with existing trauma benchmarks. The study patients were critically injured and were in shock. SOP compliance improved 10% to 40% during the study period. Multiple organ failure and mortality rates were 34.8% and 16.7%, respectively. Time to recovery, defined as the time until the patient was free of organ failure for at least 2 consecutive days, was developed as a new outcome measure. There was a reduction in mortality rate in the cohort during the study that cannot be explained by changes in the patient population. This study provides the current benchmark and the overall positive effect of implementing SOPs for severely injured patients. Over the course of the study, there were improvements in morbidity and mortality rates and increasing compliance with SOPs. Mortality was surprisingly low, given the degree of injury, and improved over the duration of the study, which correlated with improved SOP compliance.

  10. Thermo-hydro-mechanical-chemical processes in fractured-porous media: Benchmarks and examples

    NASA Astrophysics Data System (ADS)

    Kolditz, O.; Shao, H.; Görke, U.; Kalbacher, T.; Bauer, S.; McDermott, C. I.; Wang, W.

    2012-12-01

    The book comprises an assembly of benchmarks and examples for porous media mechanics collected over the last twenty years. Analysis of thermo-hydro-mechanical-chemical (THMC) processes is essential to many applications in environmental engineering, such as geological waste deposition, geothermal energy utilisation, carbon capture and storage, water resources management, hydrology, and even climate change. In order to assess the feasibility as well as the safety of geotechnical applications, process-based modelling is the only tool for putting numbers to future scenarios, i.e., for quantifying them. This places a huge responsibility on the reliability of computational tools. Benchmarking is an appropriate methodology to verify the quality of modelling tools based on best practices. Moreover, benchmarking and code comparison foster community efforts. The benchmark book is part of the OpenGeoSys initiative - an open source project to share knowledge and experience in environmental analysis and scientific computation.

  11. TWANG-PIC, a novel gyro-averaged one-dimensional particle-in-cell code for interpretation of gyrotron experiments

    NASA Astrophysics Data System (ADS)

    Braunmueller, F.; Tran, T. M.; Vuillemin, Q.; Alberti, S.; Genoud, J.; Hogge, J.-Ph.; Tran, M. Q.

    2015-06-01

    A new gyrotron simulation code for simulating the beam-wave interaction using a monomode time-dependent self-consistent model is presented. The new code TWANG-PIC is derived from the trajectory-based code TWANG by describing the electron motion in a gyro-averaged one-dimensional Particle-In-Cell (PIC) approach. In comparison to common PIC-codes, it is distinguished by its computation speed, which makes its use in parameter scans and in experiment interpretation possible. A benchmark of the new code is presented as well as a comparative study between the two codes. This study shows that the inclusion of a time-dependence in the electron equations, as it is the case in the PIC-approach, is mandatory for simulating any kind of non-stationary oscillations in gyrotrons. Finally, the new code is compared with experimental results and some implications of the violated model assumptions in the TWANG code are disclosed for a gyrotron experiment in which non-stationary regimes have been observed and for a critical case that is of interest in high power gyrotron development.

  12. Characterization of a BODIPY Dye as an Active Species for Redox Flow Batteries.

    PubMed

    Kosswattaarachchi, Anjula M; Friedman, Alan E; Cook, Timothy R

    2016-12-08

    An all-organic redox flow battery (RFB) employing a fluorescent boron-dipyrromethene (BODIPY) dye (PM567) was investigated. In a RFB, the stability of the electrolyte in all charged states is critically linked to coulombic efficiency. To evaluate stability, bulk electrolysis and cyclic voltammetry (CV) experiments were performed. Oxidized and reduced, PM567 does not remain intact; however, the products of bulk electrolysis evolve over time to show stable redox behavior, making the dye a precursor for the active species of an RFB. A theoretical cell potential of 2.32 V was predicted from CV experiments with a working discharge voltage of approximately 1.6 V in a static test cell. Mass spectrometry was used to identify the products of bulk electrolysis. Related experiments were carried out using ferrocene and cobaltocenium hexafluorophosphate as redox-stable benchmarks to further explain the stability results. The coulombic efficiency of a model cell using PM567 as a precursor for charge carriers stabilized around 73 %. © 2016 Wiley-VCH Verlag GmbH & Co. KGaA, Weinheim.
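
    For reference, the theoretical cell potential quoted above is simply the separation of the half-wave potentials of the oxidation and reduction couples seen in CV; the split below is illustrative (only the 2.32 V difference is from the abstract):

      E_{\mathrm{cell}} = E_{1/2}^{\mathrm{ox}} - E_{1/2}^{\mathrm{red}},
      \quad\text{e.g.}\quad (+1.10\ \mathrm{V}) - (-1.22\ \mathrm{V}) = 2.32\ \mathrm{V}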

  13. TWANG-PIC, a novel gyro-averaged one-dimensional particle-in-cell code for interpretation of gyrotron experiments

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Braunmueller, F., E-mail: falk.braunmueller@epfl.ch; Tran, T. M.; Alberti, S.

    A new gyrotron simulation code for simulating the beam-wave interaction using a monomode time-dependent self-consistent model is presented. The new code TWANG-PIC is derived from the trajectory-based code TWANG by describing the electron motion in a gyro-averaged one-dimensional Particle-In-Cell (PIC) approach. In comparison to common PIC-codes, it is distinguished by its computation speed, which makes its use in parameter scans and in experiment interpretation possible. A benchmark of the new code is presented as well as a comparative study between the two codes. This study shows that the inclusion of a time-dependence in the electron equations, as it is the case in the PIC-approach, is mandatory for simulating any kind of non-stationary oscillations in gyrotrons. Finally, the new code is compared with experimental results and some implications of the violated model assumptions in the TWANG code are disclosed for a gyrotron experiment in which non-stationary regimes have been observed and for a critical case that is of interest in high power gyrotron development.

  14. Modernization at the Y-12 National Security Complex: A Case for Additional Experimental Benchmarks

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Thornbury, M. L.; Juarez, C.; Krass, A. W.

    Efforts are underway at the Y-12 National Security Complex (Y-12) to modernize the recovery, purification, and consolidation of un-irradiated, highly enriched uranium metal. Successful integration of advanced technology such as Electrorefining (ER) eliminates many of the intermediate chemistry systems and processes that are the current and historical basis of the nuclear fuel cycle at Y-12. The cost of operations, the inventory of hazardous chemicals, and the volume of waste are significantly reduced by ER. It also introduces unique material forms and compositions related to the chemistry of chloride salts for further consideration in safety analysis and engineering. The work herein briefly describes recent investigations of nuclear criticality for 235UO2Cl2 (uranyl chloride) and 6LiCl (lithium chloride) in aqueous solution. Of particular interest is the minimum critical mass of highly enriched uranium as a function of the molar ratio of 6Li to 235U. The work herein also briefly describes recent investigations of nuclear criticality for 235U metal reflected by salt mixtures of 6LiCl or 7LiCl (lithium chloride), KCl (potassium chloride), and 235UCl3 or 238UCl3 (uranium tri-chloride). Computational methods for analysis of nuclear criticality safety and published nuclear data are employed in the absence of directly relevant experimental criticality benchmarks.

  15. Benchmarking initiatives in the water industry.

    PubMed

    Parena, R; Smeets, E

    2001-01-01

    Customer satisfaction and service care push professionals in the water industry every day to seek to improve their performance, lowering costs and increasing the provided service level. Process Benchmarking is generally recognised as a systematic mechanism of comparing one's own utility with other utilities or businesses with the intent of self-improvement by adopting structures or methods used elsewhere. The IWA Task Force on Benchmarking, operating inside the Statistics and Economics Committee, has been committed to developing a generally accepted concept of Process Benchmarking to support water decision-makers in addressing issues of efficiency. In a first step, the Task Force disseminated among the Committee members a questionnaire focused on gathering suggestions about the kind, the degree of evolution, and the main concepts of Benchmarking adopted in the represented countries. A comparison between the guidelines adopted in The Netherlands and Scandinavia has recently challenged the Task Force to draft a methodology for worldwide process benchmarking in the water industry. The paper provides a framework of the most interesting benchmarking experiences in the water sector and describes in detail both the final results of the survey and the methodology, focused on the identification of possible improvement areas.

  16. Reactor Testing and Qualification: Prioritized High-level Criticality Testing Needs

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    S. Bragg-Sitton; J. Bess; J. Werner

    2011-09-01

    Researchers at the Idaho National Laboratory (INL) were tasked with reviewing possible criticality testing needs to support development of the fission surface power system reactor design. Reactor physics testing can provide significant information to aid in development of technologies associated with small, fast spectrum reactors that could be applied for non-terrestrial power systems, leading to eventual system qualification. Several studies have been conducted in recent years to assess the data and analyses required to design and build a space fission power system with high confidence that the system will perform as designed [Marcille, 2004a, 2004b; Weaver, 2007; Parry et al., 2008]. This report will provide a summary of previous critical tests and physics measurements that are potentially applicable to the current reactor design (both those that have been benchmarked and those not yet benchmarked), summarize recent studies of potential nuclear testing needs for space reactor development and their applicability to the current baseline fission surface power (FSP) system design, and provide an overview of a suite of tests (separate effects, sub-critical or critical) that could fill in the information database to improve the accuracy of physics modeling efforts as the FSP design is refined. Some recommendations for tasks that could be completed in the near term are also included. Specific recommendations on critical test configurations will be reserved until after the sensitivity analyses being conducted by Los Alamos National Laboratory (LANL) are completed (due August 2011).

  17. Benchmarking carbon-nitrogen interactions in Earth System Models to observations: An inter-comparison of nitrogen limitation in global land surface models with carbon and nitrogen cycles (CLM-CN and O-CN)

    NASA Astrophysics Data System (ADS)

    Thomas, R. Q.; Zaehle, S.; Templer, P. H.; Goodale, C. L.

    2011-12-01

    Predictions of climate change depend on accurately modeling the feedbacks among the carbon cycle, nitrogen cycle, and climate system. Several global land surface models have shown that nitrogen limitation determines how land carbon fluxes respond to rising CO2, nitrogen deposition, and climate change, thereby influencing predictions of climate change. However, the magnitude of the carbon-nitrogen-climate feedbacks varies considerably by model, leading to critical and timely questions of why they differ and how they compare to field observations. To address these questions, we initiated a model inter-comparison of spatial patterns and drivers of nitrogen limitation. The experiment assessed the regional consequences of sustained nitrogen additions in a set of 25-year global nitrogen fertilization simulations. The model experiments were designed to cover effects from small changes in nitrogen inputs associated with plausible increases in nitrogen deposition to large changes associated with field-based nitrogen fertilization experiments. The analyses of model simulations included assessing the geographically varying degree of nitrogen limitation on plant and soil carbon cycling and the mechanisms underlying model differences. Here, we present results from two global land-surface models (CLM-CN and O-CN) with differing approaches to modeling carbon-nitrogen interactions. The predictions from each model were compared to a set of globally distributed observational data that includes nitrogen fertilization experiments, 15N tracer studies, small catchment nitrogen input-output studies, and syntheses across nitrogen deposition gradients. Together these datasets test many aspects of carbon-nitrogen coupling and are able to differentiate between the two models. Overall, this study is the first to explicitly benchmark carbon and nitrogen interactions in Earth System Models using a range of observations and is a foundation for future inter-comparisons.

  18. Correlation of Noncancer Benchmark Doses in Short- and Long-Term Rodent Bioassays.

    PubMed

    Kratchman, Jessica; Wang, Bing; Fox, John; Gray, George

    2018-05-01

    This study investigated whether, in the absence of chronic noncancer toxicity data, short-term noncancer toxicity data can be used to predict chronic toxicity effect levels by focusing on the dose-response relationship instead of a critical effect. Data from National Toxicology Program (NTP) technical reports have been extracted and modeled using the Environmental Protection Agency's Benchmark Dose Software. Best-fit, minimum benchmark doses (BMDs) and benchmark dose lower limits (BMDLs) have been modeled for all NTP pathologist-identified significant nonneoplastic lesions, final mean body weight, and mean organ weight of 41 chemicals tested by NTP between 2000 and 2012. Models were then developed at the chemical level using orthogonal regression techniques to predict chronic (two-year) noncancer health effect levels from the results of the short-term (three-month) toxicity data. The findings indicate that short-term animal studies may reasonably provide a quantitative estimate of a chronic BMD or BMDL. This can allow for faster development of human health toxicity values for risk assessment for chemicals that lack chronic toxicity data. © 2017 Society for Risk Analysis.
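
    A minimal sketch of the chemical-level regression step (assumed form; the data below are hypothetical log10 BMDLs, not values from the study), using orthogonal distance regression so that error in both the short-term and chronic estimates is respected:

      import numpy as np
      from scipy import odr

      short_term = np.array([0.5, 1.2, 1.8, 2.3, 2.9])  # log10 mg/kg-day, 3-month
      chronic = np.array([0.1, 0.8, 1.5, 1.9, 2.6])     # log10 mg/kg-day, 2-year

      # Built-in univariate linear model: y = beta[0] * x + beta[1].
      out = odr.ODR(odr.Data(short_term, chronic), odr.unilinear).run()
      slope, intercept = out.beta
      print(f"log10(chronic BMDL) ~= {slope:.2f} * log10(short-term BMDL) {intercept:+.2f}")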

  19. Winning Strategy: Set Benchmarks of Early Success to Build Momentum for the Long Term

    ERIC Educational Resources Information Center

    Spiro, Jody

    2012-01-01

    Change is a highly personal experience. Everyone participating in the effort has different reactions to change, different concerns, and different motivations for being involved. The smart change leader sets benchmarks along the way so there are guideposts and pause points instead of an endless change process. "Early wins"--a term used to describe…

  20. New Reactor Physics Benchmark Data in the March 2012 Edition of the IRPhEP Handbook

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    John D. Bess; J. Blair Briggs; Jim Gulliford

    2012-11-01

    The International Reactor Physics Experiment Evaluation Project (IRPhEP) was established to preserve integral reactor physics experimental data, including separate or special effects data for nuclear energy and technology applications. Numerous experiments that have been performed worldwide represent a large investment of infrastructure, expertise, and cost, and are valuable resources of data for present and future research. These valuable assets provide the basis for recording, development, and validation of methods. If the experimental data are lost, the high cost to repeat many of these measurements may be prohibitive. The purpose of the IRPhEP is to provide an extensively peer-reviewed set of reactor physics-related integral data that can be used by reactor designers and safety analysts to validate the analytical tools used to design next-generation reactors and establish the safety basis for operation of these reactors. Contributors from around the world collaborate in the evaluation and review of selected benchmark experiments for inclusion in the International Handbook of Evaluated Reactor Physics Benchmark Experiments (IRPhEP Handbook) [1]. Several new evaluations have been prepared for inclusion in the March 2012 edition of the IRPhEP Handbook.

  1. An ensemble approach for large-scale identification of protein-protein interactions using the alignments of multiple sequences

    PubMed Central

    Wang, Lei; You, Zhu-Hong; Chen, Xing; Li, Jian-Qiang; Yan, Xin; Zhang, Wei; Huang, Yu-An

    2017-01-01

    Protein-protein interactions (PPIs) are not only critical components of various biological processes in cells, but also key to understanding the mechanisms leading to healthy and diseased states in organisms. However, it is time-consuming and cost-intensive to identify the interactions among proteins using biological experiments. Hence, how to develop a more efficient computational method rapidly became an attractive topic in the post-genomic era. In this paper, we propose a novel method for inference of protein-protein interactions from protein amino acid sequences only. Specifically, a protein amino acid sequence is first transformed into a Position-Specific Scoring Matrix (PSSM) generated by multiple sequence alignment; then the Pseudo-PSSM is used to extract feature descriptors. Finally, an ensemble Rotation Forest (RF) learning system is trained to predict and recognize PPIs based solely on protein sequence features. When the proposed method was performed on three benchmark data sets (Yeast, H. pylori, and an independent dataset) for predicting PPIs, it achieved good average accuracies of 98.38%, 89.75%, and 96.25%, respectively. In order to further evaluate the prediction performance, we also compare the proposed method with other methods using the same benchmark data sets. The experimental results demonstrate that the proposed method consistently outperforms other state-of-the-art methods. Therefore, our method is effective and robust and can be taken as a useful tool in exploring and discovering new relationships between proteins. A web server is made publicly available at the URL http://202.119.201.126:8888/PsePSSM/ for academic use. PMID:28029645
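
    As a rough illustration of the feature-extraction step, the sketch below computes one common variant of the pseudo-PSSM descriptor from an L x 20 PSSM; normalization details differ between papers, and the toy matrix stands in for real PSI-BLAST output. In the paper these vectors feed a Rotation Forest ensemble, which is not in scikit-learn; for quick experiments a stand-in such as a random forest is often substituted.

```python
import numpy as np

def pse_pssm(pssm, max_lag=2):
    """Fixed-length pseudo-PSSM descriptor from an L x 20 PSSM.

    Concatenates per-column means (composition) with lag-correlation
    terms; normalization details vary between papers.
    """
    p = np.asarray(pssm, float)
    p = (p - p.mean(axis=0)) / (p.std(axis=0) + 1e-9)  # standardize columns
    feats = [p.mean(axis=0)]                           # 20 composition terms
    for xi in range(1, max_lag + 1):                   # 20 terms per lag
        feats.append(((p[:-xi] - p[xi:]) ** 2).mean(axis=0))
    return np.concatenate(feats)

# Toy stand-in for a 50-residue PSSM; real ones come from PSI-BLAST.
rng = np.random.default_rng(0)
print(pse_pssm(rng.normal(size=(50, 20))).shape)  # (60,) = 20 + 2 * 20
```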

  2. The Ongoing Impact of the U.S. Fast Reactor Integral Experiments Program

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    John D. Bess; Michael A. Pope; Harold F. McFarlane

    2012-11-01

    The creation of a large database of integral fast reactor physics experiments advanced nuclear science and technology in ways that were unachievable by less capital-intensive and operationally challenging approaches. The experiments enabled the compilation of integral physics benchmark data, validated (or not) analytical methods, and provided assurance for future reactor designs. The integral experiments performed at Argonne National Laboratory (ANL) represent decades of research performed to support fast reactor design and our understanding of neutronics behavior and reactor physics measurements. Experiments began in 1955 with the Zero Power Reactor No. 3 (ZPR-3) and terminated with the Zero Power Physics Reactor (ZPPR, originally the Zero Power Plutonium Reactor) in 1990 at the former ANL-West site in Idaho, which is now part of the Idaho National Laboratory (INL). Two additional critical assemblies, ZPR-6 and ZPR-9, operated at the ANL-East site in Illinois. A total of 128 fast reactor assemblies were constructed with these facilities [1]. The infrastructure and measurement capabilities are too expensive to be replicated in the modern era, making the integral database invaluable as the world pushes ahead with development of liquid metal cooled reactors.

  3. Electron scattering data as the basis for kinetic models -- what can we realistically provide, and how?

    NASA Astrophysics Data System (ADS)

    Buckman, Stephen

    2009-10-01

    It is unlikely that anyone would dispute the important role that the availability of accurate data can play in the modeling and simulation of low temperature plasmas. Fundamental measurements of collision processes, from the relatively simple (e.g. elastic scattering) to the complex (e.g. molecular dissociation), are critical to developing an understanding of discharge and plasma behaviour. While there has been a healthy relationship between the data users and data gatherers at meetings such as GEC for many years, there are often misunderstandings about the capabilities that reside in each of these areas, and how best to maintain and strengthen the communication between them. This paper will attempt to summarise those electron-driven processes that are accessible, in a quantitative sense, in modern scattering experiments. Advances in treating reactive and excited species will also be discussed, as will the potential to push our measurement technologies further. An inescapable conclusion is that the collision community can best contribute through a strategic alliance between experiment and theory. Theory should be benchmarked against experiment for those processes and targets that are accessible, and used wisely for those processes where experiment cannot contribute.

  4. [Does implementation of benchmarking in quality circles improve the quality of care of patients with asthma and reduce drug interaction?].

    PubMed

    Kaufmann-Kolle, Petra; Szecsenyi, Joachim; Broge, Björn; Haefeli, Walter Emil; Schneider, Antonius

    2011-01-01

    The purpose of this cluster-randomised controlled trial was to evaluate the efficacy of quality circles (QCs) working either with general data-based feedback or with an open benchmark within the field of asthma care and drug-drug interactions. Twelve QCs, involving 96 general practitioners from 85 practices, were randomised. Six QCs worked with traditional anonymous feedback and six with an open benchmark. Two QC meetings supported with feedback reports were held covering the topics "drug-drug interactions" and "asthma"; in both cases discussions were guided by a trained moderator. Outcome measures included health-related quality of life and patient satisfaction with treatment, asthma severity and number of potentially inappropriate drug combinations as well as the general practitioners' satisfaction in relation to the performance of the QC. A significant improvement in the treatment of asthma was observed in both trial arms. However, there was only a slight improvement regarding inappropriate drug combinations. There were no relevant differences between the group with open benchmark (B-QC) and traditional quality circles (T-QC). The physicians' satisfaction with the QC performance was significantly higher in the T-QCs. General practitioners seem to take a critical perspective about open benchmarking in quality circles. Caution should be used when implementing benchmarking in a quality circle as it did not improve healthcare when compared to the traditional procedure with anonymised comparisons. Copyright © 2011. Published by Elsevier GmbH.

  5. 3D Modeling of Industrial Heritage Building Using COTSs System: Test, Limits and Performances

    NASA Astrophysics Data System (ADS)

    Piras, M.; Di Pietra, V.; Visintini, D.

    2017-08-01

    The role of UAV systems in applied geomatics is continuously increasing in several applications such as inspection, surveying and geospatial data acquisition. This evolution is mainly due to two factors: new technologies and new algorithms for data processing. Regarding technologies, for some years there has been very wide use of commercial UAVs, even COTS (Commercial Off-The-Shelf) systems. Moreover, these UAVs allow easy acquisition of oblique images, giving the possibility to overcome the limitations of the nadir approach related to the field of view and occlusions. In order to test the potential and issues of COTS systems, the Italian Society of Photogrammetry and Topography (SIFET) organised SBM2017, a benchmark in which everyone can participate in a shared experience. This benchmark, called "Photogrammetry with oblique images from UAV: potentialities and challenges", collects considerations from the users, highlights the potential of these systems, defines the critical aspects and the technological challenges, and compares distinct approaches and software. The case study is the "Fornace Penna" in Scicli (Ragusa, Italy), an inaccessible monument of industrial architecture from the early 1900s. The datasets (images and video) were acquired with three different UAV systems: Parrot Bebop 2, DJI Phantom 4 and Flytop Flynovex. The aim of this benchmark is to generate the 3D model of the "Fornace Penna", with an analysis considering different software, imaging geometry and processing strategies. This paper describes the surveying strategies, the methodologies and five sets of photogrammetric results (sensor calibration, external orientation, dense point cloud and two orthophotos), obtained using separately the single images and the frames extracted from the video acquired with the DJI system.

  6. How does audit and feedback influence intentions of health professionals to improve practice? A laboratory experiment and field study in cardiac rehabilitation.

    PubMed

    Gude, Wouter T; van Engen-Verheul, Mariëtte M; van der Veer, Sabine N; de Keizer, Nicolette F; Peek, Niels

    2017-04-01

    To identify factors that influence the intentions of health professionals to improve their practice when confronted with clinical performance feedback, which is an essential first step in the audit and feedback mechanism. We conducted a theory-driven laboratory experiment with 41 individual professionals, and a field study in 18 centres in the context of a cluster-randomised trial of electronic audit and feedback in cardiac rehabilitation. Feedback reports were provided through a web-based application, and included performance scores and benchmark comparisons (high, intermediate or low performance) for a set of process and outcome indicators. From each report participants selected indicators for improvement into their action plan. Our unit of observation was an indicator presented in a feedback report (selected yes/no); we considered selecting an indicator to reflect an intention to improve. We analysed 767 observations in the laboratory experiment and 614 in the field study. Each 10% decrease in performance score increased the probability of an indicator being selected by 54% (OR, 1.54; 95% CI 1.29 to 1.83) in the laboratory experiment, and 25% (OR, 1.25; 95% CI 1.13 to 1.39) in the field study. Also, performance being benchmarked as low or intermediate increased this probability in laboratory settings. Still, participants ignored the benchmarks in 34% (laboratory experiment) and 48% (field study) of their selections. When confronted with clinical performance feedback, performance scores and benchmark comparisons influenced health professionals' intentions to improve practice. However, there was substantial variation in these intentions, because professionals disagreed with benchmarks, deemed improvement unfeasible or did not consider the indicator an essential aspect of care quality. These phenomena impede intentions to improve practice, and are thus likely to dilute the effects of audit and feedback interventions. NTR3251, pre-results. Published by the BMJ Publishing Group Limited. For permission to use (where not already granted under a licence) please go to http://www.bmj.com/company/products-services/rights-and-licensing/.
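
    As a reading aid for the odds ratios above, the sketch below fits a plain logistic regression on simulated indicator-selection data and reports the OR per 10-point drop in performance score with its 95% CI. The data and the single-predictor model are illustrative only; the study's actual analysis (e.g. any clustering adjustments) is not reproduced here.

```python
import numpy as np
import statsmodels.api as sm

# Hypothetical data: one row per indicator shown in a feedback report;
# 'score' is the performance score (%), 'selected' the yes/no outcome.
rng = np.random.default_rng(1)
score = rng.uniform(40, 100, size=600)
p = 1 / (1 + np.exp(-(1.5 - 0.045 * score)))   # lower score -> more selection
selected = rng.binomial(1, p)

# Express the predictor in 10-point decrements so exp(coef) is the OR
# associated with each 10% decrease in performance score.
X = sm.add_constant(-score / 10.0)
fit = sm.Logit(selected, X).fit(disp=0)
print("OR per 10% decrease:", np.exp(fit.params[1]))
print("95% CI:", np.exp(fit.conf_int()[1]))
```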

  7. A Seafloor Benchmark for 3-dimensional Geodesy

    NASA Astrophysics Data System (ADS)

    Chadwell, C. D.; Webb, S. C.; Nooner, S. L.

    2014-12-01

    We have developed an inexpensive, permanent seafloor benchmark to increase the longevity of seafloor geodetic measurements. The benchmark provides a physical tie to the sea floor lasting for decades (perhaps longer) on which geodetic sensors can be repeatedly placed and removed with millimeter resolution. Global coordinates estimated with seafloor geodetic techniques will remain attached to the benchmark allowing for the interchange of sensors as they fail or become obsolete, or for the sensors to be removed and used elsewhere, all the while maintaining a coherent series of positions referenced to the benchmark. The benchmark has been designed to free fall from the sea surface with transponders attached. The transponder can be recalled via an acoustic command sent from the surface to release from the benchmark and freely float to the sea surface for recovery. The duration of the sensor attachment to the benchmark will last from a few days to a few years depending on the specific needs of the experiment. The recovered sensors are then available to be reused at other locations, or again at the same site in the future. Three pins on the sensor frame mate precisely and unambiguously with three grooves on the benchmark. To reoccupy a benchmark a Remotely Operated Vehicle (ROV) uses its manipulator arm to place the sensor pins into the benchmark grooves. In June 2014 we deployed four benchmarks offshore central Oregon. We used the ROV Jason to successfully demonstrate the removal and replacement of packages onto the benchmark. We will show the benchmark design and its operational capabilities. Presently models of megathrust slip within the Cascadia Subduction Zone (CSZ) are mostly constrained by the sub-aerial GPS vectors from the Plate Boundary Observatory, a part of Earthscope. More long-lived seafloor geodetic measures are needed to better understand the earthquake and tsunami risk associated with a large rupture of the thrust fault within the Cascadia subduction zone. Using a ROV to place and remove sensors on the benchmarks will significantly reduce the number of sensors required by the community to monitor offshore strain in subduction zones.

  8. Final Report of the NASA Office of Safety and Mission Assurance Agile Benchmarking Team

    NASA Technical Reports Server (NTRS)

    Wetherholt, Martha

    2016-01-01

    To ensure that the NASA Safety and Mission Assurance (SMA) community remains in a position to perform reliable Software Assurance (SA) on NASA's critical software (SW) systems while the software industry rapidly transitions from waterfall to Agile processes, Terry Wilcutt, Chief, Safety and Mission Assurance, Office of Safety and Mission Assurance (OSMA), established the Agile Benchmarking Team (ABT). The Team's tasks were: 1. Research background literature on current Agile processes; 2. Perform benchmark activities with other organizations that are involved in software Agile processes to determine best practices; 3. Collect information on Agile-developed systems to enable improvements to the current NASA standards and processes to enhance their ability to perform reliable software assurance on NASA Agile-developed systems; 4. Suggest additional guidance and recommendations for updates to those standards and processes, as needed. The ABT's findings and recommendations for software management, engineering and software assurance are addressed herein.

  9. PSO algorithm enhanced with Lozi Chaotic Map - Tuning experiment

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Pluhacek, Michal; Senkerik, Roman; Zelinka, Ivan

    2015-03-10

    This paper investigates the effect of tuning the control parameters of the Lozi chaotic map employed as a chaotic pseudo-random number generator for the particle swarm optimization (PSO) algorithm. Three different benchmark functions are selected from the IEEE CEC 2013 competition benchmark set. The Lozi map is extensively tuned and the performance of PSO is evaluated.
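
    A minimal sketch of the idea follows: iterates of the Lozi map (x_{n+1} = 1 - a|x_n| + b*y_n, y_{n+1} = x_n), rescaled to [0, 1], replace the uniform random draws in the PSO velocity update. The parameter values, rescaling bounds and the sphere objective are illustrative stand-ins for the tuned settings and CEC 2013 functions used in the paper.

```python
import numpy as np

def lozi_stream(a=1.7, b=0.5, x=0.1, y=0.1):
    """Lozi map iterates rescaled to [0, 1], used as a pseudo-random stream."""
    lo, hi = -1.0, 1.5          # rough attractor bounds for clip/rescale
    while True:
        x, y = 1.0 - a * abs(x) + b * y, x
        yield (min(max(x, lo), hi) - lo) / (hi - lo)

def pso(f, dim=2, n=20, iters=200, w=0.7, c1=1.5, c2=1.5, seed=3):
    rng = np.random.default_rng(seed)
    chaos = lozi_stream()
    cr = lambda k: np.array([next(chaos) for _ in range(k)])
    pos = rng.uniform(-5, 5, (n, dim))
    vel = np.zeros((n, dim))
    pbest, pval = pos.copy(), np.array([f(p) for p in pos])
    g = pbest[pval.argmin()].copy()
    for _ in range(iters):
        for i in range(n):
            # Chaotic draws replace the usual uniform r1, r2 vectors.
            vel[i] = (w * vel[i] + c1 * cr(dim) * (pbest[i] - pos[i])
                      + c2 * cr(dim) * (g - pos[i]))
            pos[i] += vel[i]
            v = f(pos[i])
            if v < pval[i]:
                pval[i], pbest[i] = v, pos[i].copy()
        g = pbest[pval.argmin()].copy()
    return g, f(g)

print(pso(lambda p: (p ** 2).sum()))  # sphere function as a stand-in
```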

  10. Benchmarking infrastructure for mutation text mining

    PubMed Central

    2014-01-01

    Background: Experimental research on the automatic extraction of information about mutations from texts is greatly hindered by the lack of consensus evaluation infrastructure for the testing and benchmarking of mutation text mining systems. Results: We propose a community-oriented annotation and benchmarking infrastructure to support development, testing, benchmarking, and comparison of mutation text mining systems. The design is based on semantic standards, where RDF is used to represent annotations, an OWL ontology provides an extensible schema for the data and SPARQL is used to compute various performance metrics, so that in many cases no programming is needed to analyze results from a text mining system. While large benchmark corpora for biological entity and relation extraction are focused mostly on genes, proteins, diseases, and species, our benchmarking infrastructure fills the gap for mutation information. The core infrastructure comprises (1) an ontology for modelling annotations, (2) SPARQL queries for computing performance metrics, and (3) a sizeable collection of manually curated documents that can support mutation grounding and mutation impact extraction experiments. Conclusion: We have developed the principal infrastructure for the benchmarking of mutation text mining tasks. The use of RDF and OWL as the representation for corpora ensures extensibility. The infrastructure is suitable for out-of-the-box use in several important scenarios and is ready, in its current state, for initial community adoption. PMID:24568600

  11. Benchmarking infrastructure for mutation text mining.

    PubMed

    Klein, Artjom; Riazanov, Alexandre; Hindle, Matthew M; Baker, Christopher Jo

    2014-02-25

    Experimental research on the automatic extraction of information about mutations from texts is greatly hindered by the lack of consensus evaluation infrastructure for the testing and benchmarking of mutation text mining systems. We propose a community-oriented annotation and benchmarking infrastructure to support development, testing, benchmarking, and comparison of mutation text mining systems. The design is based on semantic standards, where RDF is used to represent annotations, an OWL ontology provides an extensible schema for the data and SPARQL is used to compute various performance metrics, so that in many cases no programming is needed to analyze results from a text mining system. While large benchmark corpora for biological entity and relation extraction are focused mostly on genes, proteins, diseases, and species, our benchmarking infrastructure fills the gap for mutation information. The core infrastructure comprises (1) an ontology for modelling annotations, (2) SPARQL queries for computing performance metrics, and (3) a sizeable collection of manually curated documents, that can support mutation grounding and mutation impact extraction experiments. We have developed the principal infrastructure for the benchmarking of mutation text mining tasks. The use of RDF and OWL as the representation for corpora ensures extensibility. The infrastructure is suitable for out-of-the-box use in several important scenarios and is ready, in its current state, for initial community adoption.
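
    To make the SPARQL-based evaluation idea concrete, here is a self-contained toy in Python with rdflib: gold and system mutation annotations are loaded as triples under a made-up schema (the project's real OWL ontology and predicate names differ), and a single query counts true positives, from which precision and recall follow.

```python
from rdflib import Graph, Namespace, Literal

EX = Namespace("http://example.org/mut#")   # made-up schema for illustration
g = Graph()
gold = {("doc1", "E545K"), ("doc1", "H1047R"), ("doc2", "V600E")}
system = {("doc1", "E545K"), ("doc2", "V600E"), ("doc2", "T790M")}
for doc, m in gold:
    g.add((EX[doc], EX.goldMutation, Literal(m)))
for doc, m in system:
    g.add((EX[doc], EX.sysMutation, Literal(m)))

# True positives: (document, mutation) pairs annotated by both gold and system.
q = """
PREFIX ex: <http://example.org/mut#>
SELECT (COUNT(*) AS ?tp) WHERE {
  ?doc ex:goldMutation ?m .
  ?doc ex:sysMutation ?m .
}
"""
tp = int(next(iter(g.query(q)))[0])
print("precision:", tp / len(system), " recall:", tp / len(gold))
```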

  12. Qualifying for the Green500: Experience with the newest generation of supercomputers at LANL

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Yilk, Todd

    The High Performance Computing Division of Los Alamos National Laboratory recently brought four new supercomputing platforms on line: Trinity with separate partitions built around the Haswell and Knights Landing CPU architectures for capability computing and Grizzly, Fire, and Ice for capacity computing applications. The power monitoring infrastructure of these machines is significantly enhanced over previous supercomputing generations at LANL and all were qualified at the highest level of the Green500 benchmark. Here, this paper discusses supercomputing at LANL, the Green500 benchmark, and notes on our experience meeting the Green500's reporting requirements.

  13. Qualifying for the Green500: Experience with the newest generation of supercomputers at LANL

    DOE PAGES

    Yilk, Todd

    2018-02-17

    The High Performance Computing Division of Los Alamos National Laboratory recently brought four new supercomputing platforms on line: Trinity with separate partitions built around the Haswell and Knights Landing CPU architectures for capability computing and Grizzly, Fire, and Ice for capacity computing applications. The power monitoring infrastructure of these machines is significantly enhanced over previous supercomputing generations at LANL and all were qualified at the highest level of the Green500 benchmark. Here, this paper discusses supercomputing at LANL, the Green500 benchmark, and notes on our experience meeting the Green500's reporting requirements.
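
    For reference, the Green500 ranks systems by energy efficiency, essentially sustained HPL performance divided by average whole-system power over the run; the higher qualification levels tighten how completely and how long the power must be measured. A toy calculation with made-up numbers, simplified relative to the actual run rules:

```python
def green500_efficiency(rmax_tflops, power_samples_kw):
    """Energy efficiency in GFLOPS/W from an HPL run.

    rmax_tflops: sustained HPL performance; power_samples_kw: whole-system
    power measurements taken over the run (higher measurement levels
    require more complete instrumentation and full-run coverage).
    """
    avg_kw = sum(power_samples_kw) / len(power_samples_kw)
    return rmax_tflops * 1e3 / (avg_kw * 1e3)   # GFLOPS per watt

print(green500_efficiency(8.1e3, [3920.0, 3895.5, 3910.2]))  # ~2.07 GF/W
```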

  14. The State of Online Retailing.

    ERIC Educational Resources Information Center

    Tamimi, Nabil; Rajan, Murli; Sebastianelli, Rose

    2003-01-01

    Benchmarks online retailing transactions against critical factors that impact online retailing. Findings suggest several areas that e-retailers should target for improvement, including the speed of home page loading, ability to translate into multiple languages, capabilities of search engines, security policies display, payment options, minimum…

  15. A publicly available benchmark for biomedical dataset retrieval: the reference standard for the 2016 bioCADDIE dataset retrieval challenge

    PubMed Central

    Gururaj, Anupama E.; Chen, Xiaoling; Pournejati, Saeid; Alter, George; Hersh, William R.; Demner-Fushman, Dina; Ohno-Machado, Lucila

    2017-01-01

    The rapid proliferation of publicly available biomedical datasets has provided abundant resources that are potentially of value as a means to reproduce prior experiments, and to generate and explore novel hypotheses. However, there are a number of barriers to the re-use of such datasets, which are distributed across a broad array of dataset repositories, focusing on different data types and indexed using different terminologies. New methods are needed to enable biomedical researchers to locate datasets of interest within this rapidly expanding information ecosystem, and new resources are needed for the formal evaluation of these methods as they emerge. In this paper, we describe the design and generation of a benchmark for information retrieval of biomedical datasets, which was developed and used for the 2016 bioCADDIE Dataset Retrieval Challenge. In the tradition of the seminal Cranfield experiments, and as exemplified by the Text Retrieval Conference (TREC), this benchmark includes a corpus (biomedical datasets), a set of queries, and relevance judgments relating these queries to elements of the corpus. This paper describes the process through which each of these elements was derived, with a focus on those aspects that distinguish this benchmark from typical information retrieval reference sets. Specifically, we discuss the origin of our queries in the context of a larger collaborative effort, the biomedical and healthCAre Data Discovery Index Ecosystem (bioCADDIE) consortium, and the distinguishing features of biomedical dataset retrieval as a task. The resulting benchmark set has been made publicly available to advance research in the area of biomedical dataset retrieval. Database URL: https://biocaddie.org/benchmark-data PMID:29220453
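
    In the Cranfield tradition the abstract describes, systems are scored by combining a ranked run with the relevance judgments. The snippet below computes average precision for a single query; the identifiers are hypothetical, and the challenge's official scoring details (specific measures, pooling, graded relevance) are not reproduced.

```python
def average_precision(ranked_ids, qrels):
    """AP for one query given a ranked list and the set of relevant doc ids."""
    hits, ap = 0, 0.0
    for k, doc in enumerate(ranked_ids, start=1):
        if doc in qrels:
            hits += 1
            ap += hits / k          # precision at each relevant rank
    return ap / max(len(qrels), 1)

# Hypothetical run and judgments in the spirit of a TREC-style benchmark.
qrels = {"DS-101", "DS-204", "DS-378"}
run = ["DS-204", "DS-950", "DS-101", "DS-512", "DS-378"]
print(round(average_precision(run, qrels), 3))   # 0.756
```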

  16. Lecture Notes on Criticality Safety Validation Using MCNP & Whisper

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Brown, Forrest B.; Rising, Michael Evan; Alwin, Jennifer Louise

    Training classes for nuclear criticality safety, MCNP documentation. The need for, and problems surrounding, validation of computer codes and data are considered first. Then some background for MCNP & Whisper is given: best practices for Monte Carlo criticality calculations, neutron spectra, S(α,β) thermal neutron scattering data, nuclear data sensitivities, covariance data, and correlation coefficients. Whisper is computational software designed to assist the nuclear criticality safety analyst with validation studies with the Monte Carlo radiation transport package MCNP. Whisper's methodology (benchmark selection – c_k's, weights; extreme value theory – bias, bias uncertainty; MOS for nuclear data uncertainty – GLLS) and usage are discussed.
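
    The benchmark-selection step relies on the nuclear-data correlation coefficient c_k between an application and each candidate benchmark, computed from k-eff sensitivity vectors and the nuclear-data covariance matrix. A minimal sketch with tiny made-up three-group data; Whisper's actual group structure, reaction breakdown and file formats are far richer:

```python
import numpy as np

def c_k(s_app, s_bench, cov):
    """Correlation coefficient c_k between an application and a benchmark.

    s_app, s_bench: sensitivity vectors of k-eff to the nuclear data
    (e.g. relative sensitivities per energy group and reaction);
    cov: the corresponding nuclear-data covariance matrix.
    """
    num = s_app @ cov @ s_bench
    den = np.sqrt((s_app @ cov @ s_app) * (s_bench @ cov @ s_bench))
    return num / den

# Tiny made-up 3-group example.
cov = np.array([[4e-4, 1e-4, 0.0],
                [1e-4, 9e-4, 2e-4],
                [0.0,  2e-4, 1e-3]])
print(c_k(np.array([0.2, 0.5, 0.1]), np.array([0.25, 0.45, 0.05]), cov))
```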

  17. JENDL-4.0/HE Benchmark Test with Concrete and Iron Shielding Experiments at JAEA/TIARA

    NASA Astrophysics Data System (ADS)

    Konno, Chikara; Matsuda, Norihiro; Kwon, Saerom; Ohta, Masayuki; Sato, Satoshi

    2017-09-01

    As a benchmark test of JENDL-4.0/HE released in 2015, we have analyzed the concrete and iron shielding experiments with the quasi mono-energetic 40 and 65 MeV neutron sources at TIARA in JAEA by using MCNP5 and ACE files processed from JENDL-4.0/HE with NJOY2012. As a result, it was found that the calculation results with JENDL-4.0/HE agreed well with the measured ones in the concrete experiment, while they underestimated the measured ones in the iron experiment with 65 MeV neutrons, increasingly so for the thicker assemblies. We examined the 56Fe data of JENDL-4.0/HE in detail and concluded that the larger non-elastic scattering cross sections of 56Fe caused the underestimation in the calculation with JENDL-4.0/HE for the iron experiment with 65 MeV neutrons.

  18. Summary of comparison and analysis of results from exercises 1 and 2 of the OECD PBMR coupled neutronics/thermal hydraulics transient benchmark

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Mkhabela, P.; Han, J.; Tyobeka, B.

    2006-07-01

    The Nuclear Energy Agency (NEA) of the Organization for Economic Cooperation and Development (OECD) has accepted, through the Nuclear Science Committee (NSC), the inclusion of the Pebble-Bed Modular Reactor 400 MW design (PBMR-400) coupled neutronics/thermal hydraulics transient benchmark problem as part of their official activities. The scope of the benchmark is to establish a well-defined problem, based on a common given library of cross sections, to compare methods and tools in core simulation and thermal hydraulics analysis with a specific focus on transient events through a set of multi-dimensional computational test problems. The benchmark includes three steady state exercises and six transient exercises. This paper describes the first two steady state exercises, their objectives and the international participation in terms of organization, country and computer code utilized. This description is followed by a comparison and analysis of the participants' results submitted for these two exercises. The comparison of results from different codes allows for an assessment of the sensitivity of a result to the method employed and can thus help to focus the development efforts on the most critical areas. The two first exercises also allow for removing user-related modeling errors and prepare the core neutronics and thermal-hydraulics models of the different codes for the rest of the exercises in the benchmark. (authors)

  19. Using Smart Pumps to Understand and Evaluate Clinician Practice Patterns to Ensure Patient Safety

    PubMed Central

    Mansfield, Jennifer; Jarrett, Steven

    2013-01-01

    Background: Safety software installed on intravenous (IV) infusion pumps has been shown to positively impact the quality of patient care through avoidance of medication errors. The data derived from the use of smart pumps are often overlooked, although these data provide helpful insight into the delivery of quality patient care. Objective: The objectives of this report are to describe the value of implementing IV infusion safety software and analyzing the data and reports generated by this system. Case study: Based on experience at the Carolinas HealthCare System (CHS), executive score cards provide an aggregate view of compliance rate, number of alerts, overrides, and edits. The report of serious errors averted (ie, critical catches) supplies the location, date, and time of the critical catch, thereby enabling management to pinpoint the end-user for educational purposes. By examining the number of critical catches, a return on investment may be calculated. Assuming 3,328 of these events each year, an estimated cost avoidance would be $29,120,000 per year for CHS. Other reports allow benchmarking between institutions. Conclusion: A review of the data about medication safety across CHS has helped garner support for a medication safety officer position with the goal of ultimately creating a safer environment for the patient. PMID:24474836

  20. Nonparametric estimation of benchmark doses in environmental risk assessment

    PubMed Central

    Piegorsch, Walter W.; Xiong, Hui; Bhattacharya, Rabi N.; Lin, Lizhen

    2013-01-01

    An important statistical objective in environmental risk analysis is estimation of minimum exposure levels, called benchmark doses (BMDs), that induce a pre-specified benchmark response in a dose-response experiment. In such settings, representations of the risk are traditionally based on a parametric dose-response model. It is a well-known concern, however, that if the chosen parametric form is misspecified, inaccurate and possibly unsafe low-dose inferences can result. We apply a nonparametric approach for calculating benchmark doses, based on an isotonic regression method for dose-response estimation with quantal-response data (Bhattacharya and Kong, 2007). We determine the large-sample properties of the estimator, develop bootstrap-based confidence limits on the BMDs, and explore the confidence limits' small-sample properties via a short simulation study. An example from cancer risk assessment illustrates the calculations. PMID:23914133
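
    A minimal sketch of the isotonic approach with scikit-learn: fit a monotone dose-response curve to made-up quantal data, then invert it at the target extra risk to read off the BMD. The paper's estimator, and especially its bootstrap confidence limits (the BMDL), involve more than this.

```python
import numpy as np
from sklearn.isotonic import IsotonicRegression

# Quantal dose-response data (dose, n tested, n responding) -- made up.
dose = np.array([0.0, 5.0, 10.0, 25.0, 50.0, 100.0])
n    = np.array([50,  50,   50,   50,   50,   50])
resp = np.array([ 2,   3,    6,   12,   24,   36])

iso = IsotonicRegression(increasing=True, out_of_bounds="clip")
p_hat = iso.fit_transform(dose, resp / n)        # monotone risk estimate

# BMD for extra risk: R(d) = p0 + BMR * (1 - p0), inverted on a dose grid.
p0, bmr = p_hat[0], 0.10
target = p0 + bmr * (1 - p0)
grid = np.linspace(dose.min(), dose.max(), 2001)
bmd = grid[np.searchsorted(iso.predict(grid), target)]
print(f"nonparametric BMD (extra risk {bmr:.0%}): {bmd:.1f}")
```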

  1. HyspIRI Low Latency Concept and Benchmarks

    NASA Technical Reports Server (NTRS)

    Mandl, Dan

    2010-01-01

    Topics include HyspIRI low latency data ops concept, HyspIRI data flow, ongoing efforts, experiment with Web Coverage Processing Service (WCPS) approach to injecting new algorithms into SensorWeb, low fidelity HyspIRI IPM testbed, compute cloud testbed, open cloud testbed environment, Global Lambda Integrated Facility (GLIF) and OCC collaboration with Starlight, delay tolerant network (DTN) protocol benchmarking, and EO-1 configuration for preliminary DTN prototype.

  2. RNA-seq mixology: designing realistic control experiments to compare protocols and analysis methods

    PubMed Central

    Holik, Aliaksei Z.; Law, Charity W.; Liu, Ruijie; Wang, Zeya; Wang, Wenyi; Ahn, Jaeil; Asselin-Labat, Marie-Liesse; Smyth, Gordon K.

    2017-01-01

    Carefully designed control experiments provide a gold standard for benchmarking different genomics research tools. A shortcoming of many gene expression control studies is that replication involves profiling the same reference RNA sample multiple times. This leads to low, pure technical noise that is atypical of regular studies. To achieve a more realistic noise structure, we generated a RNA-sequencing mixture experiment using two cell lines of the same cancer type. Variability was added by extracting RNA from independent cell cultures and degrading particular samples. The systematic gene expression changes induced by this design allowed benchmarking of different library preparation kits (standard poly-A versus total RNA with Ribozero depletion) and analysis pipelines. Data generated using the total RNA kit had more signal for introns and various RNA classes (ncRNA, snRNA, snoRNA) and less variability after degradation. For differential expression analysis, voom with quality weights marginally outperformed other popular methods, while for differential splicing, DEXSeq was simultaneously the most sensitive and the most inconsistent method. For sample deconvolution analysis, DeMix outperformed IsoPure convincingly. Our RNA-sequencing data set provides a valuable resource for benchmarking different protocols and data pre-processing workflows. The extra noise mimics routine lab experiments more closely, ensuring any conclusions are widely applicable. PMID:27899618

  3. Rating Faculty Collegiality

    ERIC Educational Resources Information Center

    Cipriano, Robert E.; Buller, Jeffrey L.

    2012-01-01

    Most position descriptions for college and university faculty include benchmarks that indicate assumptions about collegiality. Criticism about this practice has been voiced for years. But case law in the United States has upheld the use of collegiality as a factor in decisions regarding faculty employment, tenure, and promotion. Indeed, several…

  4. Validation of tungsten cross sections in the neutron energy region up to 100 keV

    NASA Astrophysics Data System (ADS)

    Pigni, Marco T.; Žerovnik, Gašper; Leal, Luiz. C.; Trkov, Andrej

    2017-09-01

    Following a series of recent cross section evaluations on tungsten isotopes performed at Oak Ridge National Laboratory (ORNL), this paper presents the validation work carried out to test the performance of the evaluated cross sections based on lead-slowing-down (LSD) benchmarks conducted in Grenoble. ORNL completed the resonance parameter evaluation of four tungsten isotopes - 182,183,184,186W - in August 2014 and submitted it as an ENDF-compatible file to be part of the next release of the ENDF/B-VIII.0 nuclear data library. The evaluations were performed with support from the US Nuclear Criticality Safety Program in an effort to provide improved tungsten cross section and covariance data for criticality safety sensitivity analyses. The validation analysis based on the LSD benchmarks showed an improved agreement with the experimental response when the ORNL tungsten evaluations were included in the ENDF/B-VII.1 library. Comparisons with the results obtained with the JEFF-3.2 nuclear data library are also discussed.

  5. An experimental phylogeny to benchmark ancestral sequence reconstruction

    PubMed Central

    Randall, Ryan N.; Radford, Caelan E.; Roof, Kelsey A.; Natarajan, Divya K.; Gaucher, Eric A.

    2016-01-01

    Ancestral sequence reconstruction (ASR) is a still-burgeoning method that has revealed many key mechanisms of molecular evolution. One criticism of the approach is an inability to validate its algorithms within a biological context as opposed to a computer simulation. Here we build an experimental phylogeny using the gene of a single red fluorescent protein to address this criticism. The evolved phylogeny consists of 19 operational taxonomic units (leaves) and 17 ancestral bifurcations (nodes) that display a wide variety of fluorescent phenotypes. The 19 leaves then serve as ‘modern' sequences that we subject to ASR analyses using various algorithms and to benchmark against the known ancestral genotypes and ancestral phenotypes. We confirm computer simulations that show all algorithms infer ancient sequences with high accuracy, yet we also reveal wide variation in the phenotypes encoded by incorrectly inferred sequences. Specifically, Bayesian methods incorporating rate variation significantly outperform the maximum parsimony criterion in phenotypic accuracy. Subsampling of extant sequences had minor effect on the inference of ancestral sequences. PMID:27628687

  6. Verification of ARES transport code system with TAKEDA benchmarks

    NASA Astrophysics Data System (ADS)

    Zhang, Liang; Zhang, Bin; Zhang, Penghe; Chen, Mengteng; Zhao, Jingchang; Zhang, Shun; Chen, Yixue

    2015-10-01

    Neutron transport modeling and simulation are central to many areas of nuclear technology, including reactor core analysis, radiation shielding and radiation detection. In this paper the series of TAKEDA benchmarks are modeled to verify the criticality calculation capability of ARES, a discrete ordinates neutral particle transport code system. The SALOME platform is coupled with ARES to provide geometry modeling and mesh generation functions. The Koch-Baker-Alcouffe parallel sweep algorithm is applied to accelerate the traditional transport calculation process. The results show that the eigenvalues calculated by ARES are in excellent agreement with the reference values presented in NEACRP-L-330, with a difference less than 30 pcm except for the first case of model 3. Additionally, ARES provides accurate flux distributions compared to reference values, with a deviation less than 2% for region-averaged fluxes in all cases. All of these results confirm the feasibility of the ARES-SALOME coupling and demonstrate that ARES performs well in criticality calculations.
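
    For readers unused to the unit: a pcm (per cent mille) is 1e-5, so eigenvalue agreement is usually quoted as a relative reactivity difference. A tiny helper under one common convention (conventions vary slightly between comparisons):

```python
def pcm_diff(k_calc, k_ref):
    """Reactivity difference in pcm between calculated and reference k-eff.

    Uses the convention 1e5 * (k_calc - k_ref) / k_ref; some comparisons
    simply report 1e5 * (k_calc - k_ref).
    """
    return 1e5 * (k_calc - k_ref) / k_ref

# Made-up eigenvalues for illustration:
print(f"{pcm_diff(0.99780, 0.99754):+.1f} pcm")   # about +26 pcm
```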

  7. featsel: A framework for benchmarking of feature selection algorithms and cost functions

    NASA Astrophysics Data System (ADS)

    Reis, Marcelo S.; Estrela, Gustavo; Ferreira, Carlos Eduardo; Barrera, Junior

    In this paper, we introduce featsel, a framework for benchmarking of feature selection algorithms and cost functions. This framework allows the user to deal with the search space as a Boolean lattice and has its core coded in C++ for computational efficiency purposes. Moreover, featsel includes Perl scripts to add new algorithms and/or cost functions, generate random instances, plot graphs and organize results into tables. Besides, this framework already comes with dozens of algorithms and cost functions for benchmarking experiments. We also provide illustrative examples, in which featsel outperforms the popular Weka workbench in feature selection procedures on data sets from the UCI Machine Learning Repository.
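
    featsel itself is C++ with Perl tooling, but the core idea, walking the Boolean lattice of feature subsets under a pluggable cost function, fits in a few lines. The Python sketch below is a greedy bottom-up walk with an invented toy cost, purely to illustrate the abstraction; it is not the framework's API and ignores its exact search algorithms.

```python
def forward_select(features, cost):
    """Greedy walk up the Boolean lattice of feature subsets.

    `cost` maps a frozenset of features to a number; lower is better.
    Stops when no single-feature addition improves the cost.
    """
    current, best = frozenset(), cost(frozenset())
    while True:
        candidates = [(cost(current | {f}), f) for f in features - current]
        if not candidates:
            return current, best
        c, f = min(candidates)
        if c >= best:
            return current, best
        current, best = current | {f}, c

# Toy cost: distance from an ideal subset (unknown to the search),
# plus a small penalty per selected feature.
ideal = frozenset({"x1", "x3"})
cost = lambda s: len(s ^ ideal) + 0.01 * len(s)
print(forward_select(frozenset({"x0", "x1", "x2", "x3"}), cost))
```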

  8. BACT Simulation User Guide (Version 7.0)

    NASA Technical Reports Server (NTRS)

    Waszak, Martin R.

    1997-01-01

    This report documents the structure and operation of a simulation model of the Benchmark Active Control Technology (BACT) Wind-Tunnel Model. The BACT system was designed, built, and tested at NASA Langley Research Center as part of the Benchmark Models Program and was developed to perform wind-tunnel experiments to obtain benchmark quality data to validate computational fluid dynamics and computational aeroelasticity codes, to verify the accuracy of current aeroservoelasticity design and analysis tools, and to provide an active controls testbed for evaluating new and innovative control algorithms for flutter suppression and gust load alleviation. The BACT system has been especially valuable as a control system testbed.

  9. Benchmarking for On-Scalp MEG Sensors.

    PubMed

    Xie, Minshu; Schneiderman, Justin F; Chukharkin, Maxim L; Kalabukhov, Alexei; Riaz, Bushra; Lundqvist, Daniel; Whitmarsh, Stephen; Hamalainen, Matti; Jousmaki, Veikko; Oostenveld, Robert; Winkler, Dag

    2017-06-01

    We present a benchmarking protocol for quantitatively comparing emerging on-scalp magnetoencephalography (MEG) sensor technologies to their counterparts in state-of-the-art MEG systems. As a means of validation, we compare a high-critical-temperature superconducting quantum interference device (high-Tc SQUID) with the low-Tc SQUIDs of an Elekta Neuromag TRIUX system in MEG recordings of auditory and somatosensory evoked fields (SEFs) on one human subject. We measure the expected signal gain for the auditory-evoked fields (deeper sources) and notice some unfamiliar features in the on-scalp sensor-based recordings of SEFs (shallower sources). The experimental results serve as a proof of principle for the benchmarking protocol. This approach is straightforward, general to various on-scalp MEG sensors, and convenient to use on human subjects. The unexpected features in the SEFs suggest on-scalp MEG sensors may reveal information about neuromagnetic sources that is otherwise difficult to extract from state-of-the-art MEG recordings. As the first systematically established on-scalp MEG benchmarking protocol, magnetic sensor developers can employ this method to prove the utility of their technology in MEG recordings. Further exploration of the SEFs with on-scalp MEG sensors may reveal unique information about their sources.

  10. Active learning for noisy oracle via density power divergence.

    PubMed

    Sogawa, Yasuhiro; Ueno, Tsuyoshi; Kawahara, Yoshinobu; Washio, Takashi

    2013-10-01

    The accuracy of active learning is critically influenced by the existence of noisy labels given by a noisy oracle. In this paper, we propose a novel pool-based active learning framework through robust measures based on density power divergence. By minimizing density power divergence, such as β-divergence and γ-divergence, one can estimate the model accurately even under the existence of noisy labels within data. Accordingly, we develop query selecting measures for pool-based active learning using these divergences. In addition, we propose an evaluation scheme for these measures based on asymptotic statistical analyses, which enables us to perform active learning by evaluating an estimation error directly. Experiments with benchmark datasets and real-world image datasets show that our active learning scheme performs better than several baseline methods. Copyright © 2013 Elsevier Ltd. All rights reserved.
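
    To give a feel for why density power divergence confers robustness to a noisy oracle, the sketch below estimates a Gaussian location by minimizing the β-divergence: the fixed-point update weights each point by its model density raised to β, so gross outliers barely contribute. This is a generic illustration of the divergence, not the paper's query-selection measures.

```python
import numpy as np

def beta_divergence_mean(x, beta=0.5, sigma=1.0, iters=50):
    """Robust location estimate by minimizing the density power
    (beta-) divergence to a Gaussian with fixed scale.

    The estimating equation weights each point by its model density
    raised to beta, so outliers (e.g. noisy labels) get tiny weight.
    """
    mu = np.median(x)                      # robust starting point
    for _ in range(iters):
        w = np.exp(-beta * (x - mu) ** 2 / (2 * sigma ** 2))
        mu = (w * x).sum() / w.sum()       # weighted-mean fixed point
    return mu

rng = np.random.default_rng(7)
clean = rng.normal(0.0, 1.0, 95)
outliers = rng.normal(8.0, 0.5, 5)         # "noisy oracle" contamination
x = np.concatenate([clean, outliers])
print("mean:", x.mean(), " beta-divergence estimate:", beta_divergence_mean(x))
```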

  11. MCNP4A: Features and philosophy

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Hendricks, J.S.

    This paper describes MCNP, states its philosophy, introduces a number of new features becoming available with version MCNP4A, and answers a number of questions asked by participants in the workshop. MCNP is a general-purpose three-dimensional neutron, photon and electron transport code. Its philosophy is "Quality, Value and New Features." Quality is exemplified by new software quality assurance practices and a program of benchmarking against experiments. Value includes a strong emphasis on documentation and code portability. New features are the third priority. MCNP4A is now available at Los Alamos. New features in MCNP4A include enhanced statistical analysis, distributed processor multitasking, new photon libraries, ENDF/B-VI capabilities, X-Windows graphics, dynamic memory allocation, expanded criticality output, periodic boundaries, plotting of particle tracks via SABRINA, and many other improvements. 23 refs.

  12. Exploiting Publication Contents and Collaboration Networks for Collaborator Recommendation

    PubMed Central

    Kong, Xiangjie; Jiang, Huizhen; Yang, Zhuo; Xu, Zhenzhen; Xia, Feng; Tolba, Amr

    2016-01-01

    Thanks to the proliferation of online social networks, it has become conventional for researchers to communicate and collaborate with each other. Meanwhile, one critical challenge arises: how to find the most relevant potential collaborators for each researcher? In this work, we propose a novel collaborator recommendation model called CCRec, which combines information on researchers' publications with the collaboration network to generate better recommendations. In order to effectively identify the most promising collaborators for researchers, we adopt a topic clustering model to identify the academic domains, as well as a random walk model to compute researchers' feature vectors. Using DBLP datasets, we conduct benchmarking experiments to examine the performance of CCRec. The experimental results show that CCRec outperforms other state-of-the-art methods in terms of precision, recall and F1 score. PMID:26849682
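
    The network side of such a recommender is often a random walk with restart over the co-authorship graph, whose steady-state scores rank candidate collaborators. A compact sketch on a made-up five-researcher network follows; CCRec additionally fuses publication-content features, which are omitted here.

```python
import numpy as np

def rwr(adj, seed, restart=0.15, iters=100):
    """Random walk with restart on a collaboration network.

    adj: symmetric co-authorship adjacency matrix; seed: researcher index.
    Returns a relevance score for every researcher w.r.t. the seed.
    """
    P = adj / adj.sum(axis=0, keepdims=True)        # column-stochastic
    e = np.zeros(len(adj))
    e[seed] = 1.0
    r = e.copy()
    for _ in range(iters):
        r = (1 - restart) * P @ r + restart * e     # power iteration
    return r

# Toy 5-researcher network; recommend collaborators for researcher 0.
adj = np.array([[0, 3, 1, 0, 0],
                [3, 0, 2, 1, 0],
                [1, 2, 0, 0, 1],
                [0, 1, 0, 0, 2],
                [0, 0, 1, 2, 0]], float)
scores = rwr(adj, seed=0)
print(np.argsort(-scores))   # ranking; CCRec fuses this with topic features
```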

  13. How do I know if my forecasts are better? Using benchmarks in hydrological ensemble prediction

    NASA Astrophysics Data System (ADS)

    Pappenberger, F.; Ramos, M. H.; Cloke, H. L.; Wetterhall, F.; Alfieri, L.; Bogner, K.; Mueller, A.; Salamon, P.

    2015-03-01

    The skill of a forecast can be assessed by comparing the relative proximity of both the forecast and a benchmark to the observations. Example benchmarks include climatology or a naïve forecast. Hydrological ensemble prediction systems (HEPS) are currently transforming the hydrological forecasting environment but in this new field there is little information to guide researchers and operational forecasters on how benchmarks can be best used to evaluate their probabilistic forecasts. In this study, it is identified that the forecast skill calculated can vary depending on the benchmark selected and that the selection of a benchmark for determining forecasting system skill is sensitive to a number of hydrological and system factors. A benchmark intercomparison experiment is then undertaken using the continuous ranked probability score (CRPS), a reference forecasting system and a suite of 23 different methods to derive benchmarks. The benchmarks are assessed within the operational set-up of the European Flood Awareness System (EFAS) to determine those that are 'toughest to beat' and so give the most robust discrimination of forecast skill, particularly for the spatial average fields that EFAS relies upon. Evaluating against an observed discharge proxy the benchmark that has most utility for EFAS and avoids the most naïve skill across different hydrological situations is found to be meteorological persistency. This benchmark uses the latest meteorological observations of precipitation and temperature to drive the hydrological model. Hydrological long term average benchmarks, which are currently used in EFAS, are very easily beaten by the forecasting system and the use of these produces much naïve skill. When decomposed into seasons, the advanced meteorological benchmarks, which make use of meteorological observations from the past 20 years at the same calendar date, have the most skill discrimination. They are also good at discriminating skill in low flows and for all catchment sizes. Simpler meteorological benchmarks are particularly useful for high flows. Recommendations for EFAS are to move to routine use of meteorological persistency, an advanced meteorological benchmark and a simple meteorological benchmark in order to provide a robust evaluation of forecast skill. This work provides the first comprehensive evidence on how benchmarks can be used in evaluation of skill in probabilistic hydrological forecasts and which benchmarks are most useful for skill discrimination and avoidance of naïve skill in a large scale HEPS. It is recommended that all HEPS use the evidence and methodology provided here to evaluate which benchmarks to employ; so forecasters can have trust in their skill evaluation and will have confidence that their forecasts are indeed better.
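
    For ensemble forecasts, skill against a benchmark is naturally summarised by the continuous ranked probability skill score, CRPSS = 1 - CRPS_forecast / CRPS_benchmark, with one standard sample estimator of the CRPS being E|X - y| - 0.5 E|X - X'|. A self-contained sketch with made-up discharge values; operational EFAS scoring aggregates over many forecasts, lead times and catchments rather than a single pair.

```python
import numpy as np

def crps_ensemble(ens, obs):
    """Sample CRPS: E|X - y| - 0.5 E|X - X'| for one ensemble/observation."""
    ens = np.asarray(ens, float)
    return np.abs(ens - obs).mean() - 0.5 * np.abs(
        ens[:, None] - ens[None, :]).mean()

def skill_score(crps_fc, crps_bench):
    """CRPSS: 1 is perfect, 0 matches the benchmark, negative is worse."""
    return 1.0 - crps_fc / crps_bench

rng = np.random.default_rng(42)
obs = 120.0                                   # observed discharge (m3/s)
fc = rng.normal(118, 6, 51)                   # HEPS ensemble
persist = rng.normal(105, 15, 51)             # e.g. a persistency benchmark
print(skill_score(crps_ensemble(fc, obs), crps_ensemble(persist, obs)))
```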

  14. Radiation shielding quality assurance

    NASA Astrophysics Data System (ADS)

    Um, Dallsun

    For radiation shielding quality assurance, the validity and reliability of the neutron transport code MCNP, now one of the most widely used radiation shielding analysis codes, were checked against a large number of benchmark experiments. As a practical example, the following work was performed in this thesis. An integral neutron transport experiment to measure the effect of neutron streaming in iron and void was performed with the Dog-Legged Void Assembly at Knolls Atomic Power Laboratory in 1991. Neutron flux was measured at six different places with methane detectors and a BF-3 detector. The main purpose of the measurements was to provide a benchmark against which various neutron transport calculation tools could be compared. Those data were used to verify the Monte Carlo Neutron & Photon Transport Code, MCNP, with a model of the assembly. Experimental and calculated results were compared in two ways: as the total integrated value of neutron fluxes over the neutron energy range from 10 keV to 2 MeV, and as the neutron spectrum across that energy range. Both sets of results agree within the statistical error of +/-20%. MCNP results were also compared with those of TORT, a three-dimensional discrete ordinates code developed by Oak Ridge National Laboratory. MCNP results are superior to the TORT results at all detector places except one. This shows that MCNP is a very powerful tool for the analysis of neutron transport through iron and air, and that it could further be used as a powerful tool for radiation shielding analysis. As one application of the analysis of variance (ANOVA) to neutron and gamma transport problems, uncertainties in the calculated values of the criticality constant k were evaluated through an ANOVA on the statistical data.

  15. Benchmarking: a method for continuous quality improvement in health.

    PubMed

    Ettorchi-Tardy, Amina; Levif, Marie; Michel, Philippe

    2012-05-01

    Benchmarking, a management approach for implementing best practices at best cost, is a recent concept in the healthcare system. The objectives of this paper are to better understand the concept and its evolution in the healthcare sector, to propose an operational definition, and to describe some French and international experiences of benchmarking in the healthcare sector. To this end, we reviewed the literature on this approach's emergence in the industrial sector, its evolution, its fields of application and examples of how it has been used in the healthcare sector. Benchmarking is often thought to consist simply of comparing indicators and is not perceived in its entirety, that is, as a tool based on voluntary and active collaboration among several organizations to create a spirit of competition and to apply best practices. The key feature of benchmarking is its integration within a comprehensive and participatory policy of continuous quality improvement (CQI). Conditions for successful benchmarking focus essentially on careful preparation of the process, monitoring of the relevant indicators, staff involvement and inter-organizational visits. Compared to methods previously implemented in France (CQI and collaborative projects), benchmarking has specific features that set it apart as a healthcare innovation. This is especially true for healthcare or medical-social organizations, as the principle of inter-organizational visiting is not part of their culture. Thus, this approach will need to be assessed for feasibility and acceptability before it is more widely promoted.

  16. Nuclear Data Needs for Generation IV Nuclear Energy Systems

    NASA Astrophysics Data System (ADS)

    Rullhusen, Peter

    2006-04-01

    Nuclear data needs for generation IV systems. Future of nuclear energy and the role of nuclear data / P. Finck. Nuclear data needs for generation IV nuclear energy systems-summary of U.S. workshop / T. A. Taiwo, H. S. Khalil. Nuclear data needs for the assessment of gen. IV systems / G. Rimpault. Nuclear data needs for generation IV-lessons from benchmarks / S. C. van der Marck, A. Hogenbirk, M. C. Duijvestijn. Core design issues of the supercritical water fast reactor / M. Mori ... [et al.]. GFR core neutronics studies at CEA / J. C. Bosq ... [et al.]. Comparative study on different phonon frequency spectra of graphite in GCR / Young-Sik Cho ... [et al.]. Innovative fuel types for minor actinides transmutation / D. Haas, A. Fernandez, J. Somers. The importance of nuclear data in modeling and designing generation IV fast reactors / K. D. Weaver. The GIF and Mexico-"everything is possible" / C. Arrenondo Sánchez -- Benchmarks, sensitivity calculations, uncertainties. Sensitivity of advanced reactor and fuel cycle performance parameters to nuclear data uncertainties / G. Aliberti ... [et al.]. Sensitivity and uncertainty study for thermal molten salt reactors / A. Biduad ... [et al.]. Integral reactor physics benchmarks - The International Criticality Safety Benchmark Evaluation Project (ICSBEP) and the International Reactor Physics Experiment Evaluation Project (IRPhEP) / J. B. Briggs, D. W. Nigg, E. Sartori. Computer model of an error propagation through micro-campaign of fast neutron gas cooled nuclear reactor / E. Ivanov. Combining differential and integral experiments on [symbol] for reducing uncertainties in nuclear data applications / T. Kawano ... [et al.]. Sensitivity of activation cross sections of the Hafnium, Tantalum and Tungsten stable isotopes to nuclear reaction mechanisms / V. Avrigeanu ... [et al.]. Generating covariance data with nuclear models / A. J. Koning. Sensitivity of Candu-SCWR reactors physics calculations to nuclear data files / K. S. Kozier, G. R. Dyck. The lead cooled fast reactor benchmark BREST-300: analysis with sensitivity method / V. Smirnov ... [et al.]. Sensitivity analysis of neutron cross-sections considered for design and safety studies of LFR and SFR generation IV systems / K. Tucek, J. Carlsson, H. Wider -- Experiments. INL capabilities for nuclear data measurements using the Argonne intense pulsed neutron source facility / J. D. Cole ... [et al.]. Cross-section measurements in the fast neutron energy range / A. Plompen. Recent measurements of neutron capture cross sections for minor actinides by a JNC and Kyoto University Group / H. Harada ... [et al.]. Determination of minor actinides fission cross sections by means of transfer reactions / M. Aiche ... [et al.] -- Evaluated data libraries. Nuclear data services from the NEA / H. Henriksson, Y. Rugama. Nuclear databases for energy applications: an IAEA perspective / R. Capote Noy, A. L. Nichols, A. Trkov. Nuclear data evaluation for generation IV / G. Noguère ... [et al.]. Improved evaluations of neutron-induced reactions on americium isotopes / P. Talou ... [et al.]. Using improved ENDF-based nuclear data for candu reactor calculations / J. Prodea. A comparative study on the graphite-moderated reactors using different evaluated nuclear data / Do Heon Kim ... [et al.].

  17. What has changed in the evidence for early experience? Update of a BEME systematic review.

    PubMed

    Yardley, Sarah; Littlewood, Sonia; Margolis, Stephen A; Scherpbier, Albert; Spencer, John; Ypinazar, Valmae; Dornan, Tim

    2010-01-01

    We previously reviewed evidence published from 1992 to 2001 concerning early experience for healthcare undergraduates (Dornan T, Littlewood S, Margolis S, Scherpbier A, Spencer J, Ypinazar V. 2006. How can experience in clinical and community settings contribute to early medical education? A BEME systematic review. Med Teach 28:3-18). This subsequent study reviews evidence published from 2002 to 2008. Identify changes in the evidence base; determine the value of re-reviewing; set a future research agenda. The same search strategy as in the original review was repeated. Newly identified publications were critically appraised against the same benchmarks of strength and educational importance. Twenty-four new empirical studies of early authentic experience in education of health professionals met our inclusion criteria, yielding 96 outcomes. Sixty five outcomes (from 22 studies) were both educationally important and based on strong evidence. A new significant theme was found: the use of early experience to help students understand and align themselves with patient and community perspectives on illness and healthcare. More publications were now from outside Europe and North America. In addition to supporting the findings of our original review, this update shows an expansion in research sources, and a shift in research content focus. There are still questions, however, about how early authentic experience leads to particular learning outcomes and what will make it most educationally effective.

  18. Development and Experimental Benchmark of Simulations to Predict Used Nuclear Fuel Cladding Temperatures during Drying and Transfer Operations

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Greiner, Miles

    Radial hydride formation in high-burnup used fuel cladding has the potential to radically reduce its ductility and suitability for long-term storage and eventual transport. To avoid this formation, the maximum post-reactor temperature must remain sufficiently low to limit the cladding hoop stress, so that hydrogen from the existing circumferential hydrides will not dissolve and become available to re-precipitate into radial hydrides under the slow cooling conditions during drying, transfer and early dry-cask storage. The objective of this research is to develop and experimentally benchmark computational fluid dynamics simulations of heat transfer in post-pool-storage drying operations, when high-burnup fuel cladding is likely to experience its highest temperature. These benchmarked tools can play a key role in evaluating dry cask storage systems for extended storage of high-burnup fuels and post-storage transportation, including fuel retrievability. The benchmarked tools will be used to aid the design of efficient drying processes, as well as estimate variations of surface temperatures as a means of inferring helium integrity inside the canister or cask. This work will be conducted effectively because the principal investigator has experience developing these types of simulations, and has constructed a test facility that can be used to benchmark them.

  19. Benchmarking image fusion system design parameters

    NASA Astrophysics Data System (ADS)

    Howell, Christopher L.

    2013-06-01

    A clear and absolute method for discriminating between image fusion algorithm performances is presented. This method can effectively be used to assist in the design and modeling of image fusion systems. Specifically, it is postulated that quantifying human task performance using image fusion should be benchmarked to whether the fusion algorithm, at a minimum, retained the performance benefit achievable by each independent spectral band being fused. The established benchmark would then clearly represent the threshold that a fusion system should surpass to be considered beneficial to a particular task. A genetic algorithm is employed to characterize the fused system parameters using a Matlab® implementation of NVThermIP as the objective function. By setting the problem up as a mixed-integer constraint optimization problem, one can effectively look backwards through the image acquisition process: optimizing fused system parameters by minimizing the difference between modeled task difficulty measure and the benchmark task difficulty measure. The results of an identification perception experiment are presented, where human observers were asked to identify a standard set of military targets, and used to demonstrate the effectiveness of the benchmarking process.
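
    A minimal sketch of this benchmarking rule, assuming hypothetical single-band and fused identification probabilities (the band names and all numbers below are invented, not from the study):

    ```python
    # Illustrative check, not the paper's NVThermIP/genetic-algorithm pipeline:
    # a fused system is judged beneficial only if it at least retains the best
    # task performance achievable by any single spectral band.
    band_pid = {"LWIR": 0.62, "MWIR": 0.55, "SWIR": 0.48}  # hypothetical values
    fused_pid = 0.66                                       # hypothetical value

    benchmark = max(band_pid.values())  # threshold the fused system must surpass
    verdict = "beneficial" if fused_pid >= benchmark else "below benchmark"
    print(f"fused {fused_pid:.2f} vs benchmark {benchmark:.2f}: {verdict}")
    ```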

  20. New Flutter Analysis Technique for Time-Domain Computational Aeroelasticity

    NASA Technical Reports Server (NTRS)

    Pak, Chan-Gi; Lung, Shun-Fat

    2017-01-01

    A new time-domain approach for computing flutter speed is presented. Based on the time-history result of an aeroelastic simulation, the unknown unsteady aerodynamics model is estimated using a system identification technique. The full aeroelastic model is generated by coupling the estimated unsteady aerodynamic model with the known linear structural model. The critical dynamic pressure is computed and used in the subsequent simulation, iterating until the critical dynamic pressure converges. The proposed method is applied to a benchmark cantilevered rectangular wing.
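
    The convergence loop can be pictured with a hedged, self-contained sketch (not the tool described above): a surrogate aeroelastic mode whose damping falls with dynamic pressure stands in for the coupled simulation, a log-decrement fit stands in for the system identification step, and bisection stands in for the iterative update of the critical dynamic pressure. All names and numbers are invented.

    ```python
    import numpy as np

    q_flutter_true = 4.2  # kPa; hidden "truth" used only to generate responses

    def simulate_decay(q, t):
        # surrogate time history: an 8 Hz mode whose damping decreases with q
        zeta = 0.05 * (1.0 - q / q_flutter_true)
        wn = 2.0 * np.pi * 8.0
        wd = wn * np.sqrt(max(1.0 - zeta**2, 1e-9))
        return np.exp(-zeta * wn * t) * np.cos(wd * t)

    def identified_damping(q):
        # crude "system identification": log-decrement fit through response peaks
        t = np.linspace(0.0, 2.0, 8000)
        y = simulate_decay(q, t)
        peaks = np.where((y[1:-1] > y[:-2]) & (y[1:-1] > y[2:]))[0] + 1
        slope = np.polyfit(t[peaks], np.log(np.abs(y[peaks]) + 1e-12), 1)[0]
        return -slope  # ~ zeta * wn; positive means decaying (stable)

    lo, hi = 1.0, 8.0  # bracket in q: stable at lo, unstable at hi
    for _ in range(30):
        mid = 0.5 * (lo + hi)
        lo, hi = (mid, hi) if identified_damping(mid) > 0.0 else (lo, mid)
    print(f"estimated flutter dynamic pressure ~ {0.5 * (lo + hi):.2f} kPa (truth 4.20)")
    ```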

  1. Importance of inlet boundary conditions for numerical simulation of combustor flows

    NASA Technical Reports Server (NTRS)

    Sturgess, G. J.; Syed, S. A.; Mcmanus, K. R.

    1983-01-01

    Fluid dynamic computer codes for the mathematical simulation of problems in gas turbine engine combustion systems are required as design and diagnostic tools. To eventually achieve more than qualitative accuracy with these codes, it is desirable to use benchmark experiments for validation studies. Typical of the fluid dynamic computer codes being developed for combustor simulations is the TEACH (Teaching Elliptic Axisymmetric Characteristics Heuristically) solution procedure. It is difficult to find suitable experiments which satisfy the present definition of benchmark quality; for the majority of the available experiments there is a lack of information concerning the boundary conditions. A standard TEACH-type numerical technique is applied to a number of test-case experiments. It is found that numerical simulations of gas turbine combustor-relevant flows can be sensitive to the plane at which the calculations start and to the spatial distributions of inlet quantities for swirling flows.

  2. A Simulation Environment for Benchmarking Sensor Fusion-Based Pose Estimators.

    PubMed

    Ligorio, Gabriele; Sabatini, Angelo Maria

    2015-12-19

    In-depth analysis and performance evaluation of sensor fusion-based estimators may be critical when performed using real-world sensor data. For this reason, simulation is widely recognized as one of the most powerful tools for algorithm benchmarking. In this paper, we present a simulation framework suitable for assessing the performance of sensor fusion-based pose estimators. The systems used for implementing the framework were magnetic/inertial measurement units (MIMUs) and a camera, although the addition of further sensing modalities is straightforward. Typical nuisance factors were also included for each sensor. The proposed simulation environment was validated using real-life sensor data employed for motion tracking. The highest mismatch between real and simulated sensors was about 5% of the measured quantity (for the camera simulation), whereas the lowest correlation was found for one axis of the gyroscope (0.90). In addition, a real benchmarking example of an extended Kalman filter for pose estimation from MIMU and camera data is presented.

  3. Engineering Education as a Complex System

    ERIC Educational Resources Information Center

    Gattie, David K.; Kellam, Nadia N.; Schramski, John R.; Walther, Joachim

    2011-01-01

    This paper presents a theoretical basis for cultivating engineering education as a complex system that will prepare students to think critically and make decisions with regard to poorly understood, ill-structured issues. Integral to this theoretical basis is a solution space construct developed and presented as a benchmark for evaluating…

  4. The Complexities in Fostering Critical Thinking through School-Based Curriculum Innovation: Research Evidence from Singapore

    ERIC Educational Resources Information Center

    Tan, Liang See; Koh, Elizabeth; Lee, Shu Shing; Ponnusamy, Letchmi Devi; Tan, Keith Chiu Kian

    2017-01-01

    Singapore's strong performance in international benchmarking studies--Trends in International Mathematics and Science Study (TIMSS) and Programme for International Student Assessment (PISA)--poses a conundrum to researchers who view Singapore's pedagogy as characterized by the teaching of facts and procedures, and lacking in constructivist…

  5. Applicability domains for classification problems: benchmarking of distance to models for AMES mutagenicity set

    EPA Science Inventory

    For QSAR and QSPR modeling of biological and physicochemical properties, estimating the accuracy of predictions is a critical problem. The “distance to model” (DM) can be defined as a metric that defines the similarity between the training set molecules and the test set compound ...

  6. Assessing Civic Competence against the Normative Benchmark of Considered Opinions

    ERIC Educational Resources Information Center

    Kim, Jin Woo

    2017-01-01

    There is widespread skepticism about civic competence. Some question if citizens are informed enough to make considered decisions. Others doubt citizens' ability to rationally evaluate relevant evidence and update their opinions even when they have necessary information. The purpose of my dissertation is to critically evaluate this literature and…

  7. Quantifying Information Gain from Dynamic Downscaling Experiments

    NASA Astrophysics Data System (ADS)

    Tian, Y.; Peters-Lidard, C. D.

    2015-12-01

    Dynamic climate downscaling experiments are designed to produce information at higher spatial and temporal resolutions. Such additional information is generated from the low-resolution initial and boundary conditions via the predictive power of the physical laws. However, errors and uncertainties in the initial and boundary conditions can be propagated and even amplified in the downscaled simulations. Additionally, the limit of predictability in nonlinear dynamical systems will dampen the information gain, even if the initial and boundary conditions were error-free. Thus it is critical to quantitatively define and measure the amount of information gained from dynamic downscaling experiments, to better understand and appreciate their potential and limitations. We present a scheme to objectively measure the information gain from such experiments. The scheme is based on information theory, and we argue that if a downscaling experiment is to exhibit value, it has to produce more information than can simply be inferred from information sources already available. These sources include the initial and boundary conditions, the coarse-resolution model in which the higher-resolution models are embedded, and the same set of physical laws. These existing information sources define an "information threshold" as a function of the spatial and temporal resolution, and this threshold serves as a benchmark for quantifying the information gain from downscaling experiments, or from any other approach. For a downscaling experiment to show any value, the information has to be above this threshold. A recent NASA-supported downscaling experiment is used as an example to illustrate the application of this scheme.
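
    The threshold test can be made concrete with a mutual-information comparison on synthetic data. This is a hedged sketch of the idea only, not the authors' scheme; reference, coarse and downscaled below are invented stand-ins for the fine-scale truth, the trivially interpolated coarse input, and the downscaled product.

    ```python
    import numpy as np

    rng = np.random.default_rng(0)

    def mutual_information(x, y, bins=32):
        # histogram estimate of mutual information, in bits
        pxy, _, _ = np.histogram2d(x, y, bins=bins)
        pxy = pxy / pxy.sum()
        px = pxy.sum(axis=1, keepdims=True)
        py = pxy.sum(axis=0, keepdims=True)
        nz = pxy > 0
        return float((pxy[nz] * np.log2(pxy[nz] / (px @ py)[nz])).sum())

    n = 20000
    reference = rng.normal(size=n)                          # fine-scale "truth"
    coarse = reference + rng.normal(scale=1.0, size=n)      # degraded coarse field
    interpolated = coarse                                   # baseline: no new info
    downscaled = reference + rng.normal(scale=0.5, size=n)  # hypothetical product

    threshold = mutual_information(interpolated, reference)  # info already available
    gain = mutual_information(downscaled, reference)         # info after downscaling
    print(f"baseline {threshold:.2f} bits vs downscaled {gain:.2f} bits ->",
          "adds value" if gain > threshold else "no information gain")
    ```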

  8. Benchmarking of Neutron Production of Heavy-Ion Transport Codes

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Remec, Igor; Ronningen, Reginald M.; Heilbronn, Lawrence

    Accurate prediction of radiation fields generated by heavy ion interactions is important in medical applications, space missions, and in design and operation of rare isotope research facilities. In recent years, several well-established computer codes in widespread use for particle and radiation transport calculations have been equipped with the capability to simulate heavy ion transport and interactions. To assess and validate these capabilities, we performed simulations of a series of benchmark-quality heavy ion experiments with the computer codes FLUKA, MARS15, MCNPX, and PHITS. We focus on the comparisons of secondary neutron production. Results are encouraging; however, further improvements in models and codes and additional benchmarking are required.

  9. Benchmarking of Heavy Ion Transport Codes

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Remec, Igor; Ronningen, Reginald M.; Heilbronn, Lawrence

    Accurate prediction of radiation fields generated by heavy ion interactions is important in medical applications, space missions, and in the design and operation of rare isotope research facilities. In recent years, several well-established computer codes in widespread use for particle and radiation transport calculations have been equipped with the capability to simulate heavy ion transport and interactions. To assess and validate these capabilities, we performed simulations of a series of benchmark-quality heavy ion experiments with the computer codes FLUKA, MARS15, MCNPX, and PHITS. We focus on the comparisons of secondary neutron production. Results are encouraging; however, further improvements in models and codes and additional benchmarking are required.

  10. Verification of MCNP6.2 for Nuclear Criticality Safety Applications

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Brown, Forrest B.; Rising, Michael Evan; Alwin, Jennifer Louise

    2017-05-10

    Several suites of verification/validation benchmark problems were run in early 2017 to verify that the new production release of MCNP6.2 performs correctly for nuclear criticality safety (NCS) applications. MCNP6.2 results for several NCS validation suites were compared to the results from MCNP6.1 [1] and MCNP6.1.1 [2]. MCNP6.1 is the production version of MCNP® released in 2013, and MCNP6.1.1 is the update released in 2014. MCNP6.2 includes all of the standard features for NCS calculations that have been available for the past 15 years, along with new features for sensitivity-uncertainty based methods for NCS validation [3]. Results from the benchmark suites were compared with results from previous verification testing [4-8]. Criticality safety analysts should consider testing MCNP6.2 on their particular problems and validation suites. No further development of MCNP5 is planned. MCNP6.1 is now 4 years old, and MCNP6.1.1 is now 3 years old. In general, released versions of MCNP are supported only for about 5 years, due to resource limitations. All future MCNP improvements, bug fixes, user support, and new capabilities are targeted only to MCNP6.2 and beyond.

  11. Commencing Student Experience: New Insights and Implications for Action

    ERIC Educational Resources Information Center

    Grebennikov, Leonid; Shah, Mahsood

    2012-01-01

    In many developed countries, including Australia, it is common practice to regularly survey university students in order to assess their experience inside and beyond the classroom. Governments conduct nationwide surveys to assess the quality of student experience, benchmark outcomes nationally and in some cases reward better performing…

  12. Assessing validity of observational intervention studies - the Benchmarking Controlled Trials.

    PubMed

    Malmivaara, Antti

    2016-09-01

    Benchmarking Controlled Trial (BCT) is a concept which covers all observational studies aiming to assess the impact of interventions or health care system features on patients and populations. The aims were to create and pilot test a checklist for appraising the methodological validity of a BCT. The checklist was created by extracting the most essential elements from the comprehensive set of criteria in the previous paper on BCTs; checklists and scientific papers on observational studies and respective systematic reviews were also utilized. Ten BCTs published in the Lancet and in the New England Journal of Medicine were used to assess the feasibility of the created checklist. The appraised studies seem to have several methodological limitations, some of which could be avoided in the planning, conducting and reporting phases of the studies. The checklist can be used for planning, conducting, reporting, reviewing, and critical reading of observational intervention studies. However, the piloted checklist should be validated in further studies. Key messages: Benchmarking Controlled Trial (BCT) is a concept which covers all observational studies aiming to assess the impact of interventions or health care system features on patients and populations. This paper presents a checklist for appraising the methodological validity of BCTs and pilot-tests the checklist with ten BCTs published in leading medical journals. The appraised studies seem to have several methodological limitations, some of which could be avoided in the planning, conducting and reporting phases of the studies. The checklist can be used for planning, conducting, reporting, reviewing, and critical reading of observational intervention studies.

  13. Physics of neutral gas jet interaction with magnetized plasmas

    NASA Astrophysics Data System (ADS)

    Wang, Zhanhui; Xu, Xueqiao; Diamond, Patrick; Xu, Min; Duan, Xuru; Yu, Deliang; Zhou, Yulin; Shi, Yongfu; Nie, Lin; Ke, Rui; Zhong, Wulv; Shi, Zhongbing; Sun, Aiping; Li, Jiquan; Yao, Lianghua

    2017-10-01

    It is critical to understand the physics and transport dynamics during the plasma fuelling process. Plasma and neutral interactions involve the transfer of charge, momentum, and energy in ion-neutral and electron-neutral collisions. Thus, a seven-field fluid model of neutral gas jet injection (NGJI) is obtained, which couples the plasma density, heat, and momentum transport equations with the density and momentum transport equations of both neutral molecules and atoms. Transport dynamics of plasma and neutrals are simulated over a complete range of discharge times, including the steady state before NGJI, transport during NGJI, and relaxation after NGJI. With the trans-neut module of the BOUT++ code, simulations of mean profile variations and fuelling depths during fuelling have been benchmarked against other codes and validated with HL-2A experimental results. Both the fast component (FC) and the slow component (SC) of NGJI are simulated and validated against HL-2A experimental measurements. The plasma blocking effect on FC penetration is also simulated and validated against experiment. This work is supported by the National Natural Science Foundation of China under Grant No. 11575055.

  14. Benchmarking: A Method for Continuous Quality Improvement in Health

    PubMed Central

    Ettorchi-Tardy, Amina; Levif, Marie; Michel, Philippe

    2012-01-01

    Benchmarking, a management approach for implementing best practices at best cost, is a recent concept in the healthcare system. The objectives of this paper are to better understand the concept and its evolution in the healthcare sector, to propose an operational definition, and to describe some French and international experiences of benchmarking in the healthcare sector. To this end, we reviewed the literature on this approach's emergence in the industrial sector, its evolution, its fields of application and examples of how it has been used in the healthcare sector. Benchmarking is often thought to consist simply of comparing indicators and is not perceived in its entirety, that is, as a tool based on voluntary and active collaboration among several organizations to create a spirit of competition and to apply best practices. The key feature of benchmarking is its integration within a comprehensive and participatory policy of continuous quality improvement (CQI). Conditions for successful benchmarking focus essentially on careful preparation of the process, monitoring of the relevant indicators, staff involvement and inter-organizational visits. Compared to methods previously implemented in France (CQI and collaborative projects), benchmarking has specific features that set it apart as a healthcare innovation. This is especially true for healthcare or medical–social organizations, as the principle of inter-organizational visiting is not part of their culture. Thus, this approach will need to be assessed for feasibility and acceptability before it is more widely promoted. PMID:23634166

  15. Practice Benchmarking in the Age of Targeted Auditing

    PubMed Central

    Langdale, Ryan P.; Holland, Ben F.

    2012-01-01

    The frequency and sophistication of health care reimbursement auditing has progressed rapidly in recent years, leaving many oncologists wondering whether their private practices would survive a full-scale Office of the Inspector General (OIG) investigation. The Medicare Part B claims database provides a rich source of information for physicians seeking to understand how their billing practices measure up to their peers, both locally and nationally. This database was dissected by a team of cancer specialists to uncover important benchmarks related to targeted auditing. All critical Medicare charges, payments, denials, and service ratios in this article were derived from the full 2010 Medicare Part B claims database. Relevant claims were limited by using Medicare provider specialty codes 83 (hematology/oncology) and 90 (medical oncology), with an emphasis on claims filed from the physician office place of service (11). All charges, denials, and payments were summarized at the Current Procedural Terminology code level to drive practice benchmarking standards. A careful analysis of this data set, combined with the published audit priorities of the OIG, produced germane benchmarks from which medical oncologists can monitor, measure and improve on common areas of billing fraud, waste or abuse in their practices. Part II of this series and analysis will focus on information pertinent to radiation oncologists. PMID:23598847

  16. MECHANICAL DESIGN CRITERIA FOR INTERVERTEBRAL DISC TISSUE ENGINEERING

    PubMed Central

    Nerurkar, Nandan L.; Elliott, Dawn M.; Mauck, Robert L.

    2009-01-01

    Due to the inability of current clinical practices to restore function to degenerated intervertebral discs, the arena of disc tissue engineering has received substantial attention in recent years. Despite tremendous growth and progress in this field, translation to clinical implementation has been hindered by a lack of well-defined functional benchmarks. Because successful replacement of the disc is contingent upon replication of some or all of its complex mechanical behaviour, it is critically important that disc mechanics be well characterized in order to establish discrete functional goals for tissue engineering. In this review, the key functional signatures of the intervertebral disc are discussed and used to propose a series of native tissue benchmarks to guide the development of engineered replacement tissues. These benchmarks include measures of mechanical function under tensile, compressive and shear deformations for the disc and its substructures. In some cases, important functional measures are identified that have yet to be measured in the native tissue. Ultimately, native tissue benchmark values are compared to measurements that have been made on engineered disc tissues, identifying measures where functional equivalence was achieved, and others where there remain opportunities for advancement. Several excellent reviews exist regarding disc composition and structure, as well as recent tissue engineering strategies; therefore this review will remain focused on the functional aspects of disc tissue engineering. PMID:20080239

  17. Practice benchmarking in the age of targeted auditing.

    PubMed

    Langdale, Ryan P; Holland, Ben F

    2012-11-01

    The frequency and sophistication of health care reimbursement auditing has progressed rapidly in recent years, leaving many oncologists wondering whether their private practices would survive a full-scale Office of the Inspector General (OIG) investigation. The Medicare Part B claims database provides a rich source of information for physicians seeking to understand how their billing practices measure up to their peers, both locally and nationally. This database was dissected by a team of cancer specialists to uncover important benchmarks related to targeted auditing. All critical Medicare charges, payments, denials, and service ratios in this article were derived from the full 2010 Medicare Part B claims database. Relevant claims were limited by using Medicare provider specialty codes 83 (hematology/oncology) and 90 (medical oncology), with an emphasis on claims filed from the physician office place of service (11). All charges, denials, and payments were summarized at the Current Procedural Terminology code level to drive practice benchmarking standards. A careful analysis of this data set, combined with the published audit priorities of the OIG, produced germane benchmarks from which medical oncologists can monitor, measure and improve on common areas of billing fraud, waste or abuse in their practices. Part II of this series and analysis will focus on information pertinent to radiation oncologists.

  18. Methodology and Data Sources for Assessing Extreme Charging Events within the Earth's Magnetosphere

    NASA Astrophysics Data System (ADS)

    Parker, L. N.; Minow, J. I.; Talaat, E. R.

    2016-12-01

    Spacecraft surface and internal charging is a potential threat to space technologies because electrostatic discharges on, or within, charged spacecraft materials can result in a number of adverse impacts to spacecraft systems. The Space Weather Action Plan (SWAP) ionizing radiation benchmark team recognized that spacecraft charging will need to be considered to complete the ionizing radiation benchmarks in order to evaluate the threat of charging to critical space infrastructure operating within the near-Earth ionizing radiation environments. However, the team chose to defer work on the lower energy charging environments and focus the initial benchmark efforts on the higher energy galactic cosmic ray, solar energetic particle, and trapped radiation belt particle environments of concern for radiation dose and single event effects in humans and hardware. Therefore, an initial set of 1 in 100 year spacecraft charging environment benchmarks remains to be defined to meet the SWAP goals. This presentation will discuss the available data sources and a methodology to assess the 1 in 100 year extreme space weather events that drive surface and internal charging threats to spacecraft. Environments to be considered are the hot plasmas in the outer magnetosphere during geomagnetic storms, relativistic electrons in the outer radiation belt, and energetic auroral electrons in low Earth orbit at high latitudes.

  19. An artificial bee colony algorithm for locating the critical slip surface in slope stability analysis

    NASA Astrophysics Data System (ADS)

    Kang, Fei; Li, Junjie; Ma, Zhenyue

    2013-02-01

    Determination of the critical slip surface with the minimum factor of safety of a slope is a difficult constrained global optimization problem. In this article, an artificial bee colony algorithm with a multi-slice adjustment method is proposed for locating the critical slip surfaces of soil slopes, and the Spencer method is employed to calculate the factor of safety. Six benchmark examples are presented to illustrate the reliability and efficiency of the proposed technique, and it is also compared with some well-known or recent algorithms for the problem. The results show that the new algorithm is promising in terms of accuracy and efficiency.
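
    The structure of such a search is easy to convey with a compact artificial bee colony loop. In this hedged sketch, a smooth test function fos stands in for the Spencer-method factor of safety of a parameterized slip surface; colony settings and bounds are illustrative.

    ```python
    import numpy as np

    rng = np.random.default_rng(1)

    def fos(x):
        # placeholder objective standing in for the factor of safety
        return float(np.sum((x - 1.5) ** 2) + 1.0)

    dim, n_food, limit, iters = 4, 20, 30, 200
    lo_b, hi_b = -5.0, 5.0
    foods = rng.uniform(lo_b, hi_b, (n_food, dim))
    fits = np.array([fos(f) for f in foods])
    trials = np.zeros(n_food, dtype=int)

    def neighbour(i):
        k = rng.integers(n_food - 1)
        k = k + (k >= i)                   # random partner source, k != i
        j = rng.integers(dim)              # perturb one coordinate
        x = foods[i].copy()
        x[j] += rng.uniform(-1, 1) * (foods[i, j] - foods[k, j])
        return np.clip(x, lo_b, hi_b)

    def try_improve(i):
        x = neighbour(i)
        fx = fos(x)
        if fx < fits[i]:
            foods[i], fits[i], trials[i] = x, fx, 0
        else:
            trials[i] += 1

    for _ in range(iters):
        for i in range(n_food):                # employed bees: local search
            try_improve(i)
        p = 1.0 / (1.0 + fits)                 # onlookers: favor good sources
        for i in rng.choice(n_food, size=n_food, p=p / p.sum()):
            try_improve(i)
        for i in np.where(trials > limit)[0]:  # scouts: abandon stale sources
            foods[i] = rng.uniform(lo_b, hi_b, dim)
            fits[i], trials[i] = fos(foods[i]), 0

    best = int(np.argmin(fits))
    print(f"minimum 'factor of safety' ~ {fits[best]:.3f} at {np.round(foods[best], 3)}")
    ```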

  20. An evaluative model of system performance in manned teleoperational systems

    NASA Technical Reports Server (NTRS)

    Haines, Richard F.

    1989-01-01

    Manned teleoperational systems are used in aerospace operations in which humans must interact with machines remotely. Manual guidance of remotely piloted vehicles, controlling a wind tunnel, and carrying out a scientific procedure remotely are examples of teleoperations. A four-input-parameter throughput (Tp) model is presented which can be used to evaluate complex, manned, teleoperations-based systems and make critical comparisons among candidate control systems. The first two parameters of this model deal with nominal (A) and off-nominal (B) predicted events, while the last two focus on measured events of two types: human performance (C) and system performance (D). Digital simulations showed that the expression A(1-B)/(C+D) produced the greatest homogeneity of variance and distribution symmetry. Results from a recently completed manned life science telescience experiment will be used to further validate the model. Complex, interacting teleoperational systems may be systematically evaluated using this expression, much like a computer benchmark is used.
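
    A minimal evaluation of that figure of merit, assuming the reconstructed parenthesization Tp = A(1-B)/(C+D); the interpretation of the inputs follows the abstract, and the example values are hypothetical:

    ```python
    def throughput(a, b, c, d):
        # a: nominal predicted events, b: off-nominal predicted events,
        # c: measured human-performance events, d: measured system-performance
        # events (interpretation assumed, not taken from the report)
        return a * (1.0 - b) / (c + d)

    print(f"Tp = {throughput(a=0.9, b=0.1, c=0.4, d=0.5):.2f}")  # Tp = 0.90
    ```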

  1. Vortices and antivortices in two-dimensional ultracold Fermi gases

    PubMed Central

    Bighin, G.; Salasnich, L.

    2017-01-01

    Vortices are commonly observed in the context of classical hydrodynamics: from whirlpools after stirring the coffee in a cup to a violent atmospheric phenomenon such as a tornado, all classical vortices are characterized by an arbitrary circulation value of the local velocity field. On the other hand the appearance of vortices with quantized circulation represents one of the fundamental signatures of macroscopic quantum phenomena. In two-dimensional superfluids quantized vortices play a key role in determining finite-temperature properties, as the superfluid phase and the normal state are separated by a vortex unbinding transition, the Berezinskii-Kosterlitz-Thouless transition. Very recent experiments with two-dimensional superfluid fermions motivate the present work: we present theoretical results based on the renormalization group showing that the universal jump of the superfluid density and the critical temperature crucially depend on the interaction strength, providing a strong benchmark for forthcoming investigations. PMID:28374762
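
    For context, the universal jump referred to above is conventionally written as the Nelson-Kosterlitz relation (a standard literature result quoted here for reference; the abstract itself does not spell it out):

    ```latex
    % universal jump of the 2D superfluid mass density at the BKT transition
    \frac{\rho_s\left(T_{\mathrm{BKT}}^{-}\right)}{T_{\mathrm{BKT}}}
      = \frac{2 m^{2} k_{B}}{\pi \hbar^{2}}
    ```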

  2. Vortices and antivortices in two-dimensional ultracold Fermi gases

    NASA Astrophysics Data System (ADS)

    Bighin, G.; Salasnich, L.

    2017-04-01

    Vortices are commonly observed in the context of classical hydrodynamics: from whirlpools after stirring the coffee in a cup to a violent atmospheric phenomenon such as a tornado, all classical vortices are characterized by an arbitrary circulation value of the local velocity field. On the other hand the appearance of vortices with quantized circulation represents one of the fundamental signatures of macroscopic quantum phenomena. In two-dimensional superfluids quantized vortices play a key role in determining finite-temperature properties, as the superfluid phase and the normal state are separated by a vortex unbinding transition, the Berezinskii-Kosterlitz-Thouless transition. Very recent experiments with two-dimensional superfluid fermions motivate the present work: we present theoretical results based on the renormalization group showing that the universal jump of the superfluid density and the critical temperature crucially depend on the interaction strength, providing a strong benchmark for forthcoming investigations.

  3. Vegetation composition and structure of southern coastal plain pine forests: An ecological comparison

    USGS Publications Warehouse

    Hedman, C.W.; Grace, S.L.; King, S.E.

    2000-01-01

    Longleaf pine (Pinus palustris) ecosystems are characterized by a diverse community of native groundcover species. Critics of plantation forestry claim that loblolly (Pinus taeda) and slash pine (Pinus elliottii) forests are devoid of native groundcover due to associated management practices. As a result of these practices, some believe that ecosystem functions characteristic of longleaf pine are lost under loblolly and slash pine plantation management. Our objective was to quantify and compare vegetation composition and structure of longleaf, loblolly, and slash pine forests of differing ages, management strategies, and land-use histories. Information from this study will further our understanding and lead to inferences about functional differences among pine cover types. Vegetation and environmental data were collected in 49 overstory plots across Southlands Experiment Forest in Bainbridge, GA. Nested plots, i.e. midstory, understory, and herbaceous, were replicated four times within each overstory plot. Over 400 species were identified. Herbaceous species richness was variable for all three pine cover types. Herbaceous richness for longleaf, slash, and loblolly pine averaged 15, 13, and 12 species per m2, respectively. Longleaf pine plots had significantly more (p < 0.029) herbaceous species and greater herbaceous cover (p < 0.001) than loblolly or slash pine plots. Longleaf and slash pine plots were otherwise similar in species richness and stand structure, both having lower overstory density, midstory density, and midstory cover than loblolly pine plots. Multivariate analyses provided additional perspectives on vegetation patterns. Ordination and classification procedures consistently placed herbaceous plots into two groups which we refer to as longleaf pine benchmark (34 plots) and non-benchmark (15 plots). Benchmark plots typically contained numerous herbaceous species characteristic of relic longleaf pine/wiregrass communities found in the area. Conversely, non-benchmark plots contained fewer species characteristic of relic longleaf pine/wiregrass communities and more ruderal species common to highly disturbed sites. The benchmark group included 12 naturally regenerated longleaf plots and 22 loblolly, slash, and longleaf pine plantation plots encompassing a broad range of silvicultural disturbances. Non-benchmark plots included eight afforested old-field plantation plots and seven cutover plantation plots. Regardless of overstory species, all afforested old fields were low either in native species richness or in abundance. Varying degrees of this groundcover condition were also found in some cutover plantation plots that were classified as non-benchmark. Environmental variables strongly influencing vegetation patterns included agricultural history and fire frequency. Results suggest that land-use history, particularly related to agriculture, has a greater influence on groundcover composition and structure in southern pine forests than more recent forest management activities or pine cover type. Additional research is needed to identify the potential for afforested old fields to recover native herbaceous species. In the interim, high-yield plantation management should initially target old-field sites which already support reduced numbers of groundcover species. Sites which have not been farmed in the past 50-60 years should be considered for longleaf pine restoration and multiple-use objectives, since they have the greatest potential for supporting diverse native vegetation. 

  4. A quasi two-dimensional benchmark experiment for the solidification of a tin lead binary alloy

    NASA Astrophysics Data System (ADS)

    Wang, Xiao Dong; Petitpas, Patrick; Garnier, Christian; Paulin, Jean-Pierre; Fautrelle, Yves

    2007-05-01

    A horizontal solidification benchmark experiment with pure tin and a binary Sn-10 wt.%Pb alloy is proposed. The experiment consists in solidifying a rectangular sample using two lateral heat exchangers which allow the application of a controlled horizontal temperature difference. An array of fifty thermocouples placed on the lateral wall permits determination of the instantaneous temperature distribution. Cases with a temperature gradient G = 0 and cooling rates of 0.02 and 0.04 K/s are studied. The time evolution of the interfacial total heat flux and of the temperature field is recorded and analyzed. This allows us to evaluate the evolution of heat transfer due to natural convection, as well as its influence on the solidification macrostructure. To cite this article: X.D. Wang et al., C. R. Mecanique 335 (2007).

  5. Demonstration of a tool for automatic learning and re-use of knowledge in the activated sludge process.

    PubMed

    Comas, J; Rodríguez-Roda, I; Poch, M; Gernaey, K V; Rosen, C; Jeppsson, U

    2006-01-01

    Wastewater treatment plant operators encounter complex operational problems related to the activated sludge process and usually respond to these by applying their own intuition and by taking advantage of what they have learnt from past experiences of similar problems. However, previous process experiences are not easy to integrate in numerical control, and new tools must be developed to enable re-use of plant operating experience. The aim of this paper is to investigate the usefulness of a case-based reasoning (CBR) approach to apply learning and re-use of knowledge gained during past incidents to confront actual complex problems through the IWA/COST Benchmark protocol. A case study shows that the proposed CBR system achieves a significant improvement of the benchmark plant performance when facing a high-flow event disturbance.
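
    The retrieve-and-reuse core of such a system fits in a few lines. In this hedged sketch the process features, stored cases and actions are invented for illustration and are not taken from the paper:

    ```python
    import numpy as np

    # each past case stores normalized process readings and the remedial action
    case_base = [
        {"features": np.array([0.9, 0.3, 0.2]), "action": "increase aeration"},
        {"features": np.array([0.2, 0.8, 0.7]), "action": "reduce sludge wastage"},
        {"features": np.array([0.8, 0.2, 0.9]), "action": "dose coagulant"},
    ]

    def retrieve(current, k=1):
        # nearest-neighbour retrieval by Euclidean distance in feature space
        dists = [np.linalg.norm(c["features"] - current) for c in case_base]
        return [case_base[i] for i in np.argsort(dists)[:k]]

    new_situation = np.array([0.85, 0.25, 0.3])   # e.g. a high-flow event
    print("reuse past experience:", retrieve(new_situation)[0]["action"])
    # a full CBR cycle would also revise the outcome and retain the new case
    ```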

  6. Topologically guided tuning of Zr-MOF pore structures for highly selective separation of C6 alkane isomers

    DOE PAGES

    Wang, Hao; Dong, Xinglong; Lin, Junzhong; ...

    2018-05-01

    As an alternative technology to energy-intensive distillations, adsorptive separation by porous solids offers lower energy cost and higher efficiency. Herein we report the topology-directed design and synthesis of a series of Zr-based metal-organic frameworks with optimized pore structures for efficient separation of C6 alkane isomers, a critical step in the petroleum refining process to produce gasoline with a high octane rating. Zr6O4(OH)4(bptc)3 adsorbs a large amount of n-hexane while excluding branched isomers. The n-hexane uptake is ~70% higher than that of a benchmark adsorbent, zeolite-5A. A derivative structure, Zr6O4(OH)8(H2O)4(abtc)2, is capable of discriminating all three C6 isomers and yields a high separation factor for 3-methylpentane over 2,3-dimethylbutane. This property is critical for producing gasoline with further improved quality. Multicomponent breakthrough experiments provide a quantitative measure of the capability of these materials for separation of C6 alkane isomers. A detailed structural analysis reveals the unique topology, connectivity and relationships of these compounds.

  7. Topologically guided tuning of Zr-MOF pore structures for highly selective separation of C6 alkane isomers

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Wang, Hao; Dong, Xinglong; Lin, Junzhong

    As an alternative technology to energy-intensive distillations, adsorptive separation by porous solids offers lower energy cost and higher efficiency. Herein we report the topology-directed design and synthesis of a series of Zr-based metal-organic frameworks with optimized pore structures for efficient separation of C6 alkane isomers, a critical step in the petroleum refining process to produce gasoline with a high octane rating. Zr6O4(OH)4(bptc)3 adsorbs a large amount of n-hexane while excluding branched isomers. The n-hexane uptake is ~70% higher than that of a benchmark adsorbent, zeolite-5A. A derivative structure, Zr6O4(OH)8(H2O)4(abtc)2, is capable of discriminating all three C6 isomers and yields a high separation factor for 3-methylpentane over 2,3-dimethylbutane. This property is critical for producing gasoline with further improved quality. Multicomponent breakthrough experiments provide a quantitative measure of the capability of these materials for separation of C6 alkane isomers. A detailed structural analysis reveals the unique topology, connectivity and relationships of these compounds.

  8. Deriving Criteria-supporting Benchmark Values from Empirical Response Relationships: Comparison of Statistical Techniques and Effect of Log-transforming the Nutrient Variable

    EPA Science Inventory

    In analyses supporting the development of numeric nutrient criteria, multiple statistical techniques can be used to extract critical values from stressor response relationships. However there is little guidance for choosing among techniques, and the extent to which log-transfor...

  9. A New Measurement and Ranking System for the UK National Student Survey

    ERIC Educational Resources Information Center

    Canning, John

    2015-01-01

    Despite numerous criticisms of the UK National Student Survey (NSS) institutional managers still strongly support its use in informing student choice, quality and assurance and quality enhancement activities. This article outlines a granular and nuanced benchmarking system for the NSS which provides both a "raw" score (weighted student…

  10. Deaf Role Models Making a Critical Difference in New Mexico

    ERIC Educational Resources Information Center

    Abrams, Stacy; Gallegos, Rosemary

    2011-01-01

    In this issue of "Odyssey," Joanne Corwin describes New Mexico's statewide partnership among several agencies for the provision of early intervention services to infants and children who are deaf or hard of hearing and their families (Effective Partnering of State Agencies to Achieve Early Hearing Detection and Intervention Benchmarks,…

  11. Marketing imaging services.

    PubMed

    Bouchard, E A

    1992-01-01

    "Communication with physicians and patients is critical to successfully marketing technology," says Mr. Bouchard, author of a new handbook for radiology managers. In this section from his book, he describes the components of a successful marketing plan and tries to dispel several myths about marketing. He also describes benchmarks for success, and provides a sample plan to promote mammography services.

  12. Teacher Control over Interagency Collaboration: A Roadblock for Effective Transitioning of Youth with Disabilities

    ERIC Educational Resources Information Center

    Meadows, Denis; Davies, Michael; Beamish, Wendi

    2014-01-01

    Poor post-school outcomes for youth with disabilities have consistently been reported internationally. Interagency collaboration between school systems and post-school services is critical and key to improving post-school life for these youth. An initial Queensland study that benchmarked the teacher practice of 104 transition teachers and…

  13. Identifying, Monitoring, and Benchmarking Teacher Retention and Turnover: Guidelines for TIF Grantees

    ERIC Educational Resources Information Center

    Finster, Matthew

    2015-01-01

    Having a well-qualified, effective teacher in every classroom is a cornerstone of current educational reforms. Clearly, retaining these effective teachers is critical to achieving this goal. This brief presents a strategic accountability approach to managing teacher talent retention and turnover. The brief begins with an overview of a strategic…

  14. Building a Science, Technology, Engineering, and Math Education Agenda: An Update of State Actions

    ERIC Educational Resources Information Center

    Thomasian, John

    2011-01-01

    STEM--science, technology, engineering, and mathematics--is critical to and supportive of many education reforms being undertaken today, from adoption of common internationally benchmarked standards to better teacher preparation to enhanced coordination across the entire K-20 education system. In fact, STEM is not a separate reform movement at…

  15. Teacher Beliefs in "Testing" Times: A Lesson from Singapore

    ERIC Educational Resources Information Center

    Dixon, Mary

    2016-01-01

    International benchmarking and national testing of students at all levels of schooling have provoked teachers to critically reflect on their place in this endeavour. Many of the curriculum and pedagogical approaches associated with this type of assessment and accountability conflict with long-held beliefs about the role of teachers and the work of…

  16. A benchmark study of the sea-level equation in GIA modelling

    NASA Astrophysics Data System (ADS)

    Martinec, Zdenek; Klemann, Volker; van der Wal, Wouter; Riva, Riccardo; Spada, Giorgio; Simon, Karen; Blank, Bas; Sun, Yu; Melini, Daniele; James, Tom; Bradley, Sarah

    2017-04-01

    The sea-level load in glacial isostatic adjustment (GIA) is described by the so-called sea-level equation (SLE), which represents the mass redistribution between ice sheets and oceans on a deforming earth. Various levels of complexity of the SLE have been proposed in the past, ranging from a simple mean global sea level (the so-called eustatic sea level) to the load with a deforming ocean bottom, migrating coastlines and a changing shape of the geoid. Several approaches to solve the SLE have been derived, from purely analytical formulations to fully numerical methods. Despite various teams independently investigating GIA, there has been no systematic intercomparison amongst the solvers through which the methods may be validated. The goal of this paper is to present a series of benchmark experiments designed for testing and comparing numerical implementations of the SLE. Our approach starts with simple load cases, even though the benchmark will not result in GIA predictions for a realistic loading scenario. In the longer term we aim for a benchmark with a realistic loading scenario, and also for benchmark solutions with rotational feedback. The current benchmark uses an earth model for which Love numbers have been computed and benchmarked in Spada et al. (2011). In spite of the significant differences in the numerical methods employed, the test computations performed so far show a satisfactory agreement between the results provided by the participants. The differences found can often be attributed to the different approximations inherent in the various algorithms. Literature: G. Spada, V. R. Barletta, V. Klemann, R. E. M. Riva, Z. Martinec, P. Gasperini, B. Lund, D. Wolf, L. L. A. Vermeersen, and M. A. King, 2011. A benchmark study for glacial isostatic adjustment codes. Geophys. J. Int. 185: 106-132 doi:10.1111/j.1365-
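
    As a point of reference for the simplest level of complexity mentioned above, the eustatic term spreads an ice-mass change uniformly over the ocean surface (a standard definition, not a result of this benchmark):

    ```latex
    % eustatic (ocean-averaged) sea-level change for an ice-mass change
    % \Delta M_{ice}, with water density \rho_w and ocean area A_{ocean}
    \Delta S_{\mathrm{eus}} = -\frac{\Delta M_{\mathrm{ice}}}{\rho_{\mathrm{w}} A_{\mathrm{ocean}}}
    ```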

  17. Benchmarking of neutron production of heavy-ion transport codes

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Remec, I.; Ronningen, R. M.; Heilbronn, L.

    Document available in abstract form only, full text of document follows: Accurate prediction of radiation fields generated by heavy ion interactions is important in medical applications, space missions, and in design and operation of rare isotope research facilities. In recent years, several well-established computer codes in widespread use for particle and radiation transport calculations have been equipped with the capability to simulate heavy ion transport and interactions. To assess and validate these capabilities, we performed simulations of a series of benchmark-quality heavy ion experiments with the computer codes FLUKA, MARS15, MCNPX, and PHITS. We focus on the comparisons of secondary neutron production. Results are encouraging; however, further improvements in models and codes and additional benchmarking are required. (authors)

  18. GENOPT 2016: Design of a generalization-based challenge in global optimization

    NASA Astrophysics Data System (ADS)

    Battiti, Roberto; Sergeyev, Yaroslav; Brunato, Mauro; Kvasov, Dmitri

    2016-10-01

    While comparing results on benchmark functions is a widely used practice to demonstrate the competitiveness of global optimization algorithms, fixed benchmarks can lead to a negative data mining process. To avoid this effect, the GENOPT contest benchmarks can be used, which are based on randomized function generators designed for scientific experiments, with fixed statistical characteristics but individual variation among the generated instances. The generators are available to participants for off-line tests and online tuning schemes, but the final competition is based on random seeds communicated in the last phase through a cooperative process. A brief presentation and discussion of the methods and results obtained in the framework of the GENOPT contest are given in this contribution.

  19. Benchmarking worker nodes using LHCb productions and comparing with HEPSpec06

    NASA Astrophysics Data System (ADS)

    Charpentier, P.

    2017-10-01

    In order to estimate the capabilities of a computing slot with limited processing time, it is necessary to know its “power” with rather good precision. This allows, for example, pilot jobs to match a task for which the required CPU-work is known, or to define the number of events to be processed knowing the CPU-work per event. Otherwise one always runs the risk that the task is aborted because it exceeds the CPU capabilities of the resource. It also allows better accounting of the consumed resources. The traditional way CPU power has been estimated in WLCG since 2007 is with the HEP-Spec06 (HS06) benchmark suite, which was verified at the time to scale properly with a set of typical HEP applications. However, the hardware architecture of processors has evolved, and all WLCG experiments have moved to 64-bit applications and use compilation flags different from those advertised for running HS06. It is therefore interesting to check the scaling of HS06 with the HEP applications. For this purpose, we used CPU-intensive massive simulation productions from the LHCb experiment and compared their event throughput to the HS06 rating of the worker nodes. We also compared it with a much faster benchmark script used by the DIRAC framework, which LHCb uses to evaluate worker-node performance at run time. This contribution reports on the findings of these comparisons: the main observation is that scaling with HS06 is no longer fulfilled, while the fast benchmark scales better but is less precise. One can also clearly see that some hardware or software features, when enabled on the worker nodes, may enhance performance beyond the expectation from either benchmark, depending on external factors.
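
    The scaling check itself reduces to a ratio test: if HS06 tracked the workload perfectly, events per second divided by the node's HS06 rating would be constant across worker nodes. In this hedged sketch the node names, ratings and throughputs are invented:

    ```python
    nodes = {
        # node: (HS06 rating per core, measured events per second), hypothetical
        "wn01": (11.2, 0.52),
        "wn02": (14.8, 0.60),
        "wn03": (9.1, 0.45),
    }

    ratios = {n: evps / hs06 for n, (hs06, evps) in nodes.items()}
    mean = sum(ratios.values()) / len(ratios)
    for n, r in sorted(ratios.items()):
        print(f"{n}: events/s per HS06 = {r:.4f} ({100 * (r / mean - 1):+.1f}% vs mean)")
    # a large spread is the signature that HS06 no longer scales with the workload
    ```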

  20. Surgeons' experiences of receiving peer benchmarked feedback using patient-reported outcome measures: a qualitative study.

    PubMed

    Boyce, Maria B; Browne, John P; Greenhalgh, Joanne

    2014-06-27

    The use of patient-reported outcome measures (PROMs) to provide healthcare professionals with peer benchmarked feedback is growing. However, there is little evidence on the opinions of professionals on the value of this information in practice. The purpose of this research is to explore surgeon's experiences of receiving peer benchmarked PROMs feedback and to examine whether this information led to changes in their practice. This qualitative research employed a Framework approach. Semi-structured interviews were undertaken with surgeons who received peer benchmarked PROMs feedback. The participants included eleven consultant orthopaedic surgeons in the Republic of Ireland. Five themes were identified: conceptual, methodological, practical, attitudinal, and impact. A typology was developed based on the attitudinal and impact themes from which three distinct groups emerged. 'Advocates' had positive attitudes towards PROMs and confirmed that the information promoted a self-reflective process. 'Converts' were uncertain about the value of PROMs, which reduced their inclination to use the data. 'Sceptics' had negative attitudes towards PROMs and claimed that the information had no impact on their behaviour. The conceptual, methodological and practical factors were linked to the typology. Surgeons had mixed opinions on the value of peer benchmarked PROMs data. Many appreciated the feedback as it reassured them that their practice was similar to their peers. However, PROMs information alone was considered insufficient to help identify opportunities for quality improvements. The reasons for the observed reluctance of participants to embrace PROMs can be categorised into conceptual, methodological, and practical factors. Policy makers and researchers need to increase professionals' awareness of the numerous purposes and benefits of using PROMs, challenge the current methods to measure performance using PROMs, and reduce the burden of data collection and information dissemination on routine practice.

  1. Benchmarking the ATLAS software through the Kit Validation engine

    NASA Astrophysics Data System (ADS)

    De Salvo, Alessandro; Brasolin, Franco

    2010-04-01

    The measurement of the experiment software performance is a very important metric in order to choose the most effective resources to be used and to discover the bottlenecks of the code implementation. In this work we present the benchmark techniques used to measure the ATLAS software performance through the ATLAS offline testing engine Kit Validation and the online portal Global Kit Validation. The performance measurements, the data collection, the online analysis and display of the results will be presented. The results of the measurement on different platforms and architectures will be shown, giving a full report on the CPU power and memory consumption of the Monte Carlo generation, simulation, digitization and reconstruction of the most CPU-intensive channels. The impact of the multi-core computing on the ATLAS software performance will also be presented, comparing the behavior of different architectures when increasing the number of concurrent processes. The benchmark techniques described in this paper have been used in the HEPiX group since the beginning of 2008 to help defining the performance metrics for the High Energy Physics applications, based on the real experiment software.

  2. Benchmarking a Visual-Basic based multi-component one-dimensional reactive transport modeling tool

    NASA Astrophysics Data System (ADS)

    Torlapati, Jagadish; Prabhakar Clement, T.

    2013-01-01

    We present the details of a comprehensive numerical modeling tool, RT1D, which can be used for simulating biochemical and geochemical reactive transport problems. The code can be run within the standard Microsoft EXCEL Visual Basic platform, and it does not require any additional software tools. The code can be easily adapted by others for simulating different types of laboratory-scale reactive transport experiments. We illustrate the capabilities of the tool by solving five benchmark problems with varying levels of reaction complexity. These literature-derived benchmarks are used to highlight the versatility of the code for solving a variety of practical reactive transport problems. The benchmarks are described in detail to provide a comprehensive database, which can be used by model developers to test other numerical codes. The VBA code presented in the study is a practical tool that can be used by laboratory researchers for analyzing both batch and column datasets within an EXCEL platform.
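
    For flavor, the kind of problem such a tool solves can be sketched with a minimal explicit finite-difference scheme for 1-D advection-dispersion with first-order decay. This is an illustrative Python sketch rather than the tool's VBA, with invented parameters:

    ```python
    import numpy as np

    L, nx = 1.0, 101                  # column length (m), grid points
    v, D, lam = 1e-2, 1e-4, 1e-3      # velocity (m/s), dispersion (m2/s), decay (1/s)
    dx = L / (nx - 1)
    dt = 0.5 * min(dx / v, dx**2 / (2 * D))   # stable step for this explicit scheme

    c = np.zeros(nx)
    c[0] = 1.0                                # constant inlet concentration
    for _ in range(300):
        adv = -v * (c[1:-1] - c[:-2]) / dx                 # upwind advection
        disp = D * (c[2:] - 2 * c[1:-1] + c[:-2]) / dx**2  # central dispersion
        c[1:-1] += dt * (adv + disp - lam * c[1:-1])       # first-order decay
        c[-1] = c[-2]                                      # zero-gradient outlet

    print(f"front position ~ {dx * np.argmin(np.abs(c - 0.5)):.3f} m")
    ```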

  3. Analyzing the BBOB results by means of benchmarking concepts.

    PubMed

    Mersmann, O; Preuss, M; Trautmann, H; Bischl, B; Weihs, C

    2015-01-01

    We present methods to answer two basic questions that arise when benchmarking optimization algorithms. The first is: which algorithm is the "best" one? The second is: which algorithm should I use for my real-world problem? Both are connected, and neither is easy to answer. We present a theoretical framework for designing and analyzing the raw data of such benchmark experiments. This represents a first step in answering the aforementioned questions. The 2009 and 2010 BBOB benchmark results are analyzed by means of this framework, and we derive insight regarding the answers to the two questions. Furthermore, we discuss how to properly aggregate rankings from algorithm evaluations on individual problems into a consensus, its theoretical background, and which common pitfalls should be avoided. Finally, we address the grouping of test problems into sets with similar optimizer rankings and investigate whether these are reflected by already proposed test problem characteristics, finding that this is not always the case.
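
    One simple way to build such a consensus is a Borda count over the per-problem rankings; the algorithm names and rankings below are invented, and the paper's caveats about aggregation pitfalls apply:

    ```python
    from collections import defaultdict

    rankings = [  # best-to-worst ranking of algorithms on each benchmark problem
        ["CMA-ES", "BFGS", "NelderMead"],
        ["CMA-ES", "NelderMead", "BFGS"],
        ["BFGS", "CMA-ES", "NelderMead"],
    ]

    scores = defaultdict(int)
    for ranking in rankings:
        for pos, alg in enumerate(ranking):
            scores[alg] += len(ranking) - 1 - pos   # Borda points: higher is better

    consensus = sorted(scores, key=scores.get, reverse=True)
    print("consensus ranking:", consensus, dict(scores))
    ```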

  4. Adaptive Critic Neural Network-Based Terminal Area Energy Management and Approach and Landing Guidance

    NASA Technical Reports Server (NTRS)

    Grantham, Katie

    2003-01-01

    Reusable Launch Vehicles (RLVs) have different mission requirements than the Space Shuttle, which is used for benchmark guidance design. Therefore, alternative Terminal Area Energy Management (TAEM) and Approach and Landing (A/L) Guidance schemes can be examined in the interest of cost reduction. A neural network based solution for a finite horizon trajectory optimization problem is presented in this paper. In this approach the optimal trajectory of the vehicle is produced by adaptive critic based neural networks, which were trained off-line to maintain a gradual glideslope.

  5. SU-E-T-466: Implementation of An Extension Module for Dose Response Models in the TOPAS Monte Carlo Toolkit

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Ramos-Mendez, J; Faddegon, B; Perl, J

    2015-06-15

    Purpose: To develop and verify an extension to TOPAS for calculation of dose response models (TCP/NTCP). TOPAS wraps and extends Geant4. Methods: The TOPAS DICOM interface was extended to include structure contours, for subsequent calculation of DVHs and TCP/NTCP. The following dose response models were implemented: Lyman-Kutcher-Burman (LKB), critical element (CE), population-based critical volume (CV), parallel-serial, a sigmoid-based model of Niemierko for NTCP and TCP, and a Poisson-based model for TCP. For verification, results for the parallel-serial and Poisson models, with 6 MV x-ray dose distributions calculated with TOPAS and Pinnacle v9.2, were compared to data from the benchmark configuration of the AAPM Task Group 166 (TG166). We provide a benchmark configuration suitable for proton therapy along with results for the implementation of the Niemierko, CV and CE models. Results: The maximum difference in DVH calculated with Pinnacle and TOPAS was 2%. Differences between TG166 data and Monte Carlo calculations of up to 4.2%±6.1% were found for the parallel-serial model and up to 1.0%±0.7% for the Poisson model (including the uncertainty due to lack of knowledge of the point spacing in TG166). For the CE, CV and Niemierko models, the discrepancies between the Pinnacle and TOPAS results were 74.5%, 34.8% and 52.1% when using 29.7 cGy point spacing, the differences being highly sensitive to dose spacing. On the other hand, with our proposed benchmark configuration, the largest differences were 12.05%±0.38%, 3.74%±1.6%, 1.57%±4.9% and 1.97%±4.6% for the CE, CV, Niemierko and LKB models, respectively. Conclusion: Several dose response models were successfully implemented with the extension module. Reference data were calculated for future benchmarking. Dose response calculated for the different models varied much more widely for the TG166 benchmark than for the proposed benchmark, which had much lower sensitivity to the choice of DVH dose points. This work was supported by National Cancer Institute Grant R01CA140735.
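
    As a concrete illustration of one of the named models, a minimal LKB NTCP evaluation from a differential DVH via the generalized EUD is sketched below; the DVH and the parameter values (TD50, m, n) are illustrative, not clinical values and not from TG166:

    ```python
    import math

    def gEUD(dvh, n):
        # dvh: list of (dose in Gy, fractional volume); a = 1/n is the
        # volume-effect exponent of the generalized EUD
        a = 1.0 / n
        return sum(v * d ** a for d, v in dvh) ** (1.0 / a)

    def lkb_ntcp(dvh, td50=28.0, m=0.18, n=0.7):
        # LKB model: NTCP = Phi((gEUD - TD50) / (m * TD50))
        t = (gEUD(dvh, n) - td50) / (m * td50)
        return 0.5 * (1.0 + math.erf(t / math.sqrt(2.0)))

    dvh = [(10.0, 0.4), (20.0, 0.4), (35.0, 0.2)]  # toy differential DVH
    print(f"NTCP = {lkb_ntcp(dvh):.3f}")
    ```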

  6. Benchmarking care for very low birthweight infants in Ireland and Northern Ireland.

    PubMed

    Murphy, B P; Armstrong, K; Ryan, C A; Jenkins, J G

    2010-01-01

    Benchmarking is the process through which best practice is identified and continuous quality improvement is pursued through comparison and sharing. The Vermont Oxford Neonatal Network (VON) is the largest international external reference centre for very low birth weight (VLBW) infants. This report from 2004-7 compares survival and morbidity throughout Ireland and benchmarks these results against VON. A standardised VON database for VLBW infants was created in 14 participating centres across Ireland and Northern Ireland. Data on 716 babies were submitted in 2004, increasing to 796 babies in 2007, with centres caring for between 10 and 120 VLBW infants per year. In 2007, mortality rates in VLBW infants varied from 4% to 19%. Standardised mortality ratios indicate that the number of deaths observed was not significantly different from the number expected, based on the characteristics of infants treated. There was no difference in the incidence of severe intraventricular haemorrhage between all-Ireland and VON groups (5% vs 6%, respectively). All-Ireland rates for chronic lung disease (CLD; 15-21%) remained lower than rates seen in the VON group (24-28%). The rates of late onset nosocomial infection in the all-Ireland group (25-26%) remained double those in the VON group (12-13%). This is the first all-Ireland international benchmarking report in any medical specialty. Survival, severe intraventricular haemorrhage and CLD compare favourably with international standards, but rates of nosocomial infection in neonatal units are concerning. Benchmarking clinical outcomes is critical for quality improvement and informing decisions concerning neonatal intensive care service provision.
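
    The standardised mortality ratio used in the report is simple to compute: observed deaths divided by the number expected from the characteristics of the infants treated. Below is a hedged sketch with an approximate 95% confidence interval via Byar's method; the counts are illustrative, not the all-Ireland data.

    ```python
    # Sketch of a standardized mortality ratio (SMR) with an approximate
    # 95% CI (Byar's method). The observed/expected counts are illustrative.
    import math

    def smr_with_ci(observed, expected):
        smr = observed / expected
        z = 1.96
        lower = observed * (1 - 1 / (9 * observed)
                            - z / (3 * math.sqrt(observed))) ** 3 / expected
        upper = (observed + 1) * (1 - 1 / (9 * (observed + 1))
                                  + z / (3 * math.sqrt(observed + 1))) ** 3 / expected
        return smr, lower, upper

    smr, lo, hi = smr_with_ci(observed=42, expected=48.5)
    print(f"SMR = {smr:.2f} (95% CI {lo:.2f}-{hi:.2f})")
    ```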

  7. Validation of Shielding Analysis Capability of SuperMC with SINBAD

    NASA Astrophysics Data System (ADS)

    Chen, Chaobin; Yang, Qi; Wu, Bin; Han, Yuncheng; Song, Jing

    2017-09-01

    The shielding analysis capability of SuperMC was validated with the Shielding Integral Benchmark Archive Database (SINBAD). SINBAD, compiled by RSICC and the NEA, includes numerous benchmark experiments performed with the D-T fusion neutron source facilities of OKTAVIAN, FNS, IPPE, etc. The results from SuperMC simulations were compared with experimental data and MCNP results. Very good agreement, with deviations lower than 1%, was achieved, suggesting that SuperMC is reliable for shielding calculations.

  8. Benchmark Problems of the Geothermal Technologies Office Code Comparison Study

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    White, Mark D.; Podgorney, Robert; Kelkar, Sharad M.

    A diverse suite of numerical simulators is currently being applied to predict or understand the performance of enhanced geothermal systems (EGS). To build confidence and identify critical development needs for these analytical tools, the United States Department of Energy, Geothermal Technologies Office has sponsored a Code Comparison Study (GTO-CCS), with participants from universities, industry, and national laboratories. A principal objective for the study was to create a community forum for improvement and verification of numerical simulators for EGS modeling. Teams participating in the study were those representing U.S. national laboratories, universities, and industries, and each team brought unique numerical simulation capabilities to bear on the problems. Two classes of problems were developed during the study, benchmark problems and challenge problems. The benchmark problems were structured to test the ability of the collection of numerical simulators to solve various combinations of coupled thermal, hydrologic, geomechanical, and geochemical processes. This class of problems was strictly defined in terms of properties, driving forces, initial conditions, and boundary conditions. Study participants submitted solutions to problems for which their simulation tools were deemed capable or nearly capable. Some participating codes were originally developed for EGS applications whereas some others were designed for different applications but can simulate processes similar to those in EGS. Solution submissions from both were encouraged. In some cases, participants made small incremental changes to their numerical simulation codes to address specific elements of the problem, and in other cases participants submitted solutions with existing simulation tools, acknowledging the limitations of the code. The challenge problems were based on the enhanced geothermal systems research conducted at Fenton Hill, near Los Alamos, New Mexico, between 1974 and 1995. The problems involved two phases of research, stimulation, development, and circulation in two separate reservoirs. The challenge problems had specific questions to be answered via numerical simulation in three topical areas: 1) reservoir creation/stimulation, 2) reactive and passive transport, and 3) thermal recovery. Whereas the benchmark class of problems were designed to test capabilities for modeling coupled processes under strictly specified conditions, the stated objective for the challenge class of problems was to demonstrate what new understanding of the Fenton Hill experiments could be realized via the application of modern numerical simulation tools by recognized expert practitioners.

  9. Gatemon Benchmarking and Two-Qubit Operation

    NASA Astrophysics Data System (ADS)

    Casparis, Lucas; Larsen, Thorvald; Olsen, Michael; Petersson, Karl; Kuemmeth, Ferdinand; Krogstrup, Peter; Nygard, Jesper; Marcus, Charles

    Recent experiments have demonstrated superconducting transmon qubits with semiconductor nanowire Josephson junctions. These hybrid gatemon qubits utilize the field-effect tunability unique to semiconductors to allow complete qubit control using gate voltages, potentially a technological advantage over conventional flux-controlled transmons. Here, we present experiments with a two-qubit gatemon circuit. We characterize qubit coherence and stability and use randomized benchmarking to demonstrate single-qubit gate errors of ~0.5% for all gates, including voltage-controlled Z rotations. We show coherent capacitive coupling between two gatemons and coherent SWAP operations. Finally, we perform a two-qubit controlled-phase gate with an estimated fidelity of ~91%, demonstrating the potential of gatemon qubits for building scalable quantum processors. We acknowledge financial support from Microsoft Project Q and the Danish National Research Foundation.

  10. FDNS CFD Code Benchmark for RBCC Ejector Mode Operation

    NASA Technical Reports Server (NTRS)

    Holt, James B.; Ruf, Joe

    1999-01-01

    Computational Fluid Dynamics (CFD) analysis results are compared with benchmark quality test data from the Propulsion Engineering Research Center's (PERC) Rocket Based Combined Cycle (RBCC) experiments to verify fluid dynamic code and application procedures. RBCC engine flowpath development will rely on CFD applications to capture the multi-dimensional fluid dynamic interactions and to quantify their effect on the RBCC system performance. Therefore, the accuracy of these CFD codes must be determined through detailed comparisons with test data. The PERC experiments build upon the well-known 1968 rocket-ejector experiments of Odegaard and Stroup by employing advanced optical and laser based diagnostics to evaluate mixing and secondary combustion. The Finite Difference Navier Stokes (FDNS) code was used to model the fluid dynamics of the PERC RBCC ejector mode configuration. Analyses were performed for both Diffusion and Afterburning (DAB) and Simultaneous Mixing and Combustion (SMC) test conditions. Results from both the 2D and the 3D models are presented.

  11. Assessing validity of observational intervention studies – the Benchmarking Controlled Trials

    PubMed Central

    Malmivaara, Antti

    2016-01-01

    Background: Benchmarking Controlled Trial (BCT) is a concept which covers all observational studies aiming to assess the impact of interventions or health care system features on patients and populations. Aims: To create and pilot test a checklist for appraising the methodological validity of a BCT. Methods: The checklist was created by extracting the most essential elements from the comprehensive set of criteria in the previous paper on BCTs. Checklists and scientific papers on observational studies and respective systematic reviews were also utilized. Ten BCTs published in the Lancet and in the New England Journal of Medicine were used to assess the feasibility of the created checklist. Results: The appraised studies seem to have several methodological limitations, some of which could be avoided in the planning, conducting and reporting phases of the studies. Conclusions: The checklist can be used for planning, conducting, reporting, reviewing, and critical reading of observational intervention studies. However, the piloted checklist should be validated in further studies. Key messages: BCTs cover all observational studies aiming to assess the impact of interventions or health care system features on patients and populations; this paper presents a checklist for appraising their methodological validity and pilot-tests it with ten BCTs published in leading medical journals; the appraised studies have several avoidable methodological limitations; and the checklist can be used for planning, conducting, reporting, reviewing, and critical reading of observational intervention studies. PMID:27238631

  12. Simple mathematical law benchmarks human confrontations.

    PubMed

    Johnson, Neil F; Medina, Pablo; Zhao, Guannan; Messinger, Daniel S; Horgan, John; Gill, Paul; Bohorquez, Juan Camilo; Mattson, Whitney; Gangi, Devon; Qi, Hong; Manrique, Pedro; Velasquez, Nicolas; Morgenstern, Ana; Restrepo, Elvira; Johnson, Nicholas; Spagat, Michael; Zarama, Roberto

    2013-12-10

    Many high-profile societal problems involve an individual or group repeatedly attacking another - from child-parent disputes, sexual violence against women, civil unrest, violent conflicts and acts of terror, to current cyber-attacks on national infrastructure and ultrafast cyber-trades attacking stockholders. There is an urgent need to quantify the likely severity and timing of such future acts, shed light on likely perpetrators, and identify intervention strategies. Here we present a combined analysis of multiple datasets across all these domains which account for >100,000 events, and show that a simple mathematical law can benchmark them all. We derive this benchmark and interpret it, using a minimal mechanistic model grounded by state-of-the-art fieldwork. Our findings provide quantitative predictions concerning future attacks; a tool to help detect common perpetrators and abnormal behaviors; insight into the trajectory of a 'lone wolf'; identification of a critical threshold for spreading a message or idea among perpetrators; an intervention strategy to erode the most lethal clusters; and more broadly, a quantitative starting point for cross-disciplinary theorizing about human aggression at the individual and group level, in both real and online worlds.
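
    The "simple mathematical law" described is a progress-curve power law in which the interval between successive attacks scales as tau_n = tau_1 * n^(-b). As a rough illustration of how such a law can be fitted (not the authors' exact analysis), the sketch below performs a least-squares fit in log-log space; the inter-event times are synthetic, not data from the study.

    ```python
    # Hedged sketch: fit tau_n = tau_1 * n**(-b) to inter-event intervals
    # by ordinary least squares on log(tau_n) vs log(n). Data are synthetic.
    import math

    def fit_power_law(intervals):
        """Fit log(tau_n) = log(tau_1) - b*log(n); return (tau_1, b)."""
        xs = [math.log(n + 1) for n in range(len(intervals))]
        ys = [math.log(t) for t in intervals]
        n = len(xs)
        mean_x, mean_y = sum(xs) / n, sum(ys) / n
        slope_num = sum((x - mean_x) * (y - mean_y) for x, y in zip(xs, ys))
        slope_den = sum((x - mean_x) ** 2 for x in xs)
        slope = slope_num / slope_den
        return math.exp(mean_y - slope * mean_x), -slope

    intervals = [100.0, 55.0, 38.0, 30.0, 24.0]  # days between successive attacks
    tau1, b = fit_power_law(intervals)
    print(f"tau_1 ~ {tau1:.1f} days, escalation exponent b ~ {b:.2f}")
    ```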

  13. Simple mathematical law benchmarks human confrontations

    NASA Astrophysics Data System (ADS)

    Johnson, Neil F.; Medina, Pablo; Zhao, Guannan; Messinger, Daniel S.; Horgan, John; Gill, Paul; Bohorquez, Juan Camilo; Mattson, Whitney; Gangi, Devon; Qi, Hong; Manrique, Pedro; Velasquez, Nicolas; Morgenstern, Ana; Restrepo, Elvira; Johnson, Nicholas; Spagat, Michael; Zarama, Roberto

    2013-12-01

    Many high-profile societal problems involve an individual or group repeatedly attacking another - from child-parent disputes, sexual violence against women, civil unrest, violent conflicts and acts of terror, to current cyber-attacks on national infrastructure and ultrafast cyber-trades attacking stockholders. There is an urgent need to quantify the likely severity and timing of such future acts, shed light on likely perpetrators, and identify intervention strategies. Here we present a combined analysis of multiple datasets across all these domains which account for >100,000 events, and show that a simple mathematical law can benchmark them all. We derive this benchmark and interpret it, using a minimal mechanistic model grounded by state-of-the-art fieldwork. Our findings provide quantitative predictions concerning future attacks; a tool to help detect common perpetrators and abnormal behaviors; insight into the trajectory of a `lone wolf'; identification of a critical threshold for spreading a message or idea among perpetrators; an intervention strategy to erode the most lethal clusters; and more broadly, a quantitative starting point for cross-disciplinary theorizing about human aggression at the individual and group level, in both real and online worlds.

  14. MIPS: The good, the bad and the useful

    NASA Technical Reports Server (NTRS)

    Richardson, Jerry K.

    1987-01-01

    Many authors are critical of the use of MIPS (Millions of Instructions Per Second) as a measure of computer power, and some feel that MIPS are meaningless. While there is justification for some of the criticism of MIPS, sometimes the criticism is carried too far. MIPS can be a useful number for planning and estimating purposes when used in a homogeneous computer environment. Comparisons between published MIPS ratings and benchmark results reveal that there does exist a high positive correlation between MIPS and tested performance, given a homogeneous computer environment. MIPS should be understood so as not to be misused. It is not correct that the use of MIPS is always inappropriate or inaccurate.
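
    The claim about correlation between MIPS ratings and tested performance reduces to a one-line statistic. A minimal sketch with Pearson's r follows; the ratings and benchmark scores are invented for illustration.

    ```python
    # Minimal sketch: Pearson correlation between published MIPS ratings and
    # measured benchmark throughput. Machine ratings below are illustrative.
    import math

    def pearson_r(xs, ys):
        n = len(xs)
        mx, my = sum(xs) / n, sum(ys) / n
        cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
        sx = math.sqrt(sum((x - mx) ** 2 for x in xs))
        sy = math.sqrt(sum((y - my) ** 2 for y in ys))
        return cov / (sx * sy)

    mips_ratings = [5.0, 10.0, 16.0, 28.0, 40.0]
    benchmark_scores = [4.2, 11.5, 14.8, 30.1, 37.9]  # jobs/hour, same workload
    print(f"r = {pearson_r(mips_ratings, benchmark_scores):.3f}")
    ```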

  15. Outcome Evaluation of the Minnesota Reading Corps PreK Program. Issue Brief

    ERIC Educational Resources Information Center

    Markovitz, Carrie E.; Hernandez, Marc W.; Hedberg, Eric C.; Silberglitt, Benjamin

    2015-01-01

    Two of the most important educational benchmarks are readiness for kindergarten and reading at grade-level by third grade. Focusing on emergent literacy skills during the PreK years boosts reading abilities and helps young learners reach these critical goals. Research validates what works in teaching children to read: Well-trained tutors and…

  16. The Employee Survey: An Important Tool for Changing the Culture of an Organization

    ERIC Educational Resources Information Center

    Drapeau, Suzanne

    2004-01-01

    A regularly administered employee opinion survey is an important institutional outcomes measurement tool. It can provide robust benchmarks and standards for a whole range of dimensions of a healthy workplace. This kind of survey should also be a critically important component of the process of engaging employees in the development of the…

  17. Identification of the Most Critical Content Knowledge Base for Middle School Science Teachers

    ERIC Educational Resources Information Center

    Saderholm, Jon C.; Tretter, Thomas R.

    2008-01-01

    Much has been said about what science content students need to learn (e.g., "Benchmarks for Science Literacy, National Science Education Standards"). Less has been said about what science content teachers need to know to teach the content students are expected to learn. This study analyzed four standards documents and assessment frameworks to…

  18. Family Wellbeing of Individuals with Autism Spectrum Disorder: A Scoping Review

    ERIC Educational Resources Information Center

    Tint, Ami; Weiss, Jonathan A.

    2016-01-01

    Families play an important role in supporting individuals with autism spectrum disorder across the lifespan. Indicators of family wellbeing can help to establish benchmarks for service provision and evaluation; however, a critical first step is a clear understanding of the construct in question. The purpose of the current scoping review was to (a)…

  19. Discussions in a Socrates Café: Implications for Critical Thinking in Teacher Education

    ERIC Educational Resources Information Center

    Piro, Jody; Anderson, Gina

    2015-01-01

    The purpose of this study was to benchmark the types of Socratic questioning that were occurring in a Socrates Café, an online discussion forum, in a graduate-level diversity course in teacher education. The Universal Intellectual Standards were used to analyze Socratic questioning. Results suggested that the nine Universal Intellectual Standards…

  20. Toward an Understanding of People Management Issues in SMEs: a South-Eastern European Perspective

    ERIC Educational Resources Information Center

    Szamosi, Leslie T.; Duxbury, Linda; Higgins, Chris

    2004-01-01

    The focus of this paper is on developing an understanding, and benchmarking, of human resource management (HRM) issues in small and medium enterprises (SMEs) in South-Eastern Europe. The importance of SMEs in helping transition-based economies develop is critical, but at the same time the research indicates that the movement toward westernized business…

  1. A formative evaluation of CU-SeeMe

    NASA Astrophysics Data System (ADS)

    Bibeau, Michael

    1995-02-01

    CU-SeeMe is a video conferencing software package that was designed and programmed at Cornell University. The program works with the TCP/IP network protocol and allows two or more parties to conduct a real-time video conference with full audio support. In this paper we evaluate CU-SeeMe through the process of Formative Evaluation. We first perform a Critical Review of the software using a subset of the Smith and Mosier Guidelines for Human-Computer Interaction. Next, we empirically review the software interface through a series of benchmark tests that are derived directly from a set of scenarios. The scenarios attempt to model real world situations that might be encountered by an individual in the target user class. Designing benchmark tasks becomes a natural and straightforward process when they are derived from the scenario set. Empirical measures are taken for each task, including completion times and error counts. These measures are accompanied by critical incident analysis [2, 7, 13], which serves to identify problems with the interface and the cognitive roots of those problems. The critical incidents reported by participants are accompanied by explanations of what caused the problem and why. This helps in the process of formulating solutions for observed usability problems. All the testing results are combined in the Appendix in an illustrated partial redesign of the CU-SeeMe interface.

  2. The preanalytical optimization of blood cultures: a review and the clinical importance of benchmarking in 5 Belgian hospitals.

    PubMed

    Willems, Elise; Smismans, Annick; Cartuyvels, Reinoud; Coppens, Guy; Van Vaerenbergh, Kristien; Van den Abeele, Anne-Marie; Frans, Johan

    2012-05-01

    Bloodstream infections remain a major challenge in medicine. Optimal detection of pathogens is only possible if the quality of preanalytical factors is thoroughly controlled. Since the laboratory is responsible for this preanalytical phase, the quality control of critical factors should be integrated in its quality control program. The numerous recommendations regarding blood culture collection contain controversies. Only an unambiguous guideline permits standardization and interlaboratory quality control. We present an evidence-based concise guideline of critical preanalytical determinants for blood culture collection and summarize key performance indicators with their concomitant target values. In an attempt to benchmark, we compared the true-positive rate, contamination rate, and collected blood volume of blood culture bottles in 5 Belgian hospital laboratories. The true-positive blood culture rate fell within the acceptance criteria previously defined by Baron et al. (2005) in all 5 hospitals, whereas the contamination rate exceeded the target value in 4 locations. Most unexpectedly, in each of the 5 laboratories, more than one third of the blood culture bottles were incorrectly filled, irrespective of the manufacturer of the blood culture vials. As a consequence of this shortcoming, one manufacturer recently developed an automatic blood volume monitoring system. In conclusion, clear recommendations for standardized blood culture collection combined with quality control of critical factors of the preanalytical phase are essential for diagnostic blood culture improvement. Copyright © 2012 Elsevier Inc. All rights reserved.

  3. Adaptive unified continuum FEM modeling of a 3D FSI benchmark problem.

    PubMed

    Jansson, Johan; Degirmenci, Niyazi Cem; Hoffman, Johan

    2017-09-01

    In this paper, we address a 3D fluid-structure interaction benchmark problem that represents important characteristics of biomedical modeling. We present a goal-oriented adaptive finite element methodology for incompressible fluid-structure interaction based on a streamline diffusion-type stabilization of the balance equations for mass and momentum for the entire continuum in the domain, which is implemented in the Unicorn/FEniCS software framework. A phase marker function and its corresponding transport equation are introduced to select the constitutive law, where the mesh tracks the discontinuous fluid-structure interface. This results in a unified simulation method for fluids and structures. We present detailed results for the benchmark problem compared with experiments, together with a mesh convergence study. Copyright © 2016 John Wiley & Sons, Ltd.

  4. Model Prediction Results for 2007 Ultrasonic Benchmark Problems

    NASA Astrophysics Data System (ADS)

    Kim, Hak-Joon; Song, Sung-Jin

    2008-02-01

    The World Federation of NDE Centers (WFNDEC) has addressed two types of problems for the 2007 ultrasonic benchmark problems: prediction of side-drilled hole responses with 45° and 60° refracted shear waves, and effects of surface curvatures on the ultrasonic responses of flat-bottomed holes. To solve this year's ultrasonic benchmark problems, we applied multi-Gaussian beam models for calculation of ultrasonic beam fields, and the Kirchhoff approximation and the separation of variables method for calculation of far-field scattering amplitudes of flat-bottomed holes and side-drilled holes, respectively. In this paper, we present comparison results of model predictions to experiments for side-drilled holes and discuss the effect of interface curvatures on ultrasonic responses by comparing peak-to-peak amplitudes of flat-bottomed hole responses with different sizes and interface curvatures.

  5. Mechanical design criteria for intervertebral disc tissue engineering.

    PubMed

    Nerurkar, Nandan L; Elliott, Dawn M; Mauck, Robert L

    2010-04-19

    Due to the inability of current clinical practices to restore function to degenerated intervertebral discs, the arena of disc tissue engineering has received substantial attention in recent years. Despite tremendous growth and progress in this field, translation to clinical implementation has been hindered by a lack of well-defined functional benchmarks. Because successful replacement of the disc is contingent upon replication of some or all of its complex mechanical behaviors, it is critically important that disc mechanics be well characterized in order to establish discrete functional goals for tissue engineering. In this review, the key functional signatures of the intervertebral disc are discussed and used to propose a series of native tissue benchmarks to guide the development of engineered replacement tissues. These benchmarks include measures of mechanical function under tensile, compressive, and shear deformations for the disc and its substructures. In some cases, important functional measures are identified that have yet to be measured in the native tissue. Ultimately, native tissue benchmark values are compared to measurements that have been made on engineered disc tissues, identifying where functional equivalence was achieved, and where there remain opportunities for advancement. Several excellent reviews exist regarding disc composition and structure, as well as recent tissue engineering strategies; therefore this review will remain focused on the functional aspects of disc tissue engineering. Copyright 2009 Elsevier Ltd. All rights reserved.

  6. Properties of model-averaged BMDLs: a study of model averaging in dichotomous response risk estimation.

    PubMed

    Wheeler, Matthew W; Bailer, A John

    2007-06-01

    Model averaging (MA) has been proposed as a method of accounting for model uncertainty in benchmark dose (BMD) estimation. The technique has been used to average BMD dose estimates derived from dichotomous dose-response experiments, microbial dose-response experiments, as well as observational epidemiological studies. While MA is a promising tool for the risk assessor, a previous study suggested that the simple strategy of averaging individual models' BMD lower limits did not yield interval estimators that met nominal coverage levels in certain situations, and this performance was very sensitive to the underlying model space chosen. We present a different, more computationally intensive, approach in which the BMD is estimated using the average dose-response model and the corresponding benchmark dose lower bound (BMDL) is computed by bootstrapping. This method is illustrated with TiO(2) dose-response rat lung cancer data, and then systematically studied through an extensive Monte Carlo simulation. The results of this study suggest that the MA-BMD, estimated using this technique, performs better, in terms of bias and coverage, than the previous MA methodology. Further, the MA-BMDL achieves nominal coverage in most cases, and is superior to picking the "best fitting model" when estimating the benchmark dose. Although these results show utility of MA for benchmark dose risk estimation, they continue to highlight the importance of choosing an adequate model space as well as proper model fit diagnostics.
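
    As a rough illustration of the model-averaging step described above (not the authors' exact implementation), the sketch below combines two fitted dose-response curves with AIC-based weights and solves the averaged curve for the benchmark dose at 10% extra risk. The fitted parameters and AIC values are placeholders, and the bootstrap that yields the BMDL is only indicated in a comment.

    ```python
    # Hedged sketch: AIC-weighted model averaging of dose-response curves and
    # a bisection solve for the BMD. All parameters and AICs are placeholders.
    import math

    def logistic(d, a, b):      # illustrative fitted model 1
        return 1.0 / (1.0 + math.exp(-(a + b * d)))

    def weibull(d, g, a, b):    # illustrative fitted model 2
        return g + (1.0 - g) * (1.0 - math.exp(-b * d ** a))

    models = [  # (fitted probability-of-response curve, AIC of its fit)
        (lambda d: logistic(d, -3.0, 0.05), 142.1),
        (lambda d: weibull(d, 0.02, 1.1, 0.01), 143.7),
    ]
    min_aic = min(aic for _, aic in models)
    weights = [math.exp(-0.5 * (aic - min_aic)) for _, aic in models]
    weights = [w / sum(weights) for w in weights]

    def averaged_risk(d):
        return sum(w * f(d) for w, (f, _) in zip(weights, models))

    def extra_risk(d):
        p0 = averaged_risk(0.0)
        return (averaged_risk(d) - p0) / (1.0 - p0)

    def bmd(bmr=0.10, lo=0.0, hi=500.0):
        """Bisection for the dose with extra risk == BMR on the averaged curve."""
        for _ in range(60):
            mid = 0.5 * (lo + hi)
            lo, hi = (mid, hi) if extra_risk(mid) < bmr else (lo, mid)
        return 0.5 * (lo + hi)

    print(f"model-averaged BMD at 10% extra risk: {bmd():.1f}")
    # BMDL: refit the models to bootstrap resamples of the data, recompute the
    # averaged BMD each time, and take e.g. the 5th percentile of those BMDs.
    ```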

  7. Nonlinear viscoplasticity in ASPECT: benchmarking and applications to subduction

    NASA Astrophysics Data System (ADS)

    Glerum, Anne; Thieulot, Cedric; Fraters, Menno; Blom, Constantijn; Spakman, Wim

    2018-03-01

    ASPECT (Advanced Solver for Problems in Earth's ConvecTion) is a massively parallel finite element code originally designed for modeling thermal convection in the mantle with a Newtonian rheology. The code is characterized by modern numerical methods, high-performance parallelism and extensibility. This last characteristic is illustrated in this work: we have extended the use of ASPECT from global thermal convection modeling to upper-mantle-scale applications of subduction.

    Subduction modeling generally requires the tracking of multiple materials with different properties and with nonlinear viscous and viscoplastic rheologies. To this end, we implemented a frictional plasticity criterion that is combined with a viscous diffusion and dislocation creep rheology. Because ASPECT uses compositional fields to represent different materials, all material parameters are made dependent on a user-specified number of fields.

    The goal of this paper is primarily to describe and verify our implementations of complex, multi-material rheology by reproducing the results of four well-known two-dimensional benchmarks: the indentor benchmark, the brick experiment, the sandbox experiment and the slab detachment benchmark. Furthermore, we aim to provide hands-on examples for prospective users by demonstrating the use of multi-material viscoplasticity with three-dimensional, thermomechanical models of oceanic subduction, putting ASPECT on the map as a community code for high-resolution, nonlinear rheology subduction modeling.

  8. Treatment planning for spinal radiosurgery: A competitive multiplatform benchmark challenge.

    PubMed

    Moustakis, Christos; Chan, Mark K H; Kim, Jinkoo; Nilsson, Joakim; Bergman, Alanah; Bichay, Tewfik J; Palazon Cano, Isabel; Cilla, Savino; Deodato, Francesco; Doro, Raffaela; Dunst, Jürgen; Eich, Hans Theodor; Fau, Pierre; Fong, Ming; Haverkamp, Uwe; Heinze, Simon; Hildebrandt, Guido; Imhoff, Detlef; de Klerck, Erik; Köhn, Janett; Lambrecht, Ulrike; Loutfi-Krauss, Britta; Ebrahimi, Fatemeh; Masi, Laura; Mayville, Alan H; Mestrovic, Ante; Milder, Maaike; Morganti, Alessio G; Rades, Dirk; Ramm, Ulla; Rödel, Claus; Siebert, Frank-Andre; den Toom, Wilhelm; Wang, Lei; Wurster, Stefan; Schweikard, Achim; Soltys, Scott G; Ryu, Samuel; Blanck, Oliver

    2018-05-25

    To investigate the quality of treatment plans of spinal radiosurgery derived from different planning and delivery systems. The comparisons include robotic delivery and intensity modulated arc therapy (IMAT) approaches. Multiple centers with equal systems were used to reduce a bias based on individual's planning abilities. The study used a series of three complex spine lesions to maximize the difference in plan quality among the various approaches. Internationally recognized experts in the field of treatment planning and spinal radiosurgery from 12 centers with various treatment planning systems participated. For a complex spinal lesion, the results were compared against a previously published benchmark plan derived for CyberKnife radiosurgery (CKRS) using circular cones only. For two additional cases, one with multiple small lesions infiltrating three vertebrae and a single vertebra lesion treated with integrated boost, the results were compared against a benchmark plan generated using a best practice guideline for CKRS. All plans were rated based on a previously established ranking system. All 12 centers could reach equality (n = 4) or outperform (n = 8) the benchmark plan. For the multiple lesions and the single vertebra lesion plan only 5 and 3 of the 12 centers, respectively, reached equality or outperformed the best practice benchmark plan. However, the absolute differences in target and critical structure dosimetry were small and strongly planner-dependent rather than system-dependent. Overall, gantry-based IMAT with simple planning techniques (two coplanar arcs) produced faster treatments and significantly outperformed static gantry intensity modulated radiation therapy (IMRT) and multileaf collimator (MLC) or non-MLC CKRS treatment plan quality regardless of the system (mean rank out of 4 was 1.2 vs. 3.1, p = 0.002). High plan quality for complex spinal radiosurgery was achieved among all systems and all participating centers in this planning challenge. This study concludes that simple IMAT techniques can generate significantly better plan quality compared to previous established CKRS benchmarks.

  9. Paediatric International Nursing Study: using person-centred key performance indicators to benchmark children's services.

    PubMed

    McCance, Tanya; Wilson, Val; Kornman, Kelly

    2016-07-01

    The aim of the Paediatric International Nursing Study was to explore the utility of key performance indicators in developing person-centred practice across a range of services provided to sick children. The objective addressed in this paper was evaluating the use of these indicators to benchmark services internationally. This study builds on primary research, which produced indicators that were considered novel both in terms of their positive orientation and use in generating data that privileges the patient voice. This study extends this research through wider testing on an international platform within paediatrics. The overall methodological approach was a realistic evaluation used to evaluate the implementation of the key performance indicators, which combined an integrated development and evaluation methodology. The study involved children's wards/hospitals in Australia (six sites across three states) and Europe (seven sites across four countries). Qualitative and quantitative methods were used during the implementation process, however, this paper reports the quantitative data only, which used survey, observations and documentary review. The findings demonstrate the quality of care being delivered to children and their families across different international sites. The benchmarking does, however, highlight some differences between paediatric and general hospitals, and between the different key performance indicators across all the sites. The findings support the use of the key performance indicators as a novel method to benchmark services internationally. Whilst the data collected across 20 paediatric sites suggest services are more similar than different, benchmarking illuminates variations that encourage a critical dialogue about what works and why. The transferability of the key performance indicators and measurement framework across different settings has significant implications for practice. The findings offer an approach to benchmarking and celebrating the successes within practice, while learning from partners across the globe in further developing person-centred cultures. © 2016 John Wiley & Sons Ltd.

  10. Adsorption structures and energetics of molecules on metal surfaces: Bridging experiment and theory

    NASA Astrophysics Data System (ADS)

    Maurer, Reinhard J.; Ruiz, Victor G.; Camarillo-Cisneros, Javier; Liu, Wei; Ferri, Nicola; Reuter, Karsten; Tkatchenko, Alexandre

    2016-05-01

    Adsorption geometry and stability of organic molecules on surfaces are key parameters that determine the observable properties and functions of hybrid inorganic/organic systems (HIOSs). Despite many recent advances in precise experimental characterization and improvements in first-principles electronic structure methods, reliable databases of structures and energetics for large adsorbed molecules are largely missing. In this review, we present such a database for a range of molecules adsorbed on metal single-crystal surfaces. The systems we analyze include noble-gas atoms, conjugated aromatic molecules, carbon nanostructures, and heteroaromatic compounds adsorbed on five different metal surfaces. The overall objective is to establish a diverse benchmark dataset that enables an assessment of current and future electronic structure methods, and motivates further experimental studies that provide ever more reliable data. Specifically, the benchmark structures and energetics from experiment are here compared with the recently developed van der Waals (vdW) inclusive density-functional theory (DFT) method, DFT + vdWsurf. In comparison to 23 adsorption heights and 17 adsorption energies from experiment we find a mean average deviation of 0.06 Å and 0.16 eV, respectively. This confirms the DFT + vdWsurf method as an accurate and efficient approach to treat HIOSs. A detailed discussion identifies remaining challenges to be addressed in future development of electronic structure methods, for which the here presented benchmark database may serve as an important reference.
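
    The agreement statistics quoted above are mean absolute deviations over paired theory-experiment values. A trivial sketch of the metric follows; the adsorption heights are placeholders, not entries from the database.

    ```python
    # Minimal sketch of the mean-absolute-deviation metric used to compare
    # theory with experiment. The paired values below are placeholders.
    def mean_absolute_deviation(theory, experiment):
        return sum(abs(t - e) for t, e in zip(theory, experiment)) / len(theory)

    heights_dft = [2.95, 3.25, 2.35]   # Angstrom, e.g. DFT+vdW(surf)
    heights_exp = [2.91, 3.20, 2.25]   # Angstrom, e.g. X-ray standing wave
    print(f"MAD = {mean_absolute_deviation(heights_dft, heights_exp):.3f} A")
    ```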

  11. Energy benchmarking of commercial buildings: a low-cost pathway toward urban sustainability

    NASA Astrophysics Data System (ADS)

    Cox, Matt; Brown, Marilyn A.; Sun, Xiaojing

    2013-09-01

    US cities are beginning to experiment with a regulatory approach to address information failures in the real estate market by mandating the energy benchmarking of commercial buildings. Understanding how a commercial building uses energy has many benefits; for example, it helps building owners and tenants identify poor-performing buildings and subsystems and it enables high-performing buildings to achieve greater occupancy rates, rents, and property values. This paper estimates the possible impacts of a national energy benchmarking mandate through analysis chiefly utilizing the Georgia Tech version of the National Energy Modeling System (GT-NEMS). Correcting input discount rates results in a 4.0% reduction in projected energy consumption for seven major classes of equipment relative to the reference case forecast in 2020, rising to 8.7% in 2035. Thus, the official US energy forecasts appear to overestimate future energy consumption by underestimating investments in energy-efficient equipment. Further discount rate reductions spurred by benchmarking policies yield another 1.3-1.4% in energy savings in 2020, increasing to 2.2-2.4% in 2035. Benchmarking would increase the purchase of energy-efficient equipment, reducing energy bills, CO2 emissions, and conventional air pollution. Achieving comparable CO2 savings would require more than tripling existing US solar capacity. Our analysis suggests that nearly 90% of the energy saved by a national benchmarking policy would benefit metropolitan areas, and the policy’s benefits would outweigh its costs, both to the private sector and society broadly.

  12. Benchmark study on glyphosate-resistant crop systems in the United States. Part 2: Perspectives.

    PubMed

    Owen, Micheal D K; Young, Bryan G; Shaw, David R; Wilson, Robert G; Jordan, David L; Dixon, Philip M; Weller, Stephen C

    2011-07-01

    A six-state, 5 year field project was initiated in 2006 to study weed management methods that foster the sustainability of genetically engineered (GE) glyphosate-resistant (GR) crop systems. The benchmark study field-scale experiments were initiated following a survey, conducted in the winter of 2005-2006, of farmer opinions on weed management practices and their views on GR weeds and management tactics. The main survey findings supported the premise that growers were generally less aware of the significance of evolved herbicide resistance and did not have a high recognition of the strong selection pressure from herbicides on the evolution of herbicide-resistant (HR) weeds. The results of the benchmark study survey indicated that there are educational challenges to implement sustainable GR-based crop systems and helped guide the development of the field-scale benchmark study. Paramount is the need to develop consistent and clearly articulated science-based management recommendations that enable farmers to reduce the potential for HR weeds. This paper provides background perspectives about the use of GR crops, the impact of these crops and an overview of different opinions about the use of GR crops on agriculture and society, as well as defining how the benchmark study will address these issues. Copyright © 2011 Society of Chemical Industry.

  13. TRIPOLI-4® - MCNP5 ITER A-lite neutronic model benchmarking

    NASA Astrophysics Data System (ADS)

    Jaboulay, J.-C.; Cayla, P.-Y.; Fausser, C.; Lee, Y.-K.; Trama, J.-C.; Li-Puma, A.

    2014-06-01

    The aim of this paper is to present the capability of TRIPOLI-4®, the CEA Monte Carlo code, to model a large-scale fusion reactor with complex neutron source and geometry. In the past, numerous benchmarks were conducted for TRIPOLI-4® assessment on fusion applications. Experiments (KANT, OKTAVIAN, FNG) analysis and numerical benchmarks (between TRIPOLI-4® and MCNP5) on the HCLL DEMO2007 and ITER models were carried out successively. In this previous ITER benchmark, nevertheless, only the neutron wall loading was analyzed; its main purpose was to present MCAM (the FDS Team CAD import tool) extension for TRIPOLI-4®. Starting from this work, a more extended benchmark has been performed on the estimation of neutron flux, nuclear heating in the shielding blankets, and tritium production rate in the European TBMs (HCLL and HCPB), and it is presented in this paper. The methodology to build the TRIPOLI-4® A-lite model is based on MCAM and the MCNP A-lite model (version 4.1). Simplified TBMs (from KIT) have been integrated in the equatorial port. Comparisons of neutron wall loading, flux, nuclear heating and tritium production rate show a good agreement between the two codes. Discrepancies mainly fall within the statistical errors of the Monte Carlo codes.

  14. Audit of anesthetic trainees' 'hands-on' operating room experience in an Australian tertiary children's hospital.

    PubMed

    Hogan, Bridget; Keating, Matthew; Chambers, Neil A; von Ungern-Sternberg, Britta

    2016-05-01

    There are no internationally accepted guidelines about what constitutes adequate clinical exposure during pediatric anesthetic training. In Australia, no data have been published on the level of experience obtained by anesthetic trainees in pediatric anesthesia. There is, however, a new ANZCA (Australian and New Zealand College of Anaesthetists) curriculum that quantifies new training requirements. To quantify our trainees' exposure to clinical work in order to assess compliance with the new curriculum and to provide other institutions with a benchmark for pediatric anesthetic training, we performed a prospective audit to estimate and quantify our anesthetic registrars' exposure to pediatric anesthesia during their 6-month rotation at our institution, a tertiary pediatric hospital in Perth, Western Australia. Our data suggest that trainees at our institution will achieve the new ANZCA training standards comfortably, in terms of the required volume and breadth of exposure. Experience gained at our hospital easily meets the new College requirements. However, exposure to some advanced procedures, notably fiber-optic intubation and regional blocks, appears too limited to develop adequate skill or confidence. The study provides other institutions with information to benchmark against their own trainee experience. © 2016 John Wiley & Sons Ltd.

  15. A comprehensive analysis of sodium levels in the Canadian packaged food supply

    PubMed Central

    Arcand, JoAnne; Au, Jennifer T.C.; Schermel, Alyssa; L’Abbe, Mary R.

    2016-01-01

    Background: Population-wide sodium reduction strategies aim to reduce the cardiovascular burden of excess dietary sodium. Lowering sodium in packaged foods, which contribute the most sodium to the diet, is an important intervention to lower population intakes. Purpose: To determine sodium levels in Canadian packaged foods and evaluate the proportion of foods meeting sodium benchmark targets set by Health Canada. Methods: A cross-sectional analysis of 7234 packaged foods available in Canada in 2010–11. Sodium values were obtained from the Nutrition Facts table. Results: Overall, 51.4% of foods met one of the sodium benchmark levels: 11.5% met Phase 1, 11.1% met Phase 2, and 28.7% met 2016 goal (Phase 3) benchmarks. Food groups with the greatest proportion meeting goal benchmarks were dairy (52.0%) and breakfast cereals (42.2%). Overall, 48.6% of foods did not meet any benchmark level and 25% of all products exceeded maximum levels. Meats (61.2%) and canned vegetables/legumes (29.6%) had the most products exceeding maximum levels. There was large variability in the range of sodium within and between food categories. Food categories highest in sodium (mg/serving) were dry, condensed and ready-to-serve soups (834 ± 256, 754 ± 163, and 636 ± 173, respectively), oriental noodles (783 ± 433), broth (642 ± 239), and frozen appetizers/sides (642 ± 292). Conclusion: These data provide a critical baseline assessment for monitoring sodium levels in Canadian foods. While some segments of the market are making progress towards sodium reduction, all sectors need encouragement to continue to reduce the amount of sodium added during food processing. PMID:24842740
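
    The benchmark classification underlying these percentages can be sketched as a simple threshold comparison per food category. The Python fragment below is illustrative only; the category targets and sodium values are invented placeholders, not Health Canada's actual benchmark values.

    ```python
    # Hedged sketch of phased sodium-benchmark classification per category.
    # Targets and products are invented placeholders.

    targets = {"bread": {"phase1": 470, "phase2": 430, "goal": 330, "max": 550}}

    def classify(category, sodium_mg_per_serving):
        t = targets[category]
        if sodium_mg_per_serving <= t["goal"]:
            return "meets 2016 goal"
        if sodium_mg_per_serving <= t["phase2"]:
            return "meets Phase 2"
        if sodium_mg_per_serving <= t["phase1"]:
            return "meets Phase 1"
        return ("exceeds maximum" if sodium_mg_per_serving > t["max"]
                else "meets no benchmark")

    for mg in (300, 450, 600):
        print(mg, "->", classify("bread", mg))
    ```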

  16. Potential of mean force for electrical conductivity of dense plasmas

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Starrett, C. E.

    The electrical conductivity in dense plasmas can be calculated with the relaxation-time approximation provided that the interaction potential between the scattering electron and the ion is known. To date there has been considerable uncertainty as to the best way to define this interaction potential so that it correctly includes the effects of ionic structure, screening by electrons and partial ionization. The current approximations lead to significantly different results with varying levels of agreement when compared to bench-mark calculations and experiments. Here, we present a new way to define this potential, drawing on ideas from classical fluid theory to define a potential of mean force. This new potential results in significantly improved agreement with experiments and bench-mark calculations, and includes all the aforementioned physics self-consistently.

  17. Potential of mean force for electrical conductivity of dense plasmas

    DOE PAGES

    Starrett, C. E.

    2017-09-28

    The electrical conductivity in dense plasmas can be calculated with the relaxation-time approximation provided that the interaction potential between the scattering electron and the ion is known. To date there has been considerable uncertainty as to the best way to define this interaction potential so that it correctly includes the effects of ionic structure, screening by electrons and partial ionization. The current approximations lead to significantly different results with varying levels of agreement when compared to bench-mark calculations and experiments. Here, we present a new way to define this potential, drawing on ideas from classical fluid theory to define a potential of mean force. This new potential results in significantly improved agreement with experiments and bench-mark calculations, and includes all the aforementioned physics self-consistently.

  18. Potential of mean force for electrical conductivity of dense plasmas

    NASA Astrophysics Data System (ADS)

    Starrett, C. E.

    2017-12-01

    The electrical conductivity in dense plasmas can be calculated with the relaxation-time approximation provided that the interaction potential between the scattering electron and the ion is known. To date there has been considerable uncertainty as to the best way to define this interaction potential so that it correctly includes the effects of ionic structure, screening by electrons and partial ionization. Current approximations lead to significantly different results with varying levels of agreement when compared to bench-mark calculations and experiments. We present a new way to define this potential, drawing on ideas from classical fluid theory to define a potential of mean force. This new potential results in significantly improved agreement with experiments and bench-mark calculations, and includes all the aforementioned physics self-consistently.
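
    The classical-fluid-theory idea invoked in the three records above can be stated compactly: for a pair correlation function g(r), the potential of mean force is w(r) = -kT ln g(r). The sketch below evaluates this textbook relation on synthetic g(r) samples; the paper's actual construction for electron-ion scattering in a dense plasma is considerably more involved.

    ```python
    # Toy sketch of the classical potential of mean force w(r) = -kT ln g(r).
    # The g(r) samples are synthetic, for illustration only.
    import math

    K_B = 8.617333262e-5  # Boltzmann constant, eV/K

    def potential_of_mean_force(r_values, g_values, temperature_k):
        """Return (r, w(r)) pairs in eV from tabulated g(r) at temperature T."""
        kt = K_B * temperature_k
        return [(r, -kt * math.log(g))
                for r, g in zip(r_values, g_values) if g > 0]

    r = [1.0, 1.5, 2.0, 3.0]        # Angstrom
    g = [0.05, 0.60, 1.30, 1.00]    # synthetic pair correlation samples
    for r_i, w_i in potential_of_mean_force(r, g, temperature_k=11604.0):  # ~1 eV
        print(f"r = {r_i:.1f} A, w(r) = {w_i:+.3f} eV")
    ```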

  19. Benchmark studies of induced radioactivity produced in LHC materials, Part II: Remanent dose rates.

    PubMed

    Brugger, M; Khater, H; Mayer, S; Prinz, A; Roesler, S; Ulrici, L; Vincke, H

    2005-01-01

    A new method to estimate remanent dose rates, to be used with the Monte Carlo code FLUKA, was benchmarked against measurements from an experiment that was performed at the CERN-EU high-energy reference field facility. An extensive collection of samples of different materials were placed downstream of, and laterally to, a copper target, intercepting a positively charged mixed hadron beam with a momentum of 120 GeV c(-1). Emphasis was put on the reduction of uncertainties by taking measures such as careful monitoring of the irradiation parameters, using different instruments to measure dose rates, adopting detailed elemental analyses of the irradiated materials and making detailed simulations of the irradiation experiment. The measured and calculated dose rates are in good agreement.

  20. Benchmark tests of JENDL-3.2 for thermal and fast reactors

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Takano, Hideki; Akie, Hiroshi; Kikuchi, Yasuyuki

    1994-12-31

    Benchmark calculations for a variety of thermal and fast reactors have been performed by using the newly evaluated JENDL-3 Version-2 (JENDL-3.2) file. In the thermal reactor calculations for the uranium and plutonium fueled cores of TRX and TCA, the k{sub eff} and lattice parameters were well predicted. The fast reactor calculations for ZPPR-9 and FCA assemblies showed that the k{sub eff}, the reactivity worths of Doppler, sodium void and control rod, and the reaction rate distribution were in very good agreement with the experiments.

  1. The National Practice Benchmark for oncology, 2014 report on 2013 data.

    PubMed

    Towle, Elaine L; Barr, Thomas R; Senese, James L

    2014-11-01

    The National Practice Benchmark (NPB) is a unique tool to measure oncology practices against others across the country in a way that allows meaningful comparisons despite differences in practice size or setting. In today's economic environment every oncology practice, regardless of business structure or affiliation, should be able to produce, monitor, and benchmark basic metrics to meet current business pressures for increased efficiency and efficacy of care. Although we recognize that the NPB survey results do not capture the experience of all oncology practices, practices that can and do participate demonstrate exceptional managerial capability, and this year those practices are recognized for their participation. In this report, we continue to emphasize the methodology introduced last year in which we reported medical revenue net of the cost of the drugs as net medical revenue for the hematology/oncology product line. The effect of this is to capture only the gross margin attributable to drugs as revenue. New this year, we introduce six measures of clinical data density and expand the radiation oncology benchmarks. Copyright © 2014 by American Society of Clinical Oncology.

  2. Application of ab initio many-body perturbation theory with Gaussian basis sets to the singlet and triplet excitations of organic molecules

    NASA Astrophysics Data System (ADS)

    Hamed, Samia; Rangel, Tonatiuh; Bruneval, Fabien; Neaton, Jeffrey B.

    Quantitative understanding of charged and neutral excitations of organic molecules is critical in diverse areas of study that include astrophysics and the development of clean, efficient energy technologies. The recent use of local basis sets with ab initio many-body perturbation theory in the GW approximation and the Bethe-Salpeter equation (BSE) approach, methods traditionally applied to periodic condensed phases with a plane-wave basis, has opened the door to detailed study of such excitations for molecules, as well as accurate numerical benchmarks. Here, through a series of systematic benchmarks with a Gaussian basis, we report on the extent to which the predictive power and utility of this approach depend critically on interdependent underlying approximations and choices for molecules, including the mean-field starting point (e.g., optimally tuned range-separated hybrids, pure DFT functionals, and untuned hybrids), the GW scheme, and the Tamm-Dancoff approximation. We demonstrate the effects of these choices in the context of Thiel's set while drawing analogies to linear-response time-dependent DFT and making comparisons to best theoretical estimates from higher-order wavefunction-based theories.

  3. Can Propensity Score Analysis Approximate Randomized Experiments Using Pretest and Demographic Information in Pre-K Intervention Research?

    PubMed

    Dong, Nianbo; Lipsey, Mark W

    2017-01-01

    It is unclear whether propensity score analysis (PSA) based on pretest and demographic covariates will meet the ignorability assumption for replicating the results of randomized experiments. This study applies within-study comparisons to assess whether pre-Kindergarten (pre-K) treatment effects on achievement outcomes estimated using PSA based on a pretest and demographic covariates can approximate those found in a randomized experiment. Data: Four studies with samples of pre-K children each provided data on two math achievement outcome measures with baseline pretests and child demographic variables that included race, gender, age, language spoken at home, and mother's highest education. Research Design and Data Analysis: A randomized study of a pre-K math curriculum provided benchmark estimates of effects on achievement measures. Comparison samples from other pre-K studies were then substituted for the original randomized control group and the effects were reestimated using PSA. The correspondence was evaluated using multiple criteria. Results: The effect estimates using PSA were in the same direction as the benchmark estimates, had similar but not identical statistical significance, and did not differ from the benchmarks at statistically significant levels. However, the magnitude of the effect sizes differed and displayed both absolute and relative bias larger than required to show statistical equivalence with formal tests, though those results were not definitive because of the limited statistical power. We conclude that treatment effect estimates based on a single pretest and demographic covariates in PSA correspond to those from a randomized experiment on the most general criteria for equivalence.
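
    A compact sketch of the generic PSA workflow the study evaluates (not the authors' exact analysis): estimate propensity scores from a pretest and demographics with logistic regression, weight the comparison group by the odds of treatment (ATT weighting), and compare weighted outcome means. The column names and toy data are illustrative.

    ```python
    # Hedged sketch of propensity-score estimation and ATT weighting on
    # simulated pretest/demographic data. True effect is 0.5 by construction.
    import numpy as np
    from sklearn.linear_model import LogisticRegression

    rng = np.random.default_rng(0)
    n = 500
    pretest = rng.normal(0, 1, n)
    female = rng.integers(0, 2, n)
    treated = rng.random(n) < 1 / (1 + np.exp(-(0.8 * pretest - 0.2)))
    outcome = 0.5 * treated + 0.7 * pretest + rng.normal(0, 1, n)

    X = np.column_stack([pretest, female])
    model = LogisticRegression(max_iter=1000).fit(X, treated)
    ps = model.predict_proba(X)[:, 1]  # estimated propensity scores

    # ATT weighting: treated kept at weight 1, controls weighted by ps/(1-ps)
    w = np.where(treated, 1.0, ps / (1 - ps))
    att = (outcome[treated].mean()
           - np.average(outcome[~treated], weights=w[~treated]))
    print(f"weighted treatment-effect estimate: {att:.2f}")
    ```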

  4. Benchmarking computational fluid dynamics models of lava flow simulation for hazard assessment, forecasting, and risk management

    USGS Publications Warehouse

    Dietterich, Hannah; Lev, Einat; Chen, Jiangzhi; Richardson, Jacob A.; Cashman, Katharine V.

    2017-01-01

    Numerical simulations of lava flow emplacement are valuable for assessing lava flow hazards, forecasting active flows, designing flow mitigation measures, interpreting past eruptions, and understanding the controls on lava flow behavior. Existing lava flow models vary in simplifying assumptions, physics, dimensionality, and the degree to which they have been validated against analytical solutions, experiments, and natural observations. In order to assess existing models and guide the development of new codes, we conduct a benchmarking study of computational fluid dynamics (CFD) models for lava flow emplacement, including VolcFlow, OpenFOAM, FLOW-3D, COMSOL, and MOLASSES. We model viscous, cooling, and solidifying flows over horizontal planes, sloping surfaces, and into topographic obstacles. We compare model results to physical observations made during well-controlled analogue and molten basalt experiments, and to analytical theory when available. Overall, the models accurately simulate viscous flow with some variability in flow thickness where flows intersect obstacles. OpenFOAM, COMSOL, and FLOW-3D can each reproduce experimental measurements of cooling viscous flows, and OpenFOAM and FLOW-3D simulations with temperature-dependent rheology match results from molten basalt experiments. We assess the goodness-of-fit of the simulation results and the computational cost. Our results guide the selection of numerical simulation codes for different applications, including inferring emplacement conditions of past lava flows, modeling the temporal evolution of ongoing flows during eruption, and probabilistic assessment of lava flow hazard prior to eruption. Finally, we outline potential experiments and desired key observational data from future flows that would extend existing benchmarking data sets.

  5. Calculated criticality for {sup 235}U/graphite systems using the VIM Monte Carlo code

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Collins, P.J.; Grasseschi, G.L.; Olsen, D.N.

    1992-01-01

    Calculations for highly enriched uranium and graphite systems gained renewed interest recently for the new production modular high-temperature gas-cooled reactor (MHTGR). Experiments to validate the physics calculations for these systems are being prepared for the Transient Reactor Test Facility (TREAT) reactor at Argonne National Laboratory (ANL-West) and in the Compact Nuclear Power Source facility at Los Alamos National Laboratory. The continuous-energy Monte Carlo code VIM, or equivalently the MCNP code, can utilize fully detailed models of the MHTGR and serve as benchmarks for the approximate multigroup methods necessary in full reactor calculations. Validation of these codes and their associated nuclear data did not exist for highly enriched {sup 235}U/graphite systems. Experimental data, used in development of more approximate methods, dates back to the 1960s. The authors have selected two independent sets of experiments for calculation with the VIM code. The carbon-to-uranium (C/U) ratios encompass the range of 2,000, representative of the new production MHTGR, to the ratio of 10,000 in the fuel of TREAT. Calculations used the ENDF/B-V data.

  6. The Capillary Flow Experiments Aboard the International Space Station: Increments 9-15

    NASA Technical Reports Server (NTRS)

    Jenson, Ryan M.; Weislogel, Mark M.; Tavan, Noel T.; Chen, Yongkang; Semerjian, Ben; Bunnell, Charles T.; Collicott, Steven H.; Klatte, Jorg; Dreyer, Michael E.

    2009-01-01

    This report provides a summary of the experimental, analytical, and numerical results of the Capillary Flow Experiment (CFE) performed aboard the International Space Station (ISS). The experiments were conducted in space from Increment 9 through Increment 16, beginning August 2004 and ending December 2007. Both primary and extra science experiments were conducted during 19 operations performed by 7 astronauts: M. Fincke, W. McArthur, J. Williams, S. Williams, M. Lopez-Alegria, C. Anderson, and P. Whitson. CFE consists of 6 handheld experiment units of approximately 1 to 2 kg each, designed to investigate a selection of capillary phenomena of fundamental and applied importance, such as large length scale contact line dynamics (CFE-Contact Line), critical wetting in discontinuous structures (CFE-Vane Gap), and capillary flows and passive phase separations in complex containers (CFE-Interior Corner Flow). Highly quantitative video from the simply performed flight experiments provides data helpful in benchmarking numerical methods, confirming theoretical models, and guiding new model development. In an extensive executive summary, a brief history of the experiment is reviewed before introducing the science investigated. A selection of experimental results and comparisons with both analytic and numerical predictions is given. The subsequent chapters provide additional details of the experimental and analytical methods developed and employed, including the current state of the data reduction, which we anticipate will continue throughout the year and culminate in several more publications. An extensive appendix provides support material such as an experiment history, dissemination items to date (CFE publications, etc.), detailed design drawings, and crew procedures. Despite the simple nature of the experiments and procedures, many of the experimental results may be practically employed to enhance the design of spacecraft engineering systems involving capillary interface dynamics.

  7. Superconductivity in correlated BEDT-TTF molecular conductors: Critical temperatures and gap symmetries

    NASA Astrophysics Data System (ADS)

    Zantout, Karim; Altmeyer, Michaela; Backes, Steffen; Valentí, Roser

    2018-01-01

    Starting from an ab initio-derived two-site dimer Hubbard Hamiltonian on a triangular lattice, we calculate the superconducting gap functions and critical temperatures for representative κ-(BEDT-TTF)2X superconductors by solving the linearized Eliashberg equation using the two-particle self-consistent approach (TPSC) extended to multisite problems. Such an extension allows for the inclusion of molecular degrees of freedom in the description of these systems. We present both benchmarking results for the half-filled dimer model as well as detailed investigations for the 3/4-filled molecule model. Remarkably, we find in the latter model that the phase boundary between the two most competing gap symmetries discussed in the context of these materials (d_xy and the recently proposed eight-node s+d_{x^2-y^2} gap symmetry) is located within the regime of realistic model parameters and is especially sensitive to the degree of in-plane anisotropy in the materials as well as to the value of the on-site Hubbard repulsion. We show that these results provide a more complete and accurate description of the superconducting properties of κ-(BEDT-TTF)2X than previous random phase approximation (RPA) calculations and, in particular, we discuss predicted critical temperatures in comparison to experiments. Finally, our findings suggest that it may be even easier than previously anticipated to experimentally switch between the two pairing symmetries by invoking pressure, chemical doping, or disorder effects.
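    For orientation, the linearized gap equation solved in such studies can be written schematically as follows. This is the generic eigenvalue form, not the paper's exact multisite TPSC expressions; V denotes the pairing vertex and G the dressed Green's function:

      \lambda\, \Delta(\mathbf{k}, i\omega_n)
        = -\frac{T}{N} \sum_{\mathbf{k}',m}
          V(\mathbf{k}-\mathbf{k}',\, i\omega_n - i\omega_m)\,
          \bigl|G(\mathbf{k}', i\omega_m)\bigr|^2\,
          \Delta(\mathbf{k}', i\omega_m)

    The critical temperature is identified as the temperature at which the leading eigenvalue \lambda reaches unity, with the symmetry of the corresponding eigenvector \Delta (e.g., d_xy) giving the gap symmetry.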

  8. Experimental detailed power distribution in a fast spectrum thermionic reactor fuel element at the core/BeO reflector interface region

    NASA Technical Reports Server (NTRS)

    Klann, P. G.; Lantz, E.

    1973-01-01

    A zero-power critical assembly was designed, constructed, and operated for the purpose of conducting a series of benchmark experiments dealing with the physics characteristics of a UN-fueled, Li-7-cooled, Mo-reflected, drum-controlled compact fast reactor for use with a space-power conversion system. The critical assembly was modified to simulate a fast spectrum advanced thermionics reactor by: (1) using BeO as a reflector in place of some of the existing molybdenum, (2) substituting Nb-1Zr tubing for some of the existing Ta tubing, and (3) inserting four full-scale mockups of thermionic-type fuel elements near the core and BeO reflector boundary. These mockups were surrounded with a buffer zone having the equivalent thermionic core composition. In addition to measuring the critical mass of this thermionic configuration, a detailed power distribution in one of the thermionic element stages in the mixed spectrum region was measured. A power peak to average ratio of two was observed for this fuel stage at the midplane of the core and adjacent to the reflector. Also, the power on the outer surface adjacent to the BeO was slightly more than a factor of two larger than the power on the inside surface of a 5.08 cm (2.0 in.) high annular fuel segment with a 2.52 cm (0.993 in.) o.d. and a 1.86 cm (0.731 in.) i.d.

  9. Validation of numerical codes for impact and explosion cratering: Impacts on strengthless and metal targets

    NASA Astrophysics Data System (ADS)

    Pierazzo, E.; Artemieva, N.; Asphaug, E.; Baldwin, E. C.; Cazamias, J.; Coker, R.; Collins, G. S.; Crawford, D. A.; Davison, T.; Elbeshausen, D.; Holsapple, K. A.; Housen, K. R.; Korycansky, D. G.; Wünnemann, K.

    2008-12-01

    Over the last few decades, rapid improvement of computer capabilities has allowed impact cratering to be modeled with increasing complexity and realism, and has paved the way for a new era of numerical modeling of the impact process, including full, three-dimensional (3D) simulations. When properly benchmarked and validated against observation, computer models offer a powerful tool for understanding the mechanics of impact crater formation. This work presents results from the first phase of a project to benchmark and validate shock codes. A variety of 2D and 3D codes were used in this study, from commercial products like AUTODYN, to codes developed within the scientific community like SOVA, SPH, ZEUS-MP, iSALE, and codes developed at U.S. National Laboratories like CTH, SAGE/RAGE, and ALE3D. Benchmark calculations of shock wave propagation in aluminum-on-aluminum impacts were performed to examine the agreement between codes for simple idealized problems. The benchmark simulations show that variability in code results is to be expected due to differences in the underlying solution algorithm of each code, artificial stability parameters, spatial and temporal resolution, and material models. Overall, the inter-code variability in peak shock pressure as a function of distance is around 10 to 20%. In general, if the impactor is resolved by at least 20 cells across its radius, the underestimation of peak shock pressure due to spatial resolution is less than 10%. In addition to the benchmark tests, three validation tests were performed to examine the ability of the codes to reproduce the time evolution of crater radius and depth observed in vertical laboratory impacts in water and two well-characterized aluminum alloys. Results from these calculations are in good agreement with experiments. There appears to be a general tendency of shock physics codes to underestimate the radius of the forming crater. Overall, the discrepancy between the model and experiment results is between 10 and 20%, similar to the inter-code variability.

  10. Evaluating the Performance of Repeated Measures Approaches in Replicating Experimental Benchmark Results

    ERIC Educational Resources Information Center

    McConeghy, Kevin; Wing, Coady; Wong, Vivian C.

    2015-01-01

    Randomized experiments have long been established as the gold standard for addressing causal questions. However, experiments are not always feasible or desired, so observational methods are also needed. When multiple observations on the same variable are available, a repeated measures design may be used to assess whether a treatment administered…

  11. Benchmark Eye Movement Effects during Natural Reading in Autism Spectrum Disorder

    ERIC Educational Resources Information Center

    Howard, Philippa L.; Liversedge, Simon P.; Benson, Valerie

    2017-01-01

    In 2 experiments, eye tracking methodology was used to assess on-line lexical, syntactic and semantic processing in autism spectrum disorder (ASD). In Experiment 1, lexical identification was examined by manipulating the frequency of target words. Both typically developed (TD) and ASD readers showed normal frequency effects, suggesting that the…

  12. Which Factors Can Protect Against Range Stress in Everyday Usage of Battery Electric Vehicles? Toward Enhancing Sustainability of Electric Mobility Systems.

    PubMed

    Franke, Thomas; Rauh, Nadine; Günther, Madlen; Trantow, Maria; Krems, Josef F

    2016-02-01

    The objective of the present research was to advance understanding of factors that can protect against range anxiety, specifically range stress in everyday usage of battery electric vehicles (BEVs). Range anxiety is a major barrier to the broad adoption of sustainable electric mobility systems. To develop strategies aimed at overcoming range anxiety, a clear understanding of this phenomenon and its influencing factors is needed. We examined range anxiety in the form of everyday range stress (ERS) in a field study setting. Seventy-two customers leased a BEV for 3 months. The field study was specifically designed to enable examination of factors that can contribute to lower ERS. In particular, study design and sample recruitment were targeted at generating vehicle usage profiles that would lead to relatively frequent experience of situations requiring active management of range resources, thereby potentially leading to experienced range stress. Less frequent encounters with critical range situations, higher practical experience, subjective range competence, tolerance of low range, and experienced trustworthiness of the range estimation system were related to lower ERS. Moreover, range stress was found to be related to range satisfaction and BEV acceptance. The results underline the importance of the human factors perspective for overcoming range anxiety and enhancing the sustainability of electric mobility systems. Trustworthiness should be employed as a key benchmark variable in the design of range estimation systems, and assistance systems should target increasing drivers' adaptive capacity (i.e., resilience) to cope with critical range situations. © 2015, Human Factors and Ergonomics Society.

  13. FDNS CFD Code Benchmark for RBCC Ejector Mode Operation: Continuing Toward Dual Rocket Effects

    NASA Technical Reports Server (NTRS)

    West, Jeff; Ruf, Joseph H.; Turner, James E. (Technical Monitor)

    2000-01-01

    Computational Fluid Dynamics (CFD) analysis results are compared with benchmark quality test data from the Propulsion Engineering Research Center's (PERC) Rocket Based Combined Cycle (RBCC) experiments to verify fluid dynamic code and application procedures. RBCC engine flowpath development will rely on CFD applications to capture the multi-dimensional fluid dynamic interactions and to quantify their effect on the RBCC system performance. Therefore, the accuracy of these CFD codes must be determined through detailed comparisons with test data. The PERC experiments build upon the well-known 1968 rocket-ejector experiments of Odegaard and Stroup by employing advanced optical and laser based diagnostics to evaluate mixing and secondary combustion. The Finite Difference Navier Stokes (FDNS) code [2] was used to model the fluid dynamics of the PERC RBCC ejector mode configuration. Analyses were performed for the Diffusion and Afterburning (DAB) test conditions at the 200-psia thruster operation point. Results with and without downstream fuel injection are presented.

  14. The MCUCN simulation code for ultracold neutron physics

    NASA Astrophysics Data System (ADS)

    Zsigmond, G.

    2018-02-01

    Ultracold neutrons (UCN) have very low kinetic energies (0-300 neV) and can therefore be stored in specific material or magnetic confinements for many hundreds of seconds. This makes them a very useful tool for probing fundamental symmetries of nature (for instance, charge-parity violation by neutron electric dipole moment experiments) and for contributing important parameters to Big Bang nucleosynthesis (neutron lifetime measurements). Improved precision experiments are in construction at new and planned UCN sources around the world. Monte Carlo (MC) simulations play an important role in the optimization of such systems with a large number of parameters, but also in the estimation of systematic effects, in the benchmarking of analysis codes, and as part of the analysis. The MCUCN code written at PSI has been extensively used for the optimization of the UCN source optics and in the optimization and analysis of (test) experiments within the nEDM project based at PSI. In this paper we present the main features of MCUCN and interesting benchmark and application examples.

  15. Introduction: Framing 'Post-AIDS' and Global Health Discourses in 2015 and Beyond.

    PubMed

    O'Connell, Gráinne

    2017-05-19

    This special issue, entitled "'Post-AIDS' and Global Health Discourses: Interdisciplinary Perspectives", emerged from a one-day Medical Humanities symposium at the Leeds Centre for Medical Humanities, at the University of Leeds, England, on February 27th 2015. This special issue focusses on the perceived deprioritising of HIV and AIDS in the Sustainable Development Goals, or SDGs, that were launched in 2015. The SDGs function as policy benchmarks for all entities within the United Nations system and they supersede the Millennium Development Goals, or MDGs, which expired in 2015. As the word millennium indicates, the MDGs were launched in 2000, and 2015 was designated as the benchmark year when the successes and shortcomings of the MDGs would be critically assessed. One key difference between the MDGs and the SDGs, which D'Ambruoso foregrounds (2013), is that the writing process underpinning the SDGs involved lengthy consultations, and feedback, with communities and health care practitioners around the world. By contrast, because the MDGs were mainly written by government officials, policy makers and health care practitioners without consulting wider communities, the processes underpinning the SDG consultations are more inclusive than those behind the MDGs. What is most critical about the SDGs for this special issue, however, is that they reflect a clear shift away from 'HIV exceptionalism' and towards what critics have described as 'post-AIDS' rhetoric, specifically when one compares the MDG health goal 6 and the SDG health goal 3.

  16. Introduction to benchmark dose methods and U.S. EPA's benchmark dose software (BMDS) version 2.1.1

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Davis, J. Allen, E-mail: davis.allen@epa.gov; Gift, Jeffrey S.; Zhao, Q. Jay

    2011-07-15

    Traditionally, the No-Observed-Adverse-Effect-Level (NOAEL) approach has been used to determine the point of departure (POD) from animal toxicology data for use in human health risk assessments. However, this approach is subject to substantial limitations that have been well defined, such as strict dependence on the dose selection, dose spacing, and sample size of the study from which the critical effect has been identified. Also, the NOAEL approach fails to take into consideration the shape of the dose-response curve and other related information. The benchmark dose (BMD) method, originally proposed as an alternative to the NOAEL methodology in the 1980s, addresses many of the limitations of the NOAEL method. It is less dependent on dose selection and spacing, and it takes into account the shape of the dose-response curve. In addition, the estimation of a BMD 95% lower bound confidence limit (BMDL) results in a POD that appropriately accounts for study quality (i.e., sample size). With the recent advent of user-friendly BMD software programs, including the U.S. Environmental Protection Agency's (U.S. EPA) Benchmark Dose Software (BMDS), BMD has become the method of choice for many health organizations world-wide. This paper discusses the BMD methods and corresponding software (i.e., BMDS version 2.1.1) that have been developed by the U.S. EPA, and includes a comparison with recently released European Food Safety Authority (EFSA) BMD guidance.
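    To make the contrast with the NOAEL concrete, a minimal sketch of a BMD calculation for a dichotomous endpoint is shown below: fit a dose-response model by maximum likelihood, then solve for the dose at which extra risk reaches a benchmark response (BMR) of 10%. The model form, starting values, and data are illustrative assumptions, not BMDS output; BMDS additionally computes a profile-likelihood BMDL.

      # Illustrative BMD sketch for a dichotomous endpoint (not BMDS itself).
      import numpy as np
      from scipy.optimize import brentq, minimize

      doses = np.array([0.0, 10.0, 50.0, 100.0])   # hypothetical design
      n     = np.array([50, 50, 50, 50])           # animals per dose group
      cases = np.array([2, 5, 14, 30])             # responders (made-up data)

      def p(dose, g, a, b):
          # Background rate g plus a log-logistic increase with dose.
          with np.errstate(divide="ignore", over="ignore"):
              f = np.where(dose > 0,
                           1.0 / (1.0 + np.exp(-a - b * np.log(dose))), 0.0)
          return g + (1.0 - g) * f

      def negloglik(theta):
          g, a, b = theta
          pr = np.clip(p(doses, g, a, b), 1e-9, 1 - 1e-9)
          return -np.sum(cases * np.log(pr) + (n - cases) * np.log(1 - pr))

      g, a, b = minimize(negloglik, x0=[0.05, -4.0, 1.0],
                         method="Nelder-Mead").x

      bmr = 0.10   # benchmark response: 10% extra risk
      extra_risk = lambda d: (p(d, g, a, b) - g) / (1.0 - g) - bmr
      bmd = brentq(extra_risk, 1e-6, doses.max())
      print(f"BMD(10% extra risk) = {bmd:.1f} (same units as dose)")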

  17. Analysis of a Neutronic Experiment on a Simulated Mercury Spallation Neutron Target Assembly Bombarded by Giga-Electron-Volt Protons

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Maekawa, Fujio; Meigo, Shin-ichiro; Kasugai, Yoshimi

    2005-05-15

    A neutronic benchmark experiment on a simulated spallation neutron target assembly was conducted by using the Alternating Gradient Synchrotron at Brookhaven National Laboratory and was analyzed to investigate the prediction capability of Monte Carlo simulation codes used in neutronic designs of spallation neutron sources. The target assembly consisting of a mercury target, a light water moderator, and a lead reflector was bombarded by 1.94-, 12-, and 24-GeV protons, and the fast neutron flux distributions around the target and the spectra of thermal neutrons leaking from the moderator were measured in the experiment. In this study, the Monte Carlo particle transport simulation codes NMTC/JAM, MCNPX, and MCNP-4A with associated cross-section data in JENDL and LA-150 were verified based on benchmark analysis of the experiment. As a result, all the calculations predicted the measured quantities adequately; calculated integral fluxes of fast and thermal neutrons agreed approximately within {+-}40% with the experiments although the overall energy range encompassed more than 12 orders of magnitude. Accordingly, it was concluded that these simulation codes and cross-section data were adequate for neutronics designs of spallation neutron sources.

  18. Simple mathematical law benchmarks human confrontations

    PubMed Central

    Johnson, Neil F.; Medina, Pablo; Zhao, Guannan; Messinger, Daniel S.; Horgan, John; Gill, Paul; Bohorquez, Juan Camilo; Mattson, Whitney; Gangi, Devon; Qi, Hong; Manrique, Pedro; Velasquez, Nicolas; Morgenstern, Ana; Restrepo, Elvira; Johnson, Nicholas; Spagat, Michael; Zarama, Roberto

    2013-01-01

    Many high-profile societal problems involve an individual or group repeatedly attacking another – from child-parent disputes, sexual violence against women, civil unrest, violent conflicts and acts of terror, to current cyber-attacks on national infrastructure and ultrafast cyber-trades attacking stockholders. There is an urgent need to quantify the likely severity and timing of such future acts, shed light on likely perpetrators, and identify intervention strategies. Here we present a combined analysis of multiple datasets across all these domains which account for >100,000 events, and show that a simple mathematical law can benchmark them all. We derive this benchmark and interpret it, using a minimal mechanistic model grounded by state-of-the-art fieldwork. Our findings provide quantitative predictions concerning future attacks; a tool to help detect common perpetrators and abnormal behaviors; insight into the trajectory of a 'lone wolf'; identification of a critical threshold for spreading a message or idea among perpetrators; an intervention strategy to erode the most lethal clusters; and more broadly, a quantitative starting point for cross-disciplinary theorizing about human aggression at the individual and group level, in both real and online worlds. PMID:24322528
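    The published account of this work describes the benchmark as a progress curve: the interval between the (n-1)th and nth attack in a given confrontation is approximated by tau_n = tau_1 * n^(-b), with the escalation exponent b varying across perpetrators. A minimal sketch of how b would be estimated from event intervals (synthetic data, assumed parameter values) follows.

      # Progress-curve sketch: tau_n = tau_1 * n**(-b) (synthetic data).
      import numpy as np

      tau1, b = 30.0, 0.7                      # assumed parameters
      n = np.arange(1, 51)
      intervals = tau1 * n**(-b)               # synthetic inter-attack times
      # b is recovered as minus the slope of log(tau_n) vs. log(n).
      slope, intercept = np.polyfit(np.log(n), np.log(intervals), 1)
      print(f"estimated b = {-slope:.2f}, tau_1 = {np.exp(intercept):.1f}")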

  19. Benchmarking an unstructured grid sediment model in an energetic estuary

    DOE PAGES

    Lopez, Jesse E.; Baptista, António M.

    2016-12-14

    A sediment model coupled to the hydrodynamic model SELFE is validated against a benchmark combining a set of idealized tests and an application to a field-data rich energetic estuary. After sensitivity studies, model results for the idealized tests largely agree with previously reported results from other models in addition to analytical, semi-analytical, or laboratory results. Results of suspended sediment in an open channel test with fixed bottom are sensitive to turbulence closure and treatment for hydrodynamic bottom boundary. Results for the migration of a trench are very sensitive to critical stress and erosion rate, but largely insensitive to turbulence closure. The model is able to qualitatively represent sediment dynamics associated with estuarine turbidity maxima in an idealized estuary. Applied to the Columbia River estuary, the model qualitatively captures sediment dynamics observed by fixed stations and shipborne profiles. Representation of the vertical structure of suspended sediment degrades when stratification is underpredicted. Across all tests, skill metrics of suspended sediments lag those of hydrodynamics even when qualitatively representing dynamics. The benchmark is fully documented in an openly available repository to encourage unambiguous comparisons against other models.

  20. A Monte-Carlo Benchmark of TRIPOLI-4® and MCNP on ITER neutronics

    NASA Astrophysics Data System (ADS)

    Blanchet, David; Pénéliau, Yannick; Eschbach, Romain; Fontaine, Bruno; Cantone, Bruno; Ferlet, Marc; Gauthier, Eric; Guillon, Christophe; Letellier, Laurent; Proust, Maxime; Mota, Fernando; Palermo, Iole; Rios, Luis; Guern, Frédéric Le; Kocan, Martin; Reichle, Roger

    2017-09-01

    Radiation protection and shielding studies are often based on the extensive use of 3D Monte-Carlo neutron and photon transport simulations. The ITER Organization hence recommends the use of the MCNP-5 code (version 1.60), in association with the FENDL-2.1 neutron cross section data library, specifically dedicated to fusion applications. The MCNP reference model of the ITER tokamak, the 'C-lite', is being continuously developed and improved. This article proposes to develop an alternative model, equivalent to the 'C-lite', but for the Monte-Carlo code TRIPOLI-4®. A benchmark study is defined to test this new model. Since one of the most critical areas for ITER neutronics analysis concerns the assessment of radiation levels and Shutdown Dose Rates (SDDR) behind the Equatorial Port Plugs (EPP), the benchmark is conducted to compare the neutron flux through the EPP. This problem is quite challenging given the complex geometry and the strong neutron flux attenuation, ranging from 10^14 down to 10^8 n·cm^-2·s^-1. Such code-to-code comparison provides independent validation of the Monte-Carlo simulations, improving the confidence in neutronic results.

  1. Achieving excellence in veterans healthcare--a balanced scorecard approach.

    PubMed

    Biro, Lawrence A; Moreland, Michael E; Cowgill, David E

    2003-01-01

    This article provides healthcare administrators and managers with a framework and model for developing a balanced scorecard and demonstrates the remarkable success of this process, which brings focus to leadership decisions about the allocation of resources. This scorecard was developed as a top management tool designed to structure multiple priorities of a large, complex, integrated healthcare system and to establish benchmarks to measure success in achieving targets for performance in identified areas. Significant benefits and positive results were derived from the implementation of the balanced scorecard, based upon benchmarks considered to be critical success factors. The network's chief executive officer and top leadership team set and articulated the network's primary operating principles: quality and efficiency in the provision of comprehensive healthcare and support services. Under the weighted benchmarks of the balanced scorecard, the facilities in the network were mandated to adhere to one non-negotiable tenet: providing care that is second to none. The balanced scorecard approach to leadership continuously ensures that this is the primary goal and focal point for all activity within the network. To that end, systems are always in place to ensure that the network is fully successful on all performance measures relating to quality.

  2. Model benchmarking and reference signals for angled-beam shear wave ultrasonic nondestructive evaluation (NDE) inspections

    NASA Astrophysics Data System (ADS)

    Aldrin, John C.; Hopkins, Deborah; Datuin, Marvin; Warchol, Mark; Warchol, Lyudmila; Forsyth, David S.; Buynak, Charlie; Lindgren, Eric A.

    2017-02-01

    For model benchmark studies, the accuracy of the model is typically evaluated based on the change in response relative to a selected reference signal. The use of a side drilled hole (SDH) in a plate was investigated as a reference signal for angled beam shear wave inspection for aircraft structure inspections of fastener sites. Systematic studies were performed with varying SDH depth and size, and varying the ultrasonic probe frequency, focal depth, and probe height. Increased error was observed with the simulation of angled shear wave beams in the near-field. Even more significant, asymmetry in real probes and the inherent sensitivity of signals in the near-field to subtle test conditions were found to provide a greater challenge with achieving model agreement. To achieve quality model benchmark results for this problem, it is critical to carefully align the probe with the part geometry, to verify symmetry in probe response, and ideally to avoid using reference signals from the near-field response. Suggested reference signals for angled beam shear wave inspections include using the 'through hole' corner specular reflection signal and the 'full skip' signal off of the far wall from the side drilled hole.

  3. The use of patient experience survey data by out-of-hours primary care services: a qualitative interview study.

    PubMed

    Barry, Heather E; Campbell, John L; Asprey, Anthea; Richards, Suzanne H

    2016-11-01

    English National Quality Requirements mandate out-of-hours primary care services to routinely audit patient experience, but do not state how it should be done. We explored how providers collect patient feedback data and use it to inform service provision. We also explored staff views on the utility of out-of-hours questions from the English General Practice Patient Survey (GPPS). A qualitative study was conducted with 31 staff (comprising service managers, general practitioners and administrators) from 11 out-of-hours primary care providers in England, UK. Staff responsible for patient experience audits within their service were sampled and data collected via face-to-face semistructured interviews. Although most providers regularly audited their patients' experiences by using patient surveys, many participants expressed a strong preference for additional qualitative feedback. Staff provided examples of small changes to service delivery resulting from patient feedback, but service-wide changes were not instigated. Perceptions that patients lacked sufficient understanding of the urgent care system in which out-of-hours primary care services operate were common and a barrier to using feedback to enable change. Participants recognised the value of using patient experience feedback to benchmark services, but perceived weaknesses in the out-of-hours items from the GPPS led them to question the validity of using these data for benchmarking in its current form. The lack of clarity around how out-of-hours providers should audit patient experience hinders the utility of the National Quality Requirements. Although surveys were common, patient feedback data had only a limited role in service change. Data derived from the GPPS may be used to benchmark service providers, but refinement of the out-of-hours items is needed.

  4. International Standards for Genomes, Transcriptomes, and Metagenomes

    PubMed Central

    Mason, Christopher E.; Afshinnekoo, Ebrahim; Tighe, Scott; Wu, Shixiu; Levy, Shawn

    2017-01-01

    Challenges and biases in preparing, characterizing, and sequencing DNA and RNA can have significant impacts on research in genomics across all kingdoms of life, including experiments in single cells, RNA profiling, and metagenomics (across multiple genomes). Technical artifacts and contamination can arise at each point of sample manipulation, extraction, sequencing, and analysis. Thus, the measurement and benchmarking of these potential sources of error are of paramount importance as next-generation sequencing (NGS) projects become more global and ubiquitous. Fortunately, a variety of methods, standards, and technologies have recently emerged that improve measurements in genomics and sequencing, from the initial input material to the computational pipelines that process and annotate the data. Here we review current standards and their applications in genomics, including whole genomes, transcriptomes, mixed genomic samples (metagenomes), and the modified bases within each (epigenomes and epitranscriptomes). These standards, tools, and metrics are critical for quantifying the accuracy of NGS methods, which will be essential for robust approaches in clinical genomics and precision medicine. PMID:28337071

  5. Critical appraisal of the Vienna consensus: performance indicators for assisted reproductive technology laboratories.

    PubMed

    Lopez-Regalado, María Luisa; Martínez-Granados, Luis; González-Utor, Antonio; Ortiz, Nereyda; Iglesias, Miriam; Ardoy, Manuel; Castilla, Jose A

    2018-05-24

    The Vienna consensus, based on the recommendations of an expert panel, has identified 19 performance indicators for assisted reproductive technology (ART) laboratories. Two levels of reference values are established for these performance indicators: competence and benchmark. For over 10 years, the Spanish Embryology Association (ASEBIR) has participated in the definition and design of ART performance indicators, seeking to establish specific guidelines for ART laboratories to enhance quality, safety and patient welfare. Four years ago, ASEBIR took part in an initiative by AENOR, the Spanish Association for Standardization and Certification, to develop a national standard in this field (UNE 179007:2013, System of quality management for assisted reproduction laboratories), extending the former requirements, based on ISO 9001, to include performance indicators. Considering the experience acquired, we discuss various aspects of the Vienna consensus, consider certain discrepancies in performance indicators between the consensus and UNE 179007:2013, and analyse the definitions, methodology and reference values used. Copyright © 2018. Published by Elsevier Ltd.

  6. Defect measurement and analysis of JPL ground software: a case study

    NASA Technical Reports Server (NTRS)

    Powell, John D.; Spagnuolo, John N., Jr.

    2004-01-01

    Ground software systems at JPL must meet high assurance standards while remaining on schedule due to relatively immovable launch dates for spacecraft that will be controlled by such systems. Toward this end, the Software Quality Improvement (SQI) project's Measurement and Benchmarking (M&B) team is collecting and analyzing defect data of JPL ground system software projects to build software defect prediction models. The aim of these models is to improve predictability with regard to software quality activities. Predictive models will quantitatively define typical trends for JPL ground systems as well as Critical Discriminators (CDs) to provide explanations for atypical deviations from the norm at JPL. CDs are software characteristics that can be estimated or foreseen early in a software project's planning. Thus, these CDs will assist in planning for the predicted degree to which software quality activities for a project are likely to deviate from the norm for JPL ground systems, based on past experience across the lab.

  7. Plurality of opinion, scientific discourse and pseudoscience: an in depth analysis of the Séralini et al. study claiming that Roundup™ Ready corn or the herbicide Roundup™ cause cancer in rats.

    PubMed

    Arjó, Gemma; Portero, Manuel; Piñol, Carme; Viñas, Juan; Matias-Guiu, Xavier; Capell, Teresa; Bartholomaeus, Andrew; Parrott, Wayne; Christou, Paul

    2013-04-01

    A recent paper published in the journal Food and Chemical Toxicology presents the results of a long-term toxicity study related to a widely-used commercial herbicide (Roundup™) and a Roundup-tolerant genetically modified variety of maize, concluding that both the herbicide and the maize varieties are toxic. Here we discuss the many errors and inaccuracies in the published article resulting in highly misleading conclusions, whose publication in the scientific literature and in the wider media has caused damage to the credibility of science and researchers in the field. We and many others have criticized the study, and in particular the manner in which the experiments were planned, implemented, analyzed, interpreted and communicated. The study appeared to sweep aside all known benchmarks of scientific good practice and, more importantly, to ignore the minimal standards of scientific and ethical conduct in particular concerning the humane treatment of experimental animals.

  8. In-core flux sensor evaluations at the ATR critical facility

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Troy Unruh; Benjamin Chase; Joy Rempe

    2014-09-01

    Flux detector evaluations were completed as part of a joint Idaho State University (ISU) / Idaho National Laboratory (INL) / French Atomic Energy Commission (CEA) ATR National Scientific User Facility (ATR NSUF) project to compare the accuracy, response time, and long duration performance of several flux detectors. Special fixturing developed by INL allows real-time flux detectors to be inserted into various ATRC core positions and perform lobe power measurements, axial flux profile measurements, and detector cross-calibrations. Detectors initially evaluated in this program include CEA-developed miniature fission chambers; specialized self-powered neutron detectors (SPNDs) developed by the Argentinean National Energy Commission (CNEA); and specially developed commercial SPNDs from Argonne National Laboratory. As shown in this article, data obtained from this program provide important insights related to flux detector accuracy and resolution for subsequent ATR and CEA experiments, and flux data required for benchmarking models in the ATR V&V Upgrade Initiative.

  9. Quantifying risk and benchmarking performance in the adult intensive care unit.

    PubMed

    Higgins, Thomas L

    2007-01-01

    Morbidity, mortality, and length-of-stay outcomes in patients receiving critical care are difficult to interpret unless they are risk-stratified for diagnosis, presenting severity of illness, and other patient characteristics. Acuity adjustment systems for adults include the Acute Physiology And Chronic Health Evaluation (APACHE), the Mortality Probability Model (MPM), and the Simplified Acute Physiology Score (SAPS). All have recently been updated and recalibrated to reflect contemporary results. Specialized scores are also available for patient subpopulations where general acuity scores have drawbacks. Demand for outcomes data is likely to grow with pay-for-performance initiatives as well as for routine clinical, prognostic, administrative, and research applications. It is important for clinicians to understand how these scores are derived and how they are properly applied to quantify patient severity of illness and benchmark intensive care unit performance.

  10. Student Progress to Graduation in New York City High Schools: A Metric Designed by New Visions for Public Schools. Part I: Core Components

    ERIC Educational Resources Information Center

    Fairchild, Susan; Gunton, Brad; Donohue, Beverly; Berry, Carolyn; Genn, Ruth; Knevals, Jessica

    2011-01-01

    Students who achieve critical academic benchmarks such as high attendance rates, continuous levels of credit accumulation, and high grades have a greater likelihood of success throughout high school and beyond. However, keeping students on track toward meeting graduation requirements and quickly identifying students who are at risk of falling off…

  11. Student Technology Use in the Information-Seeking and Information-Gathering Process: A Critical Incident Approach for Benchmarking Performance

    ERIC Educational Resources Information Center

    Cordes, Sean

    2012-01-01

    This article is an exploratory study of student behavior using online tools to do project-based work for a library science course at a mid-sized Midwestern public university. The population was 22 net generation students aged 18-24, who were enrolled in an Introduction to Information Resources course. The study was designed to better understand…

  12. Family, Friend, and Neighbor Care: Strengthening a Critical Resource to Help Young Children Succeed. 2006 KIDS COUNT Essay

    ERIC Educational Resources Information Center

    Annie E. Casey Foundation, 2006

    2006-01-01

    KIDS COUNT, a project of the Annie E. Casey Foundation, is a national and state-by-state effort to track the status of children in the United States. By providing policymakers and citizens with benchmarks of child well-being, KIDS COUNT seeks to enrich local, state, and national discussions concerning ways to secure better futures for all…

  13. The SCALE Verified, Archived Library of Inputs and Data - VALID

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Marshall, William BJ J; Rearden, Bradley T

    The Verified, Archived Library of Inputs and Data (VALID) at ORNL contains high quality, independently reviewed models and results that improve confidence in analysis. VALID is developed and maintained according to a procedure of the SCALE quality assurance (QA) plan. This paper reviews the origins of the procedure and its intended purpose, the philosophy of the procedure, some highlights of its implementation, and the future of the procedure and associated VALID library. The original focus of the procedure was the generation of high-quality models that could be archived at ORNL and applied to many studies. The review process associated with model generation minimized the chances of errors in these archived models. Subsequently, the scope of the library and procedure was expanded to provide high quality, reviewed sensitivity data files for deployment through the International Handbook of Evaluated Criticality Safety Benchmark Experiments (IHECSBE). Sensitivity data files for approximately 400 such models are currently available. The VALID procedure and library continue fulfilling these multiple roles. The VALID procedure is based on the quality assurance principles of ISO 9001 and nuclear safety analysis. Some of these key concepts include: independent generation and review of information, generation and review by qualified individuals, use of appropriate references for design data and documentation, and retrievability of the models, results, and documentation associated with entries in the library. Some highlights of the detailed procedure are discussed to provide background on its implementation and to indicate limitations of data extracted from VALID for use by the broader community. Specifically, external users of data generated within VALID must take responsibility for ensuring that the files are used within the QA framework of their organization and that use is appropriate. The future plans for the VALID library include expansion to include additional experiments from the IHECSBE, to include experiments from areas beyond criticality safety, such as reactor physics and shielding, and to include application models. In the future, external SCALE users may also obtain qualification under the VALID procedure and be involved in expanding the library. The VALID library provides a pathway for the criticality safety community to leverage modeling and analysis expertise at ORNL.

  14. A novel hybrid meta-heuristic technique applied to the well-known benchmark optimization problems

    NASA Astrophysics Data System (ADS)

    Abtahi, Amir-Reza; Bijari, Afsane

    2017-03-01

    In this paper, a hybrid meta-heuristic algorithm based on the imperialist competitive algorithm (ICA), harmony search (HS), and simulated annealing (SA) is presented. The body of the proposed hybrid algorithm is based on ICA. The proposed hybrid algorithm inherits the advantages of the process of harmony creation in the HS algorithm to improve the exploitation phase of the ICA algorithm. In addition, the proposed hybrid algorithm uses SA to strike a balance between the exploration and exploitation phases. The proposed hybrid algorithm is compared with several meta-heuristic methods, including the genetic algorithm (GA), HS, and ICA, on several well-known benchmark instances. Comprehensive experiments and statistical analysis on standard benchmark functions confirm the superiority of the proposed method over the other algorithms. The efficacy of the proposed hybrid algorithm is promising and it can be used in several real-life engineering and management problems.
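    As a rough illustration of the kind of hybridization described (not the authors' algorithm), the sketch below embeds a harmony-search improvisation step inside a simulated-annealing acceptance rule on a standard benchmark objective; the ICA outer loop, the harmony-memory update, and all parameter values are omitted or assumed.

      # HS improvisation with SA acceptance (illustrative hybrid sketch).
      import numpy as np

      rng = np.random.default_rng(0)
      sphere = lambda x: float(np.sum(x**2))        # benchmark objective

      def improvise(memory, hmcr=0.9, par=0.3, bw=0.05, lo=-5.0, hi=5.0):
          # One harmony-search improvisation over the harmony memory.
          dim = memory.shape[1]
          new = np.empty(dim)
          for j in range(dim):
              if rng.random() < hmcr:               # reuse a memorized value
                  new[j] = memory[rng.integers(len(memory)), j]
                  if rng.random() < par:            # optional pitch adjustment
                      new[j] += bw * rng.uniform(-1.0, 1.0)
              else:                                 # otherwise sample anew
                  new[j] = rng.uniform(lo, hi)
          return np.clip(new, lo, hi)

      memory = rng.uniform(-5.0, 5.0, size=(10, 2))  # harmony memory
      current, temp = memory[0].copy(), 1.0
      for _ in range(200):
          cand = improvise(memory)
          delta = sphere(cand) - sphere(current)
          # Simulated-annealing acceptance of the improvised harmony.
          if delta < 0 or rng.random() < np.exp(-delta / temp):
              current = cand
          temp *= 0.98                               # geometric cooling
      print("final solution:", current, sphere(current))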

  15. A note on bound constraints handling for the IEEE CEC'05 benchmark function suite.

    PubMed

    Liao, Tianjun; Molina, Daniel; de Oca, Marco A Montes; Stützle, Thomas

    2014-01-01

    The benchmark functions and some of the algorithms proposed for the special session on real parameter optimization of the 2005 IEEE Congress on Evolutionary Computation (CEC'05) have played and still play an important role in the assessment of the state of the art in continuous optimization. In this article, we show that if bound constraints are not enforced for the final reported solutions, state-of-the-art algorithms produce infeasible best candidate solutions for the majority of functions of the IEEE CEC'05 benchmark function suite. This occurs even though the optima of the CEC'05 functions are within the specified bounds. This phenomenon has important implications on algorithm comparisons, and therefore on algorithm designs. This article's goal is to draw the attention of the community to the fact that some authors might have drawn wrong conclusions from experiments using the CEC'05 problems.
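    The remedy the article's observation points toward is straightforward to implement. Two common bound-handling rules (assumed here for illustration, not specifically endorsed by the article) are sketched below: clamping a candidate solution to the boundary, or reflecting it back into the feasible box; reporting unrepaired candidates is what yields the infeasible 'best' solutions the authors describe.

      # Two bound-handling rules for candidate solutions in [lo, hi].
      import numpy as np

      def clamp(x, lo, hi):
          # Project each out-of-bounds coordinate onto the nearest boundary.
          return np.minimum(np.maximum(x, lo), hi)

      def reflect(x, lo, hi):
          # Mirror out-of-bounds coordinates back into the feasible box.
          span = hi - lo
          y = np.mod(x - lo, 2.0 * span)        # fold into a 2*span period
          return lo + np.where(y > span, 2.0 * span - y, y)

      x = np.array([-6.2, 3.1, 7.9])
      print(clamp(x, -5.0, 5.0))                # [-5.   3.1  5. ]
      print(reflect(x, -5.0, 5.0))              # [-3.8  3.1  2.1]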

  16. Piping benchmark problems. Volume 1. Dynamic analysis uniform support motion response spectrum method

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Bezler, P.; Hartzman, M.; Reich, M.

    1980-08-01

    A set of benchmark problems and solutions has been developed for verifying the adequacy of computer programs used for dynamic analysis and design of nuclear piping systems by the Response Spectrum Method. The problems range from simple to complex configurations which are assumed to experience linear elastic behavior. The dynamic loading is represented by uniform support motion, assumed to be induced by seismic excitation in three spatial directions. The solutions consist of frequencies, participation factors, nodal displacement components, and internal force and moment components. Solutions to associated anchor point motion static problems are not included.

  17. Turbofan forced mixer-nozzle internal flowfield. Volume 1: A benchmark experimental study

    NASA Technical Reports Server (NTRS)

    Paterson, R. W.

    1982-01-01

    An experimental investigation of the flow field within a model turbofan forced mixer nozzle is described. Velocity and thermodynamic state variable data are provided for use in assessing the accuracy, and assisting the further development, of computational procedures for predicting the flow field within mixer nozzles. The velocity and temperature data suggested that the nozzle mixing process was dominated by circulations (secondary flows) with a length scale on the order of the lobe dimensions, which were associated with strong radial velocities observed near the lobe exit plane. The 'benchmark' model mixer experiment conducted for code assessment purposes is discussed.

  18. Multitasking and microtasking experience on the NAS Cray-2 and ACF Cray X-MP

    NASA Technical Reports Server (NTRS)

    Raiszadeh, Farhad

    1987-01-01

    The fast Fourier transform (FFT) kernel of the NAS benchmark program has been utilized to experiment with the multitasking library on the Cray-2 and Cray X-MP/48, and microtasking directives on the Cray X-MP. Some performance figures are shown, and the state of multitasking software is described.

  19. The Sophomore RA Experience: An Examination of Job Satisfaction, Turnover Intentions, and RA Self-Efficacy

    ERIC Educational Resources Information Center

    Brandt Brecheisen, Shannon M.

    2014-01-01

    The purpose of this national, quantitative study was to (1) provide psychometrics for the ACUHO-I/EBI RA Survey, a joint project between Educational Benchmarking, Inc. (EBI) and The Association of College and University Housing Officers--International (ACUHO-I), and (2) explore the sophomore resident assistant (RA) experience. This study…

  20. PHITS simulations of the Matroshka experiment

    NASA Astrophysics Data System (ADS)

    Gustafsson, Katarina; Sihver, Lembit; Mancusi, Davide; Sato, Tatsuhiko

    In order to design safer space exploration, radiation exposure estimations are necessary; the radiation environment in space is very different from the one on Earth and is harmful to humans and to electronic equipment. The threat originates from two sources: Galactic Cosmic Rays and Solar Particle Events. It is important to understand what happens when these particles strike matter such as space vehicle walls, human organs, and electronics. We are therefore developing a tool able to estimate the radiation exposure of both humans and electronics. The tool will be based on PHITS, the Particle and Heavy-Ion Transport code System, a three-dimensional Monte Carlo code which can calculate interactions and transport of particles and heavy ions in matter. PHITS is developed by a collaboration between RIST (Research Organization for Information Science & Technology), JAEA (Japan Atomic Energy Agency), and KEK (High Energy Accelerator Research Organization) in Japan, and Chalmers University of Technology in Sweden. A method for benchmarking and developing the code is to simulate experiments performed in space or on Earth. We have carried out simulations of the Matroshka experiment, which focuses on determining the radiation load on astronauts inside and outside the International Space Station by using a torso of a tissue-equivalent human phantom, filled with active and passive detectors located in the positions of critical tissues and organs. We will present the status and results of our simulations.

  1. Solute transport in streams of varying morphology inferred from a high resolution network of potentiometric wireless chloride sensors

    NASA Astrophysics Data System (ADS)

    Klaus, Julian; Smettem, Keith; Pfister, Laurent; Harris, Nick

    2017-04-01

    There is ongoing interest in understanding and quantifying the travel times and dispersion of solutes moving through stream environments, including the hyporheic zone and/or in-channel dead zones where retention affects biogeochemical cycling processes that are critical to stream ecosystem functioning. Modelling these transport and retention processes requires acquisition of tracer data from injection experiments where the concentrations are recorded downstream. Such experiments are often time consuming and costly, which may be the reason many modelling studies of chemical transport have tended to rely on relatively few well documented field case studies. This leads to the need for fast and cheap distributed sensor arrays that respond instantly and record chemical transport at points of interest on timescales of seconds at various locations in the stream environment. To tackle this challenge we present data from several tracer experiments carried out in the Attert river catchment in Luxembourg employing low-cost (on the order of a euro per sensor) potentiometric chloride sensors in a distributed array. We injected NaCl under various baseflow conditions in streams of different morphologies and observed solute transport at various distances and locations. These data are used to benchmark the sensors against data obtained from more expensive electrical conductivity meters. Furthermore, the data allowed spatial resolution of hydrodynamic mixing processes and identification of chemical 'dead zones' in the study reaches.
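    Once a breakthrough curve has been recorded by such a sensor, travel time and dispersion follow from standard temporal-moment analysis for a slug injection; a minimal sketch (synthetic Gaussian breakthrough, assumed reach length, Fickian advection-dispersion interpretation) is given below.

      # Temporal-moment sketch for a slug-injection breakthrough curve.
      import numpy as np

      t = np.linspace(0.0, 3600.0, 721)                 # s, 5 s sampling
      c = np.exp(-(t - 900.0) ** 2 / (2.0 * 180.0**2))  # synthetic curve
      dt = t[1] - t[0]
      m0 = c.sum() * dt                                 # zeroth moment
      tbar = (t * c).sum() * dt / m0                    # mean travel time
      var = ((t - tbar) ** 2 * c).sum() * dt / m0       # temporal variance
      reach = 150.0                                     # m (assumed distance)
      u = reach / tbar                                  # mean velocity
      D = var * u**3 / (2.0 * reach)                    # dispersion estimate
      print(f"u = {u:.3f} m/s, D = {D:.3f} m^2/s")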

  2. Assessing model sensitivity and uncertainty across multiple Free-Air CO2 Enrichment experiments.

    NASA Astrophysics Data System (ADS)

    Cowdery, E.; Dietze, M.

    2015-12-01

    As atmospheric carbon dioxide levels continue to increase, it is critical that terrestrial ecosystem models can accurately predict ecological responses to the changing environment. Current predictions of net primary productivity (NPP) in response to elevated atmospheric CO2 concentrations are highly variable and contain a considerable amount of uncertainty. It is necessary that we understand which factors are driving this uncertainty. The Free-Air CO2 Enrichment (FACE) experiments have equipped us with a rich data source that can be used to calibrate and validate these model predictions. To identify and evaluate the assumptions causing inter-model differences we performed model sensitivity and uncertainty analysis across ambient and elevated CO2 treatments using the Data Assimilation Linked Ecosystem Carbon (DALEC) model and the Ecosystem Demography Model (ED2), two process-based models ranging from low to high complexity, respectively. These modeled process responses were compared to experimental data from the Kennedy Space Center Open Top Chamber Experiment, the Nevada Desert Free Air CO2 Enrichment Facility, the Rhinelander FACE experiment, the Wyoming Prairie Heating and CO2 Enrichment Experiment, the Duke Forest FACE experiment, and the Oak Ridge Experiment on CO2 Enrichment. By leveraging data access proxy and data tilling services provided by the BrownDog data curation project alongside analysis modules available in the Predictive Ecosystem Analyzer (PEcAn), we produced automated, repeatable benchmarking workflows that are generalized to incorporate different sites and ecological models. Combining the observed patterns of uncertainty between the two models with results of the recent FACE model-data synthesis project (FACE-MDS) can help identify which processes need further study and additional data constraints. These findings can be used to inform future experimental design and in turn provide an informative starting point for data assimilation.

  3. Benchmarking the x-ray phase contrast imaging for ICF DT ice characterization using roughened surrogates

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Dewald, E; Kozioziemski, B; Moody, J

    2008-06-26

    We use x-ray phase contrast imaging to characterize the inner surface roughness of DT ice layers in capsules planned for future ignition experiments. It is therefore important to quantify how well the x-ray data correlates with the actual ice roughness. We benchmarked the accuracy of our system using surrogates with fabricated roughness characterized with high precision standard techniques. Cylindrical artifacts with azimuthally uniform sinusoidal perturbations with 100 um period and 1 um amplitude demonstrated 0.02 um accuracy limited by the resolution of the imager and the source size of our phase contrast system. Spherical surrogates with random roughness close to that required for the DT ice for a successful ignition experiment were used to correlate the actual surface roughness to that obtained from the x-ray measurements. When comparing average power spectra of individual measurements, the accuracy mode number limits of the x-ray phase contrast system benchmarked against surface characterization performed by Atomic Force Microscopy are 60 and 90 for surrogates smoother and rougher than the required roughness for the ice. These agreement mode number limits are >100 when comparing matching individual measurements. We will discuss the implications for interpreting DT ice roughness data derived from phase-contrast x-ray imaging.
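    The mode-number comparison described here amounts to comparing roughness power spectra from two instruments up to an agreement limit; a minimal sketch of that computation on a synthetic circumferential profile (illustrative amplitudes and cutoff, not the experiment's data) follows.

      # Roughness power-spectrum sketch (synthetic profile, assumed cutoff).
      import numpy as np

      n_pts = 1024
      theta = np.linspace(0.0, 2.0 * np.pi, n_pts, endpoint=False)
      rng = np.random.default_rng(1)
      amps = rng.normal(0.0, 1.0, 50) / np.arange(1, 51)  # reddened spectrum
      profile = sum(a * np.cos(k * theta + rng.uniform(0.0, 2.0 * np.pi))
                    for k, a in enumerate(amps, start=1))

      coeffs = np.fft.rfft(profile) / n_pts
      power = 2.0 * np.abs(coeffs[1:]) ** 2       # per-mode power, mode >= 1
      cutoff = 60                                 # e.g., an agreement limit
      print("rms roughness up to cutoff:", np.sqrt(power[:cutoff].sum()))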

  4. Performance Evaluation and Benchmarking of Next Intelligent Systems

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    del Pobil, Angel; Madhavan, Raj; Bonsignorio, Fabio

    Performance Evaluation and Benchmarking of Intelligent Systems presents research dedicated to the subject of performance evaluation and benchmarking of intelligent systems by drawing from the experiences and insights of leading experts gained both through theoretical development and practical implementation of intelligent systems in a variety of diverse application domains. This contributed volume offers a detailed and coherent picture of state-of-the-art, recent developments, and further research areas in intelligent systems. The chapters cover a broad range of applications, such as assistive robotics, planetary surveying, urban search and rescue, and line tracking for automotive assembly. Subsystems or components described in this book include human-robot interaction, multi-robot coordination, communications, perception, and mapping. Chapters are also devoted to simulation support and open source software for cognitive platforms, providing examples of the type of enabling underlying technologies that can help intelligent systems to propagate and increase in capabilities. Performance Evaluation and Benchmarking of Intelligent Systems serves as a professional reference for researchers and practitioners in the field. This book is also applicable to advanced courses for graduate level students and robotics professionals in a wide range of engineering and related disciplines including computer science, automotive, healthcare, manufacturing, and service robotics.

  5. Establishing objective benchmarks in robotic virtual reality simulation at the level of a competent surgeon using the RobotiX Mentor simulator.

    PubMed

    Watkinson, William; Raison, Nicholas; Abe, Takashige; Harrison, Patrick; Khan, Shamim; Van der Poel, Henk; Dasgupta, Prokar; Ahmed, Kamran

    2018-05-01

    To establish objective benchmarks at the level of a competent robotic surgeon across different exercises and metrics for the RobotiX Mentor virtual reality (VR) simulator suitable for use within a robotic surgical training curriculum. This retrospective observational study analysed results from multiple data sources, all of which used the RobotiX Mentor VR simulator. 123 participants with varying experience from novice to expert completed the exercises. Competency was established as the 25th centile of the mean advanced intermediate score. Three basic skill exercises and two advanced skill exercises were used. King's College London. 84 novices, 26 beginner intermediates, 9 advanced intermediates, and 4 experts were included in this retrospective observational study. Objective benchmarks derived from the 25th centile of the mean scores of the advanced intermediates provided suitably challenging yet achievable targets for training surgeons. The disparity in scores was greatest for the advanced exercises. Novice surgeons are able to achieve the benchmarks in the majority of metrics across all exercises. We have successfully created this proof-of-concept study, which requires validation in a larger cohort. Objective benchmarks obtained from the 25th centile of the mean scores of advanced intermediates provide clinically relevant benchmarks, at the standard of a competent robotic surgeon, that are challenging yet attainable. These can be used within a VR training curriculum, allowing participants to track and monitor their progress in a structured and progressional manner through five exercises, providing clearly defined targets and ensuring that a universal training standard has been achieved across training surgeons.

  6. Collegiate Aviation Research and Education Solutions to Critical Safety Issues

    NASA Technical Reports Server (NTRS)

    Bowen, Brent (Editor)

    2002-01-01

    This Conference Proceedings is a collection of 6 abstracts and 3 papers presented April 19-20, 2001 in Denver, CO. The conference focus was "Best Practices and Benchmarking in Collegiate and Industry Programs". Topics covered include: satellite-based aviation navigation; weather safety training; human-behavior and aircraft maintenance issues; disaster preparedness; the collegiate aviation emergency response checklist; aviation safety research; and regulatory status of maintenance resource management.

  7. AWARE: Adaptive Software Monitoring and Dynamic Reconfiguration for Critical Infrastructure Protection

    DTIC Science & Technology

    2015-04-29

    in which we applied these adaptation patterns to an adaptive news web server intended to tolerate extremely heavy, unexpected loads. To address...collection of existing models used as benchmarks for OO-based refactoring and an existing web-based repository called REMODD to provide users with model...invariant properties. Specifically, we developed Avida-MDE (based on the Avida digital evolution platform) to support the automatic generation of software

  8. Biased ART: a neural architecture that shifts attention toward previously disregarded features following an incorrect prediction.

    PubMed

    Carpenter, Gail A; Gaddam, Sai Chaitanya

    2010-04-01

    Memories in Adaptive Resonance Theory (ART) networks are based on matched patterns that focus attention on those portions of bottom-up inputs that match active top-down expectations. While this learning strategy has proved successful for both brain models and applications, computational examples show that attention to early critical features may later distort memory representations during online fast learning. For supervised learning, biased ARTMAP (bARTMAP) solves the problem of over-emphasis on early critical features by directing attention away from previously attended features after the system makes a predictive error. Small-scale, hand-computed analog and binary examples illustrate key model dynamics. Two-dimensional simulation examples demonstrate the evolution of bARTMAP memories as they are learned online. Benchmark simulations show that featural biasing also improves performance on large-scale examples. One example, which predicts movie genres and is based, in part, on the Netflix Prize database, was developed for this project. Both first principles and consistent performance improvements on all simulation studies suggest that featural biasing should be incorporated by default in all ARTMAP systems. Benchmark datasets and bARTMAP code are available from the CNS Technology Lab Website: http://techlab.bu.edu/bART/. Copyright 2009 Elsevier Ltd. All rights reserved.

  9. Validating the performance of correlated fission multiplicity implementation in radiation transport codes with subcritical neutron multiplication benchmark experiments

    DOE PAGES

    Arthur, Jennifer; Bahran, Rian; Hutchinson, Jesson; ...

    2018-06-14

    Historically, radiation transport codes have treated fission emissions as uncorrelated. In reality, the particles emitted by both spontaneous and induced fissions are correlated in time, energy, angle, and multiplicity. This work validates the performance of various current Monte Carlo codes that take into account the underlying correlated physics of fission neutrons, specifically neutron multiplicity distributions. The performance of 4 Monte Carlo codes - MCNP®6.2, MCNP®6.2/FREYA, MCNP®6.2/CGMF, and PoliMi - was assessed using neutron multiplicity benchmark experiments. In addition, MCNP®6.2 simulations were run using JEFF-3.2 and JENDL-4.0, rather than ENDF/B-VII.1, data for 239Pu and 240Pu. The sensitive benchmark parameters that in this work represent the performance of each correlated fission multiplicity Monte Carlo code include the singles rate, the doubles rate, leakage multiplication, and Feynman histograms. Although it is difficult to determine which radiation transport code shows the best overall performance in simulating subcritical neutron multiplication inference benchmark measurements, it is clear that correlations exist between the underlying nuclear data utilized by (or generated by) the various codes and the correlated neutron observables of interest. This could prove useful in nuclear data validation and evaluation applications, in which a particular moment of the neutron multiplicity distribution is of more interest than the other moments. It is also quite clear that, because transport is handled by MCNP®6.2 in 3 of the 4 codes, with the 4th code (PoliMi) being based on an older version of MCNP®, the differences in correlated neutron observables of interest are most likely due to the treatment of fission event generation in each of the different codes, as opposed to the radiation transport.
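
    One of the observables listed above, the Feynman histogram, underlies the Feynman-Y statistic: detections are tallied in contiguous time gates of width τ, and the excess variance-to-mean ratio of the resulting count distribution measures correlated multiplicity (zero for a purely Poisson source). A minimal sketch under assumed inputs; the array of detection times and the gate width are illustrative, not from the benchmark:

        import numpy as np

        def feynman_y(times, gate_width):
            """Excess variance-to-mean ratio (Feynman-Y) of counts in
            contiguous, non-overlapping time gates of width gate_width."""
            n_gates = int((times.max() - times.min()) / gate_width)
            edges = times.min() + gate_width * np.arange(n_gates + 1)
            counts, _ = np.histogram(times, bins=edges)
            # For an uncorrelated (Poisson) source, variance == mean and Y == 0;
            # correlated fission chains push Y above zero.
            return counts.var(ddof=1) / counts.mean() - 1.0

        # Sanity check: purely Poisson arrivals should give Y near 0.
        rng = np.random.default_rng(0)
        poisson_times = np.cumsum(rng.exponential(1e-4, size=200_000))
        print(feynman_y(poisson_times, gate_width=1e-3))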

  10. Towards evidence-based computational statistics: lessons from clinical research on the role and design of real-data benchmark studies.

    PubMed

    Boulesteix, Anne-Laure; Wilson, Rory; Hapfelmeier, Alexander

    2017-09-09

    The goal of medical research is to develop interventions that are in some sense superior, with respect to patient outcome, to interventions currently in use. Similarly, the goal of research in methodological computational statistics is to develop data analysis tools that are themselves superior to the existing tools. The methodology of the evaluation of medical interventions continues to be discussed extensively in the literature and it is now well accepted that medicine should be at least partly "evidence-based". Although we statisticians are convinced of the importance of unbiased, well-thought-out study designs and evidence-based approaches in the context of clinical research, we tend to ignore these principles when designing our own studies for evaluating statistical methods in the context of our methodological research. In this paper, we draw an analogy between clinical trials and real-data-based benchmarking experiments in methodological statistical science, with datasets playing the role of patients and methods playing the role of medical interventions. Through this analogy, we suggest directions for improvement in the design and interpretation of studies which use real data to evaluate statistical methods, in particular with respect to dataset inclusion criteria and the reduction of various forms of bias. More generally, we discuss the concept of "evidence-based" statistical research, its limitations and its impact on the design and interpretation of real-data-based benchmark experiments. We suggest that benchmark studies, a method of assessing statistical methods using real-world datasets, might benefit from adopting (some) concepts from evidence-based medicine towards the goal of more evidence-based statistical research.

  11. Validating the performance of correlated fission multiplicity implementation in radiation transport codes with subcritical neutron multiplication benchmark experiments

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Arthur, Jennifer; Bahran, Rian; Hutchinson, Jesson

    Historically, radiation transport codes have treated fission emissions as uncorrelated. In reality, the particles emitted by both spontaneous and induced fissions are correlated in time, energy, angle, and multiplicity. This work validates the performance of various current Monte Carlo codes that take into account the underlying correlated physics of fission neutrons, specifically neutron multiplicity distributions. The performance of 4 Monte Carlo codes - MCNP®6.2, MCNP®6.2/FREYA, MCNP®6.2/CGMF, and PoliMi - was assessed using neutron multiplicity benchmark experiments. In addition, MCNP®6.2 simulations were run using JEFF-3.2 and JENDL-4.0, rather than ENDF/B-VII.1, data for 239Pu and 240Pu. The sensitive benchmark parameters that in this work represent the performance of each correlated fission multiplicity Monte Carlo code include the singles rate, the doubles rate, leakage multiplication, and Feynman histograms. Although it is difficult to determine which radiation transport code shows the best overall performance in simulating subcritical neutron multiplication inference benchmark measurements, it is clear that correlations exist between the underlying nuclear data utilized by (or generated by) the various codes and the correlated neutron observables of interest. This could prove useful in nuclear data validation and evaluation applications, in which a particular moment of the neutron multiplicity distribution is of more interest than the other moments. It is also quite clear that, because transport is handled by MCNP®6.2 in 3 of the 4 codes, with the 4th code (PoliMi) being based on an older version of MCNP®, the differences in correlated neutron observables of interest are most likely due to the treatment of fission event generation in each of the different codes, as opposed to the radiation transport.

  12. Food Recognition: A New Dataset, Experiments, and Results.

    PubMed

    Ciocca, Gianluigi; Napoletano, Paolo; Schettini, Raimondo

    2017-05-01

    We propose a new dataset for the evaluation of food recognition algorithms that can be used in dietary monitoring applications. Each image depicts a real canteen tray with dishes and foods arranged in different ways. Each tray contains multiple instances of food classes. The dataset contains 1027 canteen trays for a total of 3616 food instances belonging to 73 food classes. The food on the tray images has been manually segmented using carefully drawn polygonal boundaries. We have benchmarked the dataset by designing an automatic tray analysis pipeline that takes a tray image as input, finds the regions of interest, and predicts for each region the corresponding food class. We experimented with three different classification strategies, also evaluating several visual descriptors. We achieve about 79% food and tray recognition accuracy using convolutional-neural-network-based features. The dataset, as well as the benchmark framework, is available to the research community.

  13. All Entangled States can Demonstrate Nonclassical Teleportation.

    PubMed

    Cavalcanti, Daniel; Skrzypczyk, Paul; Šupić, Ivan

    2017-09-15

    Quantum teleportation, the process by which Alice can transfer an unknown quantum state to Bob by using preshared entanglement and classical communication, is one of the cornerstones of quantum information. The standard benchmark for certifying quantum teleportation consists in surpassing the maximum average fidelity between the teleported and the target states that can be achieved classically. According to this figure of merit, not all entangled states are useful for teleportation. Here we propose a new benchmark that uses the full information available in a teleportation experiment and prove that all entangled states can implement a quantum channel which cannot be reproduced classically. We introduce the idea of nonclassical teleportation witness to certify if a teleportation experiment is genuinely quantum and discuss how to quantify this phenomenon. Our work provides new techniques for studying teleportation that can be immediately applied to certify the quality of quantum technologies.
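
    For context, the classical fidelity benchmark referred to above is usually quoted for qubits: averaging the overlap between a uniformly random input state and the output state, the best strategy without entanglement is bounded, while ideal teleportation reaches unit fidelity. In standard notation (this is the well-known textbook bound, not a result derived in the paper):

        \bar{F} = \int d\psi \,\langle\psi|\,\rho_{\mathrm{out}}^{(\psi)}\,|\psi\rangle,
        \qquad
        \bar{F}^{\,\mathrm{max}}_{\mathrm{classical}} = \tfrac{2}{3}\ \ (\text{qubits}),
        \qquad
        \bar{F}^{\,\mathrm{ideal}}_{\mathrm{teleportation}} = 1.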

  14. CFD validation experiments for hypersonic flows

    NASA Technical Reports Server (NTRS)

    Marvin, Joseph G.

    1992-01-01

    A roadmap for CFD code validation is introduced. The elements of the roadmap are consistent with air-breathing vehicle design requirements and related to the important flow path components: forebody, inlet, combustor, and nozzle. Building-block and benchmark validation experiments are identified along with their test conditions and measurements. Based on evaluation criteria, recommendations for an initial CFD validation database are given, and gaps are identified where future experiments could provide new validation data.

  15. Critical radiation fluxes and luminosities of black holes and relativistic stars

    NASA Technical Reports Server (NTRS)

    Lamb, Frederick K.; Miller, M. Coleman

    1995-01-01

    The critical luminosity at which the outward force of radiation balances the inward force of gravity plays an important role in many astrophysical systems. We present expressions for the radiation force on particles with arbitrary cross sections and analyze the radiation field produced by radiating matter, such as a disk, ring, boundary layer, or stellar surface, that rotates slowly around a slowly rotating gravitating mass. We then use these results to investigate the critical radiation flux and, where possible, the critical luminosity of such a system in general relativity. We demonstrate that if the radiation source is axisymmetric and emission is back-front symmetric with respect to the local direction of motion of the radiating matter, as seen in the comoving frame, then the radial component of the radiation flux and the diagonal components of the radiation stress-energy tensor outside the source are the same, to first order in the rotation rates, as they would be if the radiation source and gravitating mass were not rotating. We argue that the critical radiation flux for matter at rest in the locally nonrotating frame is often satisfactory as an astrophysical benchmark flux and show that if this benchmark is adopted, many of the complications potentially introduced by rotation of the radiation source and the gravitating mass are avoided. We show that if the radiation field in the absence of rotation would be spherically symmetric and the opacity is independent of frequency and direction, one can define a critical luminosity for the system that is independent of the spectrum and angular size of the radiation source and is unaffected by rotation of the source and mass and orbital motion of the matter, to first order. Finally, we analyze the conditions under which the maximum possible luminosity of a star or black hole powered by steady spherically symmetric radial accretion is the same in general relativity as in the Newtonian limit.
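
    In the Newtonian limit mentioned at the end of the abstract, the critical luminosity for fully ionized hydrogen is the familiar Eddington luminosity, obtained by balancing the outward radiation force on an electron (Thomson cross section σ_T) against gravity acting on an electron-proton pair:

        \frac{L_{\mathrm{crit}}\,\sigma_{\mathrm{T}}}{4\pi r^{2} c} = \frac{G M m_{p}}{r^{2}}
        \quad\Longrightarrow\quad
        L_{\mathrm{Edd}} = \frac{4\pi G M m_{p} c}{\sigma_{\mathrm{T}}}
        \approx 1.3\times10^{38}\,\Big(\frac{M}{M_{\odot}}\Big)\ \mathrm{erg\,s^{-1}}.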

  16. Some experiences in aircraft aeroelastic design using Preliminary Aeroelastic Design of Structures (PAD)

    NASA Technical Reports Server (NTRS)

    Radovcich, N. A.

    1984-01-01

    The design experience associated with a benchmark aeroelastic design of an out-of-production transport aircraft is discussed. Current work being performed on a high-aspect-ratio wing design is reported. The Preliminary Aeroelastic Design of Structures (PADS) system is briefly summarized and some operational aspects of generating the design in an automated aeroelastic design environment are discussed.

  17. Robust performance of multiple tasks by a mobile robot

    NASA Technical Reports Server (NTRS)

    Beckerman, Martin; Barnett, Deanna L.; Dickens, Mike; Weisbin, Charles R.

    1989-01-01

    While there have been many successful mobile robot experiments, only a few papers have addressed issues pertaining to the range of applicability, or robustness, of robotic systems. The purpose of this paper is to report results of a series of benchmark experiments done to determine and quantify the robustness of an integrated hardware and software system of a mobile robot.

  18. What can one learn from experiments about the elusive transition state?

    PubMed Central

    Chang, Iksoo; Cieplak, Marek; Banavar, Jayanth R.; Maritan, Amos

    2004-01-01

    We present the results of an exact analysis of a model energy landscape of a protein to clarify the idea of the transition state and the physical meaning of the φ values determined in protein engineering experiments. We benchmark our findings to various theoretical approaches proposed in the literature for the identification and characterization of the transition state. PMID:15295118

  19. Present Status and Extensions of the Monte Carlo Performance Benchmark

    NASA Astrophysics Data System (ADS)

    Hoogenboom, J. Eduard; Petrovic, Bojan; Martin, William R.

    2014-06-01

    The NEA Monte Carlo Performance benchmark was started in 2011 with the aim of monitoring, over the years, the ability to perform a full-size Monte Carlo reactor core calculation with detailed power production for each fuel pin, including its axial distribution. This paper gives an overview of the contributed results thus far. It shows that reaching a statistical accuracy of 1% for most of the small fuel zones requires about 100 billion neutron histories. The efficiency of parallel execution of Monte Carlo codes on a large number of processor cores shows clear limitations for computer clusters with common-type computer nodes. However, on true supercomputers the speedup of parallel calculations continues to increase up to very large numbers of processor cores. More experience is needed with calculations on true supercomputers using large numbers of processors in order to predict whether the requested calculations can be done in a short time. As the specifications of the reactor geometry for this benchmark are well suited for further investigations of full-core Monte Carlo calculations, and a need is felt for testing issues other than computational performance, proposals are presented for extending the benchmark to a suite of benchmark problems: evaluating fission source convergence for a system with a high dominance ratio, coupling with thermal-hydraulics calculations to evaluate the use of different temperatures and coolant densities, and studying the correctness and effectiveness of burnup calculations. Moreover, other contemporary proposals for a full-core calculation with realistic geometry and material composition are discussed.
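
    The quoted figure of roughly 100 billion histories follows from elementary Monte Carlo counting statistics: the relative error of a tally falls as one over the square root of the number of scores it receives. A back-of-envelope sketch; the core layout numbers are illustrative assumptions, not the benchmark specification:

        # Monte Carlo tally errors scale as 1/sqrt(n_scores), so 1% relative
        # error per zone needs about (1/0.01)**2 = 1e4 scores in that zone.
        target_rel_err = 0.01
        scores_per_zone = target_rel_err ** -2           # ~1e4

        # Illustrative full-core size (assumed, not the benchmark spec):
        # ~250 assemblies x 264 pins x 100 axial zones.
        n_zones = 250 * 264 * 100                        # ~6.6e6 zones

        # With on the order of one score per history spread over the core,
        # the total history count lands at the 1e11 (100 billion) scale.
        total_histories = scores_per_zone * n_zones
        print(f"{total_histories:.1e}")                  # ~6.6e10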

  20. A new numerical benchmark for variably saturated variable-density flow and transport in porous media

    NASA Astrophysics Data System (ADS)

    Guevara, Carlos; Graf, Thomas

    2016-04-01

    In subsurface hydrological systems, spatial and temporal variations in solute concentration and/or temperature may affect fluid density and viscosity. These variations can lead to potentially unstable situations, in which a dense fluid overlies a less dense fluid. Such situations can produce instabilities that appear as dense plume fingers migrating downwards, counteracted by vertical upwards flow of freshwater (Simmons et al., Transp. Porous Media, 2002). As a result of unstable variable-density flow, solute transport rates are increased over large distances and times as compared to constant-density flow. The numerical simulation of variable-density flow in saturated and unsaturated media requires corresponding benchmark problems against which a computer model is validated (Diersch and Kolditz, Adv. Water Resour., 2002). Recorded data from a laboratory-scale experiment of variable-density flow and solute transport in saturated and unsaturated porous media (Simmons et al., Transp. Porous Media, 2002) are used to define a new numerical benchmark. The HydroGeoSphere code (Therrien et al., 2004), coupled with PEST (www.pesthomepage.org), is used to obtain an optimized parameter set capable of adequately representing the data set of Simmons et al. (2002). Fingering in the numerical model is triggered using random hydraulic conductivity fields. Due to the inherent randomness, a large number of simulations were conducted in this study. The optimized benchmark model adequately predicts the plume behavior and the fate of solutes. This benchmark is useful for model verification of variable-density flow problems in saturated and/or unsaturated media.
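
    Triggering fingers with random hydraulic conductivity fields, as done here, is commonly implemented by perturbing a homogeneous conductivity with lognormal noise, so that ln(K) is normally distributed. A minimal sketch of an uncorrelated lognormal field; the grid size, mean conductivity, and variance are illustrative assumptions, not the calibrated benchmark values:

        import numpy as np

        rng = np.random.default_rng(42)

        # Illustrative grid and parameters (not the calibrated benchmark values).
        nx, nz = 200, 100
        k_geometric_mean = 1e-4      # m/s, geometric-mean hydraulic conductivity
        sigma_ln_k = 0.5             # standard deviation of ln(K)

        # Uncorrelated lognormal perturbation: ln(K) ~ N(ln(K_g), sigma^2).
        ln_k = np.log(k_geometric_mean) + sigma_ln_k * rng.standard_normal((nz, nx))
        K = np.exp(ln_k)

        # Each realization seeds a different fingering pattern, which is why
        # the study above runs a large number of simulations.
        print(K.mean(), K.min(), K.max())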

  1. Assessing the Impact of Telemedicine on Nursing Care in Intensive Care Units.

    PubMed

    Kleinpell, Ruth; Barden, Connie; Rincon, Teresa; McCarthy, Mary; Zapatochny Rufo, Rebecca J

    2016-01-01

    Information on the impact of tele-intensive care on nursing and priority areas of nursing care is limited. To conduct a national benchmarking survey of nurses working in intensive care telemedicine facilities in the United States. In a 2-phased study, an online survey was used to assess nurses' perceptions of intensive care telemedicine, and a modified 2-round Delphi study was used to identify priority areas of nursing. In phase 1, most of the 1213 respondents agreed or strongly agreed that using tele-intensive care enables them to accomplish tasks more quickly (63%), improves collaboration (65.9%), improves job performance (63.6%) and communication (60.4%), is useful in nursing assessments (60%), and improves care by providing more time for patient care (45.6%). Benefits of tele-intensive care included ability to detect trends in vital signs, detect unstable physiological status, provide medical management, and enhance patient safety. Barriers included technical problems (audio and video), interruptions in care, perceptions of telemedicine as an interference, and attitudes of staff. In phase 2, 60 nurses ranked 15 priority areas of care, including critical thinking skills, intensive care experience, skillful communication, mutual respect, and management of emergency patient care. The findings can be used to further inform the development of competencies for tele-intensive care nursing, match the tele-intensive care nursing practice guidelines of the American Association of Critical-Care Nurses, and highlight concepts related to the association's standards for establishing and sustaining healthy work environments. ©2016 American Association of Critical-Care Nurses.

  2. A Mind of Their Own: Using Inquiry-based Teaching to Build Critical Thinking Skills and Intellectual Engagement in an Undergraduate Neuroanatomy Course

    PubMed Central

    Greenwald, Ralf R.; Quitadamo, Ian J.

    2014-01-01

    A changing undergraduate demographic and the need to help students develop advanced critical thinking skills in neuroanatomy courses has prompted many faculty to consider new teaching methods including clinical case studies. This study compared primarily conventional and inquiry-based clinical case (IBCC) teaching methods to determine which would produce greater gains in critical thinking and content knowledge. Results showed students in the conventional neuroanatomy course gained less than 3 national percentile ranks while IBCC students gained over 7.5 within one academic term using the valid and reliable California Critical Thinking Skills Test. In addition to 2.5 times greater gains in critical thinking, IBCC teaching methods also produced 12% greater final exam performance and 11% higher grades using common grade performance benchmarks. Classroom observations also indicated that IBCC students were more intellectually engaged and participated to a greater extent in classroom discussions. Through the results of this study, it is hoped that faculty who teach neuroanatomy and desire greater critical thinking and content student learning outcomes will consider using the IBCC method. PMID:24693256

  3. A Mind of Their Own: Using Inquiry-based Teaching to Build Critical Thinking Skills and Intellectual Engagement in an Undergraduate Neuroanatomy Course.

    PubMed

    Greenwald, Ralf R; Quitadamo, Ian J

    2014-01-01

    A changing undergraduate demographic and the need to help students develop advanced critical thinking skills in neuroanatomy courses has prompted many faculty to consider new teaching methods including clinical case studies. This study compared primarily conventional and inquiry-based clinical case (IBCC) teaching methods to determine which would produce greater gains in critical thinking and content knowledge. Results showed students in the conventional neuroanatomy course gained less than 3 national percentile ranks while IBCC students gained over 7.5 within one academic term using the valid and reliable California Critical Thinking Skills Test. In addition to 2.5 times greater gains in critical thinking, IBCC teaching methods also produced 12% greater final exam performance and 11% higher grades using common grade performance benchmarks. Classroom observations also indicated that IBCC students were more intellectually engaged and participated to a greater extent in classroom discussions. Through the results of this study, it is hoped that faculty who teach neuroanatomy and desire greater critical thinking and content student learning outcomes will consider using the IBCC method.

  4. Information System Implementation: Benchmarking the Stages.

    ERIC Educational Resources Information Center

    Calbos, Dennis P.

    1984-01-01

    The evolution of administrative data processing systems at the University of Georgia is summarized. Nolan's revised stage model was used as a framework to present the university's experience and relate it to the growing body of system implementation research. (Author/MLW)

  5. 45 CFR 156.20 - Definitions.

    Code of Federal Regulations, 2014 CFR

    2014-10-01

    ... adjustments made pursuant to the benchmark standards described in § 156.110 of this subchapter. Benefit design... this subchapter. Enrollee satisfaction survey vendor means an organization that has relevant survey administration experience (for example, CAHPS® surveys), organizational survey capacity, and quality control...

  6. Starting a learning progression for agricultural literacy: A qualitative study of urban elementary student understandings of agricultural and science education benchmarks

    NASA Astrophysics Data System (ADS)

    Hess, Alexander Jay

    Science and agriculture professional organizations have argued for agricultural literacy as a goal for K-12 public education. Due to the complexity of our modern agri-food system, with social, economic, and environmental concerns embedded, an agriculturally literate society is needed for informed decision making, democratic participation, and system reform. While grade-span-specific benchmarks for gauging agri-food system literacy have been developed, little attention has been paid to the existing ideas individuals hold about the agri-food system, how these existing ideas relate to benchmarks, how experience shapes such ideas, or how ideas change over time. Developing a body of knowledge on students' agri-food system understandings as they develop across K-12 grades can ground efforts seeking to promote a learning progression toward agricultural literacy. This study compares existing perceptions held by 18 upper elementary students from a large urban center in California to agri-food system literacy benchmarks and examines the perceptions against student background and experiences. Data were collected via semi-structured interviews and analyzed using the constant comparative method. Constructivist theoretical perspectives framed the study. No student had ever grown their own food, raised a plant, or cared for an animal. Participation in school fieldtrips to farms or visits to a relative's garden were the agricultural experiences most frequently mentioned. Students were able to identify common food items, but could not elaborate on their origins, especially those that were highly processed. Students' understanding of post-production activities (i.e. food processing, manufacturing, or food marketing) was not apparent. Students' understanding of farms reflected the 1900s subsistence farming operation commonly found in literature written for the primary grades. Students were unaware that plants and animals were selected for production based on desired genetic traits. Obtaining food from areas with favorable growing conditions and supporting technology (such as transportation and refrigeration) was an understanding lacking in the group. Furthermore, most spoilage prevention technologies employed today were not an expressed part of students' schemas. Students' backgrounds and experiences did not appear to support the development of a robust agri-food system schema. An agricultural science and technology schema appears poorly developed in each of the students.

  7. Sensitivity and Uncertainty Analysis of the GFR MOX Fuel Subassembly

    NASA Astrophysics Data System (ADS)

    Lüley, J.; Vrban, B.; Čerba, Š.; Haščík, J.; Nečas, V.; Pelloni, S.

    2014-04-01

    We performed sensitivity and uncertainty analysis as well as benchmark similarity assessment of the MOX fuel subassembly designed for the Gas-Cooled Fast Reactor (GFR) as a representative material of the core. Material composition was defined for each assembly ring separately, allowing us to decompose the sensitivities not only by isotope and reaction but also by spatial region. This approach was confirmed by direct perturbation calculations for chosen materials and isotopes. Similarity assessment identified only ten partly comparable benchmark experiments that can be utilized in the field of GFR development. Based on the determined uncertainties, we also identified the main contributors to the calculation bias.
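
    The sensitivities referred to here are, in the usual convention, relative first-order sensitivity coefficients of the multiplication factor with respect to a cross section, which is also the quantity the direct perturbation calculations verify:

        S_{k,\sigma} \;=\; \frac{\sigma}{k}\,\frac{\partial k}{\partial \sigma}
        \;\approx\; \frac{\Delta k / k}{\Delta \sigma / \sigma}.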

  8. Completing the Feedback Loop: The Impact of Chlorophyll Data Assimilation on the Ocean State

    NASA Technical Reports Server (NTRS)

    Borovikov, Anna; Keppenne, Christian; Kovach, Robin

    2015-01-01

    In anticipation of the integration of a full biochemical model into the next-generation GMAO coupled system, an intermediate solution has been implemented to estimate the penetration depth (1/Kd_PAR) of ocean radiation based on the chlorophyll concentration. The chlorophyll is modeled as a tracer with sources and sinks coming from the assimilation of MODIS chlorophyll data. Two experiments were conducted with the coupled ocean-atmosphere model. In the first, climatological values of Kd_PAR were used. In the second, retrieved daily chlorophyll concentrations were assimilated and Kd_PAR was derived according to Morel et al. (2007). No other data were assimilated, in order to isolate the effects of the time-evolving chlorophyll field. The daily MODIS Kd_PAR product was used to validate the skill of the penetration depth estimation, and the MERRA-OCEAN re-analysis was used as a benchmark to study the sensitivity of the upper ocean heat content and vertical temperature distribution to the chlorophyll input. In the experiment with daily chlorophyll data assimilation, the penetration depth was estimated more accurately, especially in the tropics. As a result, the temperature bias of the model was reduced. A notably robust albeit small (2-5 percent) improvement was found across the equatorial Pacific Ocean, which is a critical region for seasonal to inter-annual prediction.
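
    Parameterizations of the kind attributed above to Morel et al. (2007) derive an attenuation coefficient for photosynthetically available radiation from chlorophyll concentration, typically as a pure-water baseline plus a power law in Chl. The sketch below only shows the shape of such a computation; the coefficients are placeholders, not the published values:

        import numpy as np

        def kd_par_from_chl(chl, kw=0.04, a=0.07, b=0.7):
            """Illustrative parameterization Kd_PAR = kw + a * Chl**b (1/m).
            kw, a, b are PLACEHOLDERS, not the Morel et al. (2007) values."""
            return kw + a * np.asarray(chl, dtype=float) ** b

        chl = np.array([0.05, 0.3, 1.0, 5.0])   # mg m^-3, open-ocean range
        kd = kd_par_from_chl(chl)
        penetration_depth = 1.0 / kd            # e-folding depth of PAR (m)
        print(np.round(penetration_depth, 1))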

  9. The effect of lexical factors on recall from working memory: Generalizing the neighborhood size effect.

    PubMed

    Derraugh, Lesley S; Neath, Ian; Surprenant, Aimée M; Beaudry, Olivia; Saint-Aubin, Jean

    2017-03-01

    The word-length effect, the finding that lists of short words are better recalled than lists of long words, is 1 of the 4 benchmark phenomena that guided development of the phonological loop component of working memory. However, previous work has noted a confound in word-length studies: The short words used had more orthographic neighbors (valid words that can be made by changing a single letter in the target word) than long words. The confound is that words with more neighbors are better recalled than otherwise comparable words with fewer neighbors. Two experiments are reported that address criticisms of the neighborhood-size account of the word-length effect by (1) testing 2 new stimulus sets, (2) using open rather than closed pools of words, and (3) using stimuli from a language other than English. In both experiments, words from large neighborhoods were better recalled than words from small neighborhoods. The results add to the growing number of studies demonstrating the substantial contribution of long-term memory to what have traditionally been identified as working memory tasks. The data are more easily explained by models incorporating the concept of redintegration rather than by frameworks such as the phonological loop that posit decay offset by rehearsal. (PsycINFO Database Record (c) 2017 APA, all rights reserved).

  10. Fusion neutron source blanket: requirements for calculation accuracy and benchmark experiment precision

    NASA Astrophysics Data System (ADS)

    Zhirkin, A. V.; Alekseev, P. N.; Batyaev, V. F.; Gurevich, M. I.; Dudnikov, A. A.; Kuteev, B. V.; Pavlov, K. V.; Titarenko, Yu. E.; Titarenko, A. Yu.

    2017-06-01

    In this report the calculation accuracy requirements for the main parameters of the fusion neutron source, and of thermonuclear blankets with a DT fusion power of more than 10 MW, are formulated. To conduct the benchmark experiments, technical documentation and calculation models were developed for two blanket micro-models: the molten salt and the heavy water solid-state blankets. The calculations of the neutron spectra, and of 37 dosimetric reaction rates that are widely used for the registration of thermal, resonance and threshold (0.25-13.45 MeV) neutrons, were performed for each blanket micro-model. The MCNP code and the neutron data library ENDF/B-VII were used for the calculations. All the calculations were performed for two kinds of neutron source: source I is the fusion source, and source II is the source of neutrons generated by a 7Li target irradiated by protons with an energy of 24.6 MeV. The spectral index ratios were calculated to describe the spectrum variations between the different neutron sources. The obtained results demonstrate the advantage of using the fusion neutron source in future experiments.

  11. Relativistic nuclear magnetic resonance J-coupling with ultrasoft pseudopotentials and the zeroth-order regular approximation

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Green, Timothy F. G., E-mail: tim.green@materials.ox.ac.uk; Yates, Jonathan R., E-mail: jonathan.yates@materials.ox.ac.uk

    2014-06-21

    We present a method for the first-principles calculation of nuclear magnetic resonance (NMR) J-coupling in extended systems using state-of-the-art ultrasoft pseudopotentials and including scalar-relativistic effects. The use of ultrasoft pseudopotentials is allowed by extending the projector augmented wave (PAW) method of Joyce et al. [J. Chem. Phys. 127, 204107 (2007)]. We benchmark it against existing local-orbital quantum chemical calculations and experiments for small molecules containing light elements, with good agreement. Scalar-relativistic effects are included at the zeroth-order regular approximation level of theory and benchmarked against existing local-orbital quantum chemical calculations and experiments for a number of small molecules containing the heavy row-six elements W, Pt, Hg, Tl, and Pb, with good agreement. Finally, {sup 1}J(P-Ag) and {sup 2}J(P-Ag-P) couplings are calculated in some larger molecular crystals and compared against solid-state NMR experiments. Some remarks are also made as to improving the numerical stability of dipole perturbations using PAW.

  12. Parallel Ada benchmarks for the SVMS

    NASA Technical Reports Server (NTRS)

    Collard, Philippe E.

    1990-01-01

    The use of the parallel processing paradigm to design and develop faster and more reliable computers appears to clearly mark the future of information processing. NASA started the development of such an architecture: the Spaceborne VHSIC Multi-processor System (SVMS). Ada will be one of the languages used to program the SVMS. One of the unique characteristics of Ada is that it supports parallel processing at the language level through its tasking constructs. It is important for the SVMS project team to assess how efficiently the SVMS architecture will be implemented, as well as how efficiently the Ada environment will be ported to the SVMS. AUTOCLASS II, a Bayesian classifier written in Common Lisp, was selected as one of the benchmarks for SVMS configurations. The purpose of the R and D effort was to provide the SVMS project team with a version of AUTOCLASS II, written in Ada, that would make use of Ada tasking constructs as much as possible so as to constitute a suitable benchmark. Additionally, a set of programs was developed to measure Ada tasking efficiency on parallel architectures and to determine the critical parameters influencing tasking efficiency. All this was designed to provide the SVMS project team with a set of suitable tools for the development of the SVMS architecture.

  13. Reviews and syntheses: Field data to benchmark the carbon cycle models for tropical forests

    USGS Publications Warehouse

    Clark, Deborah A.; Asao, Shinichi; Fisher, Rosie A.; Reed, Sasha C.; Reich, Peter B.; Ryan, Michael G.; Wood, Tana E.; Yang, Xiaojuan

    2017-01-01

    For more accurate projections of both the global carbon (C) cycle and the changing climate, a critical current need is to improve the representation of tropical forests in Earth system models. Tropical forests exchange more C, energy, and water with the atmosphere than any other class of land ecosystems. Further, tropical-forest C cycling is likely responding to the rapid global warming, intensifying water stress, and increasing atmospheric CO2 levels. Projections of the future C balance of the tropics vary widely among global models. A current effort of the modeling community, the ILAMB (International Land Model Benchmarking) project, is to compile robust observations that can be used to improve the accuracy and realism of the land models for all major biomes. Our goal with this paper is to identify field observations of tropical-forest ecosystem C stocks and fluxes, and of their long-term trends and climatic and CO2 sensitivities, that can serve this effort. We propose criteria for reference-level field data from this biome and present a set of documented examples from old-growth lowland tropical forests. We offer these as a starting point towards the goal of a regularly updated consensus set of benchmark field observations of C cycling in tropical forests.

  14. Leveraging long read sequencing from a single individual to provide a comprehensive resource for benchmarking variant calling methods

    PubMed Central

    Mu, John C.; Tootoonchi Afshar, Pegah; Mohiyuddin, Marghoob; Chen, Xi; Li, Jian; Bani Asadi, Narges; Gerstein, Mark B.; Wong, Wing H.; Lam, Hugo Y. K.

    2015-01-01

    A high-confidence, comprehensive human variant set is critical in assessing accuracy of sequencing algorithms, which are crucial in precision medicine based on high-throughput sequencing. Although recent works have attempted to provide such a resource, they still do not encompass all major types of variants including structural variants (SVs). Thus, we leveraged the massive high-quality Sanger sequences from the HuRef genome to construct by far the most comprehensive gold set of a single individual, which was cross validated with deep Illumina sequencing, population datasets, and well-established algorithms. It was a necessary effort to completely reanalyze the HuRef genome as its previously published variants were mostly reported five years ago, suffering from compatibility, organization, and accuracy issues that prevent their direct use in benchmarking. Our extensive analysis and validation resulted in a gold set with high specificity and sensitivity. In contrast to the current gold sets of the NA12878 or HS1011 genomes, our gold set is the first that includes small variants, deletion SVs and insertion SVs up to a hundred thousand base-pairs. We demonstrate the utility of our HuRef gold set to benchmark several published SV detection tools. PMID:26412485

  15. Reviews and syntheses: Field data to benchmark the carbon cycle models for tropical forests

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Clark, Deborah A.; Asao, Shinichi; Fisher, Rosie

    For more accurate projections of both the global carbon (C) cycle and the changing climate, a critical current need is to improve the representation of tropical forests in Earth system models. Tropical forests exchange more C, energy, and water with the atmosphere than any other class of land ecosystems. Further, tropical-forest C cycling is likely responding to the rapid global warming, intensifying water stress, and increasing atmospheric CO2 levels. Projections of the future C balance of the tropics vary widely among global models. A current effort of the modeling community, the ILAMB (International Land Model Benchmarking) project, is to compile robust observations that can be used to improve the accuracy and realism of the land models for all major biomes. Our goal with this paper is to identify field observations of tropical-forest ecosystem C stocks and fluxes, and of their long-term trends and climatic and CO2 sensitivities, that can serve this effort. We propose criteria for reference-level field data from this biome and present a set of documented examples from old-growth lowland tropical forests. We offer these as a starting point towards the goal of a regularly updated consensus set of benchmark field observations of C cycling in tropical forests.

  16. A systematic benchmark of the ab initio Bethe-Salpeter equation approach for low-lying optical excitations of small organic molecules

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Bruneval, Fabien

    2015-06-28

    The predictive power of the ab initio Bethe-Salpeter equation (BSE) approach, rigorously based on many-body Green's function theory but incorporating information from density functional theory, has already been demonstrated for the optical gaps and spectra of solid-state systems. Interest in photoactive hybrid organic/inorganic systems has recently increased and so has the use of the BSE for computing neutral excitations of organic molecules. However, no systematic benchmarks of the BSE for neutral electronic excitations of organic molecules exist. Here, we study the performance of the BSE for the 28 small molecules in Thiel's widely used time-dependent density functional theory benchmark set [Schreiber et al., J. Chem. Phys. 128, 134110 (2008)]. We observe that the BSE produces results that depend critically on the mean-field starting point employed in the perturbative approach. We find that this starting point dependence is mainly introduced through the quasiparticle energies obtained at the intermediate GW step and that, with a judicious choice of starting mean-field, singlet excitation energies obtained from BSE are in excellent quantitative agreement with higher-level wavefunction methods. The quality of the triplet excitations is slightly less satisfactory.

  17. Reviews and syntheses: Field data to benchmark the carbon cycle models for tropical forests

    NASA Astrophysics Data System (ADS)

    Clark, Deborah A.; Asao, Shinichi; Fisher, Rosie; Reed, Sasha; Reich, Peter B.; Ryan, Michael G.; Wood, Tana E.; Yang, Xiaojuan

    2017-10-01

    For more accurate projections of both the global carbon (C) cycle and the changing climate, a critical current need is to improve the representation of tropical forests in Earth system models. Tropical forests exchange more C, energy, and water with the atmosphere than any other class of land ecosystems. Further, tropical-forest C cycling is likely responding to the rapid global warming, intensifying water stress, and increasing atmospheric CO2 levels. Projections of the future C balance of the tropics vary widely among global models. A current effort of the modeling community, the ILAMB (International Land Model Benchmarking) project, is to compile robust observations that can be used to improve the accuracy and realism of the land models for all major biomes. Our goal with this paper is to identify field observations of tropical-forest ecosystem C stocks and fluxes, and of their long-term trends and climatic and CO2 sensitivities, that can serve this effort. We propose criteria for reference-level field data from this biome and present a set of documented examples from old-growth lowland tropical forests. We offer these as a starting point towards the goal of a regularly updated consensus set of benchmark field observations of C cycling in tropical forests.

  18. Reviews and syntheses: Field data to benchmark the carbon cycle models for tropical forests

    DOE PAGES

    Clark, Deborah A.; Asao, Shinichi; Fisher, Rosie; ...

    2017-10-23

    For more accurate projections of both the global carbon (C) cycle and the changing climate, a critical current need is to improve the representation of tropical forests in Earth system models. Tropical forests exchange more C, energy, and water with the atmosphere than any other class of land ecosystems. Further, tropical-forest C cycling is likely responding to the rapid global warming, intensifying water stress, and increasing atmospheric CO2 levels. Projections of the future C balance of the tropics vary widely among global models. A current effort of the modeling community, the ILAMB (International Land Model Benchmarking) project, is to compile robust observations that can be used to improve the accuracy and realism of the land models for all major biomes. Our goal with this paper is to identify field observations of tropical-forest ecosystem C stocks and fluxes, and of their long-term trends and climatic and CO2 sensitivities, that can serve this effort. We propose criteria for reference-level field data from this biome and present a set of documented examples from old-growth lowland tropical forests. We offer these as a starting point towards the goal of a regularly updated consensus set of benchmark field observations of C cycling in tropical forests.

  19. Dismantlement of the TSF-SNAP Reactor Assembly

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Peretz, Fred J

    2009-01-01

    This paper describes the dismantlement of the Tower Shielding Facility (TSF)-Systems for Nuclear Auxiliary Power (SNAP) reactor, a SNAP-10A reactor used to validate radiation source terms and shield performance models at Oak Ridge National Laboratory (ORNL) from 1967 through 1973. After shutdown, it was placed in storage at the Y-12 National Security Complex (Y-12), eventually falling under the auspices of the Highly Enriched Uranium (HEU) Disposition Program. To facilitate downblending of the HEU present in the fuel elements, the TSF-SNAP was moved to ORNL on June 24, 2006. The reactor assembly was removed from its packaging, inspected, and the sodium-potassium (NaK) coolant was drained. A superheated steam process was used to chemically react the residual NaK inside the reactor assembly. The heat exchanger assembly was removed from the top of the reactor vessel, and the criticality safety sleeve was exchanged for a new safety sleeve that allowed for the removal of the vessel lid. A chain-mounted tubing cutter was used to separate the lid from the vessel, and the 36 fuel elements were removed and packaged in four U.S. Department of Transportation 2R/6M containers. The fuel elements were returned to Y-12 on July 13, 2006. The return of the fuel elements and disposal of all other reactor materials accomplished the formal objectives of the dismantlement project. In addition, a project model was established for the handling of a fully fueled liquid-metal-cooled reactor assembly. Current criticality safety codes have been benchmarked against experiments performed by Atomics International in the 1950s and 1960s. Execution of this project provides valuable experience applicable to future projects addressing space and liquid-metal-cooled reactors.

  20. Terrestrial Microgravity Model and Threshold Gravity Simulation using Magnetic Levitation

    NASA Technical Reports Server (NTRS)

    Ramachandran, N.

    2005-01-01

    What is the threshold gravity (minimum gravity level) required for the nominal functioning of the human system? What dosage is required? Do human cell lines behave differently in microgravity in response to an external stimulus? The critical need for such a gravity simulator is emphasized by recent experiments on human epithelial cells and lymphocytes on the Space Shuttle clearly showing that cell growth and function are markedly different from those observed terrestrially. Those differences are also dramatic between cells grown in space and those grown in Rotating Wall Vessels (RWVs), the NASA bioreactor often used to simulate microgravity, indicating that although morphological growth patterns (three-dimensional growth) can be successfully simulated using RWVs, cell function performance is not reproduced - a critical difference. If cell function is dramatically affected by gravity off-loading, then cell response to stimuli such as radiation, stress, etc. can be very different from terrestrial cell lines. Yet, we have no good gravity simulator for use in the study of these phenomena. This represents a profound shortcoming for countermeasures research. We postulate that we can use magnetic levitation of cells and tissue, through the use of strong magnetic fields and field gradients, as a terrestrial microgravity model to study human cells. Specific objectives of the research are: 1. To develop a tried, tested and benchmarked terrestrial microgravity model for cell culture studies; 2. Gravity threshold determination; 3. Dosage (magnitude and duration) of g-level required for nominal functioning of cells; 4. Comparisons of the magnetic levitation model to other models such as RWV, hind limb suspension, etc.; and 5. Cellular response to the reduced gravity levels of the Moon and Mars. The paper will discuss experiments and modeling work to date in support of this project.
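
    The physical basis of the proposed simulator is the volumetric force balance for a weakly magnetic (e.g. diamagnetic) material of susceptibility χ and density ρ in a strong field with a vertical gradient: full levitation corresponds to an effective zero-gravity condition, and smaller field-gradient products give fractional effective gravity levels. In standard notation (sign conventions depend on geometry; this is the usual magnitude form):

        \frac{|\chi|}{\mu_{0}}\,B\,\Big|\frac{dB}{dz}\Big| = \rho g
        \quad\Longrightarrow\quad
        g_{\mathrm{eff}} = g\left(1 - \frac{|\chi|}{\mu_{0}\,\rho g}\,B\,\Big|\frac{dB}{dz}\Big|\right).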

  1. GLOFRIM v1.0 - A globally applicable computational framework for integrated hydrological-hydrodynamic modelling

    NASA Astrophysics Data System (ADS)

    Hoch, Jannis M.; Neal, Jeffrey C.; Baart, Fedor; van Beek, Rens; Winsemius, Hessel C.; Bates, Paul D.; Bierkens, Marc F. P.

    2017-10-01

    We here present GLOFRIM, a globally applicable computational framework for integrated hydrological-hydrodynamic modelling. GLOFRIM facilitates spatially explicit coupling of hydrodynamic and hydrologic models and caters for an ensemble of models to be coupled. It currently encompasses the global hydrological model PCR-GLOBWB as well as the hydrodynamic models Delft3D Flexible Mesh (DFM; solving the full shallow-water equations and allowing for spatially flexible meshing) and LISFLOOD-FP (LFP; solving the local inertia equations and running on regular grids). The main advantages of the framework are its open and free access, its global applicability, its versatility, and its extensibility with other hydrological or hydrodynamic models. Before applying GLOFRIM to an actual test case, we benchmarked both DFM and LFP for a synthetic test case. Results show that for sub-critical flow conditions, the discharge response to the same input signal is near-identical for both models, which agrees with previous studies. We subsequently applied the framework to the Amazon River basin, not only to test the framework thoroughly but also to perform a first-ever large-scale benchmark of flexible against regular grids. Both DFM and LFP produce comparable results in terms of simulated discharge, with LFP exhibiting slightly higher accuracy as expressed by a Kling-Gupta efficiency of 0.82, compared to 0.76 for DFM. However, when benchmarking inundation extent between DFM and LFP over the entire study area, a critical success index of 0.46 was obtained, indicating that the models disagree as often as they agree. Differences between the models in both simulated discharge and inundation extent are to a large extent attributable to the gridding techniques employed. In fact, the results show that both the numerical scheme of the inundation model and the gridding technique can contribute to deviations in simulated inundation extent, as we control for model forcing and boundary conditions. This study shows that the presented computational framework is robust and widely applicable. GLOFRIM is designed as open access and easily extendable, and we thus hope that other large-scale hydrological and hydrodynamic models will be added. Eventually, this would capture more locally relevant processes and allow for more robust model inter-comparison, benchmarking, and ensemble simulations of flood hazard at a large scale.
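
    The two skill scores quoted here are standard: the Kling-Gupta efficiency combines the correlation, variability ratio, and bias ratio of simulated versus observed discharge, and the critical success index counts jointly flooded cells against the union of flooded cells. A minimal sketch of both (array names are illustrative):

        import numpy as np

        def kling_gupta_efficiency(sim, obs):
            """KGE = 1 - sqrt((r-1)^2 + (alpha-1)^2 + (beta-1)^2), with r the
            Pearson correlation, alpha the ratio of standard deviations, and
            beta the ratio of means (Gupta et al., 2009 formulation)."""
            r = np.corrcoef(sim, obs)[0, 1]
            alpha = sim.std() / obs.std()
            beta = sim.mean() / obs.mean()
            return 1.0 - np.sqrt((r - 1) ** 2 + (alpha - 1) ** 2 + (beta - 1) ** 2)

        def critical_success_index(wet_a, wet_b):
            """CSI = hits / (hits + misses + false alarms) for two boolean
            inundation masks; 1.0 means perfect agreement, 0.0 none."""
            hits = np.logical_and(wet_a, wet_b).sum()
            union = np.logical_or(wet_a, wet_b).sum()
            return hits / union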

  2. NRL 1989 Beam Propagation Studies in Support of the ATA Multi-Pulse Propagation Experiment

    DTIC Science & Technology

    1990-08-31

    papers presented here were all written prior to the completion of the experiment. The first of these papers presents simulation results which modeled ...beam stability and channel evolution for an entire five pulse burst. The second paper describes a new air chemistry model used in the SARLAC...Experiment: A new air chemistry model for use in the propagation codes simulating the MPPE was developed by making analytic fits to benchmark runs with

  3. Critical Phenomena of Rainfall in Ecuador

    NASA Astrophysics Data System (ADS)

    Serrano, Sh.; Vasquez, N.; Jacome, P.; Basile, L.

    2014-02-01

    Self-organized criticality (SOC) is characterized by power-law behavior in complex systems such as earthquakes and avalanches. We study rainfall using data at one-day, 3-hour and 10-minute temporal resolution from, respectively, the INAMHI (Instituto Nacional de Meteorologia e Hidrologia) station at Izobamba in the DMQ (Metropolitan District of Quito), the Tropical Rainfall Measuring Mission (TRMM) satellite over Ecuador, and the REMMAQ (Red Metropolitana de Monitoreo Atmosferico de Quito) meteorological stations over the DMQ. Our results show power-law behavior of the number of rain events versus millimetres of rainfall measured for the high-resolution case (10 min), and as the resolution decreases this behavior is lost. This statistical property is the fingerprint of a self-organized critical process (Peters and Christensen, 2002) and may serve as a benchmark for models of precipitation based on phase transitions between water vapor and precipitation (Peters and Neelin, 2006).
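
    A power-law event-size distribution of the kind reported here, N(s) ∝ s^(-τ), is often diagnosed by a straight-line fit in log-log space (maximum-likelihood estimators are preferable for publication-grade exponents; this sketch only illustrates the diagnostic, with made-up counts):

        import numpy as np

        # Hypothetical histogram: number of rain events per event-size bin (mm).
        rain_mm = np.array([1, 2, 4, 8, 16, 32, 64], dtype=float)
        counts = np.array([5120, 1310, 330, 82, 21, 5, 1], dtype=float)

        # Straight line in log-log space: log N = log C - tau * log s.
        slope, intercept = np.polyfit(np.log(rain_mm), np.log(counts), 1)
        tau = -slope
        print(f"estimated exponent tau ~ {tau:.2f}")  # ~2 for these made-up counts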

  4. Security during the Construction of New Nuclear Power Plants: Technical Basis for Access Authorization and Fitness-For-Duty Requirements

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Branch, Kristi M.; Baker, Kathryn A.

    A technical letter report to the NRC summarizing the findings of a benchmarking study, literature review, and workshop with experts on current industry standards and expert judgments about needs for security during the construction phase of critical infrastructure facilities in the post-September 11 U.S. context, with a special focus on the construction phase of nuclear power plants and personnel security measures.

  5. An Assessment of Air Force Development Portfolio Management Practices

    DTIC Science & Technology

    2002-01-01

    of Product Innovation Management, 12, 235–246. Cooper, R. G., & Kleinschmidt, E. J. (1995). Benchmarking the firm's critical success factors in new...product development. Journal of Product Innovation Management, 12, 374–391. Cooper, R. G., Edgett, S. J., & Kleinschmidt, E. J. (1998a). Best...Scott, G. M. (2000). Critical technology management issues of new product development in high-tech companies. Journal of Product Innovation Management, 17

  6. MPI, HPF or OpenMP: A Study with the NAS Benchmarks

    NASA Technical Reports Server (NTRS)

    Jin, Hao-Qiang; Frumkin, Michael; Hribar, Michelle; Waheed, Abdul; Yan, Jerry; Saini, Subhash (Technical Monitor)

    1999-01-01

    Porting applications to new high performance parallel and distributed platforms is a challenging task. Writing parallel code by hand is time consuming and costly, but the task can be simplified by high-level languages and would even better be automated by parallelizing tools and compilers. The definition of the HPF (High Performance Fortran, based on the data parallel model) and OpenMP (based on the shared memory parallel model) standards has offered great opportunity in this respect. Both provide simple and clear interfaces to languages like FORTRAN and simplify many tedious tasks encountered in writing message passing programs. In our study we implemented the parallel versions of the NAS Benchmarks with HPF and OpenMP directives. Comparison of their performance with the MPI implementation, and the pros and cons of the different approaches, will be discussed, along with experience of using computer-aided tools to help parallelize these benchmarks. Based on the study, the potential of applying some of the techniques to realistic aerospace applications will be presented.

  7. MPI, HPF or OpenMP: A Study with the NAS Benchmarks

    NASA Technical Reports Server (NTRS)

    Jin, H.; Frumkin, M.; Hribar, M.; Waheed, A.; Yan, J.; Saini, Subhash (Technical Monitor)

    1999-01-01

    Porting applications to new high performance parallel and distributed platforms is a challenging task. Writing parallel code by hand is time consuming and costly, but this task can be simplified by high-level languages and, better yet, automated by parallelizing tools and compilers. The definition of the HPF (High Performance Fortran, based on the data-parallel model) and OpenMP (based on the shared-memory parallel model) standards has offered great opportunities in this respect. Both provide simple and clear interfaces to languages like Fortran and simplify many of the tedious tasks encountered in writing message-passing programs. In our study, we implemented parallel versions of the NAS Benchmarks with HPF and OpenMP directives. We discuss the comparison of their performance with the MPI implementation, the pros and cons of the different approaches, and our experience using computer-aided tools to help parallelize these benchmarks. Based on the study, the potential for applying some of these techniques to realistic aerospace applications is presented.

  8. Benchmarking comparison and validation of MCNP photon interaction data

    NASA Astrophysics Data System (ADS)

    Colling, Bethany; Kodeli, I.; Lilley, S.; Packer, L. W.

    2017-09-01

    The objective of the research was to test available photoatomic data libraries for fusion-relevant applications, comparing against experimental and computational neutronics benchmarks. Photon flux and heating were compared using the photon interaction data libraries (mcplib 04p, 05t, 84p and 12p). Suitable benchmark experiments (iron and water) were selected from the SINBAD database and analysed to compare experimental values with MCNP calculations using mcplib 04p, 84p and 12p. In both the computational and experimental comparisons, the majority of results with the 04p, 84p and 12p photon data libraries were within 1σ of the mean MCNP statistical uncertainty. Larger differences were observed when comparing computational results with the 05t test photon library. The Doppler broadening sampling bug in MCNP-5 is shown to be corrected for fusion-relevant problems through use of the 84p photon data library. The recommended libraries for fusion neutronics are 84p (or 04p) with MCNP6 and 84p if using MCNP-5.
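
    The kind of code-to-experiment comparison reported above can be sketched as follows; the tally values, uncertainties, and point labels are invented for illustration and are not SINBAD or MCNP results.

      import numpy as np

      calc     = np.array([1.02e-4, 8.7e-5, 5.4e-5])  # calculated tally values
      calc_rsd = np.array([0.010, 0.012, 0.015])      # relative stat. uncertainty
      expt     = np.array([1.00e-4, 9.0e-5, 5.5e-5])  # measured values
      expt_rsd = np.array([0.05, 0.05, 0.06])

      ce = calc / expt
      # Combined 1-sigma uncertainty on the C/E ratio
      sigma = ce * np.sqrt(calc_rsd**2 + expt_rsd**2)
      for i, (r, s) in enumerate(zip(ce, sigma)):
          status = "within 1 sigma" if abs(r - 1.0) <= s else "OUTSIDE 1 sigma"
          print(f"point {i}: C/E = {r:.3f} +/- {s:.3f} ({status})")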

  9. How to Use Benchmark and Cross-section Studies to Improve Data Libraries and Models

    NASA Astrophysics Data System (ADS)

    Wagner, V.; Suchopár, M.; Vrzalová, J.; Chudoba, P.; Svoboda, O.; Tichý, P.; Krása, A.; Majerle, M.; Kugler, A.; Adam, J.; Baldin, A.; Furman, W.; Kadykov, M.; Solnyshkin, A.; Tsoupko-Sitnikov, S.; Tyutyunikov, S.; Vladimirovna, N.; Závorka, L.

    2016-06-01

    Improvements of the Monte Carlo transport codes and cross-section libraries are very important steps towards the use of accelerator-driven transmutation systems. We have conducted many benchmark experiments with different set-ups consisting of lead, natural uranium and moderator irradiated by relativistic protons and deuterons within the framework of the collaboration “Energy and Transmutation of Radioactive Waste”. Unfortunately, knowledge of the total and partial cross-sections of important reactions is insufficient. For this reason we have started extensive studies of different reaction cross-sections. We measure cross-sections of important neutron reactions by means of quasi-monoenergetic neutron sources based on the cyclotrons at the Nuclear Physics Institute in Řež and at The Svedberg Laboratory in Uppsala. Measurement of partial cross-sections of relativistic deuteron reactions was the second direction of our studies. The new results obtained during recent years will be shown. Possible use of these data for the improvement of libraries, models and benchmark studies will be discussed.

  10. Algorithms for Lightweight Key Exchange.

    PubMed

    Alvarez, Rafael; Caballero-Gil, Cándido; Santonja, Juan; Zamora, Antonio

    2017-06-27

    Public-key cryptography is too slow for general purpose encryption, and most applications limit its use as much as possible. Some secure protocols, especially those that enable forward secrecy, make much heavier use of public-key cryptography, increasing the demand for lightweight cryptosystems that can be implemented on low-powered or mobile devices. These performance requirements are even more significant in critical infrastructure and emergency scenarios, where peer-to-peer networks are deployed for increased availability and resiliency. We benchmark several public-key key-exchange algorithms, determine those that best fit the requirements of critical infrastructure and emergency applications, propose a security framework based on these algorithms, and study its application to decentralized node or sensor networks.
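
    A minimal timing harness in the spirit of this benchmarking, using the Python "cryptography" package to compare two common key-exchange primitives; the paper's selection criteria are broader than raw speed, and the loop counts here are arbitrary.

      import time
      from cryptography.hazmat.primitives.asymmetric import ec, x25519

      def bench(label, fn, n=200):
          # Average wall-clock time per full ephemeral key exchange
          t0 = time.perf_counter()
          for _ in range(n):
              fn()
          dt = (time.perf_counter() - t0) / n
          print(f"{label}: {dt * 1e3:.3f} ms per exchange")

      def x25519_exchange():
          a = x25519.X25519PrivateKey.generate()
          b = x25519.X25519PrivateKey.generate()
          a.exchange(b.public_key())

      def p256_ecdh():
          a = ec.generate_private_key(ec.SECP256R1())
          b = ec.generate_private_key(ec.SECP256R1())
          a.exchange(ec.ECDH(), b.public_key())

      bench("X25519", x25519_exchange)
      bench("ECDH P-256", p256_ecdh)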

  11. Production and Testing of the VITAMIN-B7 Fine-Group and BUGLE-B7 Broad-Group Coupled Neutron/Gamma Cross-Section Libraries Derived from ENDF/B-VII.0 Nuclear Data

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Risner, J. M.; Wiarda, D.; Dunn, M. E.

    2011-09-30

    New coupled neutron-gamma cross-section libraries have been developed for use in light water reactor (LWR) shielding applications, including pressure vessel dosimetry calculations. The libraries, which were generated using Evaluated Nuclear Data File/B Version VII Release 0 (ENDF/B-VII.0), use the same fine-group and broad-group energy structures as the VITAMIN-B6 and BUGLE-96 libraries. The processing methodology used to generate both libraries is based on the methods used to develop VITAMIN-B6 and BUGLE-96 and is consistent with ANSI/ANS 6.1.2. The ENDF data were first processed into the fine-group pseudo-problem-independent VITAMIN-B7 library and then collapsed into the broad-group BUGLE-B7 library. The VITAMIN-B7 library contains data for 391 nuclides. This represents a significant increase compared to the VITAMIN-B6 library, which contained data for 120 nuclides. The BUGLE-B7 library contains data for the same nuclides as BUGLE-96, and maintains the same numeric IDs for those nuclides. The broad-group data include nuclides which are infinitely dilute and group collapsed using a concrete weighting spectrum, as well as nuclides which are self-shielded and group collapsed using weighting spectra representative of important regions of LWRs. The verification and validation of the new libraries include a set of critical benchmark experiments, a set of regression tests that are used to evaluate multigroup cross-section libraries in the SCALE code system, and three pressure vessel dosimetry benchmarks. Results of these tests confirm that the new libraries are appropriate for use in LWR shielding analyses and meet the requirements of Regulatory Guide 1.190.
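
    In its simplest flux-weighted form, the fine-to-broad group collapse described above reduces to sigma_G = sum(sigma_g * phi_g) / sum(phi_g) over the fine groups g within each broad group G. A sketch with invented numbers, not VITAMIN-B7 data:

      import numpy as np

      sigma_fine = np.array([2.1, 1.8, 1.5, 1.2, 0.9, 0.7])  # fine-group xs (barns)
      phi_weight = np.array([0.5, 1.0, 2.0, 2.5, 1.5, 0.5])  # weighting spectrum
      # Map of broad groups onto slices of the fine-group structure
      broad_map = [slice(0, 3), slice(3, 6)]

      # Flux-weighted average preserves the reaction rate within each broad group
      sigma_broad = [np.sum(sigma_fine[s] * phi_weight[s]) / np.sum(phi_weight[s])
                     for s in broad_map]
      print([f"{x:.3f}" for x in sigma_broad])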

  12. Baseline Assessment of TREAT for Modeling and Analysis Needs

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Bess, John Darrell; DeHart, Mark David

    2015-10-01

    TREAT is an air-cooled, graphite-moderated, thermal, heterogeneous test facility designed to evaluate reactor fuels and structural materials under conditions simulating various types of nuclear excursions and transient undercooling situations that could occur in a nuclear reactor. After 21 years in a standby mode, TREAT is being re-activated to revive transient testing capabilities. Given the time elapsed and the concurrent loss of operating experience, current-generation and advanced computational methods are being applied to begin TREAT modeling and simulation prior to renewed at-power operations. Such methods have limited value in predicting the behavior of TREAT without proper validation. Hence, the U.S. DOE has developed a number of programs to support development of benchmarks for both critical and transient operations. Extensive effort has been expended at INL to collect detailed descriptions, drawings and specifications for all aspects of TREAT, and to resolve conflicting data found through this process. This report provides a collection of these data, with updated figures that are significantly more readable than historic drawings and illustrations, and with compositions and dimensions based on the best available sources. This document is not, nor should it be considered to be, a benchmark report. Rather, it is intended to provide one-stop shopping, to the extent possible, for other work that seeks to prepare detailed, accurate models of the core and its components. Given the variety of historic documents available and the loss of institutional memory, the only completely accurate database of TREAT data is TREAT itself. Unfortunately, disassembly of TREAT for inspection, assay, and measurement is highly unlikely. Hence, the data provided herein are intended to serve as a best-estimate substitute.

  13. Using Publicly Reported Nursing-Sensitive Screening Indicators to Measure Hospital Performance: The Netherlands Experience in 2011.

    PubMed

    Stalpers, Dewi; van der Linden, Dimitri; Kaljouw, Marian J; Schuurmans, Marieke J

    2016-01-01

    Deliberate screening allows detection of health risks that are otherwise not noticeable and allows expedient intervention to minimize complications and optimize outcomes, especially during critical events like hospitalization. Little research has evaluated the usefulness of screening performance and outcome indicators as measures to differentiate nursing quality, although policymakers are using them to benchmark hospitals. The aims of this study were to examine hospital performance based on nursing-sensitive screening indicators and to assess associations with hospital characteristics and nursing-sensitive outcomes for patients. A secondary analysis of nursing-sensitive data from the Dutch Health Care Inspectorate was performed, including the mandatory screening and outcome indicators related to delirium, malnutrition, pain and pressure ulcers. The sample consisted of all 93 hospitals in the Netherlands in 2011. High- and low-performing hospitals were determined based on the overall proportion of screened patients. Descriptive statistics and analysis of variance were used to examine screening performances in relation to hospital characteristics and nursing-sensitive outcomes. Over all hospitals, the average screening rates ranged from 59% (delirium) to 94% (pain). Organizational characteristics did not differ between high- and low-performing hospitals. The hospitals with the best overall screening performances had significantly better results regarding protein intake in malnourished patients (p < .01). For mortality, marginally significant effects did not remain after controlling for organizational structures. No associations were found with the prevalence of pressure ulcers or patient self-reported pain scores. Screening for patient risks is an important nursing task. Our findings suggest that nursing-sensitive screening indicators may be relevant measures for benchmarking nursing quality in hospitals. Time-trend studies are required to support our findings and to further investigate relations with nursing-sensitive outcomes.

  14. Frontal Polymerization in Microgravity

    NASA Technical Reports Server (NTRS)

    Pojman, John A.

    1999-01-01

    Frontal polymerization systems, with their inherent large thermal and compositional gradients, are greatly affected by buoyancy-driven convection. Sounding rocket experiments allowed the preparation of benchmark materials and demonstrated that methods to suppress the Rayleigh-Taylor instability in ground-based research did not significantly affect the molecular weight of the polymer. Experiments under weightlessness show clearly that bubbles produced during the reaction interact very differently than under 1 g.

  15. SAS Code for Calculating Intraclass Correlation Coefficients and Effect Size Benchmarks for Site-Randomized Education Experiments

    ERIC Educational Resources Information Center

    Brandon, Paul R.; Harrison, George M.; Lawton, Brian E.

    2013-01-01

    When evaluators plan site-randomized experiments, they must conduct the appropriate statistical power analyses. These analyses are most likely to be valid when they are based on data from the jurisdictions in which the studies are to be conducted. In this method note, we provide software code, in the form of a SAS macro, for producing statistical…
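
    A hedged sketch of the intraclass correlation such a macro computes, in Python rather than SAS, using the standard one-way random-effects ANOVA estimator; the simulated site data are purely illustrative.

      import numpy as np

      def icc_oneway(groups):
          """groups: list of 1-D arrays, one per site (cluster)."""
          k = len(groups)
          n_i = np.array([len(g) for g in groups])
          N = n_i.sum()
          grand = np.concatenate(groups).mean()
          means = np.array([g.mean() for g in groups])
          ss_between = np.sum(n_i * (means - grand) ** 2)
          ss_within = np.sum([np.sum((g - g.mean()) ** 2) for g in groups])
          ms_between = ss_between / (k - 1)
          ms_within = ss_within / (N - k)
          n0 = (N - np.sum(n_i**2) / N) / (k - 1)   # unbalanced-design group size
          return (ms_between - ms_within) / (ms_between + (n0 - 1) * ms_within)

      rng = np.random.default_rng(1)
      sites = [rng.normal(loc=m, scale=1.0, size=30) for m in (0.0, 0.3, 0.6, 0.1)]
      print(f"ICC = {icc_oneway(sites):.3f}")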

  16. Benchmarking Gas Path Diagnostic Methods: A Public Approach

    NASA Technical Reports Server (NTRS)

    Simon, Donald L.; Bird, Jeff; Davison, Craig; Volponi, Al; Iverson, R. Eugene

    2008-01-01

    Recent technology reviews have identified the need for objective assessments of engine health management (EHM) technology. The need is two-fold: technology developers require relevant data and problems to design and validate new algorithms and techniques while engine system integrators and operators need practical tools to direct development and then evaluate the effectiveness of proposed solutions. This paper presents a publicly available gas path diagnostic benchmark problem that has been developed by the Propulsion and Power Systems Panel of The Technical Cooperation Program (TTCP) to help address these needs. The problem is coded in MATLAB (The MathWorks, Inc.) and coupled with a non-linear turbofan engine simulation to produce "snap-shot" measurements, with relevant noise levels, as if collected from a fleet of engines over their lifetime of use. Each engine within the fleet will experience unique operating and deterioration profiles, and may encounter randomly occurring relevant gas path faults including sensor, actuator and component faults. The challenge to the EHM community is to develop gas path diagnostic algorithms to reliably perform fault detection and isolation. An example solution to the benchmark problem is provided along with associated evaluation metrics. A plan is presented to disseminate this benchmark problem to the engine health management technical community and invite technology solutions.
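
    As a caricature of the residual-based fault detection such benchmark solutions build on, the following sketch flags snapshot measurements whose normalized deviation from a nominal model exceeds a threshold; the sensor names, values, and threshold are invented, not from the benchmark problem.

      import numpy as np

      nominal = np.array([1520.0, 14.7, 9800.0])   # e.g. EGT, pressure, speed
      sigma   = np.array([8.0, 0.1, 25.0])         # assumed sensor noise, 1-sigma

      def detect(snapshot, threshold=3.0):
          residual = (snapshot - nominal) / sigma  # normalized residuals
          return residual, np.abs(residual) > threshold

      snap = np.array([1551.0, 14.68, 9812.0])     # a suspiciously high first channel
      res, flags = detect(snap)
      print("residuals:", np.round(res, 2), "fault flags:", flags)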

  17. Experimental validation benchmark data for CFD of transient convection from forced to natural with flow reversal on a vertical flat plate

    DOE PAGES

    Lance, Blake W.; Smith, Barton L.

    2016-06-23

    Transient convection has been investigated experimentally for the purpose of providing Computational Fluid Dynamics (CFD) validation benchmark data. A specialized facility for validation benchmark experiments called the Rotatable Buoyancy Tunnel was used to acquire thermal and velocity measurements of flow over a smooth, vertical heated plate. The initial condition was forced convection downward with subsequent transition to mixed convection, ending with natural convection upward after a flow reversal. Data acquisition through the transient was repeated for ensemble-averaged results. With simple flow geometry, validation data were acquired at the benchmark level. All boundary conditions (BCs) were measured and their uncertainties quantified. Temperature profiles on all four walls and the inlet were measured, as well as as-built test section geometry. Inlet velocity profiles and turbulence levels were quantified using Particle Image Velocimetry. System Response Quantities (SRQs) were measured for comparison with CFD outputs and include velocity profiles, wall heat flux, and wall shear stress. Extra effort was invested in documenting and preserving the validation data. Details about the experimental facility, instrumentation, experimental procedure, materials, BCs, and SRQs are made available through this paper; the BCs and SRQs are available for download, and the other details are included in this work.

  18. CERN Computing in Commercial Clouds

    NASA Astrophysics Data System (ADS)

    Cordeiro, C.; Field, L.; Garrido Bear, B.; Giordano, D.; Jones, B.; Keeble, O.; Manzi, A.; Martelli, E.; McCance, G.; Moreno-García, D.; Traylen, S.

    2017-10-01

    By the end of 2016, more than 10 million core-hours of computing resources had been delivered by several commercial cloud providers to the four LHC experiments to run their production workloads, from simulation to full-chain processing. In this paper we describe the experience gained at CERN in procuring and exploiting commercial cloud resources for the computing needs of the LHC experiments. The mechanisms used for provisioning, monitoring, accounting, alarming and benchmarking are discussed, as well as the involvement of the LHC collaborations in terms of managing the workflows of the experiments within a multicloud environment.

  19. Student Learning: Education's Field of Dreams.

    ERIC Educational Resources Information Center

    Blackwell, Peggy L.

    2003-01-01

    Discusses seven research-based benchmarks providing a framework for the student-learning-focused reform of teacher education: knowledge and understanding based on previous experience, usable content knowledge, transfer of learning/the learning context, strategic thinking, motivation and affect, development and individual differences, and standards…

  20. New evaluation of thermal neutron scattering libraries for light and heavy water

    NASA Astrophysics Data System (ADS)

    Marquez Damian, Jose Ignacio; Granada, Jose Rolando; Cantargi, Florencia; Roubtsov, Danila

    2017-09-01

    In order to improve the design and safety of thermal nuclear reactors, and for verification of criticality safety conditions in systems with significant amounts of fissile materials and water, it is necessary to perform high-precision neutron transport calculations and estimate the uncertainties of the results. These calculations are based on neutron interaction data distributed in evaluated nuclear data libraries. To improve the evaluations of thermal scattering sub-libraries, we developed a set of thermal neutron scattering cross sections (scattering kernels) for hydrogen bound in light water, and for deuterium and oxygen bound in heavy water, in the ENDF-6 format, from room temperature up to the critical temperatures of the molecular liquids. The new evaluations were generated and are processable with NJOY99, with NJOY-2012 with minor modifications (updates), and with the new version NJOY-2016. The new TSL libraries are based on molecular dynamics simulations with GROMACS and recent experimental data, and they improve the calculation of single neutron scattering quantities. In this work, we discuss the importance of taking into account self-diffusion in liquids to accurately describe neutron scattering at low neutron energies (the quasi-elastic peak problem). To improve the modeling of heavy water, it is important to take into account temperature-dependent static structure factors and to apply the Sköld approximation to the coherent inelastic components of the scattering matrix. The use of the new set of scattering matrices and cross sections improves the calculation of thermal critical systems moderated and/or reflected by light/heavy water taken from the International Criticality Safety Benchmark Evaluation Project (ICSBEP) handbook. For example, the use of the new thermal scattering library for heavy water, combined with the ROSFOND-2010 evaluation of the cross sections for deuterium, improves the C/E ratio in 48 out of 65 international benchmark cases calculated with the Monte Carlo code MCNP5, in comparison with the existing library based on the ENDF/B-VII.0 evaluation.

  1. Optimal channel efficiency in a sensory network

    NASA Astrophysics Data System (ADS)

    Mosqueiro, Thiago S.; Maia, Leonardo P.

    2013-07-01

    Spontaneous neural activity has been increasingly recognized as a subject of key relevance in neuroscience. It exhibits nontrivial spatiotemporal structure reflecting the organization of the underlying neural network and has proved to be closely intertwined with stimulus-induced activity patterns. As an additional contribution in this regard, we report computational studies that strongly suggest that a stimulus-free feature rules the behavior of an important psychophysical measure of the sensitivity of a sensory system to a stimulus, the so-called dynamic range. Indeed, in this paper we show that the entropy of the distribution of avalanche lifetimes (the information efficiency, since it can be interpreted as the efficiency of the network seen as a communication channel) always accompanies the dynamic range in the benchmark model for sensory systems. Specifically, by simulating the Kinouchi-Copelli (KC) model on two broad families of model networks, we generically observed that both quantities always increase or decrease together as functions of the average branching ratio (the control parameter of the KC model) and that the information efficiency typically exhibits critical optimization jointly with the dynamic range (i.e., both quantities are optimized at the same value of that control parameter, which turns out to be the critical point of a nonequilibrium phase transition). In contrast with the practice of taking power laws to identify critical points in most studies describing measured neuronal avalanches, we rely on data collapses as more robust signatures of criticality to claim that critical optimization may happen even when the distribution of avalanche lifetimes is not a power law, as suggested by a recent experiment. Finally, we note that the entropy of the size distribution of avalanches (the information capacity) does not always follow the dynamic range and the information efficiency when they are critically optimized, despite being more widely used than the latter to describe the computational capabilities of a neural network. This strongly suggests that dynamical rules allowing a proper temporal matching of the states of the interacting neurons are the key to achieving good performance in information processing, rather than increasing the number of available units.
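
    A compact caricature of the avalanche statistics discussed above: a Poisson branching process is critical at branching ratio 1, and one can estimate the Shannon entropy of its lifetime distribution (the paper's "information efficiency"). This is a sketch, not the Kinouchi-Copelli model on structured networks, and the cutoffs are arbitrary.

      import numpy as np

      def avalanche_lifetime(m, rng, t_max=2000, a_max=1_000_000):
          # Each active unit spawns ~Poisson(m) units in the next time step
          active, t = 1, 0
          while 0 < active < a_max and t < t_max:
              active = rng.poisson(m * active)
              t += 1
          return t

      def lifetime_entropy(m, n_trials=10_000, seed=0):
          rng = np.random.default_rng(seed)
          lifetimes = [avalanche_lifetime(m, rng) for _ in range(n_trials)]
          _, counts = np.unique(lifetimes, return_counts=True)
          p = counts / counts.sum()
          return -np.sum(p * np.log(p))   # Shannon entropy of lifetimes

      for m in (0.8, 0.9, 1.0):           # critical at branching ratio 1
          print(f"branching ratio {m}: lifetime entropy = {lifetime_entropy(m):.2f}")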

  2. Development of a two-fluid drag law for clustered particles using direct numerical simulation and validation through experiments

    NASA Astrophysics Data System (ADS)

    Abbasi Baharanchi, Ahmadreza

    This dissertation focused on the development and utilization of numerical and experimental approaches to improve CFD modeling of the fluidization flow of cohesive micron-size particles. The specific objectives of this research were: (1) developing a cluster prediction mechanism applicable to Two-Fluid Modeling (TFM) of gas-solid systems; (2) developing more accurate drag models for TFM of gas-solid fluidization flow in the presence of cohesive interparticle forces; (3) using the developed models to explore the improvement of accuracy of TFM in simulation of fluidization flow of cohesive powders; (4) understanding the causes and influential factors that led to improvements, and quantifying those improvements; and (5) gathering data from a fast fluidization flow for use in benchmark validations. Simulation results with the two developed cluster-aware drag models showed that cluster prediction could effectively influence the results in both models. The improvement in accuracy of TFM obtained with three versions of the first hybrid model was significant, and the best improvements were obtained by using the smallest values of the switch parameter, which led to capturing the smallest chances of cluster prediction. In the case of the second hybrid model, the dependence of the critical model parameter on the Reynolds number alone meant that the improvement in accuracy was significant only in the dense section of the fluidized bed. This finding suggests that a more sophisticated particle-resolved DNS model, able to span a wide range of solid volume fractions, could be used in the formulation of the cluster-aware drag model. The experimental results using high-speed imaging indicated the presence of particle clusters in the fluidization flow of FCC inside the riser of the FIU-CFB facility. In addition, pressure data were successfully captured along the fluidization column of the facility and used as benchmark validation data for the second hybrid model developed in this dissertation. It was shown that the second hybrid model could predict the pressure data in the dense section of the fluidization column with better accuracy.

  3. Calculation of the Phenix end-of-life test 'Control Rod Withdrawal' with the ERANOS code

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Tiberi, V.

    2012-07-01

    The Institute of Radiological Protection and Nuclear Safety (IRSN) acts as technical support to the French public authorities. As such, IRSN is in charge of the safety assessment of operating and under-construction reactors, as well as future projects. In this framework, one current objective of IRSN is to evaluate the ability and accuracy of numerical tools to foresee the consequences of accidents. Neutronic studies enter into the safety assessment from different points of view, among which are the core design and its protection system. They are necessary to evaluate the core behavior in case of accident in order to assess the integrity of the first barrier and the absence of a prompt criticality risk. To reach this objective, one main physical quantity has to be evaluated accurately: the neutronic power distribution in the core during the whole reactor lifetime. The Phenix end-of-life tests, carried out in 2009, aim at increasing the experience feedback on sodium-cooled fast reactors. These experiments were done in the framework of the development of the 4th generation of nuclear reactors. Ten tests were carried out: six on neutronic and fuel aspects, two on thermal hydraulics and two on the emergency shutdown. Two of them were chosen for an international exercise on thermal hydraulics and neutronics in the frame of an IAEA Coordinated Research Project. Concerning neutronics, the Control Rod Withdrawal test is relevant for safety because it allows evaluating the capability of calculation tools to compute the radial power distribution in fast reactor core configurations in which the flux field is strongly deformed. IRSN participated in this benchmark with the ERANOS code, developed by CEA for fast reactor studies. This paper presents the results obtained in the framework of the benchmark activity. Relatively good agreement was found with the available measurements, considering the approximations made in the modeling. The work underlines the importance of burn-up calculations in order to have a fine mesh of core concentrations for the calculation of the power distribution. (authors)

  4. A model for a drug distribution system in remote Australia as a social determinant of health using event structure analysis.

    PubMed

    Rovers, John P; Mages, Michelle D

    2017-09-25

    The social determinants of health include the health systems under which people live and utilize health services. One social determinant, for which pharmacists are responsible, is designing drug distribution systems that ensure patients have safe and convenient access to medications. This is critical for settings with poor access to health care. Rural and remote Australia is one example of a setting where the pharmacy profession, schools of pharmacy, and regulatory agencies require pharmacists to assure medication access. Studies of drug distribution systems in such settings are uncommon. This study describes a model for a drug distribution system in an Aboriginal Health Service in remote Australia. The results may be useful for policy setting, pharmacy system design, health professions education, benchmarking, or quality assurance efforts for health system managers in similarly remote locations. The results also suggest that pharmacists can promote access to medications as a social determinant of health. The primary objective of this study was to propose a model for a drug procurement, storage, and distribution system in a remote region of Australia. The secondary objective was to learn the opinions and experiences of healthcare workers under the model. Qualitative research methods were used. Semi-structured interviews were performed with a convenience sample of 11 individuals employed by an Aboriginal health service. Transcripts were analyzed using Event Structure Analysis (ESA) to develop the model. Transcripts were also analyzed to determine the opinions and experiences of health care workers. The model was comprised of 24 unique steps with seven distinct components: choosing a supplier; creating a list of preferred medications; budgeting and ordering; supply and shipping; receipt and storage in the clinic; prescribing process; dispensing and patient counseling. Interviewees described opportunities for quality improvement in choosing suppliers, legal issues and staffing, cold chain integrity, medication shortages and wastage, and adherence to policies. The model illustrates how pharmacists address medication access as a social determinant of health, and may be helpful for policy setting, system design, benchmarking, and quality assurance by health system designers. ESA is an effective and novel method of developing such models.

  5. Benchmarking Naval Shipbuilding with 3D Laser Scanning, Additive Manufacturing, and Collaborative Product Lifecycle Management

    DTIC Science & Technology

    2015-09-20

    battle force ships. The changes affect a small number of ship classes designated as (very) small combatants or logistics and support ships. Specifically...accurate, and fast method of helping shipbuilders and manufacturers design, redesign, modify, and salvage ships. However, only a handful of several... ship construction to become a fleet of 306 battle force ships over the next 30 years. It is critical that the Navy capture the full benefits of new

  6. n+235U resonance parameters and neutron multiplicities in the energy region below 100 eV

    NASA Astrophysics Data System (ADS)

    Pigni, Marco T.; Capote, Roberto; Trkov, Andrej; Pronyaev, Vladimir G.

    2017-09-01

    In August 2016, following the recent effort within the Collaborative International Evaluated Library Organization (CIELO) pilot project to improve the neutron cross sections of 235U, Oak Ridge National Laboratory (ORNL) collaborated with the International Atomic Energy Agency (IAEA) to release a resonance parameter evaluation. This evaluation restores the performance of the evaluated cross sections for the thermal- and above-thermal-solution benchmarks on the basis of newly evaluated thermal neutron constants (TNCs) and thermal prompt fission neutron spectra (PFNS). Performed with support from the US Nuclear Criticality Safety Program (NCSP) in an effort to provide the highest fidelity general purpose nuclear database for nuclear criticality applications, the resonance parameter evaluation was submitted as an ENDF-compatible file to be part of the next release of the ENDF/B-VIII.0 nuclear data library. The evaluation methodology used the Reich-Moore approximation of the R-matrix formalism, implemented in the code SAMMY, to fit the available time-of-flight (TOF) measured data for the neutron-induced cross sections of 235U from thermal energies up to 100 eV. While maintaining reasonably good agreement with the experimental data, the validation analysis focused on restoring the benchmark performance for 235U solutions by combining changes to the resonance parameters and to the prompt resonance ν̅.

  7. A Benchmark Problem for Development of Autonomous Structural Modal Identification

    NASA Technical Reports Server (NTRS)

    Pappa, Richard S.; Woodard, Stanley E.; Juang, Jer-Nan

    1996-01-01

    This paper summarizes modal identification results obtained using an autonomous version of the Eigensystem Realization Algorithm on a dynamically complex, laboratory structure. The benchmark problem uses 48 of 768 free-decay responses measured in a complete modal survey test. The true modal parameters of the structure are well known from two previous, independent investigations. Without user involvement, the autonomous data analysis identified 24 to 33 structural modes with good to excellent accuracy in 62 seconds of CPU time (on a DEC Alpha 4000 computer). The modal identification technique described in the paper is the baseline algorithm for NASA's Autonomous Dynamics Determination (ADD) experiment scheduled to fly on International Space Station assembly flights in 1997-1999.
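
    A bare-bones, single-output Eigensystem Realization Algorithm can be sketched in a few lines of Python; the flight algorithm is far more elaborate, and the two-mode decay signal below is synthetic rather than test data.

      import numpy as np

      def era_modes(y, dt, order=4, rows=40, cols=40):
          # Block Hankel matrices built from the free-decay (Markov) sequence
          H0 = np.array([[y[i + j] for j in range(cols)] for i in range(rows)])
          H1 = np.array([[y[i + j + 1] for j in range(cols)] for i in range(rows)])
          U, s, Vt = np.linalg.svd(H0, full_matrices=False)
          U, s, Vt = U[:, :order], s[:order], Vt[:order, :]
          S_inv_sqrt = np.diag(1.0 / np.sqrt(s))
          A = S_inv_sqrt @ U.T @ H1 @ Vt.T @ S_inv_sqrt  # realized state matrix
          z = np.linalg.eigvals(A)                       # discrete-time poles
          lam = np.log(z) / dt                           # continuous-time poles
          freq_hz = np.abs(lam) / (2 * np.pi)
          damping = -lam.real / np.abs(lam)
          return freq_hz, damping

      # Usage: synthetic free decay with modes near 3 Hz and 8 Hz
      dt = 0.01
      t = np.arange(0, 2, dt)
      y = (np.exp(-0.2 * t) * np.sin(2 * np.pi * 3 * t)
           + 0.5 * np.exp(-0.5 * t) * np.sin(2 * np.pi * 8 * t))
      f, zeta = era_modes(y, dt)
      print(np.unique(np.round(f, 2)), np.round(zeta, 3))

    In practice the model order is chosen from the singular value spectrum of H0 rather than fixed in advance, which is part of what an autonomous version must automate.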

  8. Benchmarking and performance analysis of the CM-2. [SIMD computer

    NASA Technical Reports Server (NTRS)

    Myers, David W.; Adams, George B., II

    1988-01-01

    A suite of benchmarking routines testing communication, basic arithmetic operations, and selected kernel algorithms written in LISP and PARIS was developed for the CM-2. Experiment runs are automated via a software framework that sequences individual tests, allowing for unattended overnight operation. Multiple measurements are made and treated statistically to generate well-characterized results from the noisy values given by cm:time. The results obtained provide a comparison with similar, but less extensive, testing done on a CM-1. Tests were chosen to aid the algorithmist in constructing fast, efficient, and correct code on the CM-2, as well as gain insight into what performance criteria are needed when evaluating parallel processing machines.

  9. Fidelity Witnesses for Fermionic Quantum Simulations

    NASA Astrophysics Data System (ADS)

    Gluza, M.; Kliesch, M.; Eisert, J.; Aolita, L.

    2018-05-01

    Experimental interest and developments in quantum spin-1/2 chains have increased uninterruptedly over the past decade. In many instances, the target quantum simulation belongs to the broader class of noninteracting fermionic models, constituting an important benchmark. In spite of this class being analytically efficiently tractable, no direct certification tool has yet been reported for it. In fact, in experiments, certification has almost exclusively relied on notions of quantum state tomography scaling very unfavorably with the system size. Here, we develop experimentally friendly fidelity witnesses for all pure fermionic Gaussian target states. Their expectation value yields a tight lower bound to the fidelity and can be measured efficiently. We derive witnesses in full generality in the Majorana-fermion representation and apply them to experimentally relevant spin-1/2 chains. Among others, we show how to efficiently certify strongly out-of-equilibrium dynamics in critical Ising chains. At the heart of the measurement scheme is a variant of importance sampling specially tailored to overlaps between covariance matrices. The method is shown to be robust against finite experimental-state infidelities.

  10. Solving NP-Hard Problems with Physarum-Based Ant Colony System.

    PubMed

    Liu, Yuxin; Gao, Chao; Zhang, Zili; Lu, Yuxiao; Chen, Shi; Liang, Mingxin; Tao, Li

    2017-01-01

    NP-hard problems exist in many real world applications. Ant colony optimization (ACO) algorithms can provide approximate solutions for those NP-hard problems, but the performance of ACO algorithms is significantly reduced due to premature convergence and weak robustness, etc. With these observations in mind, this paper proposes a Physarum-based pheromone matrix optimization strategy in ant colony system (ACS) for solving NP-hard problems such as traveling salesman problem (TSP) and 0/1 knapsack problem (0/1 KP). In the Physarum-inspired mathematical model, one of the unique characteristics is that critical tubes can be reserved in the process of network evolution. The optimized updating strategy employs the unique feature and accelerates the positive feedback process in ACS, which contributes to the quick convergence of the optimal solution. Some experiments were conducted using both benchmark and real datasets. The experimental results show that the optimized ACS outperforms other meta-heuristic algorithms in accuracy and robustness for solving TSPs. Meanwhile, the convergence rate and robustness for solving 0/1 KPs are better than those of classical ACS.
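
    To make the machinery concrete, here is a minimal ant-colony TSP solver in Python showing the probabilistic tour construction and the pheromone evaporation/reinforcement that the paper's Physarum-based strategy optimizes; the Physarum update itself is not reproduced, and all parameters are illustrative.

      import numpy as np

      rng = np.random.default_rng(2)
      n = 12
      pts = rng.random((n, 2))
      # Pairwise distances; identity added only to avoid divide-by-zero on the diagonal
      dist = np.linalg.norm(pts[:, None] - pts[None, :], axis=-1) + np.eye(n)

      tau = np.ones((n, n))                  # pheromone matrix
      alpha, beta, rho, n_ants = 1.0, 2.0, 0.1, 20

      def tour_length(tour):
          return sum(dist[tour[i], tour[(i + 1) % n]] for i in range(n))

      best_tour, best_len = None, np.inf
      for it in range(100):
          for _ in range(n_ants):
              tour = [0]
              while len(tour) < n:
                  i = tour[-1]
                  cand = [j for j in range(n) if j not in tour]
                  # Transition weights: pheromone^alpha * visibility^beta
                  w = np.array([tau[i, j] ** alpha * (1 / dist[i, j]) ** beta
                                for j in cand])
                  tour.append(cand[rng.choice(len(cand), p=w / w.sum())])
              L = tour_length(tour)
              if L < best_len:
                  best_tour, best_len = tour, L
          tau *= (1 - rho)                   # evaporation
          for i in range(n):                 # reinforce the best-so-far tour
              a, b = best_tour[i], best_tour[(i + 1) % n]
              tau[a, b] += 1.0 / best_len
              tau[b, a] += 1.0 / best_len
      print(f"best tour length = {best_len:.3f}")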

  11. Women Physicists Speak: The 2001 International Study of Women in Physics

    NASA Astrophysics Data System (ADS)

    Ivie, Rachel; Czujko, Roman; Stowe, Katie

    2002-09-01

    The Working Group on Women in Physics of the International Union of Pure and Applied Physics (IUPAP) subcontracted with the Statistical Research Center of the American Institute of Physics (AIP) to conduct an international study on women in physics. This study had two parts. First, we conducted a benchmarking study to identify reliable sources and collect data on the representation of women in physics in as many IUPAP member countries as possible. Second, we conducted an international survey of individual women physicists. The survey addressed issues related to both education and employment. On the education side, we asked about experiences and critical incidents from secondary school through the highest degree earned. On the employment side, we asked about how the respondents' careers had evolved and their self-assessment of how well their careers had progressed. In addition, the questionnaire also addressed issues that cut across education and employment, such as the impact of marriage and children, the factors that contributed the most toward the success they had achieved to date, and suggestions for what could be done to improve the situation of women physicists.

  12. Generating Performance Models for Irregular Applications

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Friese, Ryan D.; Tallent, Nathan R.; Vishnu, Abhinav

    2017-05-30

    Many applications have irregular behavior --- non-uniform input data, input-dependent solvers, irregular memory accesses, unbiased branches --- that cannot be captured using today's automated performance modeling techniques. We describe new hierarchical critical path analyses for the Palm model generation tool. To create a model's structure, we capture tasks along representative MPI critical paths. We create a histogram of critical tasks with parameterized task arguments and instance counts. To model each task, we identify hot instruction-level sub-paths and model each sub-path based on data flow, instruction scheduling, and data locality. We describe application models that generate accurate predictions for strong scaling when varying CPU speed, cache speed, memory speed, and architecture. We present results for the Sweep3D neutron transport benchmark; Page Rank on multiple graphs; Support Vector Machine with pruning; and PFLOTRAN's reactive flow/transport solver with domain-induced load imbalance.

  13. Rethinking key–value store for parallel I/O optimization

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Kougkas, Anthony; Eslami, Hassan; Sun, Xian-He

    2015-01-26

    Key-value stores are being widely used as the storage system for large-scale internet services and cloud storage systems. However, they are rarely used in HPC systems, where parallel file systems are the dominant storage solution. In this study, we examine the architecture differences and performance characteristics of parallel file systems and key-value stores. We propose using key-value stores to optimize overall Input/Output (I/O) performance, especially for workloads that parallel file systems cannot handle well, such as the cases with intense data synchronization or heavy metadata operations. We conducted experiments with several synthetic benchmarks, an I/O benchmark, and a real application. We modeled the performance of these two systems using collected data from our experiments, and we provide a predictive method to identify which system offers better I/O performance given a specific workload. The results show that we can optimize the I/O performance in HPC systems by utilizing key-value stores.
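
    A toy serial harness in the spirit of this comparison: many small records written as individual files versus a single key-value store (Python's dbm module). Laptop-scale behavior differs greatly from HPC parallel file systems and production KV stores, so this only sketches the measurement pattern, not the paper's results.

      import dbm, os, tempfile, time

      def time_files(n, payload):
          # One tiny file per record: stresses metadata operations
          d = tempfile.mkdtemp()
          t0 = time.perf_counter()
          for i in range(n):
              with open(os.path.join(d, f"k{i}"), "wb") as f:
                  f.write(payload)
          return time.perf_counter() - t0

      def time_kv(n, payload):
          # All records in a single key-value store file
          d = tempfile.mkdtemp()
          t0 = time.perf_counter()
          with dbm.open(os.path.join(d, "store"), "c") as db:
              for i in range(n):
                  db[f"k{i}"] = payload
          return time.perf_counter() - t0

      payload = b"x" * 256
      print(f"files: {time_files(2000, payload):.3f} s, "
            f"kv: {time_kv(2000, payload):.3f} s")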

  14. Investigation of the transient fuel preburner manifold and combustor

    NASA Technical Reports Server (NTRS)

    Wang, Ten-See; Chen, Yen-Sen; Farmer, Richard C.

    1989-01-01

    A computational fluid dynamics (CFD) model with finite-rate reactions, FDNS, was developed to study the start transient of the Space Shuttle Main Engine (SSME) fuel preburner (FPB). FDNS is a time-accurate, pressure-based CFD code. An upwind scheme, based on second- and fourth-order central differencing with adaptive artificial dissipation, was employed for spatial discretization. A state-of-the-art two-equation k-epsilon (T) turbulence model was employed for the turbulence calculation. A Pade' Rational Solution (PARASOL) chemistry algorithm was coupled with the point-implicit procedure. FDNS was benchmarked against three well-documented experiments: a confined swirling coaxial jet, a non-reactive ramjet dump combustor, and a reactive ramjet dump combustor. Excellent comparisons were obtained for the benchmark cases. The code was then used to study the start transient of an axisymmetric SSME fuel preburner. The predicted transient operation of the preburner agrees well with experiment. Furthermore, it was found that an appreciable amount of unburned oxygen entered the turbine stages.

  15. Sulfur activation at the Little Boy-Comet Critical Assembly: a replica of the Hiroshima bomb

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Kerr, G.D.; Emery, J.F.; Pace, J.V. III

    1985-04-01

    Studies have been completed on the activation of sulfur by fast neutrons from the Little Boy-Comet Critical Assembly which replicates the general features of the Hiroshima bomb. The complex effects of the bomb's design and construction on leakage of sulfur-activation neutrons were investigated both experimentally and theoretically. Our sulfur activation studies were performed as part of a larger program to provide benchmark data for testing of methods used in recent source-term calculations for the Hiroshima bomb. Source neutrons capable of activating sulfur play an important role in determining neutron doses in Hiroshima at a kilometer or more from the point of explosion. 37 refs., 5 figs., 6 tabs.
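
    For background, sulfur activation analyses of this kind rest on the standard activation relation below (a textbook formula, not taken from the report), relating induced activity to the fast-neutron flux through a threshold reaction such as 32S(n,p)32P:

      % Background only: standard activation relation, not from the report.
      % A is the induced activity after irradiation time t_irr, N the number of
      % target atoms, phi the fast-neutron flux, sigma_act the spectrum-averaged
      % activation cross section, and lambda the decay constant of the product.
      A = N \,\bar{\sigma}_{\mathrm{act}}\, \phi \left(1 - e^{-\lambda t_{\mathrm{irr}}}\right)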

  16. Algorithms for Lightweight Key Exchange †

    PubMed Central

    Santonja, Juan; Zamora, Antonio

    2017-01-01

    Public-key cryptography is too slow for general purpose encryption, and most applications limit its use as much as possible. Some secure protocols, especially those that enable forward secrecy, make much heavier use of public-key cryptography, increasing the demand for lightweight cryptosystems that can be implemented on low-powered or mobile devices. These performance requirements are even more significant in critical infrastructure and emergency scenarios, where peer-to-peer networks are deployed for increased availability and resiliency. We benchmark several public-key key-exchange algorithms, determine those that best fit the requirements of critical infrastructure and emergency applications, propose a security framework based on these algorithms, and study its application to decentralized node or sensor networks. PMID:28654006

  17. Accelerating the connection between experiments and models: The FACE-MDS experience

    NASA Astrophysics Data System (ADS)

    Norby, R. J.; Medlyn, B. E.; De Kauwe, M. G.; Zaehle, S.; Walker, A. P.

    2014-12-01

    The mandate is clear for improving communication between models and experiments to better evaluate terrestrial responses to atmospheric and climatic change. Unfortunately, progress in linking experimental and modeling approaches has been slow and sometimes frustrating. Recent successes in linking results from the Duke and Oak Ridge free-air CO2 enrichment (FACE) experiments with ecosystem and land surface models - the FACE Model-Data Synthesis (FACE-MDS) project - came only after a period of slow progress, but the experience points the way to future model-experiment interactions. As the FACE experiments were approaching their termination, the FACE research community made an explicit attempt to work together with the modeling community to synthesize and deliver experimental data to benchmark models and to use models to supply appropriate context for the experimental results. Initial problems that impeded progress were: measurement protocols were not consistent across different experiments; data were not well organized for model input; and parameterizing and spinning up models that were not designed for simulating a specific site was difficult. Once these problems were worked out, the FACE-MDS project has been very successful in using data from the Duke and ORNL FACE experiment to test critical assumptions in the models. The project showed, for example, that the stomatal conductance model most widely used in models was supported by experimental data, but models did not capture important responses such as increased leaf mass per unit area in elevated CO2, and did not appropriately represent foliar nitrogen allocation. We now have an opportunity to learn from this experience. New FACE experiments that have recently been initiated, or are about to be initiated, include a eucalyptus forest in Australia; the AmazonFACE experiment in a primary, tropical forest in Brazil; and a mature oak woodland in England. Cross-site science questions are being developed that will have a strong modeling framework, and modelers and experimentalists will work to establish common measurement protocols and data format. By starting the model-experiment connection early and learning from our past experiences, we expect to significantly shorten the time lags between advances in process-oriented studies and large-scale models.

  18. The Safety Attitudes Questionnaire: psychometric properties, benchmarking data, and emerging research.

    PubMed

    Sexton, John B; Helmreich, Robert L; Neilands, Torsten B; Rowan, Kathy; Vella, Keryn; Boyden, James; Roberts, Peter R; Thomas, Eric J

    2006-04-03

    There is widespread interest in measuring healthcare provider attitudes about issues relevant to patient safety (often called safety climate or safety culture). Here we report the psychometric properties, establish benchmarking data, and discuss emerging areas of research with the University of Texas Safety Attitudes Questionnaire. Six cross-sectional surveys of health care providers (n = 10,843) in 203 clinical areas (including critical care units, operating rooms, inpatient settings, and ambulatory clinics) in three countries (USA, UK, New Zealand). Multilevel factor analyses yielded results at the clinical area level and the respondent nested within clinical area level. We report scale reliability, floor/ceiling effects, item factor loadings, inter-factor correlations, and percentage of respondents who agree with each item and scale. A six factor model of provider attitudes fit to the data at both the clinical area and respondent nested within clinical area levels. The factors were: Teamwork Climate, Safety Climate, Perceptions of Management, Job Satisfaction, Working Conditions, and Stress Recognition. Scale reliability was 0.9. Provider attitudes varied greatly both within and among organizations. Results are presented to allow benchmarking among organizations and emerging research is discussed. The Safety Attitudes Questionnaire demonstrated good psychometric properties. Healthcare organizations can use the survey to measure caregiver attitudes about six patient safety-related domains, to compare themselves with other organizations, to prompt interventions to improve safety attitudes and to measure the effectiveness of these interventions.

  19. Benchmark Shock Tube Experiments for Radiative Heating Relevant to Earth Re-Entry

    NASA Technical Reports Server (NTRS)

    Brandis, A. M.; Cruden, B. A.

    2017-01-01

    Detailed spectrally and spatially resolved radiance has been measured in the Electric Arc Shock Tube (EAST) facility for conditions relevant to high-speed entry into a variety of atmospheres, including Earth, Venus, Titan, Mars and the Outer Planets. The tests that measured radiation relevant to Earth re-entry are the focus of this work and are taken from campaigns 47, 50, 52 and 57. These tests covered conditions from 8 km/s to 15.5 km/s at initial pressures ranging from 0.05 Torr to 1 Torr, of which shots at 0.1 and 0.2 Torr are analyzed in this paper. These conditions cover a range of points of interest for potential flight missions, including return from Low Earth Orbit, the Moon and Mars. The large volume of testing available from EAST is useful for statistical analysis of radiation data, but is problematic for identifying representative experiments for performing detailed analysis. Therefore, the intent of this paper is to select a subset of benchmark test data that can be considered for further detailed study. These benchmark shots are intended to provide more accessible data sets for future code validation studies and facility-to-facility comparisons. The shots selected as benchmark data are the ones in closest agreement with a line of best fit through all of the EAST results, whilst also showing the best experimental characteristics, such as test time and convergence to equilibrium. The EAST data are presented in different formats for analysis. These data include the spectral radiance at equilibrium, the spatial dependence of radiance over defined wavelength ranges, and the mean non-equilibrium spectral radiance (the so-called 'spectral non-equilibrium metric'). All the information needed to simulate each experimental trace, including free-stream conditions, shock time-of-arrival (i.e., x-t) relation, and the spectral and spatial resolution functions, is provided.

  20. Fixism and conservation science.

    PubMed

    Robert, Alexandre; Fontaine, Colin; Veron, Simon; Monnet, Anne-Christine; Legrand, Marine; Clavel, Joanne; Chantepie, Stéphane; Couvet, Denis; Ducarme, Frédéric; Fontaine, Benoît; Jiguet, Frédéric; le Viol, Isabelle; Rolland, Jonathan; Sarrazin, François; Teplitsky, Céline; Mouchet, Maud

    2017-08-01

    The field of biodiversity conservation has recently been criticized as relying on a fixist view of the living world in which existing species constitute at the same time targets of conservation efforts and static states of reference, which is in apparent disagreement with evolutionary dynamics. We reviewed the prominent role of species as conservation units and the common benchmark approach to conservation that aims to use past biodiversity as a reference to conserve current biodiversity. We found that the species approach is justified by the discrepancy between the time scales of macroevolution and human influence and that biodiversity benchmarks are based on reference processes rather than fixed reference states. Overall, we argue that the ethical and theoretical frameworks underlying conservation research are based on macroevolutionary processes, such as extinction dynamics. Current species, phylogenetic, community, and functional conservation approaches constitute short-term responses to short-term human effects on these reference processes, and these approaches are consistent with evolutionary principles. © 2016 Society for Conservation Biology.
