Providing Nuclear Criticality Safety Analysis Education through Benchmark Experiment Evaluation
DOE Office of Scientific and Technical Information (OSTI.GOV)
John D. Bess; J. Blair Briggs; David W. Nigg
2009-11-01
One of the challenges facing today's new workforce of nuclear criticality safety engineers is being asked to assess nuclear systems and establish safety guidelines without having received significant experience or hands-on training prior to graduation. Participation in the International Criticality Safety Benchmark Evaluation Project (ICSBEP) and/or the International Reactor Physics Experiment Evaluation Project (IRPhEP) provides students and young professionals the opportunity to gain experience and enhance critical engineering skills.
GROWTH OF THE INTERNATIONAL CRITICALITY SAFETY AND REACTOR PHYSICS EXPERIMENT EVALUATION PROJECTS
DOE Office of Scientific and Technical Information (OSTI.GOV)
J. Blair Briggs; John D. Bess; Jim Gulliford
2011-09-01
Since the International Conference on Nuclear Criticality Safety (ICNC) 2007, the International Criticality Safety Benchmark Evaluation Project (ICSBEP) and the International Reactor Physics Experiment Evaluation Project (IRPhEP) have continued to expand their efforts and broaden their scope. Eighteen countries participated in the ICSBEP in 2007. Now there are 20, with recent contributions from Sweden and Argentina. The IRPhEP has also expanded from eight contributing countries in 2007 to 16 in 2011. Since ICNC 2007, the contents of the 'International Handbook of Evaluated Criticality Safety Benchmark Experiments' [1] have increased from 442 evaluations (38,000 pages) containing benchmark specifications for 3,955 critical or subcritical configurations to 516 evaluations (nearly 55,000 pages) containing benchmark specifications for 4,405 critical or subcritical configurations in the 2010 Edition of the ICSBEP Handbook. The contents of the Handbook have also increased from 21 to 24 criticality-alarm-placement/shielding configurations with multiple dose points for each, and from 20 to 200 configurations categorized as fundamental physics measurements relevant to criticality safety applications. Approximately 25 new evaluations and 150 additional configurations are expected to be added to the 2011 edition of the Handbook. Since ICNC 2007, the contents of the 'International Handbook of Evaluated Reactor Physics Benchmark Experiments' [2] have increased from 16 different experimental series that were performed at 12 different reactor facilities to 53 experimental series that were performed at 30 different reactor facilities in the 2011 edition of the Handbook. Considerable effort has also been made to improve the functionality of the searchable database, DICE (Database for the International Criticality Safety Benchmark Evaluation Project), and to verify the accuracy of the data contained therein. DICE will be discussed in separate papers at ICNC 2011. The status of the ICSBEP and the IRPhEP will be discussed in the full paper, selected benchmarks that have been added to the ICSBEP Handbook will be highlighted, and a preview of the new benchmarks that will appear in the September 2011 edition of the Handbook will be provided. Accomplishments of the IRPhEP will also be highlighted, and the future of both projects will be discussed. REFERENCES: (1) International Handbook of Evaluated Criticality Safety Benchmark Experiments, NEA/NSC/DOC(95)03/I-IX, Organisation for Economic Co-operation and Development-Nuclear Energy Agency (OECD-NEA), September 2010 Edition, ISBN 978-92-64-99140-8. (2) International Handbook of Evaluated Reactor Physics Benchmark Experiments, NEA/NSC/DOC(2006)1, Organisation for Economic Co-operation and Development-Nuclear Energy Agency (OECD-NEA), March 2011 Edition, ISBN 978-92-64-99141-5.
Educating Next Generation Nuclear Criticality Safety Engineers at the Idaho National Laboratory
DOE Office of Scientific and Technical Information (OSTI.GOV)
J. D. Bess; J. B. Briggs; A. S. Garcia
2011-09-01
One of the challenges in educating our next generation of nuclear safety engineers is the limited opportunity to receive significant experience or hands-on training prior to graduation. Such training is generally restricted to on-the-job training before this new engineering workforce can adequately assess nuclear systems and establish safety guidelines. Participation in the International Criticality Safety Benchmark Evaluation Project (ICSBEP) and the International Reactor Physics Experiment Evaluation Project (IRPhEP) can provide students and young professionals the opportunity to gain experience and enhance critical engineering skills. The ICSBEP and IRPhEP publish annual handbooks that contain evaluations of experiments along with summarized experimental data and peer-reviewed benchmark specifications to support the validation of neutronics codes, nuclear cross-section data, and reactor designs. Participation in the benchmark process not only benefits those who use these handbooks within the international community, but also provides the individual with opportunities for professional development, networking with an international community of experts, and valuable experience for future employment. Traditionally, students have participated in benchmarking activities via internships at national laboratories, universities, or companies involved with the ICSBEP and IRPhEP programs. Additional programs have been developed to facilitate the nuclear education of students while participating in the benchmark projects. These include coordination with the Center for Space Nuclear Research (CSNR) Next Degree Program, a collaboration with the Department of Energy Idaho Operations Office to train nuclear and criticality safety engineers, and student evaluations that serve as the basis for Master's theses in nuclear engineering.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Van Der Marck, S. C.
Three nuclear data libraries have been tested extensively using criticality safety benchmark calculations. The three libraries are the new release of the US library ENDF/B-VII.1 (2011), the new release of the Japanese library JENDL-4.0 (2011), and the OECD/NEA library JEFF-3.1 (2006). All calculations were performed with the continuous-energy Monte Carlo code MCNP (version 4C3, as well as version 6-beta1). Around 2000 benchmark cases from the International Handbook of Criticality Safety Benchmark Experiments (ICSBEP) were used. The results were analyzed per ICSBEP category and per element. Overall, the three libraries show similar performance on most criticality safety benchmarks. The largest differences are probably caused by elements such as Be, C, Fe, Zr, and W. (authors)
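Analyses like this one typically reduce each benchmark case to a calculated-over-expected (C/E) eigenvalue ratio and then aggregate per category or element. A minimal Python sketch of that bookkeeping follows; the function name and the handful of cases are illustrative only (real analyses cover around 2000 benchmarks):

```python
from collections import defaultdict
from statistics import mean

def summarize_ce(results):
    """Average C/E (calculated over expected k-eff) per ICSBEP category.

    results : iterable of (category, k_calc, k_bench) tuples
    """
    by_cat = defaultdict(list)
    for cat, k_calc, k_bench in results:
        by_cat[cat].append(k_calc / k_bench)
    return {cat: mean(ratios) for cat, ratios in by_cat.items()}

# Hypothetical cases, illustrative numbers only
cases = [("LEU-COMP-THERM", 0.9982, 1.0000),
         ("LEU-COMP-THERM", 1.0005, 1.0000),
         ("MIX-MET-FAST",   0.9968, 1.0000)]
print(summarize_ce(cases))
```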
Scale/TSUNAMI Sensitivity Data for ICSBEP Evaluations
DOE Office of Scientific and Technical Information (OSTI.GOV)
Rearden, Bradley T; Reed, Davis Allan; Lefebvre, Robert A
2011-01-01
The Tools for Sensitivity and Uncertainty Analysis Methodology Implementation (TSUNAMI) software developed at Oak Ridge National Laboratory (ORNL) as part of the Scale code system provides unique methods for code validation, gap analysis, and experiment design. For TSUNAMI analysis, sensitivity data are generated for each application and each existing or proposed experiment used in the assessment. The validation of diverse sets of applications requires potentially thousands of data files to be maintained and organized by the user, and a growing number of these files are available through the International Handbook of Evaluated Criticality Safety Benchmark Experiments (IHECSBE) distributed through the International Criticality Safety Benchmark Evaluation Project (ICSBEP). To facilitate the use of the IHECSBE benchmarks in rigorous TSUNAMI validation and gap analysis techniques, ORNL generated Scale/TSUNAMI sensitivity data files (SDFs) for several hundred benchmarks for distribution with the IHECSBE. For the 2010 edition of the IHECSBE, the sensitivity data were generated using 238-group cross-section data based on ENDF/B-VII.0 for 494 benchmark experiments. Additionally, ORNL has developed a quality assurance procedure to guide the generation of Scale inputs and sensitivity data, as well as a graphical user interface to facilitate the use of sensitivity data in identifying experiments and applying them in validation studies.
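TSUNAMI-IP quantifies how similar an application is to a candidate benchmark through the correlation coefficient c_k, built from the two sensitivity vectors and a cross-section covariance matrix. The sketch below is a minimal NumPy rendering of that formula, not the Scale implementation; it assumes dense arrays instead of the SDF file format, and the toy vectors and covariances are illustrative only:

```python
import numpy as np

def ck_similarity(s_app, s_exp, cov):
    """Correlation coefficient c_k between an application and an experiment.

    s_app, s_exp : 1-D sensitivity vectors of k-eff to the nuclear data
                   (one entry per nuclide/reaction/energy-group triple)
    cov          : matching cross-section covariance matrix
    """
    var_app = s_app @ cov @ s_app   # data-induced k-eff variance, application
    var_exp = s_exp @ cov @ s_exp   # data-induced k-eff variance, experiment
    shared  = s_app @ cov @ s_exp   # covariance-weighted shared uncertainty
    return shared / np.sqrt(var_app * var_exp)

# Toy 3-parameter example: an experiment is "similar" when c_k is near 1.0
cov   = np.diag([1e-4, 4e-4, 9e-4])
s_app = np.array([0.30, -0.10, 0.05])
s_exp = np.array([0.28, -0.12, 0.04])
print(f"c_k = {ck_similarity(s_app, s_exp, cov):.3f}")
```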
DOE Office of Scientific and Technical Information (OSTI.GOV)
Bess, John D.; Briggs, J. Blair; Ivanova, Tatiana
2017-02-01
In the past several decades, numerous experiments have been performed worldwide to support reactor operations, measurements, design, and nuclear safety. Those experiments represent an extensive international investment in infrastructure, expertise, and cost, and constitute significantly valuable resources of data supporting past, current, and future research activities. Those valuable assets form the basis for recording, development, and validation of our nuclear methods and integral nuclear data [1]. The loss of these experimental data, which has occurred all too often in recent years, is tragic. The high cost of repeating many of these measurements can be prohibitive, if not impossible, to surmount. Two international projects were developed, and are under the direction of the Organisation for Economic Co-operation and Development Nuclear Energy Agency (OECD NEA), to address the challenges of not just data preservation, but evaluation of the data to determine its merit for modern and future use. The International Criticality Safety Benchmark Evaluation Project (ICSBEP) was established to identify and verify comprehensive critical benchmark data sets; evaluate the data, including quantification of biases and uncertainties; compile the data and calculations in a standardized format; and formally document the effort into a single source of verified benchmark data [2]. Similarly, the International Reactor Physics Experiment Evaluation Project (IRPhEP) was established to preserve integral reactor physics experimental data, including separate or special effects data for nuclear energy and technology applications [3]. Annually, contributors from around the world continue to collaborate in the evaluation and review of select benchmark experiments for preservation and dissemination. The extensively peer-reviewed integral benchmark data can then be utilized by nuclear design and safety analysts to validate the analytical tools, methods, and data needed for next-generation reactor design, safety analysis requirements, and all other front- and back-end activities contributing to the overall nuclear fuel cycle where quality neutronics calculations are paramount.
DOE Office of Scientific and Technical Information (OSTI.GOV)
John D. Bess; J. Blair Briggs; Jim Gulliford
2014-10-01
The International Reactor Physics Experiment Evaluation Project (IRPhEP) is a widely recognized, world-class program. The work of the IRPhEP is documented in the International Handbook of Evaluated Reactor Physics Benchmark Experiments (IRPhEP Handbook). Integral data from the IRPhEP Handbook are used by reactor safety and design, nuclear data, criticality safety, and analytical methods development specialists worldwide to perform necessary validations of their calculational techniques. The IRPhEP Handbook is among the most frequently cited references in the nuclear industry and is expected to be a valuable resource for future decades.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Marck, Steven C. van der, E-mail: vandermarck@nrg.eu
Recent releases of three major world nuclear reaction data libraries, ENDF/B-VII.1, JENDL-4.0, and JEFF-3.1.1, have been tested extensively using benchmark calculations. The calculations were performed with the latest release of the continuous-energy Monte Carlo neutronics code MCNP, i.e. MCNP6. Three types of benchmarks were used, viz. criticality safety benchmarks, (fusion) shielding benchmarks, and reference systems for which the effective delayed neutron fraction is reported. For criticality safety, more than 2000 benchmarks from the International Handbook of Criticality Safety Benchmark Experiments were used. Benchmarks from all categories were used, ranging from low-enriched uranium, compound fuel, thermal spectrum ones (LEU-COMP-THERM) to mixed uranium-plutonium, metallic fuel, fast spectrum ones (MIX-MET-FAST). For fusion shielding, many benchmarks were based on IAEA specifications for the Oktavian experiments (for Al, Co, Cr, Cu, LiF, Mn, Mo, Si, Ti, W, Zr), the Fusion Neutronics Source in Japan (for Be, C, N, O, Fe, Pb), and Pulsed Sphere experiments at Lawrence Livermore National Laboratory (for 6Li, 7Li, Be, C, N, O, Mg, Al, Ti, Fe, Pb, D2O, H2O, concrete, polyethylene, and Teflon). The new functionality in MCNP6 to calculate the effective delayed neutron fraction was tested by comparison with more than thirty measurements in widely varying systems. Among these were measurements in the Tank Critical Assembly (TCA, Japan) and IPEN/MB-01 (Brazil), both with a thermal spectrum, and two cores in Masurca (France) and three cores in the Fast Critical Assembly (FCA, Japan), all with fast spectra. The performance of the three libraries, in combination with MCNP6, is shown to be good. The results for the LEU-COMP-THERM category are on average very close to the benchmark value. Also for most other categories the results are satisfactory. Deviations from the benchmark values do occur in certain benchmark series, or in isolated cases within benchmark series. Such instances can often be related to nuclear data for specific non-fissile elements, such as C, Fe, or Gd. Indications are that the intermediate and mixed spectrum cases are less well described. The results for the shielding benchmarks are generally good, with very similar results for the three libraries in the majority of cases. Nevertheless there are, in certain cases, strong deviations between calculated and benchmark values, such as for Co and Mg. Also, the results show discrepancies at certain energies or angles for e.g. C, N, O, Mo, and W. The functionality of MCNP6 to calculate the effective delayed neutron fraction yields very good results for all three libraries.
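One standard way to obtain an effective delayed neutron fraction from a Monte Carlo eigenvalue code is the prompt-ratio method, beta_eff ≈ 1 − k_p/k, using one run with all neutrons and one with delayed neutron production suppressed. The abstract does not state which method MCNP6 applies internally, so the sketch below only illustrates the prompt-ratio arithmetic, with hypothetical run results:

```python
import math

def beta_eff_prompt_ratio(k_total, sig_total, k_prompt, sig_prompt):
    """Estimate beta_eff from two independent k-eff runs, one with all
    neutrons and one with prompt neutrons only:

        beta_eff ~= 1 - k_prompt / k_total

    The statistical uncertainty is propagated assuming the two runs are
    uncorrelated."""
    beta = 1.0 - k_prompt / k_total
    rel = math.hypot(sig_prompt / k_prompt, sig_total / k_total)
    return beta, (k_prompt / k_total) * rel

# Hypothetical Monte Carlo results (not actual benchmark values)
beta, sigma = beta_eff_prompt_ratio(1.00012, 0.00005, 0.99290, 0.00005)
print(f"beta_eff = {beta:.5f} +/- {sigma:.5f}")   # ~722 pcm, plausible for a U-235 system
```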
ICSBEP Benchmarks For Nuclear Data Applications
DOE Office of Scientific and Technical Information (OSTI.GOV)
Briggs, J. Blair
2005-05-24
The International Criticality Safety Benchmark Evaluation Project (ICSBEP) was initiated in 1992 by the United States Department of Energy. The ICSBEP became an official activity of the Organization for Economic Cooperation and Development (OECD) Nuclear Energy Agency (NEA) in 1995. Representatives from the United States, United Kingdom, France, Japan, the Russian Federation, Hungary, Republic of Korea, Slovenia, Serbia and Montenegro (formerly Yugoslavia), Kazakhstan, Spain, Israel, Brazil, Poland, and the Czech Republic are now participating. South Africa, India, China, and Germany are considering participation. The purpose of the ICSBEP is to identify, evaluate, verify, and formally document a comprehensive and internationally peer-reviewed set of criticality safety benchmark data. The work of the ICSBEP is published as an OECD handbook entitled 'International Handbook of Evaluated Criticality Safety Benchmark Experiments.' The 2004 Edition of the Handbook contains benchmark specifications for 3331 critical or subcritical configurations that are intended for use in validation efforts and for testing basic nuclear data. New to the 2004 Edition of the Handbook is a draft criticality-alarm/shielding-type benchmark that should be finalized in 2005, along with two other similar benchmarks. The Handbook is being used extensively for nuclear data testing and is expected to be a valuable resource for code and data validation and improvement efforts for decades to come. Specific benchmarks that are useful for testing structural materials such as iron, chromium, nickel, and manganese; beryllium; lead; thorium; and 238U are highlighted.
New Reactor Physics Benchmark Data in the March 2012 Edition of the IRPhEP Handbook
DOE Office of Scientific and Technical Information (OSTI.GOV)
John D. Bess; J. Blair Briggs; Jim Gulliford
2012-11-01
The International Reactor Physics Experiment Evaluation Project (IRPhEP) was established to preserve integral reactor physics experimental data, including separate or special effects data for nuclear energy and technology applications. The numerous experiments that have been performed worldwide represent a large investment of infrastructure, expertise, and cost, and are valuable resources of data for present and future research. These valuable assets provide the basis for recording, development, and validation of methods. If the experimental data are lost, the high cost to repeat many of these measurements may be prohibitive. The purpose of the IRPhEP is to provide an extensively peer-reviewed set of reactor physics-related integral data that can be used by reactor designers and safety analysts to validate the analytical tools used to design next-generation reactors and establish the safety basis for operation of these reactors. Contributors from around the world collaborate in the evaluation and review of selected benchmark experiments for inclusion in the International Handbook of Evaluated Reactor Physics Benchmark Experiments (IRPhEP Handbook) [1]. Several new evaluations have been prepared for inclusion in the March 2012 edition of the IRPhEP Handbook.
Fast Neutron Spectrum Potassium Worth for Space Power Reactor Design Validation
DOE Office of Scientific and Technical Information (OSTI.GOV)
Bess, John D.; Marshall, Margaret A.; Briggs, J. Blair
2015-03-01
A variety of critical experiments were constructed of enriched uranium metal (Oralloy) during the 1960s and 1970s at the Oak Ridge Critical Experiments Facility (ORCEF) in support of criticality safety operations at the Y-12 Plant. The purposes of these experiments included the evaluation of storage, casting, and handling limits for the Y-12 Plant and the provision of data for verification of calculation methods and cross sections for nuclear criticality safety applications. The experiments included solid cylinders of various diameters, annuli of various inner and outer diameters, two and three interacting cylinders of various diameters, and graphite- and polyethylene-reflected cylinders and annuli. Of the hundreds of delayed critical experiments, one consisted of uranium metal annuli surrounding a potassium-filled, stainless steel can. The outer diameter of the annuli was approximately 13 inches (33.02 cm) with an inner diameter of 7 inches (17.78 cm). The diameter of the stainless steel can was 7 inches (17.78 cm). The critical height of the configurations was approximately 5.6 inches (14.224 cm). The uranium annulus consisted of multiple stacked rings, each with a radial thickness of 1 inch (2.54 cm) and varying heights. A companion measurement was performed using empty stainless steel cans; the primary purpose of these experiments was to test the fast neutron cross sections of potassium, as it was a candidate coolant for some early space power reactor designs. The experimental measurements were performed on July 11, 1963, by J. T. Mihalczo and M. S. Wyatt (Ref. 1), with additional information in the corresponding logbook. Unreflected and unmoderated experiments with the same set of highly enriched uranium metal parts were performed at the Oak Ridge Critical Experiments Facility in the 1960s and are evaluated in the International Handbook of Evaluated Criticality Safety Benchmark Experiments (ICSBEP Handbook) with the identifier HEU-MET-FAST-051. Thin graphite-reflected (2 inches or less) experiments also using the same set of highly enriched uranium metal parts are evaluated in HEU-MET-FAST-071. Polyethylene-reflected configurations are evaluated in HEU-MET-FAST-076. A stack of highly enriched uranium metal discs with a thick beryllium top reflector is evaluated in HEU-MET-FAST-069, and two additional highly enriched uranium annuli with beryllium cores are evaluated in HEU-MET-FAST-059. Both detailed and simplified model specifications are provided in this evaluation. Both of these fast-neutron-spectrum assemblies were determined to be acceptable benchmark experiments. The calculated eigenvalues for both the detailed and the simple benchmark models are within ~0.26% of the benchmark value for Configuration 1 (calculations performed using MCNP6 with ENDF/B-VII.1 neutron cross section data) but under-calculate the benchmark value by ~7σ because the uncertainty in the benchmark is very small, ~0.0004 (1σ); for Configuration 2, the under-calculation is ~0.31% and ~8σ. Comparisons of detailed and simple model calculations for the potassium worth measurement and potassium mass coefficient yield results approximately 70-80% lower (~6σ to ~10σ) than the benchmark values for the various nuclear data libraries utilized. Both the potassium worth and mass coefficient are also deemed to be acceptable benchmark measurements.
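The σ-deviations quoted above are simply the difference between the calculated and benchmark eigenvalues expressed in units of the benchmark uncertainty. A short sketch of that arithmetic, using rounded illustrative numbers rather than the evaluation's actual results:

```python
def compare_to_benchmark(k_calc, k_bench, sigma_bench):
    """Express a calculated eigenvalue's deviation from the benchmark value
    both as a percentage and in multiples of the benchmark 1-sigma
    uncertainty."""
    dev = k_calc - k_bench
    return 100.0 * dev / k_bench, dev / sigma_bench

# Illustrative numbers only: a ~0.26% low calculation against a very tight
# benchmark uncertainty of 0.0004 (1 sigma) lands roughly 7 sigma low,
# mirroring the Configuration 1 discussion above.
pct, nsig = compare_to_benchmark(k_calc=0.9974, k_bench=1.0000, sigma_bench=0.0004)
print(f"deviation: {pct:+.2f}%  ({nsig:+.1f} sigma)")
```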
A review on the benchmarking concept in Malaysian construction safety performance
NASA Astrophysics Data System (ADS)
Ishak, Nurfadzillah; Azizan, Muhammad Azizi
2018-02-01
The construction industry is one of the major industries propelling Malaysia's economy and contributes substantially to the nation's GDP growth, yet high fatality rates on construction sites have caused concern among safety practitioners and stakeholders. Hence, there is a need for benchmarking the performance of Malaysia's construction industry, especially in terms of safety. This concept can create a fertile ground for ideas, but only in a receptive environment; organizations that share good practices and compare their safety performance against others benefit most in establishing an improved safety culture. This research was conducted to study the awareness and importance of benchmarking, to evaluate current practice and improvement, and to identify the constraints on implementing benchmarking of safety performance in the industry. Additionally, interviews with construction professionals brought out different views on this concept. A comparison was made to show the differing understandings of the benchmarking approach and of how safety performance can be benchmarked, yet these are viewed as one mission: to evaluate objectives identified through benchmarking that will improve the organization's safety performance. Finally, the expected result from this research is to help Malaysia's construction industry implement best practice in safety performance management through the concept of benchmarking.
GEN-IV Benchmarking of Triso Fuel Performance Models under accident conditions modeling input data
DOE Office of Scientific and Technical Information (OSTI.GOV)
Collin, Blaise Paul
This document presents the benchmark plan for the calculation of particle fuel performance in safety testing experiments that are representative of operational accident transients. The benchmark is dedicated to the modeling of fission product release under accident conditions by fuel performance codes from around the world, and the subsequent comparison to post-irradiation examination (PIE) data from the modeled heating tests. The accident condition benchmark is divided into three parts: (1) the modeling of a simplified benchmark problem to assess potential numerical calculation issues at low fission product release; (2) the modeling of the AGR-1 and HFR-EU1bis safety testing experiments; and (3) the comparison of the AGR-1 and HFR-EU1bis modeling results with PIE data. The simplified benchmark case, hereafter named the NCC (Numerical Calculation Case), is derived from “Case 5” of the International Atomic Energy Agency (IAEA) Coordinated Research Program (CRP) on coated particle fuel technology [IAEA 2012]. It is included so participants can evaluate their codes at low fission product release. “Case 5” of the IAEA CRP-6 showed large code-to-code discrepancies in the release of fission products, which were attributed to “effects of the numerical calculation method rather than the physical model” [IAEA 2012]. The NCC is therefore intended to check whether these numerical effects persist. The first two steps involve the benchmark participants in a modeling effort following the guidelines and recommendations provided by this document. The third step involves the collection of the modeling results by Idaho National Laboratory (INL) and the comparison of these results with the available PIE data. The objective of this document is to provide all necessary input data to model the benchmark cases, and to give some methodology guidelines and recommendations in order to make all results suitable for comparison with each other. Participants should read this document thoroughly to make sure all the data needed for their calculations are provided in the document. Missing data will be added to a revision of the document if necessary. Revision note, 09/2016: Tables 6 and 8 updated; AGR-2 input data added.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Collin, Blaise P.
2014-09-01
This document presents the benchmark plan for the calculation of particle fuel performance in safety testing experiments that are representative of operational accident transients. The benchmark is dedicated to the modeling of fission product release under accident conditions by fuel performance codes from around the world, and the subsequent comparison to post-irradiation examination (PIE) data from the modeled heating tests. The accident condition benchmark is divided into three parts: the modeling of a simplified benchmark problem to assess potential numerical calculation issues at low fission product release; the modeling of the AGR-1 and HFR-EU1bis safety testing experiments; and the comparison of the AGR-1 and HFR-EU1bis modeling results with PIE data. The simplified benchmark case, hereafter named the NCC (Numerical Calculation Case), is derived from ''Case 5'' of the International Atomic Energy Agency (IAEA) Coordinated Research Program (CRP) on coated particle fuel technology [IAEA 2012]. It is included so participants can evaluate their codes at low fission product release. ''Case 5'' of the IAEA CRP-6 showed large code-to-code discrepancies in the release of fission products, which were attributed to ''effects of the numerical calculation method rather than the physical model'' [IAEA 2012]. The NCC is therefore intended to check whether these numerical effects persist. The first two steps involve the benchmark participants in a modeling effort following the guidelines and recommendations provided by this document. The third step involves the collection of the modeling results by Idaho National Laboratory (INL) and the comparison of these results with the available PIE data. The objective of this document is to provide all necessary input data to model the benchmark cases, and to give some methodology guidelines and recommendations in order to make all results suitable for comparison with each other. Participants should read this document thoroughly to make sure all the data needed for their calculations are provided in the document. Missing data will be added to a revision of the document if necessary.
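Fission product release during heating tests of this kind is often approximated with the Booth equivalent-sphere diffusion model. The benchmark's actual physics models are defined in the specification itself, so the following is only a generic illustration of a diffusive-release calculation, with a hypothetical reduced diffusion coefficient:

```python
import math

def booth_fractional_release(d_prime, t, nmax=2000):
    """Fractional release from the Booth equivalent-sphere model (uniform
    initial concentration, zero concentration at the sphere surface):

        f(t) = 1 - (6 / pi^2) * sum_{n>=1} exp(-n^2 pi^2 D' t) / n^2

    d_prime : reduced diffusion coefficient D' = D / a^2  [1/s]
    t       : heating time [s]
    """
    s = sum(math.exp(-(n * math.pi) ** 2 * d_prime * t) / n**2
            for n in range(1, nmax + 1))
    return 1.0 - (6.0 / math.pi**2) * s

# Hypothetical reduced diffusion coefficient, 100-hour heating test
d_prime = 1.0e-8       # 1/s, illustrative only
t = 100 * 3600.0       # 100 h in seconds
print(f"fractional release after 100 h: {booth_fractional_release(d_prime, t):.3f}")
```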
29 CFR 1952.153 - Compliance staffing benchmarks.
Code of Federal Regulations, 2014 CFR
2014-07-01
... further revision of its benchmarks to 64 safety inspectors and 50 industrial hygienists. After opportunity... Labor Regulations Relating to Labor (Continued) OCCUPATIONAL SAFETY AND HEALTH ADMINISTRATION... benchmarks of 50 safety and 27 health compliance officers. After opportunity for public comment and service...
29 CFR 1952.153 - Compliance staffing benchmarks.
Code of Federal Regulations, 2012 CFR
2012-07-01
... further revision of its benchmarks to 64 safety inspectors and 50 industrial hygienists. After opportunity... Labor Regulations Relating to Labor (Continued) OCCUPATIONAL SAFETY AND HEALTH ADMINISTRATION... benchmarks of 50 safety and 27 health compliance officers. After opportunity for public comment and service...
29 CFR 1952.153 - Compliance staffing benchmarks.
Code of Federal Regulations, 2011 CFR
2011-07-01
... further revision of its benchmarks to 64 safety inspectors and 50 industrial hygienists. After opportunity... Labor Regulations Relating to Labor (Continued) OCCUPATIONAL SAFETY AND HEALTH ADMINISTRATION... benchmarks of 50 safety and 27 health compliance officers. After opportunity for public comment and service...
29 CFR 1952.153 - Compliance staffing benchmarks.
Code of Federal Regulations, 2010 CFR
2010-07-01
... further revision of its benchmarks to 64 safety inspectors and 50 industrial hygienists. After opportunity... Labor Regulations Relating to Labor (Continued) OCCUPATIONAL SAFETY AND HEALTH ADMINISTRATION... benchmarks of 50 safety and 27 health compliance officers. After opportunity for public comment and service...
29 CFR 1952.153 - Compliance staffing benchmarks.
Code of Federal Regulations, 2013 CFR
2013-07-01
... further revision of its benchmarks to 64 safety inspectors and 50 industrial hygienists. After opportunity... Labor Regulations Relating to Labor (Continued) OCCUPATIONAL SAFETY AND HEALTH ADMINISTRATION... benchmarks of 50 safety and 27 health compliance officers. After opportunity for public comment and service...
DOE Office of Scientific and Technical Information (OSTI.GOV)
Lell, R. M.; Schaefer, R. W.; McKnight, R. D.
Over a period of 30 years, more than a hundred Zero Power Reactor (ZPR) critical assemblies were constructed at Argonne National Laboratory. The ZPR facilities, ZPR-3, ZPR-6, ZPR-9, and ZPPR, were all fast critical assembly facilities. The ZPR critical assemblies were constructed to support fast reactor development, but data from some of these assemblies are also well suited to form the basis for criticality safety benchmarks. Of the three classes of ZPR assemblies, engineering mockups, engineering benchmarks, and physics benchmarks, the last group tends to be most useful for criticality safety. Because physics benchmarks were designed to test fast reactor physics data and methods, they were as simple as possible in geometry and composition. The principal fissile species was 235U or 239Pu. Fuel enrichments ranged from 9% to 95%. Often there were only one or two main core diluent materials, such as aluminum, graphite, iron, sodium, or stainless steel. The cores were reflected (and insulated from room return effects) by one or two layers of materials such as depleted uranium, lead, or stainless steel. Despite their more complex nature, a small number of assemblies from the other two classes would make useful criticality safety benchmarks because they have features related to criticality safety issues, such as reflection by soil-like material. The term 'benchmark' in a ZPR program connotes a particularly simple loading aimed at gaining basic reactor physics insight, as opposed to studying a reactor design. In fact, the ZPR-6/7 Benchmark Assembly (Reference 1) had a very simple core unit cell assembled from plates of depleted uranium, sodium, iron oxide, U3O8, and plutonium. The ZPR-6/7 core cell-average composition is typical of the interior region of liquid-metal fast breeder reactors (LMFBRs) of the era. It was one part of the Demonstration Reactor Benchmark Program, which provided integral experiments characterizing the important features of demonstration-size LMFBRs. As a benchmark, ZPR-6/7 was devoid of many 'real' reactor features, such as simulated control rods and multiple enrichment zones, in its reference form. Those kinds of features were investigated experimentally in variants of the reference ZPR-6/7 or in other critical assemblies in the Demonstration Reactor Benchmark Program.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Lell, R. M.; McKnight, R. D.; Tsiboulia, A.
2010-09-30
Over a period of 30 years, more than a hundred Zero Power Reactor (ZPR) critical assemblies were constructed at Argonne National Laboratory. The ZPR facilities, ZPR-3, ZPR-6, ZPR-9, and ZPPR, were all fast critical assembly facilities. The ZPR critical assemblies were constructed to support fast reactor development, but data from some of these assemblies are also well suited for nuclear data validation and to form the basis for criticality safety benchmarks. A number of the Argonne ZPR/ZPPR critical assemblies have been evaluated as ICSBEP and IRPhEP benchmarks. Of the three classes of ZPR assemblies, engineering mockups, engineering benchmarks, and physics benchmarks, the last group tends to be most useful for criticality safety. Because physics benchmarks were designed to test fast reactor physics data and methods, they were as simple as possible in geometry and composition. The principal fissile species was 235U or 239Pu. Fuel enrichments ranged from 9% to 95%. Often there were only one or two main core diluent materials, such as aluminum, graphite, iron, sodium, or stainless steel. The cores were reflected (and insulated from room return effects) by one or two layers of materials such as depleted uranium, lead, or stainless steel. Despite their more complex nature, a small number of assemblies from the other two classes would make useful criticality safety benchmarks because they have features related to criticality safety issues, such as reflection by soil-like material. ZPR-3 Assembly 11 (ZPR-3/11) was designed as a fast reactor physics benchmark experiment with an average core 235U enrichment of approximately 12 at.% and a depleted uranium reflector. Approximately 79.7% of the total fissions in this assembly occur above 100 keV, approximately 20.3% occur below 100 keV, and essentially none below 0.625 eV, thus the classification as a 'fast' assembly. This assembly is Fast Reactor Benchmark No. 8 in the Cross Section Evaluation Working Group (CSEWG) Benchmark Specifications and has historically been used as a data validation benchmark assembly. Loading of ZPR-3 Assembly 11 began in early January 1958, and the Assembly 11 program ended in late January 1958. The core consisted of highly enriched uranium (HEU) plates and depleted uranium plates loaded into stainless steel drawers, which were inserted into the central square stainless steel tubes of a 31 x 31 matrix on a split table machine. The core unit cell consisted of two columns of 0.125 in.-wide (3.175 mm) HEU plates, six columns of 0.125 in.-wide (3.175 mm) depleted uranium plates, and one column of 1.0 in.-wide (25.4 mm) depleted uranium plates. The length of each column was 10 in. (254.0 mm) in each half of the core. The axial blanket consisted of 12 in. (304.8 mm) of depleted uranium behind the core. The thickness of the depleted uranium radial blanket was approximately 14 in. (355.6 mm), and the length of the radial blanket in each half of the matrix was 22 in. (558.8 mm). The assembly geometry approximated a right circular cylinder as closely as the square matrix tubes allowed. According to the logbook and loading records for ZPR-3/11, the reference critical configuration was loading 10, which was critical on January 21, 1958. Subsequent loadings were very similar but less clean for criticality because modifications were made to accommodate reactor physics measurements other than criticality. Accordingly, ZPR-3/11 loading 10 was selected as the only configuration for this benchmark.
As documented below, it was determined to be acceptable as a criticality safety benchmark experiment. A very accurate transformation to a simplified model is needed to make any ZPR assembly a practical criticality safety benchmark. There is simply too much geometric detail in an exact (as-built) model of a ZPR assembly, even a clean core such as ZPR-3/11 loading 10. The transformation must reduce the detail to a practical level without masking any of the important features of the critical experiment, and it must do this without increasing the total uncertainty far beyond that of the original experiment. Such a transformation is described in Section 3. It was obtained using a pair of continuous-energy Monte Carlo calculations. First, the critical configuration was modeled in full detail: every plate, drawer, matrix tube, and air gap was modeled explicitly. Then the regionwise compositions and volumes from the detailed as-built model were used to construct a homogeneous, two-dimensional (RZ) model of ZPR-3/11 that conserved the mass of each nuclide and the volume of each region. The simple cylindrical model is the criticality safety benchmark model. The difference in the calculated keff values between the as-built three-dimensional model and the homogeneous two-dimensional benchmark model was used to adjust the measured excess reactivity of ZPR-3/11 loading 10 to obtain the keff for the benchmark model.
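In outline, the transformation reduces to two steps: smearing each region's nuclide masses over its conserved volume, and transferring the detailed-to-simple model reactivity difference onto the measured value. A minimal sketch under those assumptions follows; the function names and numbers are illustrative, not the evaluation's:

```python
def homogenize(masses, volume):
    """Mass- and volume-conserving homogenization of one benchmark region.

    masses : dict of nuclide -> grams, summed over every plate, drawer, and
             matrix tube assigned to the region in the as-built model
    volume : total region volume in cm^3 (also conserved)
    Returns smeared densities in g/cm^3."""
    return {nuclide: m / volume for nuclide, m in masses.items()}

def benchmark_keff(k_exp, k_detailed, k_simple):
    """Transfer the model-simplification bias onto the measured value:

        k_bench = k_exp + (k_simple - k_detailed)

    k_exp      : k-eff inferred from the measured excess reactivity
    k_detailed : Monte Carlo k-eff of the as-built (plate-level) model
    k_simple   : Monte Carlo k-eff of the homogeneous RZ benchmark model"""
    return k_exp + (k_simple - k_detailed)

# Illustrative numbers only (not the ZPR-3/11 evaluation results)
print(homogenize({"U235": 1.2e4, "U238": 8.8e4}, volume=5.0e4))
print(f"k_bench = {benchmark_keff(1.0008, 0.9995, 0.9989):.4f}")
```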
Aluminum Data Measurements and Evaluation for Criticality Safety Applications
NASA Astrophysics Data System (ADS)
Leal, L. C.; Guber, K. H.; Spencer, R. R.; Derrien, H.; Wright, R. Q.
2002-12-01
The Defense Nuclear Facility Safety Board (DNFSB) Recommendation 93-2 motivated the US Department of Energy (DOE) to develop a comprehensive criticality safety program to maintain and to predict the criticality of systems throughout the DOE complex. To implement the response to the DNFSB Recommendation 93-2, a Nuclear Criticality Safety Program (NCSP) was created including the following tasks: Critical Experiments, Criticality Benchmarks, Training, Analytical Methods, and Nuclear Data. The Nuclear Data portion of the NCSP consists of a variety of differential measurements performed at the Oak Ridge Electron Linear Accelerator (ORELA) at the Oak Ridge National Laboratory (ORNL), data analysis and evaluation using the generalized least-squares fitting code SAMMY in the resolved, unresolved, and high energy ranges, and the development and benchmark testing of complete evaluations for a nuclide for inclusion into the Evaluated Nuclear Data File (ENDF/B). This paper outlines the work performed at ORNL to measure, evaluate, and test the nuclear data for aluminum for applications in criticality safety problems.
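SAMMY's fitting rests on Bayes' equations, which in linearized form reduce to a generalized-least-squares update of the resonance parameters and their covariance. The compact NumPy sketch below shows one such step in the abstract; the toy sensitivities and data are purely illustrative and carry none of SAMMY's resonance physics:

```python
import numpy as np

def gls_update(x, M, G, d, t, V):
    """One linearized Bayes / generalized-least-squares step:

        x' = x + M G^T (V + G M G^T)^{-1} (d - t)
        M' = M - M G^T (V + G M G^T)^{-1} G M

    x : prior parameters           M : prior parameter covariance
    G : sensitivities dt/dx        d : measured data
    t : theory evaluated at x      V : data covariance
    """
    S = V + G @ M @ G.T                  # total (prior + data) covariance
    K = M @ G.T @ np.linalg.inv(S)       # gain matrix
    return x + K @ (d - t), M - K @ G @ M

# Toy 2-parameter, 3-point fit with uncorrelated data uncertainties
x = np.array([1.0, 0.5])
M = np.diag([0.04, 0.01])
G = np.array([[1.0, 0.2], [0.8, 0.5], [0.3, 1.0]])
t = G @ x                                # linear model, for illustration
d = np.array([1.15, 1.10, 0.82])
V = np.diag([1e-4, 1e-4, 1e-4])
x_post, M_post = gls_update(x, M, G, d, t, V)
print(x_post)
```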
Highly Enriched Uranium Metal Cylinders Surrounded by Various Reflector Materials
DOE Office of Scientific and Technical Information (OSTI.GOV)
Bernard Jones; J. Blair Briggs; Leland Monteirth
A series of experiments was performed at Los Alamos Scientific Laboratory in 1958 to determine critical masses of cylinders of Oralloy (Oy) reflected by a number of materials. The experiments were all performed on the Comet Universal Critical Assembly Machine and consisted of discs of highly enriched uranium (93.3 wt.% 235U) reflected by half-inch and one-inch-thick cylindrical shells of various reflector materials. The experiments were performed by members of Group N-2, particularly K. W. Gallup, G. E. Hansen, H. C. Paxton, and R. H. White. This experiment was intended to ascertain critical masses for criticality safety purposes, as well as to compare neutron transport cross sections to those obtained from danger coefficient measurements with the Topsy Oralloy-Tuballoy reflected and Godiva unreflected critical assemblies. The reflector materials examined in this series of experiments are as follows: magnesium, titanium, aluminum, graphite, mild steel, nickel, copper, cobalt, molybdenum, natural uranium, tungsten, beryllium, aluminum oxide, molybdenum carbide, and polythene (polyethylene). Also included are two special configurations of composite beryllium and iron reflectors. Analyses were performed in which the uncertainty associated with six different parameters was evaluated: extrapolation to the uranium critical mass, uranium density, 235U enrichment, reflector density, reflector thickness, and reflector impurities. In addition to the idealizations made by the experimenters (removal of the platen and diaphragm), two simplifications were also made to the benchmark models that resulted in a small bias and additional uncertainty. First, since impurities in core and reflector materials are only estimated, they are not included in the benchmark models. Second, the room, support structure, and other possible surrounding equipment were not included in the model. Bias values that result from these two simplifications were determined, and the associated uncertainty in the bias values was included in the overall uncertainty in the benchmark keff values. Bias values were very small, ranging from 0.0004 Δk low to 0.0007 Δk low. Overall uncertainties range from ±0.0018 to ±0.0030. Major contributors to the overall uncertainty include the uncertainty in the extrapolation to the uranium critical mass and the uranium density. Results are summarized in Figure 1 (Experimental, Benchmark-Model, and MCNP/KENO Calculated Results). The 32 configurations described and evaluated under ICSBEP Identifier HEU-MET-FAST-084 are judged to be acceptable for use as criticality safety benchmark experiments and should be valuable integral benchmarks for nuclear data testing of the various reflector materials. Details of the benchmark models, uncertainty analyses, and final results are given in this paper.
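With the six parameter contributions treated as independent, the overall benchmark uncertainty follows by combining the individual 1σ terms in quadrature. A one-function sketch, with hypothetical per-parameter contributions chosen only so the total lands in the quoted range:

```python
import math

def combined_uncertainty(components):
    """Combine independent 1-sigma uncertainty contributions in quadrature,
    as when rolling several evaluated parameters into one overall
    benchmark k-eff uncertainty."""
    return math.sqrt(sum(c * c for c in components))

# Hypothetical delta-k contributions (not the evaluation's actual numbers):
# critical-mass extrapolation, uranium density, 235U enrichment,
# reflector density, reflector thickness, reflector impurities
parts = [0.0015, 0.0010, 0.0004, 0.0006, 0.0005, 0.0003]
print(f"total = +/-{combined_uncertainty(parts):.4f} delta-k")
```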
Proceedings of the Nuclear Criticality Technology Safety Workshop
DOE Office of Scientific and Technical Information (OSTI.GOV)
Rene G. Sanchez
1998-04-01
This document contains summaries of most of the papers presented at the 1995 Nuclear Criticality Technology Safety Project (NCTSP) meeting, which was held May 16 and 17 in San Diego, CA. The meeting was broken into seven sessions, which covered the following topics: (1) Criticality Safety of Project Sapphire; (2) Relevant Experiments for Criticality Safety; (3) Interactions with the Former Soviet Union; (4) Misapplications and Limitations of Monte Carlo Methods Directed Toward Criticality Safety Analyses; (5) Monte Carlo Vulnerabilities of Execution and Interpretation; (6) Monte Carlo Vulnerabilities of Representation; and (7) Benchmark Comparisons.
An approach to radiation safety department benchmarking in academic and medical facilities.
Harvey, Richard P
2015-02-01
Based on anecdotal evidence and networking with colleagues at other facilities, it has become evident that some radiation safety departments are not adequately staffed and radiation safety professionals need to increase their staffing levels. Discussions with management regarding radiation safety department staffing often lead to similar conclusions. Management acknowledges the Radiation Safety Officer (RSO) or Director of Radiation Safety's concern but asks the RSO to provide benchmarking and justification for additional full-time equivalents (FTEs). The RSO must determine a method to benchmark and justify additional staffing needs while struggling to maintain a safe and compliant radiation safety program. Benchmarking and justification are extremely important tools that are commonly used to demonstrate the need for increased staffing in other disciplines and are tools that can be used by radiation safety professionals. Parameters that most RSOs would expect to be positive predictors of radiation safety staff size generally are and can be emphasized in benchmarking and justification report summaries. Facilities with large radiation safety departments tend to have large numbers of authorized users, be broad-scope programs, be subject to increased controls regulations, have large clinical operations, have significant numbers of academic radiation-producing machines, and have laser safety responsibilities.
Thermo-hydro-mechanical-chemical processes in fractured-porous media: Benchmarks and examples
NASA Astrophysics Data System (ADS)
Kolditz, O.; Shao, H.; Görke, U.; Kalbacher, T.; Bauer, S.; McDermott, C. I.; Wang, W.
2012-12-01
The book comprises an assembly of benchmarks and examples for porous media mechanics collected over the last twenty years. Analysis of thermo-hydro-mechanical-chemical (THMC) processes is essential to many applications in environmental engineering, such as geological waste deposition, geothermal energy utilisation, carbon capture and storage, water resources management, hydrology, and even climate change. In order to assess the feasibility as well as the safety of geotechnical applications, process-based modelling is the only tool that can put numbers to, i.e. quantify, future scenarios. This places a heavy responsibility on the reliability of computational tools. Benchmarking is an appropriate methodology to verify the quality of modelling tools based on best practices. Moreover, benchmarking and code comparison foster community efforts. The benchmark book is part of the OpenGeoSys initiative, an open source project to share knowledge and experience in environmental analysis and scientific computation.
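In its simplest form, such a benchmark compares a numerical solver against a closed-form solution. The sketch below is not taken from the book; it verifies an explicit finite-difference solution of the 1-D diffusion (heat) equation against the classical semi-infinite-medium step-change solution:

```python
import math

def analytic(x, t, alpha, T0=0.0, Ts=1.0):
    """Semi-infinite medium with a surface temperature step:
    T(x,t) = Ts + (T0 - Ts) * erf(x / (2 sqrt(alpha t)))."""
    return Ts + (T0 - Ts) * math.erf(x / (2.0 * math.sqrt(alpha * t)))

def explicit_fd(alpha, dx, dt, steps, n=200, T0=0.0, Ts=1.0):
    """Explicit finite differences for dT/dt = alpha * d2T/dx2 with a fixed
    surface temperature; stable for r = alpha*dt/dx^2 <= 0.5."""
    r = alpha * dt / dx**2
    assert r <= 0.5, "explicit scheme unstable"
    T = [Ts] + [T0] * (n - 1)
    for _ in range(steps):
        T = [T[0]] + [T[i] + r * (T[i+1] - 2.0*T[i] + T[i-1])
                      for i in range(1, n - 1)] + [T[-1]]
    return T

# Miniature code verification: numerical vs. analytical at one point
alpha, dx, dt, steps = 1.0e-6, 1.0e-3, 0.4, 2500      # r = 0.4, t = 1000 s
T = explicit_fd(alpha, dx, dt, steps)
i = 20                                                # x = 20 mm
print(f"numerical {T[i]:.4f}  vs  analytic {analytic(i*dx, steps*dt, alpha):.4f}")
```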
INTEGRAL BENCHMARK DATA FOR NUCLEAR DATA TESTING THROUGH THE ICSBEP AND THE NEWLY ORGANIZED IRPHEP
DOE Office of Scientific and Technical Information (OSTI.GOV)
J. Blair Briggs; Lori Scott; Yolanda Rugama
The status of the International Criticality Safety Benchmark Evaluation Project (ICSBEP) was last reported to the nuclear data community at the International Conference on Nuclear Data for Science and Technology (ND-2004) in Santa Fe, New Mexico. Since that time, the number and type of integral benchmarks have increased significantly. Included in the ICSBEP Handbook are criticality-alarm/shielding and fundamental physics benchmarks in addition to the traditional critical/subcritical benchmark data. Since ND-2004, a reactor physics counterpart to the ICSBEP, the International Reactor Physics Experiment Evaluation Project (IRPhEP), was initiated. The IRPhEP is patterned after the ICSBEP but focuses on other integral measurements, such as buckling, spectral characteristics, reactivity effects, reactivity coefficients, kinetics measurements, reaction-rate and power distributions, nuclide compositions, and other miscellaneous-type measurements, in addition to the critical configuration. The status of these two projects is discussed and selected benchmarks are highlighted in this paper.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Miller, Thomas Martin; Celik, Cihangir; Dunn, Michael E
In October 2010, a series of benchmark experiments was conducted at the French Commissariat à l'Énergie Atomique et aux Énergies Alternatives (CEA) Valduc SILENE facility. These experiments were a joint effort between the United States Department of Energy Nuclear Criticality Safety Program and the CEA. The purpose of these experiments was to create three benchmarks for the verification and validation of radiation transport codes and evaluated nuclear data used in the analysis of criticality accident alarm systems. This series consisted of three single-pulsed experiments with the SILENE reactor. For the first experiment, the reactor was bare (unshielded), whereas in the second and third experiments it was shielded by lead and polyethylene, respectively. The polyethylene shield of the third experiment had a cadmium liner on its internal and external surfaces, which was located vertically near the fuel region of SILENE. During each experiment, several neutron activation foils and thermoluminescent dosimeters (TLDs) were placed around the reactor. Nearly half of the foils and TLDs had additional high-density magnetite concrete, high-density barite concrete, standard concrete, and/or BoroBond shields. CEA Saclay provided all the concrete, and the US Y-12 National Security Complex provided the BoroBond. Measurement data from the experiments were published at the 2011 International Conference on Nuclear Criticality (ICNC 2011) and the 2013 Nuclear Criticality Safety Division (NCSD 2013) topical meeting. Preliminary computational results for the first experiment were presented in the ICNC 2011 paper, which showed poor agreement between the computational results and the measured values of the foils shielded by concrete. Recently the hydrogen content, boron content, and density of these concrete shields were further investigated within the constraints of the previously available data. New computational results for the first experiment are now available that show much better agreement with the measured values.
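Interpreting an activation foil from a single short pulse reduces, in the simplest treatment, to the product of target atoms, reaction cross section, and fluence, decayed to the counting time. The following is a generic illustration with hypothetical gold-foil parameters, not the values or methods of the SILENE analysis:

```python
import math

AVOGADRO = 6.02214076e23

def foil_activity(mass_g, molar_mass, sigma_b, fluence, half_life_s, t_cool_s):
    """Activity (Bq) of an activation foil after one short reactor pulse.

    For a pulse much shorter than the product half-life, the number of
    activated atoms is N * sigma * fluence; the activity at counting time
    is that number times lambda * exp(-lambda * t_cool)."""
    N = mass_g / molar_mass * AVOGADRO           # target atoms in the foil
    lam = math.log(2.0) / half_life_s            # decay constant, 1/s
    produced = N * (sigma_b * 1.0e-24) * fluence # barns -> cm^2
    return lam * produced * math.exp(-lam * t_cool_s)

# Hypothetical gold foil: 0.1 g of Au-197, (n,gamma) to Au-198 with a
# ~98.7 b thermal cross section and 2.70 d half-life; 1e12 n/cm^2 pulse
# fluence, counted 1 h after the pulse. All values illustrative only.
print(f"{foil_activity(0.1, 197.0, 98.7, 1.0e12, 2.70 * 86400.0, 3600.0):.3e} Bq")
```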
Contributions to Integral Nuclear Data in ICSBEP and IRPhEP since ND 2013
DOE Office of Scientific and Technical Information (OSTI.GOV)
Bess, John D.; Briggs, J. Blair; Gulliford, Jim
2016-09-01
The status of the International Criticality Safety Benchmark Evaluation Project (ICSBEP) and the International Reactor Physics Experiment Evaluation Project (IRPhEP) was last discussed directly with the international nuclear data community at ND2013. Since ND2013, the integral benchmark data available for nuclear data testing have continued to increase. The status of the international benchmark efforts and the latest contributions to integral nuclear data for testing are discussed. Select benchmark configurations that have been added to the ICSBEP and IRPhEP Handbooks since ND2013 are highlighted. The 2015 edition of the ICSBEP Handbook now contains 567 evaluations with benchmark specifications for 4,874 critical, near-critical, or subcritical configurations; 31 criticality alarm placement/shielding configurations with multiple dose points apiece; and 207 configurations that have been categorized as fundamental physics measurements relevant to criticality safety applications. The 2015 edition of the IRPhEP Handbook contains data from 143 different experimental series that were performed at 50 different nuclear facilities. Currently 139 of the 143 evaluations are published as approved benchmarks, with the remaining four evaluations published in draft format only. Measurements found in the IRPhEP Handbook include criticality, buckling and extrapolation length, spectral characteristics, reactivity effects, reactivity coefficients, kinetics, reaction-rate distributions, power distributions, isotopic compositions, and/or other miscellaneous types of measurements for various types of reactor systems. Annual technical review meetings for both projects were held in April 2016; additional approved benchmark evaluations will be included in the 2016 editions of these handbooks.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Bess, John D.; Sterbentz, James W.; Snoj, Luka
PROTEUS is a zero-power research reactor based on a cylindrical graphite annulus with a central cylindrical cavity. The graphite annulus remains basically the same for all experimental programs, but the contents of the central cavity are changed according to the type of reactor being investigated. Through most of its service history, PROTEUS has represented light-water reactors, but from 1992 to 1996 PROTEUS was configured as a pebble-bed reactor (PBR) critical facility and designated HTR-PROTEUS. The nomenclature was used to indicate that this series consisted of High Temperature Reactor experiments performed in the PROTEUS assembly. During this period, seventeen critical configurations were assembled and various reactor physics experiments were conducted. These experiments included measurements of criticality, differential and integral control rod and safety rod worths, kinetics, reaction rates, water ingress effects, and small sample reactivity effects (Ref. 3). HTR-PROTEUS was constructed, and the experimental program was conducted, for the purpose of providing experimental benchmark data for the assessment of reactor physics computer codes. Considerable effort was devoted to benchmark calculations as part of the HTR-PROTEUS program. References 1 and 2 provide detailed data for use in constructing models for the codes to be assessed. Reference 3 is a comprehensive summary of the HTR-PROTEUS experiments and the associated benchmark program. This document draws freely from these references. Only Cores 9 and 10 are evaluated in this benchmark report due to similarities in their construction; the other core configurations of the HTR-PROTEUS program are evaluated in their respective reports, as outlined in Section 1.0. Cores 9 and 10 were evaluated and determined to be acceptable benchmark experiments.
Summary of ORSphere critical and reactor physics measurements
NASA Astrophysics Data System (ADS)
Marshall, Margaret A.; Bess, John D.
2017-09-01
In the early 1970s, Dr. John T. Mihalczo (team leader), J. J. Lynn, and J. R. Taylor performed experiments at the Oak Ridge Critical Experiments Facility (ORCEF) with highly enriched uranium (HEU) metal (called Oak Ridge Alloy or ORALLOY) to recreate GODIVA I results with greater accuracy than those performed at Los Alamos National Laboratory in the 1950s. The purpose of the Oak Ridge ORALLOY Sphere (ORSphere) experiments was to estimate the unreflected and unmoderated critical mass of an idealized sphere of uranium metal corrected to a density, purity, and enrichment such that it could be compared with the GODIVA I experiments. This critical configuration has been evaluated. Preliminary results were presented at ND2013. Since then, the evaluation was finalized and judged to be an acceptable benchmark experiment for the International Criticality Safety Benchmark Evaluation Project (ICSBEP). Additionally, reactor physics measurements were performed to determine surface button worths, central void worth, delayed neutron fraction, prompt neutron decay constant, fission density, and neutron importance. These measurements have been evaluated, found to be acceptable experiments, and discussed in full detail in the International Handbook of Evaluated Reactor Physics Benchmark Experiments. The purpose of this paper is to summarize all of the evaluated critical and reactor physics measurements.
42 CFR 425.502 - Calculating the ACO quality performance score.
Code of Federal Regulations, 2014 CFR
2014-10-01
... four domains: (i) Patient/care giver experience. (ii) Care coordination/Patient safety. (iii... year. (1) For the first performance year of an ACO's agreement, CMS defines the quality performance... a point scale for the measures. (2)(i) CMS will define the quality benchmarks using fee-for-service...
DOE Office of Scientific and Technical Information (OSTI.GOV)
Bess, John D.
2014-03-01
PROTEUS is a zero-power research reactor based on a cylindrical graphite annulus with a central cylindrical cavity. The graphite annulus remains basically the same for all experimental programs, but the contents of the central cavity are changed according to the type of reactor being investigated. Through most of its service history, PROTEUS has represented light-water reactors, but from 1992 to 1996 PROTEUS was configured as a pebble-bed reactor (PBR) critical facility and designated HTR-PROTEUS. The nomenclature was used to indicate that this series consisted of High Temperature Reactor experiments performed in the PROTEUS assembly. During this period, seventeen critical configurations were assembled and various reactor physics experiments were conducted. These experiments included measurements of criticality, differential and integral control rod and safety rod worths, kinetics, reaction rates, water ingress effects, and small sample reactivity effects (Ref. 3). HTR-PROTEUS was constructed, and the experimental program was conducted, for the purpose of providing experimental benchmark data for the assessment of reactor physics computer codes. Considerable effort was devoted to benchmark calculations as part of the HTR-PROTEUS program. References 1 and 2 provide detailed data for use in constructing models for the codes to be assessed. Reference 3 is a comprehensive summary of the HTR-PROTEUS experiments and the associated benchmark program. This document draws freely from these references. Only Cores 9 and 10 are evaluated in this benchmark report due to similarities in their construction; the other core configurations of the HTR-PROTEUS program are evaluated in their respective reports, as outlined in Section 1.0. Cores 9 and 10 were evaluated and determined to be acceptable benchmark experiments.
DOE Office of Scientific and Technical Information (OSTI.GOV)
John D. Bess
2013-03-01
PROTEUS is a zero-power research reactor based on a cylindrical graphite annulus with a central cylindrical cavity. The graphite annulus remains basically the same for all experimental programs, but the contents of the central cavity are changed according to the type of reactor being investigated. Through most of its service history, PROTEUS has represented light-water reactors, but from 1992 to 1996 PROTEUS was configured as a pebble-bed reactor (PBR) critical facility and designated HTR-PROTEUS. The nomenclature was used to indicate that this series consisted of High Temperature Reactor experiments performed in the PROTEUS assembly. During this period, seventeen critical configurations were assembled and various reactor physics experiments were conducted. These experiments included measurements of criticality, differential and integral control rod and safety rod worths, kinetics, reaction rates, water ingress effects, and small sample reactivity effects (Ref. 3). HTR-PROTEUS was constructed, and the experimental program was conducted, for the purpose of providing experimental benchmark data for the assessment of reactor physics computer codes. Considerable effort was devoted to benchmark calculations as part of the HTR-PROTEUS program. References 1 and 2 provide detailed data for use in constructing models for the codes to be assessed. Reference 3 is a comprehensive summary of the HTR-PROTEUS experiments and the associated benchmark program. This document draws freely from these references. Only Cores 9 and 10 are evaluated in this benchmark report due to similarities in their construction; the other core configurations of the HTR-PROTEUS program are evaluated in their respective reports, as outlined in Section 1.0. Cores 9 and 10 were evaluated and determined to be acceptable benchmark experiments.
DOE Office of Scientific and Technical Information (OSTI.GOV)
John D. Bess
2013-03-01
PROTEUS is a zero-power research reactor based on a cylindrical graphite annulus with a central cylindrical cavity. The graphite annulus remains basically the same for all experimental programs, but the contents of the central cavity are changed according to the type of reactor being investigated. Through most of its service history, PROTEUS has represented light-water reactors, but from 1992 to 1996 PROTEUS was configured as a pebble-bed reactor (PBR) critical facility and designated as HTR-PROTEUS. The nomenclature was used to indicate that this series consisted of High Temperature Reactor experiments performed in the PROTEUS assembly. During this period, seventeen criticalmore » configurations were assembled and various reactor physics experiments were conducted. These experiments included measurements of criticality, differential and integral control rod and safety rod worths, kinetics, reaction rates, water ingress effects, and small sample reactivity effects (Ref. 3). HTR-PROTEUS was constructed, and the experimental program was conducted, for the purpose of providing experimental benchmark data for assessment of reactor physics computer codes. Considerable effort was devoted to benchmark calculations as a part of the HTR-PROTEUS program. References 1 and 2 provide detailed data for use in constructing models for codes to be assessed. Reference 3 is a comprehensive summary of the HTR-PROTEUS experiments and the associated benchmark program. This document draws freely from these references. Only Cores 9 and 10 are evaluated in this benchmark report due to similarities in their construction. The other core configurations of the HTR-PROTEUS program are evaluated in their respective reports as outlined in Section 1.0. Cores 9 and 10 were evaluated and determined to be acceptable benchmark experiments.« less
Summary of ORSphere Critical and Reactor Physics Measurements
DOE Office of Scientific and Technical Information (OSTI.GOV)
Marshall, Margaret A.; Bess, John D.
In the early 1970s Dr. John T. Mihalczo (team leader), J. J. Lynn, and J. R. Taylor performed experiments at the Oak Ridge Critical Experiments Facility (ORCEF) with highly enriched uranium (HEU) metal (called Oak Ridge Alloy or ORALLOY) to recreate the GODIVA I results with greater accuracy than the experiments performed at Los Alamos National Laboratory in the 1950s. The purpose of the Oak Ridge ORALLOY Sphere (ORSphere) experiments was to estimate the unreflected and unmoderated critical mass of an idealized sphere of uranium metal corrected to a density, purity, and enrichment such that it could be compared with the GODIVA I experiments. This critical configuration has been evaluated. Preliminary results were presented at ND2013. Since then, the evaluation was finalized and judged to be an acceptable benchmark experiment for the International Criticality Safety Benchmark Experiment Project (ICSBEP). Additionally, reactor physics measurements were performed to determine surface button worths, central void worth, delayed neutron fraction, prompt neutron decay constant, fission density, and neutron importance. These measurements have been evaluated and found to be acceptable experiments and are discussed in full detail in the International Handbook of Evaluated Reactor Physics Benchmark Experiments. The purpose of this paper is to summarize all of the critical and reactor physics measurement evaluations and, when possible, to compare them to the GODIVA experiment results.
The Paucity Problem: Where Have All the Space Reactor Experiments Gone?
DOE Office of Scientific and Technical Information (OSTI.GOV)
Bess, John D.; Marshall, Margaret A.
2016-10-01
The Handbooks of the International Criticality Safety Benchmark Evaluation Project (ICSBEP) and the International Reactor Physics Experiment Evaluation Project (IRPhEP) together contain a plethora of documented and evaluated experiments essential in the validation of nuclear data, neutronics codes, and modeling of various nuclear systems. Unfortunately, only a minute selection of handbook data (twelve evaluations) is of actual experimental facilities and mockups designed specifically for space nuclear research. There is a paucity problem: the multitude of space nuclear experimental activities performed in the past several decades have yet to be recovered and made available in such detail that the international community could benefit from these valuable historical research efforts. Those experiments represent extensive investments in infrastructure, expertise, and cost, and constitute significantly valuable resources of data supporting past, present, and future research activities. The ICSBEP and IRPhEP were established to identify and verify comprehensive sets of benchmark data; evaluate the data, including quantification of biases and uncertainties; compile the data and calculations in a standardized format; and formally document the effort into a single source of verified benchmark data. See full abstract in attached document.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Leclaire, Nicolas; Le Dauphin, Francois-Xavier; Duhamel, Isabelle
2014-11-04
The MIRTE (Materials in Interacting and Reflecting configurations, all Thicknesses) program was established to answer the needs of criticality safety practitioners in terms of experimental validation of structural materials and to contribute, where possible, to nuclear data improvement, which ultimately supports reactor safety analysis as well. MIRTE took the shape of a collaboration between the French industrial partners AREVA and ANDRA and noncommercial international funding partners such as the U.S. Department of Energy. The aim of this paper is to present the configurations of the MIRTE 1 and MIRTE 2 programs and to highlight the results of the titanium experiments recently published in the International Handbook of Evaluated Criticality Safety Benchmark Experiments.
29 CFR 1952.213 - Compliance staffing benchmarks.
Code of Federal Regulations, 2010 CFR
2010-07-01
... Labor Regulations Relating to Labor (Continued) OCCUPATIONAL SAFETY AND HEALTH ADMINISTRATION... OSHA, completed a reassessment of the levels initially established in 1980 and proposed revised compliance staffing benchmarks of 36 safety and 18 health compliance officers. After opportunity for public...
29 CFR 1952.233 - Compliance staffing benchmarks.
Code of Federal Regulations, 2011 CFR
2011-07-01
... Labor Regulations Relating to Labor (Continued) OCCUPATIONAL SAFETY AND HEALTH ADMINISTRATION... OSHA, completed a reassessment of the levels initially established in 1980 and proposed revised compliance staffing benchmarks of 23 safety and 14 health compliance officers. After opportunity for public...
29 CFR 1952.323 - Compliance staffing benchmarks.
Code of Federal Regulations, 2010 CFR
2010-07-01
... Labor Regulations Relating to Labor (Continued) OCCUPATIONAL SAFETY AND HEALTH ADMINISTRATION... OSHA, completed a reassessment of the levels initially established in 1980 and proposed revised compliance staffing benchmarks of 47 safety and 23 health compliance officers. After opportunity for public...
29 CFR 1952.93 - Compliance staffing benchmarks.
Code of Federal Regulations, 2011 CFR
2011-07-01
... Labor Regulations Relating to Labor (Continued) OCCUPATIONAL SAFETY AND HEALTH ADMINISTRATION..., in conjunction with OSHA, completed a reassessment of the levels initially established in 1980 and proposed revised compliance staffing benchmarks of 17 safety and 12 health compliance officers. After...
29 CFR 1952.223 - Compliance staffing benchmarks.
Code of Federal Regulations, 2011 CFR
2011-07-01
... Labor Regulations Relating to Labor (Continued) OCCUPATIONAL SAFETY AND HEALTH ADMINISTRATION... OSHA, completed a reassessment of the levels initially established in 1980 and proposed revised compliance staffing benchmarks of 22 safety and 14 health compliance officers. After opportunity for public...
29 CFR 1952.223 - Compliance staffing benchmarks.
Code of Federal Regulations, 2010 CFR
2010-07-01
... Labor Regulations Relating to Labor (Continued) OCCUPATIONAL SAFETY AND HEALTH ADMINISTRATION... OSHA, completed a reassessment of the levels initially established in 1980 and proposed revised compliance staffing benchmarks of 22 safety and 14 health compliance officers. After opportunity for public...
29 CFR 1952.343 - Compliance staffing benchmarks.
Code of Federal Regulations, 2011 CFR
2011-07-01
... Labor Regulations Relating to Labor (Continued) OCCUPATIONAL SAFETY AND HEALTH ADMINISTRATION... OSHA, completed a reassessment of the levels initially established in 1980 and proposed revised compliance staffing benchmarks of 6 safety and 2 health compliance officers. After opportunity for public...
29 CFR 1952.353 - Compliance staffing benchmarks.
Code of Federal Regulations, 2011 CFR
2011-07-01
... Labor Regulations Relating to Labor (Continued) OCCUPATIONAL SAFETY AND HEALTH ADMINISTRATION... OSHA, completed a reassessment of the levels initially established in 1980 and proposed revised compliance staffing benchmarks of 9 safety and 6 health compliance officers. After opportunity for public...
29 CFR 1952.373 - Compliance staffing benchmarks.
Code of Federal Regulations, 2010 CFR
2010-07-01
... Labor Regulations Relating to Labor (Continued) OCCUPATIONAL SAFETY AND HEALTH ADMINISTRATION... OSHA, completed a reassessment of the levels initially established in 1980 and proposed revised compliance staffing benchmarks of 38 safety and 21 health compliance officers. After opportunity for public...
29 CFR 1952.203 - Compliance staffing benchmarks.
Code of Federal Regulations, 2011 CFR
2011-07-01
... Labor Regulations Relating to Labor (Continued) OCCUPATIONAL SAFETY AND HEALTH ADMINISTRATION... OSHA, completed a reassessment of the levels initially established in 1980 and proposed revised compliance staffing benchmarks of 31 safety and 12 health compliance officers. After opportunity for public...
29 CFR 1952.203 - Compliance staffing benchmarks.
Code of Federal Regulations, 2010 CFR
2010-07-01
... Labor Regulations Relating to Labor (Continued) OCCUPATIONAL SAFETY AND HEALTH ADMINISTRATION... OSHA, completed a reassessment of the levels initially established in 1980 and proposed revised compliance staffing benchmarks of 31 safety and 12 health compliance officers. After opportunity for public...
29 CFR 1952.343 - Compliance staffing benchmarks.
Code of Federal Regulations, 2010 CFR
2010-07-01
... Labor Regulations Relating to Labor (Continued) OCCUPATIONAL SAFETY AND HEALTH ADMINISTRATION... OSHA, completed a reassessment of the levels initially established in 1980 and proposed revised compliance staffing benchmarks of 6 safety and 2 health compliance officers. After opportunity for public...
29 CFR 1952.373 - Compliance staffing benchmarks.
Code of Federal Regulations, 2011 CFR
2011-07-01
... Labor Regulations Relating to Labor (Continued) OCCUPATIONAL SAFETY AND HEALTH ADMINISTRATION... OSHA, completed a reassessment of the levels initially established in 1980 and proposed revised compliance staffing benchmarks of 38 safety and 21 health compliance officers. After opportunity for public...
29 CFR 1952.93 - Compliance staffing benchmarks.
Code of Federal Regulations, 2010 CFR
2010-07-01
... Labor Regulations Relating to Labor (Continued) OCCUPATIONAL SAFETY AND HEALTH ADMINISTRATION..., in conjunction with OSHA, completed a reassessment of the levels initially established in 1980 and proposed revised compliance staffing benchmarks of 17 safety and 12 health compliance officers. After...
29 CFR 1952.233 - Compliance staffing benchmarks.
Code of Federal Regulations, 2010 CFR
2010-07-01
... Labor Regulations Relating to Labor (Continued) OCCUPATIONAL SAFETY AND HEALTH ADMINISTRATION... OSHA, completed a reassessment of the levels initially established in 1980 and proposed revised compliance staffing benchmarks of 23 safety and 14 health compliance officers. After opportunity for public...
29 CFR 1952.323 - Compliance staffing benchmarks.
Code of Federal Regulations, 2011 CFR
2011-07-01
... Labor Regulations Relating to Labor (Continued) OCCUPATIONAL SAFETY AND HEALTH ADMINISTRATION... OSHA, completed a reassessment of the levels initially established in 1980 and proposed revised compliance staffing benchmarks of 47 safety and 23 health compliance officers. After opportunity for public...
29 CFR 1952.353 - Compliance staffing benchmarks.
Code of Federal Regulations, 2010 CFR
2010-07-01
... Labor Regulations Relating to Labor (Continued) OCCUPATIONAL SAFETY AND HEALTH ADMINISTRATION... OSHA, completed a reassessment of the levels initially established in 1980 and proposed revised compliance staffing benchmarks of 9 safety and 6 health compliance officers. After opportunity for public...
29 CFR 1952.213 - Compliance staffing benchmarks.
Code of Federal Regulations, 2011 CFR
2011-07-01
... Labor Regulations Relating to Labor (Continued) OCCUPATIONAL SAFETY AND HEALTH ADMINISTRATION... OSHA, completed a reassessment of the levels initially established in 1980 and proposed revised compliance staffing benchmarks of 36 safety and 18 health compliance officers. After opportunity for public...
29 CFR 1952.103 - Compliance staffing benchmarks.
Code of Federal Regulations, 2011 CFR
2011-07-01
... Labor Regulations Relating to Labor (Continued) OCCUPATIONAL SAFETY AND HEALTH ADMINISTRATION... of 28 health compliance officers. Oregon elected to retain the safety benchmark level established in... State operating an approved State plan. In October 1992, Oregon completed, in conjunction with OSHA, a...
29 CFR 1952.103 - Compliance staffing benchmarks.
Code of Federal Regulations, 2010 CFR
2010-07-01
... Labor Regulations Relating to Labor (Continued) OCCUPATIONAL SAFETY AND HEALTH ADMINISTRATION... of 28 health compliance officers. Oregon elected to retain the safety benchmark level established in... State operating an approved State plan. In October 1992, Oregon completed, in conjunction with OSHA, a...
Ensuring the validity of calculated subcritical limits
DOE Office of Scientific and Technical Information (OSTI.GOV)
Clark, H.K.
1977-01-01
The care taken at the Savannah River Laboratory and Plant to ensure the validity of calculated subcritical limits is described. Close attention is given to ANSI N16.1-1975, "Validation of Calculational Methods for Nuclear Criticality Safety." The computer codes used for criticality safety computations, which are listed and briefly described, have been placed in the SRL JOSHUA system to facilitate calculation and to reduce input errors. A driver module, KOKO, simplifies and standardizes input and links the codes together in various ways. For any criticality safety evaluation, correlations of the calculational methods are made with experiment to establish bias. Occasionally, subcritical experiments are performed expressly to provide benchmarks. Calculated subcritical limits contain an adequate but not excessive margin to allow for uncertainty in the bias. The final step in any criticality safety evaluation is the writing of a report describing the calculations and justifying the margin.
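As a rough illustration of the bias-and-margin logic described in this abstract, the minimal sketch below computes a calculational bias from a handful of benchmark correlations and folds it, together with its uncertainty and an administrative margin, into an upper subcritical limit. All numbers, the two-sigma treatment, and the margin value are illustrative assumptions, not SRL's actual procedure.

```python
import numpy as np

# Hypothetical benchmark results: calculated vs. experimental k_eff
k_calc = np.array([0.9952, 0.9987, 1.0021, 0.9968, 0.9990])
k_exp  = np.array([1.0000, 1.0000, 1.0000, 1.0000, 1.0000])

bias = np.mean(k_calc - k_exp)               # negative bias: code underpredicts
bias_sigma = np.std(k_calc - k_exp, ddof=1)  # spread of the bias estimate

# Upper subcritical limit: allow for the bias, its uncertainty, and an
# administrative margin of subcriticality (value is illustrative only).
admin_margin = 0.05
usl = 1.0 + bias - 2.0 * bias_sigma - admin_margin
print(f"bias = {bias:+.4f}, USL = {usl:.4f}")
```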
Uncertainty Quantification Techniques of SCALE/TSUNAMI
DOE Office of Scientific and Technical Information (OSTI.GOV)
Rearden, Bradley T; Mueller, Don
2011-01-01
The Standardized Computer Analysis for Licensing Evaluation (SCALE) code system developed at Oak Ridge National Laboratory (ORNL) includes Tools for Sensitivity and Uncertainty Analysis Methodology Implementation (TSUNAMI). The TSUNAMI code suite can quantify the predicted change in system responses, such as k{sub eff}, reactivity differences, or ratios of fluxes or reaction rates, due to changes in the energy-dependent, nuclide-reaction-specific cross-section data. Where uncertainties in the neutron cross-section data are available, the sensitivity of the system to the cross-section data can be applied to propagate the uncertainties in the cross-section data to an uncertainty in the system response. Uncertainty quantification is useful for identifying potential sources of computational biases and highlighting parameters important to code validation. Traditional validation techniques often examine one or more average physical parameters to characterize a system and identify applicable benchmark experiments. However, with TSUNAMI, correlation coefficients are developed by propagating the uncertainties in neutron cross-section data to uncertainties in the computed responses for experiments and safety applications through sensitivity coefficients. The bias in the experiments, as a function of their correlation coefficient with the intended application, is extrapolated to predict the bias and bias uncertainty in the application through trending analysis or generalized linear least squares techniques, often referred to as 'data adjustment.' Even with advanced tools to identify benchmark experiments, analysts occasionally find that the application models include some feature or material for which adequately similar benchmark experiments do not exist to support validation. For example, a criticality safety analyst may want to take credit for the presence of fission products in spent nuclear fuel. In such cases, analysts sometimes rely on 'expert judgment' to select an additional administrative margin to account for a gap in the validation data or to conclude that the impact on the calculated bias and bias uncertainty is negligible. As a result of advances in computer programs and the evolution of cross-section covariance data, analysts can use the sensitivity and uncertainty analysis tools in the TSUNAMI codes to estimate the potential impact on the application-specific bias and bias uncertainty resulting from nuclides not represented in available benchmark experiments. This paper presents the application of methods described in a companion paper.
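The propagation step described above is conventionally the "sandwich rule": the relative variance of a response equals S^T C S, where S holds the sensitivity coefficients and C is the relative cross-section covariance matrix. A minimal sketch with invented numbers (not TSUNAMI output) follows.

```python
import numpy as np

# Hypothetical sensitivity coefficients S_i = (dk/k)/(dsigma_i/sigma_i)
# for a few nuclide-reaction pairs, and a made-up relative covariance
# matrix C for the corresponding cross sections.
S = np.array([0.30, -0.12, 0.05])
C = np.array([[4.0e-4, 1.0e-4, 0.0],
              [1.0e-4, 9.0e-4, 0.0],
              [0.0,    0.0,    2.5e-3]])

# Sandwich rule: relative variance of k_eff due to cross-section uncertainty
var_k = S @ C @ S
print(f"relative k_eff uncertainty = {np.sqrt(var_k) * 100:.3f} %")
```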
29 CFR 1952.263 - Compliance staffing benchmarks.
Code of Federal Regulations, 2011 CFR
2011-07-01
... Labor Regulations Relating to Labor (Continued) OCCUPATIONAL SAFETY AND HEALTH ADMINISTRATION... State operating an approved State plan. In 1992, Michigan completed, in conjunction with OSHA, a reassessment of the levels initially established in 1980 and proposed revised benchmarks of 56 safety and 45...
29 CFR 1952.263 - Compliance staffing benchmarks.
Code of Federal Regulations, 2010 CFR
2010-07-01
... Labor Regulations Relating to Labor (Continued) OCCUPATIONAL SAFETY AND HEALTH ADMINISTRATION... State operating an approved State plan. In 1992, Michigan completed, in conjunction with OSHA, a reassessment of the levels initially established in 1980 and proposed revised benchmarks of 56 safety and 45...
29 CFR 1952.293 - Compliance staffing benchmarks.
Code of Federal Regulations, 2011 CFR
2011-07-01
... Labor Regulations Relating to Labor (Continued) OCCUPATIONAL SAFETY AND HEALTH ADMINISTRATION... staffing benchmarks of 11 safety and 5 health compliance officers. After opportunity for public comment and... established for each State operating an approved State plan. In July 1986 Nevada, in conjunction with OSHA...
29 CFR 1952.163 - Compliance staffing benchmarks.
Code of Federal Regulations, 2010 CFR
2010-07-01
... Labor Regulations Relating to Labor (Continued) OCCUPATIONAL SAFETY AND HEALTH ADMINISTRATION... staffing benchmarks of 16 safety and 13 health compliance officers. After opportunity for public comment... established for each State operating an approved State plan. In September 1984, Iowa, in conjunction with OSHA...
29 CFR 1952.113 - Compliance staffing benchmarks.
Code of Federal Regulations, 2010 CFR
2010-07-01
... Labor Regulations Relating to Labor (Continued) OCCUPATIONAL SAFETY AND HEALTH ADMINISTRATION... staffing benchmarks of 10 safety and 9 health compliance officers. After opportunity for public comments... established for each State operating an approved State plan. In September 1984, Utah, in conjunction with OSHA...
29 CFR 1952.293 - Compliance staffing benchmarks.
Code of Federal Regulations, 2010 CFR
2010-07-01
... Labor Regulations Relating to Labor (Continued) OCCUPATIONAL SAFETY AND HEALTH ADMINISTRATION... staffing benchmarks of 11 safety and 5 health compliance officers. After opportunity for public comment and... established for each State operating an approved State plan. In July 1986 Nevada, in conjunction with OSHA...
29 CFR 1952.363 - Compliance staffing benchmarks.
Code of Federal Regulations, 2011 CFR
2011-07-01
... Labor Regulations Relating to Labor (Continued) OCCUPATIONAL SAFETY AND HEALTH ADMINISTRATION... State operating an approved State plan. In May 1992, New Mexico completed, in conjunction with OSHA, a reassessment of the staffing levels initially established in 1980 and proposed revised benchmarks of 7 safety...
29 CFR 1952.363 - Compliance staffing benchmarks.
Code of Federal Regulations, 2010 CFR
2010-07-01
... Labor Regulations Relating to Labor (Continued) OCCUPATIONAL SAFETY AND HEALTH ADMINISTRATION... State operating an approved State plan. In May 1992, New Mexico completed, in conjunction with OSHA, a reassessment of the staffing levels initially established in 1980 and proposed revised benchmarks of 7 safety...
29 CFR 1952.113 - Compliance staffing benchmarks.
Code of Federal Regulations, 2011 CFR
2011-07-01
... Labor Regulations Relating to Labor (Continued) OCCUPATIONAL SAFETY AND HEALTH ADMINISTRATION... staffing benchmarks of 10 safety and 9 health compliance officers. After opportunity for public comments... established for each State operating an approved State plan. In September 1984, Utah, in conjunction with OSHA...
29 CFR 1952.163 - Compliance staffing benchmarks.
Code of Federal Regulations, 2011 CFR
2011-07-01
... Labor Regulations Relating to Labor (Continued) OCCUPATIONAL SAFETY AND HEALTH ADMINISTRATION... staffing benchmarks of 16 safety and 13 health compliance officers. After opportunity for public comment... established for each State operating an approved State plan. In September 1984, Iowa, in conjunction with OSHA...
The Department of Energy Nuclear Criticality Safety Program
NASA Astrophysics Data System (ADS)
Felty, James R.
2005-05-01
This paper broadly covers key events and activities from which the Department of Energy Nuclear Criticality Safety Program (NCSP) evolved. The NCSP maintains fundamental infrastructure that supports operational criticality safety programs. This infrastructure includes continued development and maintenance of key calculational tools, differential and integral data measurements, benchmark compilation, development of training resources, hands-on training, and web-based systems to enhance information preservation and dissemination. The NCSP was initiated in response to Defense Nuclear Facilities Safety Board Recommendation 97-2, Criticality Safety, and evolved from a predecessor program, the Nuclear Criticality Predictability Program, that was initiated in response to Defense Nuclear Facilities Safety Board Recommendation 93-2, The Need for Critical Experiment Capability. This paper also discusses the role Dr. Sol Pearlstein played in helping the Department of Energy lay the foundation for a robust and enduring criticality safety infrastructure.
Companies' opinions and acceptance of global food safety initiative benchmarks after implementation.
Crandall, Phil; Van Loo, Ellen J; O'Bryan, Corliss A; Mauromoustakos, Andy; Yiannas, Frank; Dyenson, Natalie; Berdnik, Irina
2012-09-01
International attention has been focused on minimizing costs that may unnecessarily raise food prices. One important aspect to consider is the redundant and overlapping costs of food safety audits. The Global Food Safety Initiative (GFSI) has devised benchmarked schemes based on existing international food safety standards for use as a unifying standard accepted by many retailers. The present study was conducted to evaluate the impact of the decision made by Walmart Stores (Bentonville, AR) to require their suppliers to become GFSI compliant. An online survey of 174 retail suppliers was conducted to assess food suppliers' opinions of this requirement and the benefits suppliers realized when they transitioned from their previous food safety systems. The most common reason for becoming GFSI compliant was to meet customers' requirements; thus, supplier implementation of the GFSI standards was not entirely voluntary. Other reasons given for compliance were enhancing food safety and remaining competitive. About 54 % of food processing plants using GFSI benchmarked schemes followed the guidelines of Safe Quality Food 2000 and 37 % followed those of the British Retail Consortium. At the supplier level, 58 % followed Safe Quality Food 2000 and 31 % followed the British Retail Consortium. Respondents reported that the certification process took about 10 months. The most common reason for selecting a certain GFSI benchmarked scheme was because it was widely accepted by customers (retailers). Four other common reasons were (i) the standard has a good reputation in the industry, (ii) the standard was recommended by others, (iii) the standard is most often used in the industry, and (iv) the standard was required by one of their customers. Most suppliers agreed that increased safety of their products was required to comply with GFSI benchmarked schemes. They also agreed that the GFSI required a more carefully documented food safety management system, which often required improved company food safety practices and increased employee training. Adoption of a GFSI benchmarked scheme resulted in fewer audits, i.e., one less per year. An educational opportunity exists to acquaint retailers and suppliers worldwide with the benefits of having an internationally recognized certification program such as that recognized by the GFSI.
Alswat, Khalid; Abdalla, Rawia Ahmad Mustafa; Titi, Maher Abdelraheim; Bakash, Maram; Mehmood, Faiza; Zubairi, Beena; Jamal, Diana; El-Jardali, Fadi
2017-08-02
Measuring patient safety culture can provide insight into areas for improvement and help monitor changes over time. This study details the findings of a re-assessment of patient safety culture in a multi-site Medical City in Riyadh, Kingdom of Saudi Arabia (KSA). Results were compared to an earlier assessment conducted in 2012 and benchmarked with regional and international studies. Such assessments can provide hospital leadership with insight on how their hospital is performing on patient safety culture composites as a result of quality improvement plans. This paper also explored the association between patient safety culture predictors and patient safety grade, perception of patient safety, frequency of events reported, and number of events reported. We utilized a customized version of the patient safety culture survey developed by the Agency for Healthcare Research and Quality. The Medical City is a tertiary care teaching facility composed of two sites (total capacity of 904 beds). Data were analyzed using SPSS 24 at a significance level of 0.05. A t-test was used to compare results from the 2012 survey to those from the 2015 survey. Two Generalized Estimating Equation models, in addition to two linear models, were used to assess the association between composites and patient safety culture outcomes. Results were also benchmarked against similar initiatives in Lebanon, Palestine, and the USA. Areas of strength in 2015 included Teamwork Within Units and Organizational Learning-Continuous Improvement; areas requiring improvement included Non-Punitive Response to Error and Staffing. Comparing results to the 2012 survey revealed improvement in some areas, but Non-Punitive Response to Error and Staffing remained the lowest scoring composites in 2015. Regression highlighted significant associations between managerial support, organizational learning, and feedback and improved survey outcomes. Comparison to international benchmarks revealed that the hospital is performing at or better than benchmark on several composites. The Medical City has made significant progress on several of the patient safety culture composites despite still having areas requiring additional improvement. Patient safety culture outcomes are evidently linked to better performance on specific composites. While results are comparable with regional and international benchmarks, findings confirm that regular assessment can allow hospitals to better understand and visualize changes in their performance and identify additional areas for improvement.
Measurement, Standards, and Peer Benchmarking: One Hospital's Journey.
Martin, Brian S; Arbore, Mark
2016-04-01
Peer-to-peer benchmarking is an important component of rapid-cycle performance improvement in patient safety and quality-improvement efforts. Institutions should carefully examine critical success factors before engagement in peer-to-peer benchmarking in order to maximize growth and change opportunities. Solutions for Patient Safety has proven to be a high-yield engagement for Children's Hospital of Pittsburgh of University of Pittsburgh Medical Center, with measurable improvement in both organizational process and culture. Copyright © 2016 Elsevier Inc. All rights reserved.
Benchmarking road safety performance: Identifying a meaningful reference (best-in-class).
Chen, Faan; Wu, Jiaorong; Chen, Xiaohong; Wang, Jianjun; Wang, Di
2016-01-01
For road safety improvement, comparing and benchmarking performance are widely advocated as the emerging and preferred approaches. However, there is currently no universally agreed upon approach for the process of road safety benchmarking, and performing the practice successfully is by no means easy. This is especially true for its two core activities: (1) developing a set of road safety performance indicators (SPIs) and combining them into a composite index; and (2) identifying a meaningful reference (best-in-class), one which has already obtained outstanding road safety practices. To this end, a scientific technique that can combine the multi-dimensional safety performance indicators (SPIs) into an overall index, and subsequently can identify the 'best-in-class', is urgently required. In this paper, the Entropy-embedded RSR (rank-sum ratio) method, an innovative, scientific, and systematic methodology, is investigated with the aim of conducting the above two core tasks in an integrative and concise procedure, more specifically in a 'one-stop' way. Using a combination of results from other methods (e.g. the SUNflower approach) and other measures (e.g. Human Development Index) as a relevant reference, a given set of European countries are robustly ranked and grouped into several classes based on the composite Road Safety Index. Within each class the 'best-in-class' is then identified. By benchmarking road safety performance, the results serve to promote best practice, encourage the adoption of successful road safety strategies and measures and, more importantly, inspire the kind of political leadership needed to create a road transport system that maximizes safety. Copyright © 2015 Elsevier Ltd. All rights reserved.
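For readers unfamiliar with the two ingredients named above, the sketch below combines entropy weighting with a rank-sum-ratio composite in a generic form. The indicator data, and the exact way the paper's Entropy-embedded RSR couples the two steps, are assumptions made purely for illustration.

```python
import numpy as np

# Rows: countries, columns: safety performance indicators (higher = better).
# Data are invented for illustration.
X = np.array([[0.8, 0.6, 0.9],
              [0.5, 0.7, 0.4],
              [0.9, 0.8, 0.7],
              [0.4, 0.5, 0.6]])
n, m = X.shape

# Entropy weights: indicators with more dispersion across countries
# receive more weight in the composite.
P = X / X.sum(axis=0)
entropy = -(P * np.log(P)).sum(axis=0) / np.log(n)
weights = (1 - entropy) / (1 - entropy).sum()

# Rank-sum ratio: weighted average rank, scaled to (0, 1].
ranks = X.argsort(axis=0).argsort(axis=0) + 1   # rank 1 = worst, rank n = best
rsr = (ranks * weights).sum(axis=1) / n
print(np.argsort(-rsr))  # country indices, best-performing first
```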
Resonance Parameter Adjustment Based on Integral Experiments
Sobes, Vladimir; Leal, Luiz; Arbanas, Goran; ...
2016-06-02
Our project seeks to allow coupling of differential and integral data evaluation in a continuous-energy framework and to use the generalized linear least-squares (GLLS) methodology in the TSURFER module of the SCALE code package to update the parameters of a resolved resonance region evaluation. Recognizing that the GLLS methodology in TSUNAMI is identical to the mathematical description of a Bayesian update in SAMMY, the SAMINT code was created to use the mathematical machinery of SAMMY to update resolved resonance parameters based on integral data. Traditionally, SAMMY used differential experimental data to adjust nuclear data parameters. Integral experimental data, such as in the International Criticality Safety Benchmark Experiments Project, remain a tool for validation of completed nuclear data evaluations. SAMINT extracts information from integral benchmarks to aid the nuclear data evaluation process. Later, integral data can be used to resolve any remaining ambiguity between differential data sets, highlight troublesome energy regions, determine key nuclear data parameters for integral benchmark calculations, and improve the nuclear data covariance matrix evaluation. Moreover, SAMINT is not intended to bias nuclear data toward specific integral experiments but should be used to supplement the evaluation of differential experimental data. Using GLLS ensures proper weight is given to the differential data.
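The GLLS/Bayesian update referred to above has a standard textbook form: p' = p + C_p S^T (S C_p S^T + C_m)^{-1} (m - k(p)). The sketch below implements that generic form with invented numbers; it does not claim to reproduce SAMMY or SAMINT internals.

```python
import numpy as np

# Generalized linear least-squares (GLLS) parameter update in its
# standard form; correspondence to SAMMY/SAMINT internals is an
# assumption. All numbers are illustrative.
p      = np.array([1.00, 1.00])          # prior resonance parameters (normalized)
C_p    = np.diag([0.02**2, 0.05**2])     # prior parameter covariance
S      = np.array([[0.4, 0.1]])          # sensitivity of k_eff to the parameters
k_calc = np.array([0.995])               # calculated integral response
k_meas = np.array([1.000])               # measured benchmark response
C_m    = np.array([[0.001**2]])          # measurement covariance

# Posterior update: gain matrix, then parameter and covariance updates.
G = C_p @ S.T @ np.linalg.inv(S @ C_p @ S.T + C_m)
p_post  = p + G @ (k_meas - k_calc)
Cp_post = C_p - G @ S @ C_p
print(p_post, np.sqrt(np.diag(Cp_post)))
```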
Laparoscopic recurrent inguinal hernia repair during the learning curve: it can be done?
Bracale, Umberto; Sciuto, Antonio; Andreuccetti, Jacopo; Merola, Giovanni; Pecchia, Leandro; Melillo, Paolo; Pignata, Giusto
2017-01-01
Trans-Abdominal Preperitoneal Patch (TAPP) repair for Recurrent Hernia (RH) is a technically demanding procedure that has to be performed only by surgeons with extensive experience in the laparoscopic approach. The purpose of this study was to evaluate the surgical safety and efficacy of TAPP for RH performed in a tutoring program by surgeons in practice (SP). All TAPP repairs for RH performed by the same surgical team were included in the study. We evaluated the results of three SP during their learning curve in a tutoring program and compared them to those of a highly experienced laparoscopic surgeon (Benchmark). A total of 530 TAPP repairs were performed. Among these, 83 TAPP repairs were for RH, of which 43 were performed by the Benchmark and 40 by the SP. When the outcomes of the Benchmark were compared with those of the SP, no significant difference was observed in morbidity or recurrence, while the operative time was significantly longer for the SP. No intraoperative complications occurred. International guidelines urge that TAPP repair for RH be performed only by surgeons with extensive experience in the laparoscopic approach. The results of the present study demonstrate that TAPP for RH can also be performed by surgeons in training during a learning program. We believe that an adequate tutoring program can lead a surgeon in practice to perform more complex hernia procedures without jeopardizing patient safety throughout the learning curve period. Laparoscopy, Learning Curve, Recurrent Hernia.
Final Report of the NASA Office of Safety and Mission Assurance Agile Benchmarking Team
NASA Technical Reports Server (NTRS)
Wetherholt, Martha
2016-01-01
To ensure that the NASA Safety and Mission Assurance (SMA) community remains in a position to perform reliable Software Assurance (SA) on NASA's critical software (SW) systems as the software industry rapidly transitions from waterfall to Agile processes, Terry Wilcutt, Chief, Safety and Mission Assurance, Office of Safety and Mission Assurance (OSMA), established the Agile Benchmarking Team (ABT). The Team's tasks were: 1. Research background literature on current Agile processes; 2. Perform benchmark activities with other organizations that are involved in software Agile processes to determine best practices; 3. Collect information on Agile-developed systems to enable improvements to the current NASA standards and processes to enhance their ability to perform reliable software assurance on NASA Agile-developed systems; 4. Suggest additional guidance and recommendations for updates to those standards and processes, as needed. The ABT's findings and recommendations for software management, engineering, and software assurance are addressed herein.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Marshall, Margaret A.; Bess, John D.
2015-02-01
The critical configurations of the small, compact critical assembly (SCCA) experiments performed at the Oak Ridge Critical Experiments Facility (ORCEF) in 1962-1965 have been evaluated as acceptable benchmark experiments for inclusion in the International Handbook of Evaluated Criticality Safety Benchmark Experiments. The initial intent of these experiments was to support the design of the Medium Power Reactor Experiment (MPRE) program, whose purpose was to study "power plants for the production of electrical power in space vehicles." The third configuration in this series of experiments was a beryllium-reflected assembly of stainless-steel-clad, highly enriched uranium (HEU)-O2 fuel, a mockup of a potassium-cooled space power reactor. Reactivity measurements, cadmium ratio spectral measurements, and fission rate measurements were performed through the core and top reflector. Fuel effect worth measurements and neutron moderating and absorbing material worths were also measured in the assembly fuel region. The cadmium ratios, fission rate, and worth measurements were evaluated for inclusion in the International Handbook of Evaluated Criticality Safety Benchmark Experiments. The fuel tube effect and neutron moderating and absorbing material worth measurements are the focus of this paper. Additionally, a measurement of the worth of potassium filling the core region was performed but has not yet been evaluated. Pellets of 93.15 wt.% enriched uranium dioxide (UO2) were stacked in 30.48 cm tall stainless steel fuel tubes (0.3 cm tall end caps). Each fuel tube held 26 pellets with a total mass of 295.8 g UO2 per tube. 253 tubes were arranged in a 1.506-cm triangular lattice. An additional 7-tube cluster critical configuration was also measured but not used for any physics measurements. The core was surrounded on all sides by a beryllium reflector. The fuel effect worths were measured by removing fuel tubes at various radii. An accident scenario was also simulated by moving twenty fuel rods outward from the periphery of the core so that they were touching the core tank. The change in the system reactivity when the fuel tube(s) were removed or moved, compared with the base configuration, was the worth of the fuel tubes or of the accident scenario. The worth of neutron absorbing and moderating materials was measured by inserting material rods into the core at regular intervals or placing lids at the top of the core tank. Stainless steel 347, tungsten, niobium, polyethylene, graphite, boron carbide, aluminum, and cadmium rod and/or lid worths were all measured. The change in the system reactivity when a material was inserted into the core is the worth of the material.
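The worth bookkeeping described above amounts to differencing reactivities between the perturbed and base configurations. A minimal sketch with invented k_eff values:

```python
# Reactivity worth of a perturbation, computed as the change in
# reactivity rho = (k - 1) / k relative to the base configuration.
# k_eff values below are invented for illustration.
def rho(k: float) -> float:
    return (k - 1.0) / k

k_base      = 1.00000   # reference critical configuration
k_perturbed = 0.99850   # e.g., one fuel tube removed

worth = rho(k_perturbed) - rho(k_base)   # negative: removal adds negative reactivity
print(f"worth = {worth * 1e5:+.1f} pcm")
```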
Serious injuries: an additional indicator to fatalities for road safety benchmarking.
Shen, Yongjun; Hermans, Elke; Bao, Qiong; Brijs, Tom; Wets, Geert
2015-01-01
Almost all current road safety benchmarking studies focus entirely on fatalities, which, however, represent only one measure of the magnitude of the road safety problem. The main objective of this article was to investigate the possibility of including the number of serious injuries in addition to the number of fatalities for road safety benchmarking and to further illuminate its impact on countries' rankings. We introduced the technique of data envelopment analysis (DEA) to the road safety domain and developed a DEA-based road safety model (DEA-RS) in this study. Moreover, we outlined different types of possible weight restrictions and adopted 2 of them to indicate the relationship between road fatalities and serious injuries for the sake of rational benchmarking. One was a relative weight restriction based on information about their shadow prices, and the other was a virtual weight restriction using a priori knowledge about the importance level of these 2 aspects. By computing the optimal road safety risk scores of 10 European countries based on the different models, we found that the United Kingdom was the only best-performing country no matter which model was utilized. However, countries such as The Netherlands, Sweden, and Switzerland were no longer best-performing when the serious injuries were integrated. On the contrary, Spain, which ranked almost at the bottom among all of the countries when only the number of road fatalities was considered, became a relatively well-performing country when integrating its number of serious injuries in the evaluation. In general, no matter whether a country's road safety ranking improved or deteriorated, most of the countries achieved a higher risk score when the number of serious injuries was included, which implied that, compared to road fatalities, more policy attention has to be paid to improving the situation of serious injuries in most countries. Given the importance of considering serious injuries in addition to fatalities for international benchmarking of road safety, the proposed model (i.e., the DEA-RS model with weight restrictions) turned out to be effective in deriving reasonable results. We are thereby also inspired to apply this kind of model to a more complete road safety benchmarking practice in the future when data on, for example, the number of slight injuries, the degree of property damage, and the number of crashes are ready (i.e., comparable) to use.
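DEA itself reduces to a small linear program per decision-making unit. The sketch below implements a plain input-oriented CCR DEA efficiency score; the paper's DEA-RS formulation and its weight restrictions are not reproduced, and all data are invented.

```python
import numpy as np
from scipy.optimize import linprog

# Input-oriented CCR DEA efficiency (envelopment form) for one unit.
# X: inputs (n_units x n_inputs), Y: outputs (n_units x n_outputs).
def dea_efficiency(X, Y, unit):
    n, m = X.shape
    _, s = Y.shape
    # Decision variables: [theta, lambda_1 ... lambda_n]; minimize theta.
    c = np.zeros(n + 1)
    c[0] = 1.0
    A_ub, b_ub = [], []
    for i in range(m):   # sum_j lambda_j * x_ij <= theta * x_i,unit
        A_ub.append(np.concatenate(([-X[unit, i]], X[:, i])))
        b_ub.append(0.0)
    for r in range(s):   # sum_j lambda_j * y_rj >= y_r,unit
        A_ub.append(np.concatenate(([0.0], -Y[:, r])))
        b_ub.append(-Y[unit, r])
    res = linprog(c, A_ub=np.array(A_ub), b_ub=b_ub,
                  bounds=[(0, None)] * (n + 1))
    return res.x[0]

# Invented example: one exposure input, and "safety outputs" constructed
# so that fewer fatalities/serious injuries yields a larger output value.
X = np.array([[1.0], [1.0], [1.0]])
Y = np.array([[0.9, 0.8], [0.6, 0.9], [0.7, 0.5]])
for u in range(3):
    print(u, round(dea_efficiency(X, Y, u), 3))
```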
Benchmarking: applications to transfusion medicine.
Apelseth, Torunn Oveland; Molnar, Laura; Arnold, Emmy; Heddle, Nancy M
2012-10-01
Benchmarking is a structured, continuous, collaborative process in which comparisons for selected indicators are used to identify factors that, when implemented, will improve transfusion practices. This study aimed to identify transfusion medicine studies reporting on benchmarking, summarize the benchmarking approaches used, and identify important considerations to move the concept of benchmarking forward in the field of transfusion medicine. A systematic review of published literature was performed to identify transfusion medicine-related studies that compared at least 2 separate institutions or regions with the intention of benchmarking, focusing on 4 areas: blood utilization, safety, operational aspects, and blood donation. Forty-five studies were included: blood utilization (n = 35), safety (n = 5), operational aspects of transfusion medicine (n = 5), and blood donation (n = 0). Based on predefined criteria, 7 publications were classified as benchmarking, 2 as trending, and 36 as single-event studies. Three models of benchmarking are described: (1) a regional benchmarking program that collects and links relevant data from existing electronic sources, (2) a sentinel site model where data from a limited number of sites are collected, and (3) an institutional-initiated model where a site identifies indicators of interest and approaches other institutions. Benchmarking approaches are needed in the field of transfusion medicine. Major challenges include defining best practices and developing cost-effective methods of data collection. For those interested in initiating a benchmarking program, the sentinel site model may be most effective and sustainable as a starting point, although the regional model would be the ideal goal. Copyright © 2012 Elsevier Inc. All rights reserved.
Diagnostic Algorithm Benchmarking
NASA Technical Reports Server (NTRS)
Poll, Scott
2011-01-01
A poster for the NASA Aviation Safety Program Annual Technical Meeting. It describes empirical benchmarking on diagnostic algorithms using data from the ADAPT Electrical Power System testbed and a diagnostic software framework.
McLinton, Sarven S; Loh, May Young; Dollard, Maureen F; Tuckey, Michelle M R; Idris, Mohd Awang; Morton, Sharon
2018-04-06
To present benchmarks for working conditions in healthcare industries as an initial effort toward international surveillance. The healthcare industry is fundamental to sustaining the health of Australians, yet it is under immense pressure. Budgets are limited, demands and workplace injuries are increasing, and all of these factors compromise patient care. Urgent attention is needed to reduce strains on workers and costs in health care; however, little work has been done to benchmark psychosocial factors in healthcare working conditions in the Asia-Pacific. Intercultural comparisons are important to provide an evidence base for public policy. A cross-sectional design was used (like other studies of prevalence), including a mixed-methods approach with qualitative interviews to better contextualize the results. Data on psychosocial factors and other work variables were collected from healthcare workers in three hospitals in Australia (N = 1,258) and Malaysia (N = 1,125). Benchmarks were calculated for each variable in 2015, and comparisons were conducted via independent-samples t tests. Healthcare samples were also compared with benchmarks for non-healthcare general working populations from their respective countries: Australia (N = 973) and Malaysia (N = 225). Our study benchmarks healthcare working conditions in Australia and Malaysia against the general working population, identifying trends that indicate the industry is in need of intervention strategies and job redesign initiatives that better support psychological health and safety. We move toward a better understanding of the precursors of psychosocial safety climate in a broader context, including similarities and differences between Australia and Malaysia in national culture, government occupational health and safety policies, and top-level management practices. © 2018 John Wiley & Sons Ltd.
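The benchmark comparison named above is a standard independent-samples t test. A minimal sketch with invented scores (not the study's data):

```python
import numpy as np
from scipy import stats

# Hypothetical psychosocial-climate scores for healthcare workers vs. a
# general working population benchmark; values are invented.
healthcare = np.array([38.2, 41.5, 35.9, 40.1, 37.4, 39.8])
general    = np.array([42.0, 44.1, 43.5, 41.2, 45.0, 42.8])

t, p = stats.ttest_ind(healthcare, general)  # independent-samples t test
print(f"t = {t:.2f}, p = {p:.4f}")
```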
Sexton, John B; Helmreich, Robert L; Neilands, Torsten B; Rowan, Kathy; Vella, Keryn; Boyden, James; Roberts, Peter R; Thomas, Eric J
2006-04-03
There is widespread interest in measuring healthcare provider attitudes about issues relevant to patient safety (often called safety climate or safety culture). Here we report the psychometric properties, establish benchmarking data, and discuss emerging areas of research with the University of Texas Safety Attitudes Questionnaire. Six cross-sectional surveys of health care providers (n = 10,843) in 203 clinical areas (including critical care units, operating rooms, inpatient settings, and ambulatory clinics) in three countries (USA, UK, New Zealand). Multilevel factor analyses yielded results at the clinical area level and the respondent nested within clinical area level. We report scale reliability, floor/ceiling effects, item factor loadings, inter-factor correlations, and percentage of respondents who agree with each item and scale. A six factor model of provider attitudes fit to the data at both the clinical area and respondent nested within clinical area levels. The factors were: Teamwork Climate, Safety Climate, Perceptions of Management, Job Satisfaction, Working Conditions, and Stress Recognition. Scale reliability was 0.9. Provider attitudes varied greatly both within and among organizations. Results are presented to allow benchmarking among organizations and emerging research is discussed. The Safety Attitudes Questionnaire demonstrated good psychometric properties. Healthcare organizations can use the survey to measure caregiver attitudes about six patient safety-related domains, to compare themselves with other organizations, to prompt interventions to improve safety attitudes and to measure the effectiveness of these interventions.
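Scale reliability of the kind reported here is commonly summarized with Cronbach's alpha. The sketch below computes it for simulated item responses; the formula is standard, the data invented.

```python
import numpy as np

# Cronbach's alpha for a set of Likert items (rows: respondents,
# columns: items).
def cronbach_alpha(items: np.ndarray) -> float:
    k = items.shape[1]
    item_vars = items.var(axis=0, ddof=1).sum()
    total_var = items.sum(axis=1).var(ddof=1)
    return k / (k - 1) * (1 - item_vars / total_var)

rng = np.random.default_rng(0)
latent = rng.normal(size=(200, 1))                 # shared "attitude" factor
items = latent + 0.5 * rng.normal(size=(200, 6))   # 6 correlated items
print(round(cronbach_alpha(items), 2))
```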
Williams, M. L.; Wiarda, D.; Ilas, G.; ...
2014-06-15
Recently, we processed a new covariance data library based on ENDF/B-VII.1 for the SCALE nuclear analysis code system. The multigroup covariance data are discussed here, along with testing and application results for critical benchmark experiments. Moreover, the cross section covariance library, along with covariances for fission product yields and decay data, is used to compute uncertainties in the decay heat produced by a burned reactor fuel assembly.
Thought Experiment to Examine Benchmark Performance for Fusion Nuclear Data
NASA Astrophysics Data System (ADS)
Murata, Isao; Ohta, Masayuki; Kusaka, Sachie; Sato, Fuminobu; Miyamaru, Hiroyuki
2017-09-01
There are many benchmark experiments carried out so far with DT neutrons, especially aiming at fusion reactor development. These integral experiments have seemed, vaguely, to validate the nuclear data below 14 MeV; however, no precise studies of this question exist. The author's group thus started to examine how well benchmark experiments with DT neutrons can play a benchmarking role for energies below 14 MeV. Recently, as a next phase and to generalize the above discussion, the energy range was expanded to the entire region. In this study, thought experiments with finer energy bins have thus been conducted to discuss how to generally estimate the performance of benchmark experiments. As a result of thought experiments with a point detector, the sensitivity to a discrepancy appearing in the benchmark analysis is "equally" due not only to the contribution conveyed directly to the detector, but also to the indirect contribution of the neutrons (named (A)) that produce the neutrons conveying that contribution, the indirect contribution of the neutrons (B) that produce the neutrons (A), and so on. From this concept, it would become clear from a sensitivity analysis in advance how well, and at which energies, nuclear data could be benchmarked with a benchmark experiment.
Developing integrated benchmarks for DOE performance measurement
DOE Office of Scientific and Technical Information (OSTI.GOV)
Barancik, J.I.; Kramer, C.F.; Thode, Jr. H.C.
1992-09-30
The objectives of this task were to describe and evaluate selected existing sources of information on occupational safety and health, with emphasis on hazard and exposure assessment, abatement, training, reporting, and control, identifying factors for exposure and outcome, in preparation for developing DOE performance benchmarks. Existing resources and methodologies were assessed for their potential use as practical performance benchmarks. Strengths and limitations of current data resources were identified. Guidelines were outlined for developing new or improved performance factors, which then could become the basis for selecting performance benchmarks. Databases for non-DOE comparison populations were identified so that DOE performance could be assessed relative to non-DOE occupational and industrial groups. Systems approaches were described which can be used to link hazards and exposure, event occurrence, and adverse outcome factors, as needed to generate valid, reliable, and predictive performance benchmarks. Databases were identified which contain information relevant to one or more performance assessment categories. A list of 72 potential performance benchmarks was prepared to illustrate the kinds of information that can be produced through a benchmark development program. Current information resources which may be used to develop potential performance benchmarks are limited. There is a need to develop an occupational safety and health information and data system in DOE which is capable of incorporating demonstrated and documented performance benchmarks prior to, or concurrent with, the development of hardware and software. A key to the success of this systems approach is rigorous development and demonstration of performance benchmark equivalents to users of such data before system hardware and software commitments are institutionalized.
ARRAYS OF BOTTLES OF PLUTONIUM NITRATE SOLUTION
DOE Office of Scientific and Technical Information (OSTI.GOV)
Margaret A. Marshall
2012-09-01
In October and November of 1981, thirteen approaches-to-critical were performed on a remote split table machine (RSTM) in the Critical Mass Laboratory of Pacific Northwest Laboratory (PNL) in Richland, Washington, using planar arrays of polyethylene bottles filled with plutonium (Pu) nitrate solution. Arrays of up to sixteen bottles were used to measure the critical number of bottles and critical array spacing with a tight-fitting Plexiglas® reflector on all sides of the arrays except the top. Some experiments used Plexiglas shells fitted around each bottle to determine the effect of moderation on criticality. Each bottle contained approximately 2.4 L of Pu(NO3)4 solution with a Pu content of 105 g Pu/L and a free acid molarity H+ of 5.1. The plutonium was of low 240Pu (2.9 wt.%) content. These experiments were sponsored by Rockwell Hanford Operations because of the lack of experimental data on the criticality of arrays of bottles of Pu solution such as might be found in storage and handling at the Purex Facility at Hanford. The results of these experiments were used "to provide benchmark data to validate calculational codes used in criticality safety assessments of [the] plant configurations" (Ref. 1). Data for this evaluation were collected from the published report (Ref. 1), the approach-to-critical logbook, the experimenter's logbook, and communication with the primary experimenter, B. Michael Durst. Of the 13 experiments performed, 10 were evaluated. One experiment was not evaluated because it had been thrown out by the experimenter, one because it was a repeat of another experiment, and the third because it reported the critical number of bottles as being greater than 25. Seven of the ten evaluated experiments were determined to be acceptable benchmark experiments. A similar experiment using uranyl nitrate was benchmarked as U233-SOL-THERM-014.
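An approach-to-critical of the kind described here is typically monitored with inverse-multiplication (1/M) plots. The sketch below shows the idea with invented count rates, not the PNL data.

```python
import numpy as np

# Approach-to-critical via inverse multiplication: as the array grows,
# 1/M (source count rate / multiplied count rate) trends toward zero at
# the critical size. Counts below are invented for illustration.
n_bottles  = np.array([4, 6, 8, 10, 12])
count_rate = np.array([120.0, 180.0, 310.0, 640.0, 2100.0])
inv_m      = count_rate[0] / count_rate        # normalized 1/M

# Linear extrapolation of the last two points to 1/M = 0 estimates the
# critical number of bottles.
slope = (inv_m[-1] - inv_m[-2]) / (n_bottles[-1] - n_bottles[-2])
n_crit = n_bottles[-1] - inv_m[-1] / slope
print(f"estimated critical number of bottles ~ {n_crit:.1f}")
```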
NASA Technical Reports Server (NTRS)
2002-01-01
The NASA/Navy Benchmarking Exchange (NNBE) was undertaken to identify practices and procedures and to share lessons learned in the Navy's submarine and NASA's human space flight programs. The NNBE focus is on safety and mission assurance policies, processes, accountability, and control measures. This report is an interim summary of activity conducted through October 2002, and it coincides with completion of the first phase of a two-phase fact-finding effort. In August 2002, a team was formed, co-chaired by senior representatives from the NASA Office of Safety and Mission Assurance and the NAVSEA 92Q Submarine Safety and Quality Assurance Division. The team closely examined the two elements of submarine safety (SUBSAFE) certification: (1) new design/construction (initial certification) and (2) maintenance and modernization (sustaining certification), with a focus on: (1) Management and Organization, (2) Safety Requirements (technical and administrative), (3) Implementation Processes, (4) Compliance Verification Processes, and (5) Certification Processes.
Pitman, A; Jones, D N; Stuart, D; Lloydhope, K; Mallitt, K; O'Rourke, P
2009-10-01
The study reports on the evolution of the Australian radiologist relative value unit (RVU) model of measuring radiologist reporting workloads in teaching hospital departments, and aims to outline a way forward for the development of a broad national safety, quality and performance framework that enables value mapping, measurement and benchmarking. The Radiology International Benchmarking Project of Queensland Health provided a suitable high-level national forum where the existing Pitman-Jones RVU model was applied to contemporaneous data, and its shortcomings and potential avenues for future development were analysed. Application of the Pitman-Jones model to Queensland data and also a Victorian benchmark showed that the original recommendation of 40,000 crude RVU per full-time equivalent consultant radiologist (97-98 baseline level) has risen only moderately, to now lie around 45,000 crude RVU/full-time equivalent. Notwithstanding this, the model has a number of weaknesses and is becoming outdated, as it cannot capture newer time-consuming examinations particularly in CT. A significant re-evaluation of the value of medical imaging is required, and is now occurring. We must rethink how we measure, benchmark, display and continually improve medical imaging safety, quality and performance, throughout the imaging care cycle and beyond. It will be necessary to ensure alignment with patient needs, as well as clinical and organisational objectives. Clear recommendations for the development of an updated national reporting workload RVU system are available, and an opportunity now exists for developing a much broader national model. A more sophisticated and balanced multidimensional safety, quality and performance framework that enables measurement and benchmarking of all important elements of health-care service is needed.
Nuclear Data Activities in Support of the DOE Nuclear Criticality Safety Program
NASA Astrophysics Data System (ADS)
Westfall, R. M.; McKnight, R. D.
2005-05-01
The DOE Nuclear Criticality Safety Program (NCSP) provides the technical infrastructure maintenance for those technologies applied in the evaluation and performance of safe fissionable-material operations in the DOE complex. These technologies include an Analytical Methods element for neutron transport as well as the development of sensitivity/uncertainty methods, the performance of Critical Experiments, evaluation and qualification of experiments as Benchmarks, and a comprehensive Nuclear Data program coordinated by the NCSP Nuclear Data Advisory Group (NDAG). The NDAG gathers and evaluates differential and integral nuclear data, identifies deficiencies, and recommends priorities on meeting DOE criticality safety needs to the NCSP Criticality Safety Support Group (CSSG). Then the NDAG identifies the required resources and unique capabilities for meeting these needs, not only for performing measurements but also for data evaluation with nuclear model codes as well as for data processing for criticality safety applications. The NDAG coordinates effort with the leadership of the National Nuclear Data Center, the Cross Section Evaluation Working Group (CSEWG), and the Working Party on International Evaluation Cooperation (WPEC) of the OECD/NEA Nuclear Science Committee. The overall objective is to expedite the issuance of new data and methods to the DOE criticality safety user. This paper describes these activities in detail, with examples based upon special studies being performed in support of criticality safety for a variety of DOE operations.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Ivanova, T.; Laville, C.; Dyrda, J.
2012-07-01
The sensitivities of the keff eigenvalue to neutron cross sections have become commonly used in similarity studies and as part of the validation algorithm for criticality safety assessments. To test calculations of the sensitivity coefficients, a benchmark study (Phase III) has been established by the OECD-NEA/WPNCS/EG UACSA (Expert Group on Uncertainty Analysis for Criticality Safety Assessment). This paper presents some sensitivity results generated by the benchmark participants using various computational tools based upon different computational methods: SCALE/TSUNAMI-3D and -1D, MONK, APOLLO2-MORET 5, DRAGON-SUSD3D and MMKKENO. The study demonstrates the performance of the tools. It also illustrates how model simplifications impact the sensitivity results and demonstrates the importance of 'implicit' (self-shielding) sensitivities. This work has been a useful step towards verification of the existing and developed sensitivity analysis methods. (authors)
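A minimal sketch of what a keff sensitivity coefficient is may help here. Production tools such as SCALE/TSUNAMI compute these with adjoint-weighted tallies; the direct-perturbation (central-difference) estimate below is the conceptually simplest alternative, with `run_keff` a hypothetical stand-in for a full transport calculation.

```python
# Hedged sketch: direct-perturbation estimate of the relative sensitivity
# S = (dk/k) / (dsigma/sigma) for one cross section. Adjoint-based codes
# (e.g., TSUNAMI) obtain the same quantity far more efficiently.
def sensitivity(run_keff, sigma0: float, rel_step: float = 0.01) -> float:
    k0 = run_keff(sigma0)
    k_plus = run_keff(sigma0 * (1.0 + rel_step))
    k_minus = run_keff(sigma0 * (1.0 - rel_step))
    return ((k_plus - k_minus) / k0) / (2.0 * rel_step)

# Toy response: keff rises 0.05 (relative) per unit relative change in sigma.
toy_keff = lambda sigma: 1.0 + 0.05 * (sigma - 1.0)
print(f"S = {sensitivity(toy_keff, sigma0=1.0):.3f}")  # ~0.050
```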
NASA Astrophysics Data System (ADS)
Kahler, A. C.; MacFarlane, R. E.; Mosteller, R. D.; Kiedrowski, B. C.; Frankle, S. C.; Chadwick, M. B.; McKnight, R. D.; Lell, R. M.; Palmiotti, G.; Hiruta, H.; Herman, M.; Arcilla, R.; Mughabghab, S. F.; Sublet, J. C.; Trkov, A.; Trumbull, T. H.; Dunn, M.
2011-12-01
The ENDF/B-VII.1 library is the latest revision to the United States' Evaluated Nuclear Data File (ENDF). The ENDF library is currently in its seventh generation, with ENDF/B-VII.0 being released in 2006. This revision expands upon that library, including the addition of new evaluated files (the neutron sublibrary has grown from 393 to 423 files, including the replacement of elemental vanadium and zinc evaluations with isotopic evaluations) and extension or updating of many existing neutron data files. Complete details are provided in the companion paper [M. B. Chadwick et al., "ENDF/B-VII.1 Nuclear Data for Science and Technology: Cross Sections, Covariances, Fission Product Yields and Decay Data," Nuclear Data Sheets, 112, 2887 (2011)]. This paper focuses on how accurately application libraries may be expected to perform in criticality calculations with these data. Continuous energy cross section libraries, suitable for use with the MCNP Monte Carlo transport code, have been generated and applied to a suite of nearly one thousand critical benchmark assemblies defined in the International Criticality Safety Benchmark Evaluation Project's International Handbook of Evaluated Criticality Safety Benchmark Experiments. This suite covers uranium and plutonium fuel systems in a variety of forms such as metallic, oxide or solution, and under a variety of spectral conditions, including unmoderated (i.e., bare), metal reflected and water or other light element reflected. Assembly eigenvalues that were accurately predicted with ENDF/B-VII.0 cross sections, such as unmoderated and uranium reflected 235U and 239Pu assemblies, HEU solution systems and LEU oxide lattice systems that mimic commercial PWR configurations, continue to be accurately calculated with ENDF/B-VII.1 cross sections, and deficiencies in predicted eigenvalues for assemblies containing selected materials, including titanium, manganese, cadmium and tungsten, are greatly reduced. Improvements are also confirmed for selected actinide reaction rates such as 236U, 238,242Pu and 241,243Am capture in fast systems. Other deficiencies, such as the overprediction of Pu solution system critical eigenvalues and a decreasing trend in calculated eigenvalue for 233U fueled systems as a function of Above-Thermal Fission Fraction, remain. The comprehensive nature of this critical benchmark suite and the generally accurate calculated eigenvalues obtained with ENDF/B-VII.1 neutron cross sections support the conclusion that this is the most accurate general purpose ENDF/B cross section library yet released to the technical community.
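The suite-wide comparisons described above boil down to tabulating calculated-minus-benchmark eigenvalue differences. A minimal sketch follows; the case names mimic ICSBEP identifiers but the keff values are illustrative, not results from the paper.

```python
# Hedged sketch of a suite-wide eigenvalue comparison: calculated minus
# benchmark keff, expressed in pcm (1e-5). All numbers are illustrative.
import statistics

suite = {
    # case:               (calculated keff, benchmark keff)
    "HEU-MET-FAST-001":   (1.0002, 1.0000),
    "PU-MET-FAST-001":    (0.9998, 1.0000),
    "LEU-COMP-THERM-008": (0.9991, 1.0007),
}

biases_pcm = {name: (calc - bench) * 1e5 for name, (calc, bench) in suite.items()}
for name, b in biases_pcm.items():
    print(f"{name}: {b:+.0f} pcm")
print(f"mean bias: {statistics.mean(biases_pcm.values()):+.0f} pcm")
```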
Using Machine Learning to Predict MCNP Bias
DOE Office of Scientific and Technical Information (OSTI.GOV)
Grechanuk, Pavel Aleksandrovi
For many real-world applications in radiation transport where simulations are compared to experimental measurements, like in nuclear criticality safety, the bias (simulated minus experimental keff) in the calculation is an extremely important quantity used for code validation. The objective of this project is to accurately predict the bias of MCNP6 [1] criticality calculations using machine learning (ML) algorithms, with the intention of creating a tool that can complement the current nuclear criticality safety methods. In the latest release of MCNP6, the Whisper tool is available for criticality safety analysts and includes a large catalogue of experimental benchmarks, sensitivity profiles, and nuclear data covariance matrices. This data, coming from 1100+ benchmark cases, is used in this study of ML algorithms for criticality safety bias predictions.
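A minimal sketch of the regression setup this abstract describes, under the assumption that each benchmark case is represented by a feature vector (for example, a flattened sensitivity profile) and labeled with its computed bias. The arrays here are random placeholders, not Whisper data, and the model choice is illustrative.

```python
# Hedged sketch: regress criticality-calculation bias on per-benchmark
# features. X and y are random stand-ins for the 1100+ Whisper cases.
import numpy as np
from sklearn.ensemble import RandomForestRegressor
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(42)
X = rng.normal(size=(1100, 50))          # placeholder sensitivity features
y = rng.normal(0.0, 0.003, size=1100)    # placeholder bias (delta-keff)

model = RandomForestRegressor(n_estimators=200, random_state=0)
scores = cross_val_score(model, X, y, cv=5, scoring="neg_mean_absolute_error")
print(f"CV mean |error| in predicted bias: {-scores.mean():.5f}")
```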
29 CFR 1952.103 - Compliance staffing benchmarks.
Code of Federal Regulations, 2014 CFR
2014-07-01
..., DEPARTMENT OF LABOR (CONTINUED) APPROVED STATE PLANS FOR ENFORCEMENT OF STATE STANDARDS Oregon § 1952.103... State operating an approved State plan. In October 1992, Oregon completed, in conjunction with OSHA, a... of 28 health compliance officers. Oregon elected to retain the safety benchmark level established in...
29 CFR 1952.103 - Compliance staffing benchmarks.
Code of Federal Regulations, 2013 CFR
2013-07-01
..., DEPARTMENT OF LABOR (CONTINUED) APPROVED STATE PLANS FOR ENFORCEMENT OF STATE STANDARDS Oregon § 1952.103... State operating an approved State plan. In October 1992, Oregon completed, in conjunction with OSHA, a... of 28 health compliance officers. Oregon elected to retain the safety benchmark level established in...
Safety and governance issues for neonatal transport services.
Ratnavel, Nandiran
2009-08-01
Neonatal transport is a subspecialty within the field of neonatology. Transport services are developing rapidly in the United Kingdom (UK) with network demographics and funding patterns leading to a broad spectrum of service provision. Applying principles of clinical governance and safety to such a diverse landscape of transport services is challenging but finally receiving much needed attention. To understand issues of risk management associated with this branch of retrieval medicine, one needs to look at the infrastructure of transport teams, arrangements for governance, risk identification, incident reporting, feedback and learning from experience. One also needs to look at audit processes, training, communication and ways of team working. Adherence to current recommendations for equipment and vehicle design is vital. The national picture for neonatal transport is evolving. This is an excellent time to start benchmarking and sharing best practice with a view to optimising safety and reducing risk.
INL Experimental Program Roadmap for Thermal Hydraulic Code Validation
DOE Office of Scientific and Technical Information (OSTI.GOV)
Glenn McCreery; Hugh McIlroy
2007-09-01
Advanced computer modeling and simulation tools and protocols will be heavily relied on for a wide variety of system studies, engineering design activities, and other aspects of the Next Generation Nuclear Plant (NGNP) Very High Temperature Reactor (VHTR), the DOE Global Nuclear Energy Partnership (GNEP), and light-water reactors. The goal is for all modeling and simulation tools to be demonstrated accurate and reliable through a formal Verification and Validation (V&V) process, especially where such tools are to be used to establish safety margins and support regulatory compliance, or to design a system in a manner that reduces the role of expensive mockups and prototypes. Recent literature identifies specific experimental principles that must be followed in order to ensure that experimental data meet the standards required for a “benchmark” database. Even for well conducted experiments, missing experimental details, such as geometrical definition, data reduction procedures, and manufacturing tolerances have led to poor benchmark calculations. The INL has a long and deep history of research in thermal hydraulics, especially in the 1960s through 1980s when many programs such as LOFT and Semiscale were devoted to light-water reactor safety research, the EBR-II fast reactor was in operation, and a strong geothermal energy program was established. The past can serve as a partial guide for reinvigorating thermal hydraulic research at the laboratory. However, new research programs need to fully incorporate modern experimental methods such as measurement techniques using the latest instrumentation, computerized data reduction, and scaling methodology. The path forward for establishing experimental research for code model validation will require benchmark experiments conducted in suitable facilities located at the INL. This document describes thermal hydraulic facility requirements and candidate buildings and presents examples of suitable validation experiments related to VHTRs, sodium-cooled fast reactors, and light-water reactors. These experiments range from relatively low-cost benchtop experiments for investigating individual phenomena to large electrically-heated integral facilities for investigating reactor accidents and transients.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Bess, J. D.; Briggs, J. B.; Gulliford, J.
Overview of Experiments to Study the Physics of Fast Reactors Represented in the International Handbooks of Critical and Reactor Experiments. John D. Bess, Idaho National Laboratory; Jim Gulliford, Tatiana Ivanova, Nuclear Energy Agency of the Organisation for Economic Co-operation and Development; E. V. Rozhikhin, M. Yu. Semenov, A. M. Tsibulya, Institute of Physics and Power Engineering. The study of fast reactor physics has traditionally relied on the experiments presented in the handbook of the Cross Section Evaluation Working Group (CSEWG), ENDF-202, issued by Brookhaven National Laboratory in 1974. That handbook presents simplified homogeneous models of experiments with the relevant experimental data, as amended. The Nuclear Energy Agency of the Organisation for Economic Co-operation and Development coordinates the activities of two international projects for the collection, evaluation, and documentation of experimental data: the International Criticality Safety Benchmark Evaluation Project (since 1994) and the International Reactor Physics Experiment Evaluation Project (since 2005). These projects produce the international handbooks of critical (ICSBEP Handbook) and reactor (IRPhEP Handbook) experiments, which are updated every year. The handbooks present detailed models of experiments with minimal amendments; such models are of particular interest for calculations with modern codes. The handbooks contain a large number of experiments suitable for the study of fast reactor physics. Many of these experiments were performed at specialized critical facilities such as BFS (Russia), ZPR and ZPPR (USA), and ZEBRA (UK), and at the experimental reactors JOYO (Japan) and FFTF (USA). Other experiments, such as compact metal assemblies, are also of interest for fast reactor physics; they were carried out at general-purpose critical facilities at Russian institutes (VNIITF and VNIIEF) and in the US (LANL, LLNL, and others). Also worth mentioning are critical experiments with fast reactor fuel rods in water, of interest for the justification of nuclear safety during transportation and storage of fresh and spent fuel. These reports provide a detailed review of the experiments, designate their areas of application, and include results of calculations with modern nuclear data libraries in comparison with the evaluated experimental data.
University Safety Culture: A Work-in-Progress?
ERIC Educational Resources Information Center
Lyons, Michael
2016-01-01
Safety management systems in Australian higher education organisations are under-researched. Limited workplace safety information can be found in the various reports on university human resources benchmarking programs, and typically they show only descriptive statistics. With the commencement of new consultation-focused regulations applying to…
Practicing Surgeons Lead in Quality Care, Safety, and Cost Control
Shively, Eugene H.; Heine, Michael J.; Schell, Robert H.; Sharpe, J Neal; Garrison, R Neal; Vallance, Steven R.; DeSimone, Kenneth J.S.; Polk, Hiram C.
2004-01-01
Objective: To report the experiences of 66 surgical specialists from 15 different hospitals who performed 43 CPT-based procedures more than 16,000 times. Summary Background Data: Surgeons are under increasing pressure to demonstrate patient safety data as quantitated by objective and subjective outcomes that meet or exceed the standards of benchmark institutions or databases. Methods: Data from 66 surgical specialists on 43 CPT-based procedures were accessioned over a 4-year period. The hospitals vary from a small 30-bed hospital to large teaching hospitals. All reported deaths and complications were verified from hospital and office records and compared with benchmarks. Results: Over a 4-year inclusive period (1999–2002), 16,028 elective operations were accessioned. There was a total 1.4% complication rate and 0.05% death rate. A system has been developed for tracking outcomes. A wide range of improvements have been identified. These include the following: 1) improved classification of indications for systemic prophylactic antibiotic use and reduction in the variety of drugs used, 2) shortened length of stay for standard procedures in different surgical specialties, 3) adherence to strict indicators for selected operative procedures, 4) less use of costly diagnostic procedures, 5) decreased use of expensive home health services, 6) decreased use of very expensive drugs, 7) identification of the unnecessary expense of disposable laparoscopic devices, 8) development of a method to compare a one-surgeon hospital with his peers, and 9) development of unique protocols for interaction of anesthesia and surgery. The system also provides a very good basis for confirmation of patient safety and improvement therein. Conclusions: Since 1998, Quality Surgical Solutions, PLLC, has developed simple physician-authored protocols for delivering high-quality and cost-effective surgery that measure up to benchmark institutions. We have discovered wide areas for improvements in surgery by adherence to simple protocols, minimizing death and complications and clarifying cost issues. PMID:15166954
Quality and safety in medical care: what does the future hold?
Liang, Bryan A; Mackey, Tim
2011-11-01
The rapid changes in health care policy, embracing quality and safety mandates, have culminated in programs and initiatives under the Patient Protection and Affordable Care Act. To review the context of, and anticipated quality and patient safety mandates for, delivery systems, incentives under health care reform, and models for future accountability for outcomes of care. Assessment of the provisions of Patient Protection and Affordable Care Act, other reform efforts, and reform initiatives focusing on future quality and safety provisions for health care providers. Health care reform and other efforts focus on consumerism in the context of price. Quality and safety efforts will be structured using financial incentives, best-practices research, and new delivery models that focus on reaching benchmarks while reducing costs. In addition, patient experience will be a key component of reimbursement, and a move toward "retail" approaches directed at the individual patient may supplant traditional "wholesale" efforts at attracting employers. Quality and safety have always been of prime importance in medicine. However, in the future, under health care reform and associated initiatives, a shift in the paradigm of medicine will integrate quality and safety measurement with financial incentives and a new emphasis on consumerism.
Benchmarking Global Food Safety Performances: The Era of Risk Intelligence.
Vallée, Jean-Charles Le; Charlebois, Sylvain
2015-10-01
Food safety data segmentation and limitations hamper the world's ability to select, build up, monitor, and evaluate food safety performance. Currently, there is no metric that captures the entire food safety system, and performance data are not collected strategically on a global scale. Therefore, food safety benchmarking is essential not only to help monitor ongoing performance but also to inform continued food safety system design, adoption, and implementation toward more efficient and effective food safety preparedness, responsiveness, and accountability. This comparative study identifies and evaluates common elements among global food safety systems. It provides an overall world ranking of food safety performance for 17 Organisation for Economic Co-Operation and Development (OECD) countries, illustrated by 10 indicators organized across three food safety risk governance domains: risk assessment (chemical risks, microbial risks, and national reporting on food consumption), risk management (national food safety capacities, food recalls, food traceability, and radionuclides standards), and risk communication (allergenic risks, labeling, and public trust). Results show all countries have very high food safety standards, but Canada and Ireland, followed by France, earned excellent grades relative to their peers. However, any subsequent global ranking study should consider the development of survey instruments to gather adequate and comparable national evidence on food safety.
Multi-measure Performance Assessment and Benchmarking of the Divisions of the Wyoming Highway Patrol
DOT National Transportation Integrated Search
2015-12-01
With many lives lost every year in traffic related crashes, traffic safety is a major concern all around the world. One way to improve traffic safety is to improve the organizational performance of agencies responsible for enforcing traffic safety. I...
Investigation of Abnormal Heat Transfer and Flow in a VHTR Reactor Core
DOE Office of Scientific and Technical Information (OSTI.GOV)
Kawaji, Masahiro; Valentin, Francisco I.; Artoun, Narbeh
2015-12-21
The main objective of this project was to identify and characterize the conditions under which abnormal heat transfer phenomena would occur in a Very High Temperature Reactor (VHTR) with a prismatic core. High pressure/high temperature experiments have been conducted to obtain data that could be used for validation of VHTR design and safety analysis codes. The focus of these experiments was on the generation of benchmark data for design and off-design heat transfer for forced, mixed and natural circulation in a VHTR core. In particular, a flow laminarization phenomenon was intensely investigated since it could give rise to hot spots in the VHTR core.
Bailey, Tessa S; Dollard, Maureen F; Richards, Penny A M
2015-01-01
Despite decades of research from around the world now permeating occupational health and safety (OHS) legislation and guidelines, there remains a lack of tools to guide practice. Our main goal was to establish benchmark levels of psychosocial safety climate (PSC) that would signify risk of job strain (jobs with high demands and low control) and depression in organizations. First, to justify our focus on PSC, using interview data from Australian employees matched at 2 time points 12 months apart (n = 1081), we verified PSC as a significant leading predictor of job strain and in turn depression. Next, using 2 additional data sets (n = 2097 and n = 1043) we determined benchmarks of organizational PSC (range 12-60) for low-risk (PSC at 41 or above) and high-risk (PSC at 37 or below) of employee job strain and depressive symptoms. Finally, using the newly created benchmarks we estimated the population attributable risk (PAR) and found that improving PSC in organizations to above 37 could reduce 14% of job strain and 16% of depressive symptoms in the working population. The results provide national standards that organizations and regulatory agencies can utilize to promote safer working environments and lower the risk of harm to employee mental health.
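The population attributable risk figures quoted above follow a standard epidemiological formula. The sketch below uses Levin's formula with illustrative exposure-prevalence and relative-risk inputs, not the study's actual estimates.

```python
# Hedged sketch of a population attributable risk (PAR) calculation of the
# kind reported above. Inputs are illustrative, not the study's estimates.
def population_attributable_risk(p_exposed: float, relative_risk: float) -> float:
    """Levin's formula: PAR = Pe(RR - 1) / (1 + Pe(RR - 1))."""
    excess = p_exposed * (relative_risk - 1.0)
    return excess / (1.0 + excess)

# e.g., 30% of workers in low-PSC organisations, RR ~ 1.6 for job strain:
print(f"PAR = {population_attributable_risk(0.30, 1.6):.1%}")  # ~15%
```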
Bess, John D.; Fujimoto, Nozomu
2014-10-09
Benchmark models were developed to evaluate six cold-critical and two warm-critical, zero-power measurements of the HTTR. Additional measurements of a fully-loaded subcritical configuration, core excess reactivity, shutdown margins, six isothermal temperature coefficients, and axial reaction-rate distributions were also evaluated as acceptable benchmark experiments. Insufficient information is publicly available to develop finely-detailed models of the HTTR as much of the design information is still proprietary. However, the uncertainties in the benchmark models are judged to be of sufficient magnitude to encompass any biases and bias uncertainties incurred through the simplification process used to develop the benchmark models. Dominant uncertainties in the experimental keff for all core configurations come from uncertainties in the impurity content of the various graphite blocks that comprise the HTTR. Monte Carlo calculations of keff are between approximately 0.9% and 2.7% greater than the benchmark values. Reevaluation of the HTTR models as additional information becomes available could improve the quality of this benchmark and possibly reduce the computational biases. High-quality characterization of graphite impurities would significantly improve the quality of the HTTR benchmark assessment. Simulation of the other reactor physics measurements are in good agreement with the benchmark experiment values. The complete benchmark evaluation details are available in the 2014 edition of the International Handbook of Evaluated Reactor Physics Benchmark Experiments.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Disney, R.K.
1994-10-01
The methodology for handling bias and uncertainty when calculational methods are used in criticality safety evaluations (CSEs) is a rapidly evolving technology. The changes in the methodology are driven by a number of factors. One factor responsible for changes in the methodology for handling bias and uncertainty in CSEs within the overview of the US Department of Energy (DOE) is a shift in the overview function from a "site" perception to a more uniform or "national" perception. Other causes for change or improvement in the methodology for handling calculational bias and uncertainty are: (1) an increased demand for benchmark criticals data to expand the area (range) of applicability of existing data, (2) a demand for new data to supplement existing benchmark criticals data, (3) the increased reliance on (or need for) computational benchmarks which supplement (or replace) experimental measurements in critical assemblies, and (4) an increased demand for benchmark data applicable to the expanded range of conditions and configurations encountered in DOE site restoration and remediation.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Bobrov, A. A.; Boyarinov, V. F.; Glushkov, A. E.
2012-07-01
Results of critical experiments performed at five ASTRA facility configurations modeling high-temperature helium-cooled graphite-moderated reactors are presented. Results of experiments on the spatial distribution of the {sup 235}U fission reaction rate, performed at four of these five configurations, are presented in more detail. Analysis of the available information showed that all criticality experiments at these five configurations are acceptable for use as critical benchmark experiments. All experiments on the spatial distribution of the {sup 235}U fission reaction rate are acceptable for use as physical benchmark experiments. (authors)
Seismo-acoustic ray model benchmarking against experimental tank data.
Camargo Rodríguez, Orlando; Collis, Jon M; Simpson, Harry J; Ey, Emanuel; Schneiderwind, Joseph; Felisberto, Paulo
2012-08-01
Acoustic predictions of the recently developed traceo ray model, which accounts for bottom shear properties, are benchmarked against tank data from the EPEE-1 and EPEE-2 (Elastic Parabolic Equation Experiment) experiments. Both experiments are representative of signal propagation in a Pekeris-like shallow-water waveguide over a non-flat isotropic elastic bottom, where significant interaction of the signal with the bottom can be expected. The benchmarks show, in particular, that the ray model can be as accurate as a parabolic approximation model benchmarked in similar conditions. The results of the benchmarking are important, on one side, as a preliminary experimental validation of the model and, on the other side, demonstrate the reliability of the ray approach for seismo-acoustic applications.
Benchmark On Sensitivity Calculation (Phase III)
DOE Office of Scientific and Technical Information (OSTI.GOV)
Ivanova, Tatiana; Laville, Cedric; Dyrda, James
2012-01-01
The sensitivities of the keff eigenvalue to neutron cross sections have become commonly used in similarity studies and as part of the validation algorithm for criticality safety assessments. To test calculations of the sensitivity coefficients, a benchmark study (Phase III) has been established by the OECD-NEA/WPNCS/EG UACSA (Expert Group on Uncertainty Analysis for Criticality Safety Assessment). This paper presents some sensitivity results generated by the benchmark participants using various computational tools based upon different computational methods: SCALE/TSUNAMI-3D and -1D, MONK, APOLLO2-MORET 5, DRAGON-SUSD3D and MMKKENO. The study demonstrates the performance of the tools. It also illustrates how model simplifications impact the sensitivity results and demonstrates the importance of 'implicit' (self-shielding) sensitivities. This work has been a useful step towards verification of the existing and developed sensitivity analysis methods.
Calculation of the Phenix end-of-life test 'Control Rod Withdrawal' with the ERANOS code
DOE Office of Scientific and Technical Information (OSTI.GOV)
Tiberi, V.
2012-07-01
The Institute of Radiological Protection and Nuclear Safety (IRSN) acts as technical support to the French public authorities. As such, IRSN is in charge of the safety assessment of operating and under-construction reactors, as well as future projects. In this framework, one current objective of IRSN is to evaluate the ability and accuracy of numerical tools to foresee the consequences of accidents. Neutronic studies enter into the safety assessment from several points of view, among which are the core design and its protection system. They are necessary to evaluate the core behavior in case of an accident in order to assess the integrity of the first barrier and the absence of a prompt-criticality risk. To reach this objective, one main physical quantity has to be evaluated accurately: the neutronic power distribution in the core during the whole reactor lifetime. The Phenix end-of-life tests, carried out in 2009, aim at increasing the experience feedback on sodium-cooled fast reactors. These experiments were done in the framework of the development of the 4th generation of nuclear reactors. Ten tests were carried out: six on neutronic and fuel aspects, two on thermal hydraulics, and two for the emergency shutdown. Two of them were chosen for an international exercise on thermal hydraulics and neutronics in the frame of an IAEA Coordinated Research Project. Concerning neutronics, the Control Rod Withdrawal test is relevant for safety because it allows evaluating the capability of calculation tools to compute the radial power distribution in fast reactor core configurations in which the flux field is strongly deformed. IRSN participated in this benchmark with the ERANOS code developed by CEA for fast reactor studies. This paper presents the results obtained in the framework of the benchmark activity. A relatively good agreement was found with the available measurements, considering the approximations made in the modeling. The work underlines the importance of burn-up calculations in order to have a fine mesh of core concentrations for the calculation of the power distribution. (authors)
Dynamic Positioning at Sea Using the Global Positioning System.
1987-06-01
Position data from the Global Positioning System (GPS) were acquired in Phase II of the Seafloor Benchmark Experiment on R/V Point Sur in August 1986. The Seafloor Benchmark Experiment is a project of the Hydrographic Sciences Group of the Oceanography Department at the Naval Postgraduate School (NPS).
Benchmark Evaluation of HTR-PROTEUS Pebble Bed Experimental Program
Bess, John D.; Montierth, Leland; Köberl, Oliver; ...
2014-10-09
Benchmark models were developed to evaluate 11 critical core configurations of the HTR-PROTEUS pebble bed experimental program. Various additional reactor physics measurements were performed as part of this program; currently only a total of 37 absorber rod worth measurements have been evaluated as acceptable benchmark experiments for Cores 4, 9, and 10. Dominant uncertainties in the experimental keff for all core configurations come from uncertainties in the ²³⁵U enrichment of the fuel, impurities in the moderator pebbles, and the density and impurity content of the radial reflector. Calculations of keff with MCNP5 and ENDF/B-VII.0 neutron nuclear data are greater than the benchmark values but within 1% and also within the 3σ uncertainty, except for Core 4, which is the only randomly packed pebble configuration. Repeated calculations of keff with MCNP6.1 and ENDF/B-VII.1 are lower than the benchmark values and within 1% (~3σ), except for Cores 5 and 9, which calculate lower than the benchmark eigenvalues within 4σ. The primary difference between the two nuclear data libraries is the adjustment of the absorption cross section of graphite. Simulations of the absorber rod worth measurements are within 3σ of the benchmark experiment values. The complete benchmark evaluation details are available in the 2014 edition of the International Handbook of Evaluated Reactor Physics Benchmark Experiments.
Measuring the FMCSA's safety objectives from March 2000 to September 2004.
DOT National Transportation Integrated Search
2006-01-01
The Volpe Center was requested by FMCSA to establish metrics and benchmarks against which to assess progress in attaining the FMCSA safety objectives. This was to be done objectively, emphasizing the use of SafeStat information. SafeStat (short for M...
Test Facilities and Experience on Space Nuclear System Developments at the Kurchatov Institute
NASA Astrophysics Data System (ADS)
Ponomarev-Stepnoi, Nikolai N.; Garin, Vladimir P.; Glushkov, Evgeny S.; Kompaniets, George V.; Kukharkin, Nikolai E.; Madeev, Vicktor G.; Papin, Vladimir K.; Polyakov, Dmitry N.; Stepennov, Boris S.; Tchuniyaev, Yevgeny I.; Tikhonov, Lev Ya.; Uksusov, Yevgeny I.
2004-02-01
The complexity of space fission systems and the rigidity of requirements on minimization of weight and dimension characteristics, along with the wish to decrease expenditures on their development, demand the implementation of experimental works whose results shall be used in designing, safety substantiation, and licensing procedures. Experimental facilities are intended to solve the following tasks: obtainment of benchmark data for computer code validation, substantiation of design solutions when computational efforts are too expensive, quality control in a production process, and "iron" substantiation of criticality safety design solutions for licensing and public relations. The NARCISS and ISKRA critical facilities and the unique ORM facility for shielding investigations at the operating OR nuclear research reactor were created in the Kurchatov Institute to solve the mentioned tasks. The range of activities performed at these facilities within the implementation of the previous Russian nuclear power system programs is briefly described in the paper. This experience shall be analyzed in terms of the methodological approach to development of future space nuclear systems (this analysis is beyond this paper). Because of the availability of these facilities for experiments, a brief description of their critical assemblies and characteristics is given in this paper.
Benchmarking the MCNP Monte Carlo code with a photon skyshine experiment
DOE Office of Scientific and Technical Information (OSTI.GOV)
Olsher, R.H.; Hsu, Hsiao Hua; Harvey, W.F.
1993-07-01
The MCNP Monte Carlo transport code is used by the Los Alamos National Laboratory Health and Safety Division for a broad spectrum of radiation shielding calculations. One such application involves the determination of skyshine dose for a variety of photon sources. To verify the accuracy of the code, it was benchmarked with the Kansas State Univ. (KSU) photon skyshine experiment of 1977. The KSU experiment for the unshielded source geometry was simulated in great detail to include the contribution of groundshine, in-silo photon scatter, and the effect of spectral degradation in the source capsule. The standard deviation of the KSU experimental data was stated to be 7%, while the statistical uncertainty of the simulation was kept at or under 1%. The results of the simulation agreed closely with the experimental data, generally to within 6%. At distances of under 100 m from the silo, the modeling of the in-silo scatter was crucial to achieving close agreement with the experiment. Specifically, scatter off the top layer of the source cask accounted for approximately 12% of the dose at 50 m. At distances >300 m, using the 60Co line spectrum led to a dose overresponse as great as 19% at 700 m. It was necessary to use the actual source spectrum, which includes a Compton tail from photon collisions in the source capsule, to achieve close agreement with the experimental data. These results highlight the importance of using Monte Carlo transport techniques to account for the nonideal features of even simple experiments.
ERIC Educational Resources Information Center
Ossiannilsson, E.; Landgren, L.
2012-01-01
Between 2008 and 2010, Lund University took part in three international benchmarking projects, "E-xcellence+," the "eLearning Benchmarking Exercise 2009," and the "First Dual-Mode Distance Learning Benchmarking Club." A comparison of these models revealed a rather high level of correspondence. From this finding and…
DE-NE0008277_PROTEUS final technical report 2018
DOE Office of Scientific and Technical Information (OSTI.GOV)
Enqvist, Andreas
This project details the re-evaluation of gas-cooled fast reactor (GCFR) core design experiments performed in the 1970s at the PROTEUS reactor and the creation of a series of International Reactor Physics Experiment Evaluation Project (IRPhEP) benchmarks. Currently there are no gas-cooled fast reactor (GCFR) experiments available in the International Handbook of Evaluated Reactor Physics Benchmark Experiments (IRPhEP Handbook). These experiments are excellent candidates for reanalysis and development of multiple benchmarks because they provide high-quality integral nuclear data relevant to the validation and refinement of thorium, neptunium, uranium, plutonium, iron, and graphite cross sections. It would be cost prohibitive to reproduce such a comprehensive suite of experimental data to support any future GCFR endeavors.
Benchmarking study of the MCNP code against cold critical experiments
DOE Office of Scientific and Technical Information (OSTI.GOV)
Sitaraman, S.
1991-01-01
The purpose of this study was to benchmark the widely used Monte Carlo code MCNP against a set of cold critical experiments with a view to using the code as a means of independently verifying the performance of faster but less accurate Monte Carlo and deterministic codes. The experiments simulated consisted of both fast and thermal criticals as well as fuel in a variety of chemical forms. A standard set of benchmark cold critical experiments was modeled. These included the two fast experiments, GODIVA and JEZEBEL, the TRX metallic uranium thermal experiments, the Babcock and Wilcox oxide and mixed oxide experiments, and the Oak Ridge National Laboratory (ORNL) and Pacific Northwest Laboratory (PNL) nitrate solution experiments. The principal case studied was a small critical experiment that was performed with boiling water reactor bundles.
Benchmarking of HEU Metal Annuli Critical Assemblies with Internally Reflected Graphite Cylinder
DOE Office of Scientific and Technical Information (OSTI.GOV)
Xiaobo, Liu; Bess, John D.; Marshall, Margaret A.
Three experimental configurations of critical assemblies, performed in 1963 at the Oak Ridge Critical Experiment Facility, are evaluated and benchmarked. Each was assembled from HEU metal annuli of a different size (15-9 inches, 15-7 inches, and 13-7 inches, outer-inner diameter) with an internally reflecting graphite cylinder. The experimental uncertainties, which are 0.00055 for each configuration, and the biases to the detailed benchmark models, which are -0.00179, -0.00189, and -0.00114 respectively, were determined, and experimental benchmark keff results were obtained for both the detailed and simplified models. The calculated results for both detailed and simplified models using MCNP6-1.0 and ENDF/B-VII.1 agree well with the benchmark experimental results, with differences of less than 0.2%. These are acceptable benchmark experiments for inclusion in the ICSBEP Handbook.
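A minimal sketch of how such benchmark values are typically assembled in ICSBEP evaluations: the experimental eigenvalue is adjusted by the benchmark-model bias, and independent uncertainty components combine in quadrature. The experimental keff of 1.0000, the sign convention, and the zero bias uncertainty below are assumptions for illustration; only the biases and the 0.00055 uncertainty come from the abstract.

```python
# Hedged sketch: deriving a benchmark keff from an experimental value and a
# model-simplification bias (sign convention assumed), with quadrature
# combination of independent uncertainty components.
import math

def benchmark_keff(k_exp, bias, u_exp, u_bias):
    return k_exp + bias, math.sqrt(u_exp**2 + u_bias**2)

for label, bias in [("15-9", -0.00179), ("15-7", -0.00189), ("13-7", -0.00114)]:
    k, u = benchmark_keff(k_exp=1.0000, bias=bias, u_exp=0.00055, u_bias=0.0)
    print(f"{label} in. annulus: keff = {k:.5f} +/- {u:.5f}")
```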
Hubert, D J; Ullrich, D R; Murphy, T H; Lindner, J R
2001-08-01
The purpose of this study was to gather benchmark data for the assessment of the knowledge, attitudes, and perceptions regarding agricultural safety issues and curricula held by Texas agricultural teachers with less than two full years of teaching experience (entry-year teachers). Seventy-four of 118 well-distributed teachers responded to this survey. Researchers concluded that more females were entering a traditionally male-dominated field. Overall, teachers addressed safety within units of instruction rather than as separate units. The most useful forms of new teaching resources that this group of teachers would like to see produced were safety videos and study guides, and class demonstration/simulation activities. There was a significant difference in rankings between teachers less than 26 years old and teachers more than 26 years old regarding the usefulness of transparencies as a new teaching resource (F = 5.00, p = 0.0268). Few teachers were currently CPR and first aid certified, even though most had received training and completed a general safety and/or health related course while in college. Teachers generally agreed philosophically with most practices and exhibited personal beliefs consistent with proper safety preparedness and practice in agricultural settings. However, many of these teachers failed to practice what was expected of safe tractor operators, such as wearing safety belts and allowing younger drivers to operate the equipment.
Doram, Keith; Chadwick, Whitney; Bokovoy, Joni; Profit, Jochen; Sexton, Janel D; Sexton, J Bryan
2017-02-11
Organizations that encourage the respectful expression of diverse spiritual views have higher productivity and performance, and support employees with greater organizational commitment and job satisfaction. Within healthcare, there is a paucity of studies which define or intervene on the spiritual needs of healthcare workers, or examine the effects of a pro-spirituality environment on teamwork and patient safety. Our objective was to describe a novel survey scale for evaluating spiritual climate in healthcare workers, evaluate its psychometric properties, provide benchmarking data from a large faith-based healthcare system, and investigate relationships between spiritual climate and other predictors of patient safety and job satisfaction. Cross-sectional survey study of US healthcare workers within a large, faith-based health system. A total of 7923 of 9199 eligible healthcare workers across 325 clinical areas within 16 hospitals completed our survey in 2009 (86% response rate). The spiritual climate scale exhibited good psychometric properties (internal consistency: Cronbach α = .863). On average 68% (SD 17.7) of respondents of a given clinical area expressed good spiritual climate, although assessments varied widely (14 to 100%). Spiritual climate correlated positively with teamwork climate (r = .434, p < .001) and safety climate (r = .489, p < .001). Healthcare workers reporting good spiritual climate were less likely to have intentions to leave, to be burned out, or to experience disruptive behaviors in their unit and more likely to have participated in executive rounding (p < .001 for each variable). The spiritual climate scale exhibits good psychometric properties, elicits results that vary widely by clinical area, and aligns well with other culture constructs that have been found to correlate with clinical and organizational outcomes.
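For context, the internal-consistency figure reported above (Cronbach α = .863) is computed from a respondents-by-items score matrix. The sketch below demonstrates the standard formula on simulated Likert-style data; it is not the study's data.

```python
# Hedged sketch of Cronbach's alpha on a hypothetical respondents-by-items
# matrix of survey scores (not the study's data).
import numpy as np

def cronbach_alpha(items: np.ndarray) -> float:
    """items: shape (n_respondents, k_items)."""
    k = items.shape[1]
    sum_item_vars = items.var(axis=0, ddof=1).sum()
    total_var = items.sum(axis=1).var(ddof=1)
    return (k / (k - 1)) * (1.0 - sum_item_vars / total_var)

rng = np.random.default_rng(0)
latent = rng.normal(size=(200, 1))                       # shared construct
items = latent + rng.normal(scale=0.8, size=(200, 4))    # 4 correlated items
print(f"alpha = {cronbach_alpha(items):.3f}")            # ~0.86
```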
Qureshi, Ali A; Parikh, Rajiv P; Myckatyn, Terence M; Tenenbaum, Marissa M
2016-10-01
Comprehensive aesthetic surgery education is an integral part of plastic surgery residency training. Recently, the ACGME increased minimum requirements for aesthetic procedures in residency. To expand aesthetic education and prepare residents for independent practice, our institution has supported a resident cosmetic clinic for over 25 years. To evaluate the safety of procedures performed through a resident clinic by comparing outcomes to benchmarked national aesthetic surgery outcomes and to provide a model for resident clinics in academic plastic surgery institutions. We identified a consecutive cohort of patients who underwent procedures through our resident cosmetic clinic between 2010 and 2015. Major complications, as defined by CosmetAssure database, were recorded and compared to published aesthetic surgery complication rates from the CosmetAssure database for outcomes benchmarking. Fisher's exact test was used to compare sample proportions. Two hundred and seventy-one new patients were evaluated and 112 patients (41.3%) booked surgery for 175 different aesthetic procedures. There were 55 breast, 19 head and neck, and 101 trunk or extremity aesthetic procedures performed. The median number of preoperative and postoperative visits was 2 and 4 respectively with a mean follow-up time of 35 weeks. There were 3 major complications (2 hematomas and 1 infection requiring IV antibiotics) with an overall complication rate of 1.7% compared to 2.0% for patients in the CosmetAssure database (P = .45). Surgical outcomes for procedures performed through a resident cosmetic clinic are comparable to national outcomes for aesthetic surgery procedures, suggesting this experience can enhance comprehensive aesthetic surgery education without compromising patient safety or quality of care.
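The statistical comparison described above can be reproduced in outline with a 2x2 Fisher's exact test. The resident-clinic counts (3 complications in 175 procedures) come from the abstract; the comparison cohort below is an illustrative stand-in for the CosmetAssure 2.0% rate, since the database's actual denominators are not given here.

```python
# Hedged sketch: Fisher's exact test on complication counts. The national
# cohort size is illustrative; only its 2.0% rate matches the abstract.
from scipy.stats import fisher_exact

resident = (3, 175 - 3)            # complications, procedures without
national = (200, 10_000 - 200)     # illustrative cohort at a 2.0% rate

odds_ratio, p = fisher_exact([list(resident), list(national)])
print(f"OR = {odds_ratio:.2f}, p = {p:.2f}")  # non-significant difference
```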
Implementation of Programmatic Quality and the Impact on Safety
NASA Technical Reports Server (NTRS)
Huls, Dale Thomas; Meehan, Kevin
2005-01-01
The purpose of this paper is to discuss the implementation of a programmatic quality assurance discipline within the International Space Station Program and the resulting impact on safety. NASA culture has continued to stress safety at the expense of quality when both are extremely important and both can equally influence the success or failure of a Program or Mission. Although safety was heavily criticized in the media after Columbia, a strong case can be made that it was the failure of quality processes and quality assurance in all processes that eventually led to the Columbia accident. Consequently, it is possible to have good quality processes without safety, but it is impossible to have good safety processes without quality. The ISS Program quality assurance function was analyzed as representative of the long-term manned missions that are consistent with the President's Vision for Space Exploration. Background topics are as follows: The quality assurance organizational structure within the ISS Program and the interrelationships between various internal and external organizations. ISS Program quality roles and responsibilities with respect to internal Program Offices and other external organizations such as the Shuttle Program, JSC Directorates, NASA Headquarters, NASA Contractors, other NASA Centers, and International Partners/participants will be addressed. A detailed analysis of implemented quality assurance responsibilities and functions with respect to NASA Headquarters, the JSC S&MA Directorate, and the ISS Program will be presented. Discussion topics are as follows: A comparison of quality and safety resources in terms of staffing, training, experience, and certifications. A benchmark assessment of the lessons learned from the Columbia Accident Investigation Board (CAIB) Report (and follow-up reports and assessments), NASA Benchmarking, and traditional quality assurance activities against ISS quality procedures and practices. The lack of a coherent operational and sustaining quality assurance strategy for long-term manned space flight. An analysis of the ISS waiver processes and the Problem Reporting and Corrective Action (PRACA) process implemented as quality functions. The impact of current ISS Program procedures and practices with regard to operational safety and risk. A discussion regarding a "defense-in-depth" approach to quality functions will be provided to address the issue of "integration vs. independence" with respect to the roles of Programs, NASA Centers, and NASA Headquarters. Generic recommendations are offered to address the inadequacies identified in the implementation of ISS quality assurance. A reassessment by the NASA community regarding the importance of a "quality culture" as a component within a larger "safety culture" will generate a more effective and value-added functionality that will ultimately enhance safety.
Benchmark Testing of a New 56Fe Evaluation for Criticality Safety Applications
DOE Office of Scientific and Technical Information (OSTI.GOV)
Leal, Luiz C; Ivanov, E.
2015-01-01
The SAMMY code was used to evaluate resonance parameters of the 56Fe cross section in the resolved resonance energy range of 0–2 MeV using transmission data, capture, elastic, inelastic, and double differential elastic cross sections. The resonance analysis was performed with the code SAMMY that fits R-matrix resonance parameters using the generalized least-squares technique (Bayes’ theory). The evaluation yielded a set of resonance parameters that reproduced the experimental data very well, along with a resonance parameter covariance matrix for data uncertainty calculations. Benchmark tests were conducted to assess the evaluation performance in benchmark calculations.
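For readers unfamiliar with the generalized least-squares (Bayes) update that SAMMY applies, a linearized sketch is shown below. The matrices are tiny illustrative stand-ins, not 56Fe resonance data; SAMMY's actual implementation handles nonlinearity and far larger parameter sets.

```python
# Hedged sketch of a linearized generalized-least-squares (Bayes) update:
#   P' = P + M G^T (G M G^T + V)^-1 (D - T),  M' = M - K G M
# with illustrative matrices only.
import numpy as np

P = np.array([1.0, 0.5])                 # prior resonance parameters
M = np.diag([0.04, 0.01])                # prior parameter covariance
G = np.array([[1.2, 0.3], [0.4, 0.9]])   # sensitivities dT/dP
V = np.diag([0.02, 0.02])                # experimental data covariance
D = np.array([1.35, 0.95])               # measured data
T = G @ P                                # theory prediction (linearized)

K = M @ G.T @ np.linalg.inv(G @ M @ G.T + V)  # gain matrix
P_new = P + K @ (D - T)                       # updated parameters
M_new = M - K @ G @ M                         # updated covariance
print(P_new, np.diag(M_new))
```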
Nuclear Data Needs for Generation IV Nuclear Energy Systems
NASA Astrophysics Data System (ADS)
Rullhusen, Peter
2006-04-01
Nuclear data needs for generation IV systems. Future of nuclear energy and the role of nuclear data / P. Finck. Nuclear data needs for generation IV nuclear energy systems-summary of U.S. workshop / T. A. Taiwo, H. S. Khalil. Nuclear data needs for the assessment of gen. IV systems / G. Rimpault. Nuclear data needs for generation IV-lessons from benchmarks / S. C. van der Marck, A. Hogenbirk, M. C. Duijvestijn. Core design issues of the supercritical water fast reactor / M. Mori ... [et al.]. GFR core neutronics studies at CEA / J. C. Bosq ... [et al]. Comparative study on different phonon frequency spectra of graphite in GCR / Young-Sik Cho ... [et al.]. Innovative fuel types for minor actinides transmutation / D. Haas, A. Fernandez, J. Somers. The importance of nuclear data in modeling and designing generation IV fast reactors / K. D. Weaver. The GIF and Mexico-"everything is possible" / C. Arrenondo Sánchez -- Benchmarks, sensitivity calculations, uncertainties. Sensitivity of advanced reactor and fuel cycle performance parameters to nuclear data uncertainties / G. Aliberti ... [et al.]. Sensitivity and uncertainty study for thermal molten salt reactors / A. Biduad ... [et al.]. Integral reactor physics benchmarks - The International Criticality Safety Benchmark Evaluation Project (ICSBEP) and the International Reactor Physics Experiment Evaluation Project (IRPhEP) / J. B. Briggs, D. W. Nigg, E. Sartori. Computer model of an error propagation through micro-campaign of fast neutron gas cooled nuclear reactor / E. Ivanov. Combining differential and integral experiments on [symbol] for reducing uncertainties in nuclear data applications / T. Kawano ... [et al.]. Sensitivity of activation cross sections of the Hafnium, Tantalum and Tungsten stable isotopes to nuclear reaction mechanisms / V. Avrigeanu ... [et al.]. Generating covariance data with nuclear models / A. J. Koning. Sensitivity of Candu-SCWR reactors physics calculations to nuclear data files / K. S. Kozier, G. R. Dyck. The lead cooled fast reactor benchmark BREST-300: analysis with sensitivity method / V. Smirnov ... [et al.]. Sensitivity analysis of neutron cross-sections considered for design and safety studies of LFR and SFR generation IV systems / K. Tucek, J. Carlsson, H. Wider -- Experiments. INL capabilities for nuclear data measurements using the Argonne intense pulsed neutron source facility / J. D. Cole ... [et al.]. Cross-section measurements in the fast neutron energy range / A. Plompen. Recent measurements of neutron capture cross sections for minor actinides by a JNC and Kyoto University Group / H. Harada ... [et al.]. Determination of minor actinides fission cross sections by means of transfer reactions / M. Aiche ... [et al.] -- Evaluated data libraries. Nuclear data services from the NEA / H. Henriksson, Y. Rugama. Nuclear databases for energy applications: an IAEA perspective / R. Capote Noy, A. L. Nichols, A. Trkov. Nuclear data evaluation for generation IV / G. Noguère ... [et al.]. Improved evaluations of neutron-induced reactions on americium isotopes / P. Talou ... [et al.]. Using improved ENDF-based nuclear data for candu reactor calculations / J. Prodea. A comparative study on the graphite-moderated reactors using different evaluated nuclear data / Do Heon Kim ... [et al.].
Benchmarking Multilayer-HySEA model for landslide generated tsunami. NTHMP validation process.
NASA Astrophysics Data System (ADS)
Macias, J.; Escalante, C.; Castro, M. J.
2017-12-01
Landslide tsunami hazard may be dominant along significant parts of the coastline around the world, in particular in the USA, as compared to hazards from other tsunamigenic sources. This fact motivated NTHMP to call for the benchmarking of models for landslide generated tsunamis, following the same methodology already used for standard tsunami models when the source is seismic. To perform the above-mentioned validation process, a set of candidate benchmarks was proposed. These benchmarks are based on a subset of available laboratory data sets for solid slide experiments and deformable slide experiments, and include both submarine and subaerial slides. A benchmark based on a historic field event (Valdez, AK, 1964) closes the list of proposed benchmarks, for a total of seven benchmarks. The Multilayer-HySEA model, including non-hydrostatic effects, has been used to perform all the benchmarking problems dealing with laboratory experiments proposed in the workshop organized at Texas A&M University - Galveston on January 9-11, 2017 by NTHMP. The aim of this presentation is to show some of the latest numerical results obtained with the Multilayer-HySEA (non-hydrostatic) model in the framework of this validation effort. Acknowledgements: This research has been partially supported by the Spanish Government Research project SIMURISK (MTM2015-70490-C02-01-R) and University of Malaga, Campus de Excelencia Internacional Andalucía Tech. The GPU computations were performed at the Unit of Numerical Methods (University of Malaga).
Collegiate Aviation Research and Education Solutions to Critical Safety Issues
NASA Technical Reports Server (NTRS)
Bowen, Brent (Editor)
2002-01-01
This Conference Proceedings is a collection of 6 abstracts and 3 papers presented April 19-20, 2001 in Denver, CO. The conference focus was "Best Practices and Benchmarking in Collegiate and Industry Programs". Topics covered include: satellite-based aviation navigation; weather safety training; human-behavior and aircraft maintenance issues; disaster preparedness; the collegiate aviation emergency response checklist; aviation safety research; and regulatory status of maintenance resource management.
Using Smart Pumps to Understand and Evaluate Clinician Practice Patterns to Ensure Patient Safety
Mansfield, Jennifer; Jarrett, Steven
2013-01-01
Background: Safety software installed on intravenous (IV) infusion pumps has been shown to positively impact the quality of patient care through avoidance of medication errors. The data derived from the use of smart pumps are often overlooked, although these data provide helpful insight into the delivery of quality patient care. Objective: The objectives of this report are to describe the value of implementing IV infusion safety software and analyzing the data and reports generated by this system. Case study: Based on experience at the Carolinas HealthCare System (CHS), executive score cards provide an aggregate view of compliance rate, number of alerts, overrides, and edits. The report of serious errors averted (ie, critical catches) supplies the location, date, and time of the critical catch, thereby enabling management to pinpoint the end-user for educational purposes. By examining the number of critical catches, a return on investment may be calculated. Assuming 3,328 of these events each year, an estimated cost avoidance would be $29,120,000 per year for CHS. Other reports allow benchmarking between institutions. Conclusion: A review of the data about medication safety across CHS has helped garner support for a medication safety officer position with the goal of ultimately creating a safer environment for the patient. PMID:24474836
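As a check on the cost-avoidance arithmetic quoted above, the implied cost per averted serious error can be back-calculated from the two figures given; the abstract does not state this unit cost explicitly.

```python
# Hedged check of the cost-avoidance arithmetic quoted above. The implied
# per-catch cost is back-calculated from the two published figures.
critical_catches_per_year = 3_328
annual_cost_avoidance = 29_120_000   # dollars per year, as quoted

implied_cost_per_catch = annual_cost_avoidance / critical_catches_per_year
print(f"Implied cost per critical catch: ${implied_cost_per_catch:,.0f}")  # $8,750
```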
The art and science of using routine outcome measurement in mental health benchmarking.
McKay, Roderick; Coombs, Tim; Duerden, David
2014-02-01
To report and critique the application of routine outcome measurement data when benchmarking Australian mental health services. The experience of the authors as participants and facilitators of benchmarking activities is augmented by a review of the literature regarding mental health benchmarking in Australia. Although the published literature is limited, in practice, routine outcome measures, in particular the Health of the Nation Outcome Scales (HoNOS) family of measures, are used in a variety of benchmarking activities. Their use in exploring similarities and differences in consumers between services, and in the outcomes of care, is illustrated. This requires the rigour of science in data management and interpretation, supplemented by the art that comes from clinical experience, a desire to reflect on clinical practice and the flexibility to use incomplete data to explore clinical practice. Routine outcome measurement data can be used in a variety of ways to support mental health benchmarking. With the increasing sophistication of information development in mental health, the opportunity to become involved in benchmarking will continue to increase. The techniques used during benchmarking and the insights gathered may prove useful to support reflection on practice by psychiatrists and other senior mental health clinicians.
Evaluation of Neutron Radiography Reactor LEU-Core Start-Up Measurements
Bess, John D.; Maddock, Thomas L.; Smolinski, Andrew T.; ...
2014-11-04
Benchmark models were developed to evaluate the cold-critical start-up measurements performed during the fresh core reload of the Neutron Radiography (NRAD) reactor with Low Enriched Uranium (LEU) fuel. Experiments include criticality, control-rod worth measurements, shutdown margin, and excess reactivity for four core loadings with 56, 60, 62, and 64 fuel elements. The worth of four graphite reflector block assemblies and an empty dry tube used for experiment irradiations was also measured and evaluated for the 60-fuel-element core configuration. Dominant uncertainties in the experimental keff come from uncertainties in the manganese content and impurities in the stainless steel fuel cladding, as well as the 236U and erbium poison content in the fuel matrix. Calculations with MCNP5 and ENDF/B-VII.0 neutron nuclear data are approximately 1.4% (9σ) greater than the benchmark model eigenvalues, which is commonly seen in Monte Carlo simulations of other TRIGA reactors. Simulations of the worth measurements are within the 2σ uncertainty for most of the benchmark experiment worth values. The complete benchmark evaluation details are available in the 2014 edition of the International Handbook of Evaluated Reactor Physics Benchmark Experiments.
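For readers unfamiliar with how such a bias is quoted, a minimal sketch of the eigenvalue comparison follows; all numbers are placeholders rather than values from the NRAD evaluation:

```python
# Illustrative comparison of a calculated eigenvalue against a benchmark
# model eigenvalue; the numbers below are placeholders, not NRAD values.

k_benchmark = 1.0000      # benchmark-model eigenvalue (assumed)
sigma_benchmark = 0.0016  # 1-sigma benchmark uncertainty (assumed)
k_calculated = 1.0140     # e.g., an MCNP5 + ENDF/B-VII.0 result (assumed)

bias = (k_calculated - k_benchmark) / k_benchmark     # relative bias
n_sigma = (k_calculated - k_benchmark) / sigma_benchmark

print(f"Relative bias: {bias:.2%} ({n_sigma:.1f} sigma)")
# Relative bias: 1.40% (8.8 sigma)
```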
Benchmark Evaluation of Dounreay Prototype Fast Reactor Minor Actinide Depletion Measurements
DOE Office of Scientific and Technical Information (OSTI.GOV)
Bess, J. D.; Gauld, I. C.; Gulliford, J.
2017-01-01
Historic measurements of actinide samples in the Dounreay Prototype Fast Reactor (PFR) are of interest for modern nuclear data and simulation validation. Samples of various higher-actinide isotopes were irradiated for 492 effective full-power days and radiochemically assayed at Oak Ridge National Laboratory (ORNL) and the Japan Atomic Energy Research Institute (JAERI). Limited data were available regarding the PFR irradiation; a six-group neutron spectrum was available with some power history data to support a burnup depletion analysis validation study. Under the guidance of the Organisation for Economic Co-operation and Development Nuclear Energy Agency (OECD NEA), the International Reactor Physics Experiment Evaluation Project (IRPhEP) and the Spent Fuel Isotopic Composition (SFCOMPO) Project are collaborating to recover all data pertaining to these measurements, including collaboration with the United Kingdom to obtain pertinent reactor physics design and operational history data. These activities will produce internationally peer-reviewed benchmark data to support validation of minor actinide cross section data and modern neutronic simulation of fast reactors, with accompanying fuel cycle activities such as transportation, recycling, storage, and criticality safety.
Pooling knowledge and improving safety for contracted works at a large industrial park.
Agnello, Patrizia; Ansaldi, Silvia; Bragatto, Paolo
2015-01-01
At a large chemical park, maintenance is contracted by the major companies operating the plants to many small firms. The cultural and psychological isolation of contractor workers has been recognized as a root cause of severe accidents in recent years, a problem common across the chemical industry. Knowledge sharing has been adopted as a key means of involving contractors and subcontractors in the safety culture and of contributing to injury prevention. The selection of personal protective equipment (PPE) for maintenance works has been taken as a benchmark to demonstrate the adequacy of the proposed approach. A method has been developed to support plant operators, contractors, and subcontractors in discussing PPE. Its core is a knowledge base, organized as an ontology, suitable for inferring decisions. By means of this tool, all stakeholders have merged their experience and information to find the right PPE, to be provided with an adequate training and information package. PPE selection requires sound competencies regarding process and environmental hazards, including major accidents, preventive and protective measures, and maintenance activities. These pieces of knowledge, previously fragmented among plant operators and contractors, had to be pooled and used to identify the adequate PPE for a number of maintenance works. PPE selection is important in itself, but it is also a good opportunity to break the contractors' isolation and involve them in safety objectives. Thus, by pooling experience and practical knowledge, the common understanding of safety issues has been strengthened.
Federal Register 2010, 2011, 2012, 2013, 2014
2010-12-06
... and facilitate the use of documentation in future evaluations and benchmarking. Extraordinary.... Benchmarking Other Agencies' Experiences A Federal agency cannot rely on another agency's categorical exclusion... was established. Federal agencies can also substantiate categorical exclusions by benchmarking, or...
NASA Astrophysics Data System (ADS)
Murata, Isao; Ohta, Masayuki; Miyamaru, Hiroyuki; Kondo, Keitaro; Yoshida, Shigeo; Iida, Toshiyuki; Ochiai, Kentaro; Konno, Chikara
2011-10-01
Nuclear data are indispensable for the development of fusion reactor candidate materials. However, benchmarking of the nuclear data in the MeV energy region is not yet adequate. In the present study, benchmark performance in the MeV energy region was investigated theoretically for experiments using a 14 MeV neutron source. We carried out a systematic analysis for light to heavy materials. As a result, the benchmark performance for the neutron spectrum was confirmed to be acceptable, while for gamma-rays it was not sufficiently accurate. Consequently, a spectrum shifter has to be applied, and beryllium had the best performance as a shifter. Moreover, a preliminary examination was carried out of whether it is really acceptable that only the spectrum before the last collision is considered in the benchmark performance analysis. It was pointed out that not only the last collision but also earlier collisions should be considered equally in the benchmark performance analysis.
Ellis, Judith
2006-07-01
The aim of this article is to review published descriptions of benchmarking activity and synthesize benchmarking principles, to encourage the acceptance and use of Essence of Care as a new benchmarking approach to continuous quality improvement, and to promote its acceptance as an integral and effective part of benchmarking activity in health services. Essence of Care was launched by the Department of Health in England in 2001 to provide a benchmarking tool kit to support continuous improvement in the quality of fundamental aspects of health care, for example, privacy and dignity, nutrition and hygiene. The tool kit is now being used effectively by some frontline staff. However, use is inconsistent, and the value of the tool kit, or the support that clinical practice benchmarking requires to be effective, is not always recognized or provided by National Health Service managers, who are absorbed with quantitative benchmarking approaches and the measurability of comparative performance data. This review of the published benchmarking literature was conducted through an ever-narrowing search strategy, commencing with benchmarking within the quality improvement literature, moving to benchmarking activity in health services, and including not only published examples of benchmarking approaches and models but also web-based benchmarking data. This supported identification of how benchmarking approaches have developed and been used while remaining true to the basic benchmarking principles of continuous improvement through comparison and sharing (Camp 1989). Descriptions of models and exemplars of quantitative, and specifically performance, benchmarking activity in industry abound (Camp 1998), with far fewer examples of more qualitative and process benchmarking approaches in use in the public services and applied to the health service (Bullivant 1998). The literature is also mainly descriptive in its support of the effectiveness of benchmarking activity; although this does not seem to have restricted its popularity in quantitative activity, reticence about the value of the more qualitative approaches, for example Essence of Care, needs to be overcome in order to improve the quality of patient care and experiences. The perceived immeasurability and subjectivity of Essence of Care and clinical practice benchmarks mean that these benchmarking approaches are not always accepted or supported by health service organizations as valid benchmarking activity. In conclusion, Essence of Care benchmarking is a sophisticated clinical practice benchmarking approach which needs to be accepted as an integral part of health service benchmarking activity to support improvement in the quality of patient care and experiences.
Readiness for First Crewed Flight
NASA Technical Reports Server (NTRS)
Schaible, Dawn M.
2011-01-01
The NASA Engineering and Safety Center (NESC) was requested to develop a generic framework for evaluating whether any given program has sufficiently complete and balanced plans in place to allow crewmembers to fly safely on a human spaceflight system for the first time (i.e., first crewed flight). The NESC assembled a small team which included experts with experience developing robotic and human spaceflight and aviation systems through first crewed test flight and into operational capability. The NESC team conducted a historical review of the steps leading up to the first crewed flights of Mercury through the Space Shuttle. Benchmarking was also conducted with the United States (U.S.) Air Force and U.S. Navy. This report contains documentation of that review.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Marshall, William B. J.; Rearden, Bradley T.
The validation of neutron transport methods used in nuclear criticality safety analyses is required by consensus American National Standards Institute/American Nuclear Society (ANSI/ANS) standards. In the last decade, there has been an increased interest in correlations among critical experiments used in validation that have shared physical attributes and which impact the independence of each measurement. The statistical methods included in many of the frequently cited guidance documents on performing validation calculations incorporate the assumption that all individual measurements are independent, so little guidance is available to practitioners on the topic. Typical guidance includes recommendations to select experiments from multiple facilities and experiment series in an attempt to minimize the impact of correlations or common-cause errors in experiments. Recent efforts have been made both to determine the magnitude of such correlations between experiments and to develop and apply methods for adjusting the bias and bias uncertainty to account for the correlations. This paper describes recent work performed at Oak Ridge National Laboratory using the Sampler sequence from the SCALE code system to develop experimental correlations using a Monte Carlo sampling technique. Sampler will be available for the first time with the release of SCALE 6.2, and a brief introduction to the methods used to calculate experiment correlations within this new sequence is presented in this paper. Techniques to utilize these correlations in the establishment of upper subcritical limits are the subject of a companion paper and will not be discussed here. Example experimental uncertainties and correlation coefficients are presented for a variety of low-enriched uranium water-moderated lattice experiments selected for use in a benchmark exercise by the Working Party on Nuclear Criticality Safety Subgroup on Uncertainty Analysis in Criticality Safety Analyses. The results include studies on the effect of fuel rod pitch on the correlations, and some observations are also made regarding difficulties in determining experimental correlations using the Monte Carlo sampling technique.
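As a rough illustration of how Monte Carlo sampling of shared inputs induces correlation between benchmark keff values (a toy model, not the Sampler sequence itself):

```python
import numpy as np

# Toy illustration of deriving a correlation coefficient between two
# benchmark keff values by sampling shared and independent uncertain
# parameters. All magnitudes below are made up for illustration.

rng = np.random.default_rng(42)
n = 10_000

shared = rng.normal(0.0, 0.003, n)   # e.g., a common fuel-composition error
indep1 = rng.normal(0.0, 0.002, n)   # experiment-specific errors
indep2 = rng.normal(0.0, 0.002, n)

k1 = 1.0 + shared + indep1           # sampled keff, experiment 1
k2 = 1.0 + shared + indep2           # sampled keff, experiment 2

print(f"correlation coefficient: {np.corrcoef(k1, k2)[0, 1]:.2f}")
# Expected value ~ 0.003^2 / (0.003^2 + 0.002^2) = 0.69
```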
Renner, Franziska
2016-09-01
Monte Carlo simulations are regarded as the most accurate method of solving complex problems in the field of dosimetry and radiation transport. In (external) radiation therapy they are increasingly used for the calculation of dose distributions during treatment planning. In comparison to other algorithms for the calculation of dose distributions, Monte Carlo methods have the capability of improving the accuracy of dose calculations - especially under complex circumstances (e.g. consideration of inhomogeneities). However, there is a lack of knowledge of how accurate the results of Monte Carlo calculations are on an absolute basis. A practical verification of the calculations can be performed by direct comparison with the results of a benchmark experiment. This work presents such a benchmark experiment and compares its results (with detailed consideration of measurement uncertainty) with the results of Monte Carlo calculations using the well-established Monte Carlo code EGSnrc. The experiment was designed to have parallels to external beam radiation therapy with respect to the type and energy of the radiation, the materials used and the kind of dose measurement. Because the properties of the beam have to be well known in order to compare the results of the experiment and the simulation on an absolute basis, the benchmark experiment was performed using the research electron accelerator of the Physikalisch-Technische Bundesanstalt (PTB), whose beam was accurately characterized in advance. The benchmark experiment and the corresponding Monte Carlo simulations were carried out for two different types of ionization chambers and the results were compared. Considering the uncertainty, which is about 0.7 % for the experimental values and about 1.0 % for the Monte Carlo simulation, the results of the simulation and the experiment coincide. Copyright © 2015. Published by Elsevier GmbH.
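One common way to state such agreement quantitatively is a normalized-error check; here is a sketch using the relative uncertainties quoted in the abstract, with placeholder dose values and 1-sigma uncertainties (the paper's own acceptance criterion may differ):

```python
import math

# Normalized-error check of experiment vs. simulation, a common agreement
# test when both values carry stated uncertainties. Dose values are
# placeholders; the 0.7 % and 1.0 % relative uncertainties come from the
# abstract above.

d_experiment, u_exp_rel = 1.000, 0.007   # measured dose (arbitrary units)
d_simulation, u_sim_rel = 0.995, 0.010   # EGSnrc result (assumed value)

u_combined = math.hypot(d_experiment * u_exp_rel, d_simulation * u_sim_rel)
e_n = (d_experiment - d_simulation) / u_combined

print(f"En = {e_n:.2f} -> {'consistent' if abs(e_n) <= 1 else 'discrepant'}")
```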
Object-Oriented Implementation of the NAS Parallel Benchmarks using Charm++
NASA Technical Reports Server (NTRS)
Krishnan, Sanjeev; Bhandarkar, Milind; Kale, Laxmikant V.
1996-01-01
This report describes experiences with implementing the NAS Computational Fluid Dynamics benchmarks using a parallel object-oriented language, Charm++. Our main objective in implementing the NAS CFD kernel benchmarks was to develop a code that could be used to easily experiment with different domain decomposition strategies and dynamic load balancing. We also wished to leverage the object-orientation provided by the Charm++ parallel object-oriented language, to develop reusable abstractions that would simplify the process of developing parallel applications. We first describe the Charm++ parallel programming model and the parallel object array abstraction, then go into detail about each of the Scalar Pentadiagonal (SP) and Lower/Upper Triangular (LU) benchmarks, along with performance results. Finally we conclude with an evaluation of the methodology used.
Benchmarking in Czech Higher Education: The Case of Schools of Economics
ERIC Educational Resources Information Center
Placek, Michal; Ochrana, František; Pucek, Milan
2015-01-01
This article describes the use of benchmarking in universities in the Czech Republic and academics' experiences with it. It is based on research conducted among academics from economics schools in Czech public and private universities. The results identified several issues regarding the utilisation and understanding of benchmarking in the Czech…
The Schultz MIDI Benchmarking Toolbox for MIDI interfaces, percussion pads, and sound cards.
Schultz, Benjamin G
2018-04-17
The Musical Instrument Digital Interface (MIDI) was readily adopted for auditory sensorimotor synchronization experiments. These experiments typically use MIDI percussion pads to collect responses, a MIDI-USB converter (or MIDI-PCI interface) to record responses on a PC and manipulate feedback, and an external MIDI sound module to generate auditory feedback. Previous studies have suggested that auditory feedback latencies can be introduced by these devices. The Schultz MIDI Benchmarking Toolbox (SMIDIBT) is an open-source, Arduino-based package designed to measure the point-to-point latencies incurred by several devices used in the generation of response-triggered auditory feedback. Experiment 1 showed that MIDI messages are sent and received within 1 ms (on average) in the absence of any external MIDI device. Latencies decreased when the baud rate increased above the MIDI protocol default (31,250 bps). Experiment 2 benchmarked the latencies introduced by different MIDI-USB and MIDI-PCI interfaces. MIDI-PCI was superior to MIDI-USB, primarily because MIDI-USB is subject to USB polling. Experiment 3 tested three MIDI percussion pads. Both the audio and MIDI message latencies were significantly greater than 1 ms for all devices, and there were significant differences between percussion pads and instrument patches. Experiment 4 benchmarked four MIDI sound modules. Audio latencies were significantly greater than 1 ms, and there were significant differences between sound modules and instrument patches. These experiments suggest that millisecond accuracy might not be achievable with MIDI devices. The SMIDIBT can be used to benchmark a range of MIDI devices, thus allowing researchers to make informed decisions when choosing testing materials and to arrive at an acceptable latency at their discretion.
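The SMIDIBT itself is Arduino-based; purely as an illustration of the post-processing, point-to-point latencies might be summarized from paired send/receive timestamps like this (the data are made up, not SMIDIBT measurements):

```python
import statistics

# Illustrative post-processing of point-to-point latency measurements:
# paired (sent, received) timestamps in milliseconds, as a benchmarking
# run might log them.

sent     = [0.0, 10.0, 20.0, 30.0, 40.0]
received = [1.1, 11.0, 21.3, 30.9, 41.2]

latencies = [r - s for s, r in zip(sent, received)]
print(f"mean = {statistics.mean(latencies):.2f} ms, "
      f"max = {max(latencies):.2f} ms, "
      f"sd = {statistics.stdev(latencies):.2f} ms")
```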
The Joint Convention - Its Structure, the Articles and its Administration
DOE Office of Scientific and Technical Information (OSTI.GOV)
Metcalf, P.; Louvat, D.
The objective of the Joint Convention on the Safety of Spent Fuel Management and on the Safety of Radioactive Waste Management (the Joint Convention) is to achieve a high level of safety worldwide in the management of spent nuclear fuel and radioactive waste [1]. It is an incentive convention designed to encourage and assist countries to achieve that objective. Contracting Parties to the Joint Convention are required to compile and submit a national report on how they meet the articles of the Joint Convention. The reports are peer reviewed by other Contracting Parties to the Joint Convention, and countries then have to defend the report at a review meeting of all the Contracting Parties. The process entails both a self-appraisal in compiling the report and independent international peer review. Summaries are compiled of the various reviews and presented in plenary, with a view to identifying generic issues and areas in which countries are improving safety or have identified for further development. The process also presents an opportunity for the countries involved to benchmark their national spent fuel and radioactive waste safety programmes against prevailing international practice. The paper elaborates the detailed elements involved and discusses the experience from the first review meeting of Contracting Parties, and issues envisaged for consideration at the second review meeting scheduled for May 2006. (authors)
Development and Testing of Neutron Cross Section Covariance Data for SCALE 6.2
DOE Office of Scientific and Technical Information (OSTI.GOV)
Marshall, William B. J.; Williams, Mark L.; Wiarda, Dorothea
2015-01-01
Neutron cross-section covariance data are essential for many sensitivity/uncertainty and uncertainty quantification assessments performed both within the TSUNAMI suite and more broadly throughout the SCALE code system. The release of ENDF/B-VII.1 included a more complete set of neutron cross-section covariance data; these data form the basis for a new cross-section covariance library to be released in SCALE 6.2. A range of testing is conducted to investigate the properties of these covariance data and ensure that the data are reasonable. These tests include examination of the uncertainty in critical experiment benchmark model keff values due to nuclear data uncertainties, as well as similarity assessments of irradiated pressurized water reactor (PWR) and boiling water reactor (BWR) fuel with suites of critical experiments. The contents of the new covariance library, the testing performed, and the behavior of the new covariance data are described in this paper. The neutron cross-section covariances can be combined with a sensitivity data file generated using the TSUNAMI suite of codes within SCALE to determine the uncertainty in system keff caused by nuclear data uncertainties. The Verified, Archived Library of Inputs and Data (VALID) maintained at Oak Ridge National Laboratory (ORNL) contains over 400 critical experiment benchmark models, and sensitivity data are generated for each of these models. The nuclear data uncertainty in keff is generated for each experiment, and the resulting uncertainties are tabulated and compared to the differences in measured and calculated results. The magnitude of the uncertainty for categories of nuclides (such as actinides, fission products, and structural materials) is calculated for irradiated PWR and BWR fuel to quantify the effect of covariance library changes between the SCALE 6.1 and 6.2 libraries. One of the primary applications of sensitivity/uncertainty methods within SCALE is the assessment of similarities between benchmark experiments and safety applications. This is described by a ck value for each experiment with each application. Several studies have analyzed typical ck values for a range of critical experiments compared with hypothetical irradiated fuel applications. The ck value is sensitive to the cross-section covariance data because the contribution of each nuclide is influenced by its uncertainty; large uncertainties indicate more likely bias sources and are thus given more weight. Changes in ck values resulting from different covariance data can be used to examine and assess underlying data changes. These comparisons are performed for PWR and BWR fuel in storage and transportation systems.
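The uncertainty propagation and ck similarity index described here follow the standard sandwich rule; a toy numpy sketch of the formulas (inputs are illustrative, not SCALE covariance data):

```python
import numpy as np

# Sandwich-rule sketch of the sensitivity/uncertainty quantities above:
#   var(keff) = S^T C S,  and the similarity index
#   ck = S_app^T C S_exp / (sigma_app * sigma_exp).
# The sensitivity vectors S and covariance C are tiny toy inputs.

C = np.array([[4.0e-6, 1.0e-6, 0.0],
              [1.0e-6, 9.0e-6, 0.0],
              [0.0,    0.0,    1.0e-6]])  # relative covariance (toy)

S_app = np.array([0.30, 0.10, 0.05])  # application sensitivities (toy)
S_exp = np.array([0.28, 0.12, 0.00])  # experiment sensitivities (toy)

sigma_app = np.sqrt(S_app @ C @ S_app)  # nuclear-data uncertainty in keff
sigma_exp = np.sqrt(S_exp @ C @ S_exp)
ck = (S_app @ C @ S_exp) / (sigma_app * sigma_exp)

print(f"sigma_app = {sigma_app:.2e}, sigma_exp = {sigma_exp:.2e}, ck = {ck:.3f}")
```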
An overview of the ENEA activities in the field of coupled codes NPP simulation
DOE Office of Scientific and Technical Information (OSTI.GOV)
Parisi, C.; Negrenti, E.; Sepielli, M.
2012-07-01
In the framework of nuclear research activities in the fields of safety, training, and education, ENEA (the Italian National Agency for New Technologies, Energy and Sustainable Economic Development) is in charge of defining and pursuing all the necessary steps for the development of an NPP engineering simulator at the 'Casaccia' Research Center near Rome. A summary of the activities in the field of nuclear power plant simulation by coupled codes is presented here, together with the long-term strategy for the engineering simulator development. Specifically, results from participation in international benchmarking activities, such as the OECD/NEA 'Kalinin-3' benchmark and the 'AER-DYN-002' benchmark, together with simulations of relevant events such as the Fukushima accident, are reported. The ultimate goal of these activities, performed using state-of-the-art technology, is the re-establishment of top-level competencies in the NPP simulation field in order to facilitate the development of enhanced engineering simulators and to upgrade competencies for supporting national energy strategy decisions, the national nuclear safety authority, and R&D activities on NPP designs. (authors)
Scaglione, John M.; Mueller, Don E.; Wagner, John C.
2014-12-01
One of the most important remaining challenges associated with expanded implementation of burnup credit in the United States is the validation of depletion and criticality calculations used in the safety evaluation, in particular the availability and use of applicable measured data to support validation, especially for fission products (FPs). Applicants and regulatory reviewers have been constrained by both a scarcity of data and a lack of a clear technical basis or approach for use of the data. This paper describes a validation approach for commercial spent nuclear fuel (SNF) criticality safety (keff) evaluations based on best-available data and methods and applies the approach to representative SNF storage and transport configurations/conditions to demonstrate its usage and applicability, as well as to provide reference bias results. The criticality validation approach utilizes not only available laboratory critical experiment (LCE) data from the International Handbook of Evaluated Criticality Safety Benchmark Experiments and the French Haut Taux de Combustion program to support validation of the principal actinides, but also calculated sensitivities, nuclear data uncertainties, and limited available FP LCE data to predict and verify individual biases for relevant minor actinides and FPs. The results demonstrate that (a) sufficient critical experiment data exist to adequately validate keff calculations via conventional validation approaches for the primary actinides, (b) sensitivity-based critical experiment selection is more appropriate for generating accurate application model bias and uncertainty, and (c) calculated sensitivities and nuclear data uncertainties can be used for generating conservative estimates of bias for minor actinides and FPs. Results based on SCALE 6.1 and the ENDF/B-VII.0 cross-section libraries indicate that a conservative estimate of the bias for the minor actinides and FPs is 1.5% of their worth within the application model. Finally, this paper provides a detailed description of the approach and its technical bases, describes the application of the approach for representative pressurized water reactor and boiling water reactor safety analysis models, and provides reference bias results based on the prerelease SCALE 6.1 code package and ENDF/B-VII nuclear cross-section data.
High-energy neutron depth-dose distribution experiment.
Ferenci, M S; Hertel, N E
2003-01-01
A unique set of high-energy neutron depth-dose benchmark experiments was performed at the Los Alamos Neutron Science Center/Weapons Neutron Research (LANSCE/WNR) complex. The experiments consisted of filtered neutron beams with energies up to 800 MeV impinging on a 30 × 30 × 30 cm³ liquid, tissue-equivalent phantom. The absorbed dose was measured in the phantom at various depths with tissue-equivalent ion chambers. This experiment is intended to serve as a benchmark for the testing of high-energy radiation transport codes for the international radiation protection community.
ERIC Educational Resources Information Center
Sampson, K. A.; Johnston, L.; Comer, K.; Brogt, E.
2016-01-01
Summative and benchmarking surveys to measure the postgraduate student research experience are well reported in the literature. While useful, we argue that local instruments that provide formative resources with an academic development focus are also required. If higher education institutions are to move beyond the identification of issues and…
A performance improvement plan to increase nurse adherence to use of medication safety software.
Gavriloff, Carrie
2012-08-01
Nurses can protect patients receiving intravenous (IV) medication by using medication safety software to program "smart" pumps to administer IV medications. After a patient safety event identified inconsistent use of medication safety software by nurses, a performance improvement team implemented the Deming Cycle performance improvement methodology. The combined use of improved direct care nurse communication, programming strategies, staff education, medication safety champions, adherence monitoring, and technology acquisition resulted in a statistically significant (p < .001) increase in nurse adherence to using medication safety software from 28% to above 85%, exceeding national benchmark adherence rates (Cohen, Cooke, Husch & Woodley, 2007; Carefusion, 2011). Copyright © 2012 Elsevier Inc. All rights reserved.
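As a quick plausibility check of the reported significance, a two-proportion z-test can be run; the abstract gives only the percentages, so the sample sizes below are assumptions:

```python
import math

# Two-proportion z-test sketch for the adherence change reported above.
# Counts are assumptions; the abstract reports only 28% -> 85%.

x1, n1 = 280, 1000   # adherent infusions before (assumed counts)
x2, n2 = 850, 1000   # adherent infusions after  (assumed counts)

p1, p2 = x1 / n1, x2 / n2
p_pool = (x1 + x2) / (n1 + n2)
se = math.sqrt(p_pool * (1 - p_pool) * (1 / n1 + 1 / n2))
z = (p2 - p1) / se
print(f"z = {z:.1f}")  # |z| >> 3.29 corresponds to p < .001 (two-sided)
```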
Bertzbach, F; Franz, T; Möller, K
2012-01-01
This paper shows the performance improvements that have been achieved in benchmarking projects in the wastewater industry in Germany over the last 15 years. A large number of changes in operational practice, and in the annual savings achieved, can be shown, induced in particular by benchmarking at the process level. Investigation of this question yields some general findings on the inclusion of performance improvement in a benchmarking project and on the communication of its results. Thus, we elaborate on the concept of benchmarking at both utility and process level, which remains a necessary distinction for the integration of performance improvement into our benchmarking approach. To achieve performance improvement via benchmarking, it should be made clear that this outcome depends, on the one hand, on a well-conducted benchmarking programme and, on the other, on the individual situation within each participating utility.
BENCHMARK DOSE TECHNICAL GUIDANCE DOCUMENT ...
The purpose of this document is to provide guidance for the Agency on the application of the benchmark dose approach in determining the point of departure (POD) for health effects data, whether a linear or nonlinear low dose extrapolation is used. The guidance includes discussion on computation of benchmark doses and benchmark concentrations (BMDs and BMCs) and their lower confidence limits, data requirements, dose-response analysis, and reporting requirements. This guidance is based on today's knowledge and understanding, and on experience gained in using this approach.
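For a quantal dose-response model with a closed-form inverse, the benchmark dose follows directly from the fitted parameters; here is a sketch under an assumed one-parameter quantal-linear model (the guidance itself covers a family of models, and all parameter values below are illustrative):

```python
import math

# BMD sketch for an assumed quantal-linear dose-response model,
#   P(d) = g + (1 - g) * (1 - exp(-b * d)),
# where extra risk = (P(d) - g) / (1 - g) = 1 - exp(-b * d).
# Setting extra risk equal to the benchmark response (BMR) gives
#   BMD = -ln(1 - BMR) / b.  Parameter values are illustrative only.

g = 0.05     # background response rate (fitted, assumed)
b = 0.012    # slope parameter per mg/kg-day (fitted, assumed)
bmr = 0.10   # benchmark response: 10% extra risk

bmd = -math.log(1.0 - bmr) / b
print(f"BMD(10% extra risk) = {bmd:.1f} mg/kg-day")
# The BMDL (lower confidence limit on the BMD) would come from a profile
# likelihood or bootstrap on the fitted b, not from this point estimate.
```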
FFTF Passive Safety Test Data for Benchmarks for New LMR Designs
DOE Office of Scientific and Technical Information (OSTI.GOV)
Wootan, David W.; Casella, Andrew M.
Liquid Metal Reactors (LMRs) continue to be considered an attractive concept for advanced reactor design. Software packages such as SASSYS are being used to improve new LMR designs and operating characteristics. Significant cost and safety improvements can be realized in advanced liquid metal reactor designs by emphasizing inherent or passive safety through crediting the beneficial reactivity feedbacks associated with core and structural movement. This passive safety approach was adopted for the Fast Flux Test Facility (FFTF), and an experimental program was conducted to characterize the structural reactivity feedback. The FFTF passive safety testing program was developed to examine how specific design elements influenced dynamic reactivity feedback in response to a reactivity input and to demonstrate the scalability of reactivity feedback results to reactors of current interest. The U.S. Department of Energy, Office of Nuclear Energy Advanced Reactor Technology program is in the process of preserving, protecting, securing, and placing in electronic format information and data from the FFTF, including the core configurations and data collected during the passive safety tests. Benchmarks based on empirical data gathered during operation of the FFTF, as well as design documents and post-irradiation examination, will aid in the validation of these software packages and the models and calculations they produce. Evaluation of these test data could provide insight to improve analytical methods which may be used to support future licensing applications for LMRs.
Methodology and issues of integral experiments selection for nuclear data validation
NASA Astrophysics Data System (ADS)
Tatiana, Ivanova; Ivanov, Evgeny; Hill, Ian
2017-09-01
Nuclear data validation involves a large suite of integral experiments (IEs) for criticality, reactor physics, and dosimetry applications [1]. Often benchmarks are taken from the international handbooks [2, 3]. Depending on the application, IEs have different degrees of usefulness in validation, and the use of a single benchmark is usually not advised; indeed, it may lead to erroneous interpretations and results [1]. This work aims at quantifying the importance of benchmarks used in application-dependent cross section validation. The approach is based on the well-known Generalized Linear Least-Squares Method (GLLSM), extended to establish biases and uncertainties for given cross sections within a given energy interval. The statistical treatment results in a vector of weighting factors for the integral benchmarks. These factors characterize the value added by a benchmark for nuclear data validation for the given application. The methodology is illustrated by one example, selecting benchmarks for 239Pu cross section validation. The studies were performed in the framework of Subgroup 39 (Methods and approaches to provide feedback from nuclear and covariance data adjustment for improvement of nuclear data files) established at the Working Party on International Nuclear Data Evaluation Co-operation (WPEC) of the Nuclear Science Committee under the Nuclear Energy Agency (NEA/OECD).
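As a rough sketch of a GLLSM-style adjustment from which per-benchmark weighting can be read off the gain matrix (a toy formulation with illustrative inputs; the paper's exact formulation and sign conventions may differ):

```python
import numpy as np

# Toy generalized linear least-squares (GLLS) adjustment sketch:
#   delta_x = C_x S^T (S C_x S^T + C_e)^{-1} d,
# where d is the vector of benchmark discrepancies (here C/E - 1).
# The rows of the gain matrix G = C_x S^T (S C_x S^T + C_e)^{-1} show how
# strongly each benchmark pulls each cross-section parameter, i.e. a
# weighting of the benchmarks. All inputs below are toy values.

C_x = np.diag([0.04, 0.01])            # prior relative data covariance
S = np.array([[0.8, 0.1],              # sensitivities: 3 benchmarks
              [0.5, 0.4],              # x 2 parameters
              [0.1, 0.9]])
C_e = np.diag([1e-4, 1e-4, 4e-4])      # benchmark (C/E) covariance
d = np.array([0.010, 0.004, -0.002])   # C/E - 1 discrepancies (toy)

G = C_x @ S.T @ np.linalg.inv(S @ C_x @ S.T + C_e)  # gain / weights
delta_x = G @ d                                     # parameter adjustment
print("benchmark weights per parameter:\n", G)
print("relative adjustments:", delta_x)
```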
Board oversight of patient care quality in community health systems.
Prybil, Lawrence D; Peterson, Richard; Brezinski, Paul; Zamba, Gideon; Roach, William; Fillmore, Ammon
2010-01-01
In hospitals and health systems, ensuring that standards for the quality of patient care are established and continuous improvement processes are in place are among the board's most fundamental responsibilities. A recent survey has examined governance oversight of patient care quality at 123 nonprofit community health systems and compared their practices with current benchmarks of good governance. The findings show that 88% of the boards have established standing committees on patient quality and safety, nearly all chief executive officers' performance expectations now include targets related to patient quality and safety, and 96% of the boards regularly receive formal written reports regarding their organizations' performance in relation to quality measures and standards. However, there continue to be gaps between present reality and current benchmarks of good governance in several areas. These gaps are somewhat greater for independent systems than for those affiliated with a larger parent organization.
NASA Astrophysics Data System (ADS)
Pescarini, M.; Sinitsa, V.; Orsi, R.; Frisoni, M.
2013-03-01
This paper presents a synthesis of the ENEA-Bologna Nuclear Data Group programme dedicated to generating and validating group-wise cross section libraries for shielding and radiation damage deterministic calculations in nuclear fission reactors, following the data processing methodology recommended in the ANSI/ANS-6.1.2-1999 (R2009) American Standard. The VITJEFF311.BOLIB and VITENDF70.BOLIB fine-group coupled n-γ (199 n + 42 γ - VITAMIN-B6 structure) multi-purpose cross section libraries, based on the Bondarenko method for neutron resonance self-shielding and respectively on JEFF-3.1.1 and ENDF/B-VII.0 evaluated nuclear data, were produced in AMPX format using the NJOY-99.259 and the ENEA-Bologna 2007 Revision of the SCAMPI nuclear data processing systems. Two derived broad-group coupled n-γ (47 n + 20 γ - BUGLE-96 structure) working cross section libraries in FIDO-ANISN format for LWR shielding and pressure vessel dosimetry calculations, named BUGJEFF311.BOLIB and BUGENDF70.BOLIB, were generated by the revised version of SCAMPI, through problem-dependent cross section collapsing and self-shielding from the cited fine-group libraries. The validation results on the criticality safety benchmark experiments for the fine-group libraries, and the preliminary validation results for the broad-group working libraries on the PCA-Replica and VENUS-3 engineering neutron shielding benchmark experiments, are reported in synthesis.
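At its core, the problem-dependent collapsing step mentioned here is flux weighting; the following numpy sketch shows the operation with made-up fine-group values:

```python
import numpy as np

# Flux-weighted collapse of fine-group cross sections into one broad group:
#   sigma_G = sum_g(sigma_g * phi_g) / sum_g(phi_g)
# Fine-group values below are made up, purely to show the operation.

sigma_fine = np.array([2.1, 2.4, 3.0, 4.2])   # barns, fine groups in G
phi_fine   = np.array([0.9, 1.3, 0.8, 0.2])   # weighting spectrum (toy)

sigma_broad = np.sum(sigma_fine * phi_fine) / np.sum(phi_fine)
print(f"collapsed cross section: {sigma_broad:.3f} b")
```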
Security in Intelligent Transport Systems for Smart Cities: From Theory to Practice.
Javed, Muhammad Awais; Ben Hamida, Elyes; Znaidi, Wassim
2016-06-15
Connecting vehicles securely and reliably is pivotal to the implementation of next generation ITS applications of smart cities. With continuously growing security threats, vehicles could be exposed to a number of service attacks that could put their safety at stake. To address this concern, both US and European ITS standards have selected Elliptic Curve Cryptography (ECC) algorithms to secure vehicular communications. However, there is still a lack of benchmarking studies on existing security standards in real-world settings. In this paper, we first analyze the security architecture of the ETSI ITS standard. We then implement the ECC based digital signature and encryption procedures using an experimental test-bed and conduct an extensive benchmark study to assess their performance which depends on factors such as payload size, processor speed and security levels. Using network simulation models, we further evaluate the impact of standard compliant security procedures in dense and realistic smart cities scenarios. Obtained results suggest that existing security solutions directly impact the achieved quality of service (QoS) and safety awareness of vehicular applications, in terms of increased packet inter-arrival delays, packet and cryptographic losses, and reduced safety awareness in safety applications. Finally, we summarize the insights gained from the simulation results and discuss open research challenges for efficient working of security in ITS applications of smart cities.
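As an illustration of the kind of micro-benchmark the study describes, here is a sketch using the Python cryptography package and ECDSA on the NIST P-256 curve (one of the curve families referenced by the ITS standards); the payload size and iteration count are arbitrary assumptions:

```python
import time
from cryptography.hazmat.primitives import hashes
from cryptography.hazmat.primitives.asymmetric import ec

# Rough ECDSA sign/verify micro-benchmark on NIST P-256, in the spirit of
# the study above. Payload size and iteration count are arbitrary choices.

key = ec.generate_private_key(ec.SECP256R1())
pub = key.public_key()
payload = b"\x00" * 300   # roughly the size of a periodic safety message
N = 1000

t0 = time.perf_counter()
sigs = [key.sign(payload, ec.ECDSA(hashes.SHA256())) for _ in range(N)]
t1 = time.perf_counter()
for s in sigs:
    pub.verify(s, payload, ec.ECDSA(hashes.SHA256()))  # raises on failure
t2 = time.perf_counter()

print(f"sign:   {1e6 * (t1 - t0) / N:.0f} us/op")
print(f"verify: {1e6 * (t2 - t1) / N:.0f} us/op")
```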
Workplace road safety risk management: An investigation into Australian practices.
Warmerdam, Amanda; Newnam, Sharon; Sheppard, Dianne; Griffin, Mark; Stevenson, Mark
2017-01-01
In Australia, more than 30% of the traffic volume can be attributed to work-related vehicles. Although work-related driver safety has been given increasing attention in the scientific literature, it is uncertain how well this knowledge has been translated into practice in industry. It is also unclear how current practice in industry can inform scientific knowledge. The aim of the research was to use a benchmarking tool developed by the National Road Safety Partnership Program to assess industry maturity in relation to risk management practices. A total of 83 managers from a range of small, medium and large organisations were recruited through the Victorian Work Authority. Semi-structured interviews aimed at eliciting information on current organisational practices, as well as policy and procedures around work-related driving were conducted and the data mapped onto the benchmarking tool. Overall, the results demonstrated varying levels of maturity of risk management practices across organisations, highlighting the need to build accountability within organisations, improve communication practices, improve journey management, reduce vehicle-related risk, improve driver competency through an effective workplace road safety management program and review organisational incident and infringement management. The findings of the study have important implications for industry and highlight the need to review current risk management practices. Copyright © 2016 Elsevier Ltd. All rights reserved.
Key Performance Indicators in the Evaluation of the Quality of Radiation Safety Programs.
Schultz, Cheryl Culver; Shaffer, Sheila; Fink-Bennett, Darlene; Winokur, Kay
2016-08-01
Beaumont is a multiple-hospital health care system with a centralized radiation safety department. The health system operates under a broad-scope Nuclear Regulatory Commission (NRC) license but also maintains several other limited-use NRC licenses in off-site facilities and clinics. The hospital-based program is expansive, including diagnostic radiology and nuclear medicine (molecular imaging), interventional radiology, a comprehensive cardiovascular program, multiple forms of radiation therapy (low-dose-rate brachytherapy, high-dose-rate brachytherapy, external beam radiotherapy, and gamma knife), and the Research Institute (including basic benchtop, human, and animal). Each year, in the annual report, data are analyzed and then tracked and trended. While any summary report will, by nature, include items such as the number of pieces of equipment, inspections performed, staff monitored and educated, and other similar parameters, not all include an objective review of the quality and effectiveness of the program. Through objective numerical data, Beaumont adopted seven key performance indicators. The assertion made is that key performance indicators can be used to establish benchmarks for evaluation and comparison of the effectiveness and quality of radiation safety programs. Based on over a decade of data collection and the adoption of key performance indicators, this paper demonstrates one way to establish objective benchmarking for radiation safety programs in the health care environment.
Use of benchmarking and public reporting for infection control in four high-income countries.
Haustein, Thomas; Gastmeier, Petra; Holmes, Alison; Lucet, Jean-Christophe; Shannon, Richard P; Pittet, Didier; Harbarth, Stephan
2011-06-01
Benchmarking of surveillance data for health-care-associated infection (HCAI) has been used for more than three decades to inform prevention strategies and improve patients' safety. In recent years, public reporting of HCAI indicators has been mandated in several countries because of an increasing demand for transparency, although many methodological issues surrounding benchmarking remain unresolved and are highly debated. In this Review, we describe developments in benchmarking and public reporting of HCAI indicators in England, France, Germany, and the USA. Although benchmarking networks in these countries are derived from a common model and use similar methods, approaches to public reporting have been more diverse. The USA and England have predominantly focused on reporting of infection rates, whereas France has put emphasis on process and structure indicators. In Germany, HCAI indicators of individual institutions are treated confidentially and are not disseminated publicly. Although evidence for a direct effect of public reporting of indicators alone on incidence of HCAIs is weak at present, it has been associated with substantial organisational change. An opportunity now exists to learn from the different strategies that have been adopted. Copyright © 2011 Elsevier Ltd. All rights reserved.
Benchmarking gate-based quantum computers
NASA Astrophysics Data System (ADS)
Michielsen, Kristel; Nocon, Madita; Willsch, Dennis; Jin, Fengping; Lippert, Thomas; De Raedt, Hans
2017-11-01
With the advent of public access to small gate-based quantum processors, it becomes necessary to develop a benchmarking methodology such that independent researchers can validate the operation of these processors. We explore the usefulness of a number of simple quantum circuits as benchmarks for gate-based quantum computing devices and show that circuits performing identity operations are very simple, scalable and sensitive to gate errors and are therefore very well suited for this task. We illustrate the procedure by presenting benchmark results for the IBM Quantum Experience, a cloud-based platform for gate-based quantum computing.
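The identity-circuit idea can be reproduced without hardware access: compose an even number of imperfect X gates (an identity circuit) and watch the return probability to |0> decay with depth. A numpy sketch with an assumed coherent over-rotation error:

```python
import numpy as np

# Identity-circuit benchmark sketch: an ideal X.X sequence is the identity,
# so a qubit prepared in |0> should return to |0> with probability 1.
# A small coherent over-rotation per gate (epsilon, assumed) makes the
# return probability fall with circuit depth, which is what makes such
# circuits sensitive probes of gate error.

def rx(theta):
    """Single-qubit rotation about the X axis."""
    return np.array([[np.cos(theta / 2), -1j * np.sin(theta / 2)],
                     [-1j * np.sin(theta / 2), np.cos(theta / 2)]])

epsilon = 0.02                     # over-rotation per gate, radians (assumed)
noisy_x = rx(np.pi + epsilon)      # imperfect X gate

ket0 = np.array([1.0 + 0j, 0.0])   # |0>
for depth in (2, 10, 50):          # even depths: ideal circuit = identity
    psi = ket0.copy()
    for _ in range(depth):
        psi = noisy_x @ psi
    print(f"depth {depth:3d}: P(|0>) = {abs(psi[0]) ** 2:.4f}")
```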
The MCNP6 Analytic Criticality Benchmark Suite
DOE Office of Scientific and Technical Information (OSTI.GOV)
Brown, Forrest B.
2016-06-16
Analytical benchmarks provide an invaluable tool for verifying computer codes used to simulate neutron transport. Several collections of analytical benchmark problems [1-4] are used routinely in the verification of production Monte Carlo codes such as MCNP® [5,6]. Verification of a computer code is a necessary prerequisite to the more complex validation process. The verification process confirms that a code performs its intended functions correctly. The validation process involves determining the absolute accuracy of code results vs. nature. In typical validations, results are computed for a set of benchmark experiments using a particular methodology (code, cross-section data with uncertainties, and modeling) and compared to the measured results from the set of benchmark experiments. The validation process determines bias, bias uncertainty, and possibly additional margins. Verification is generally performed by the code developers, while validation is generally performed by code users for a particular application space. The VERIFICATION_KEFF suite of criticality problems [1,2] was originally a set of 75 criticality problems found in the literature for which exact analytical solutions are available. Even though the spatial and energy detail is necessarily limited in analytical benchmarks, typically to a few regions or energy groups, the exact solutions obtained can be used to verify that the basic algorithms, mathematics, and methods used in complex production codes perform correctly. The present work has focused on revisiting this benchmark suite. A thorough review of the problems resulted in discarding some of them as not suitable for MCNP benchmarking. For the remaining problems, many were reformulated to permit execution in either multigroup mode or in the normal continuous-energy mode for MCNP. Execution of the benchmarks in continuous-energy mode provides a significant advance to MCNP verification methods.
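To convey the flavor of analytic verification, here is a toy one-group, infinite-medium example where keff has the exact closed form nu·Sigma_f/Sigma_a, so a trivial Monte Carlo estimate should agree within statistics. The cross sections are arbitrary and the sketch is illustrative, not part of the MCNP suite:

```python
import random

# Toy verification in the analytic-benchmark spirit: for a one-group,
# infinite homogeneous medium, k_inf = nu * Sigma_f / Sigma_a exactly.
# A trivial Monte Carlo estimate of neutrons produced per neutron absorbed
# should agree within statistics. Cross sections below are arbitrary.

nu = 2.5
sigma_f, sigma_c = 0.05, 0.03        # fission / capture (1/cm)
sigma_a = sigma_f + sigma_c          # absorption

k_analytic = nu * sigma_f / sigma_a

random.seed(1)
histories = 200_000
produced = sum(nu for _ in range(histories)
               if random.random() < sigma_f / sigma_a)  # fission on absorption
k_mc = produced / histories

print(f"analytic k_inf = {k_analytic:.4f}, Monte Carlo = {k_mc:.4f}")
```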
Combining Phase Identification and Statistic Modeling for Automated Parallel Benchmark Generation
DOE Office of Scientific and Technical Information (OSTI.GOV)
Jin, Ye; Ma, Xiaosong; Liu, Qing Gary
2015-01-01
Parallel application benchmarks are indispensable for evaluating/optimizing HPC software and hardware. However, it is very challenging and costly to obtain high-fidelity benchmarks reflecting the scale and complexity of state-of-the-art parallel applications. Hand-extracted synthetic benchmarks are time- and labor-intensive to create. Real applications themselves, while offering the most accurate performance evaluation, are expensive to compile, port, reconfigure, and often plainly inaccessible due to security or ownership concerns. This work contributes APPRIME, a novel tool for trace-based automatic parallel benchmark generation. Taking as input standard communication-I/O traces of an application's execution, it couples accurate automatic phase identification with statistical regeneration of event parameters to create compact, portable, and to some degree reconfigurable parallel application benchmarks. Experiments with four NAS Parallel Benchmarks (NPB) and three real scientific simulation codes confirm the fidelity of APPRIME benchmarks. They retain the original applications' performance characteristics, in particular the relative performance across platforms.
Experimental power density distribution benchmark in the TRIGA Mark II reactor
DOE Office of Scientific and Technical Information (OSTI.GOV)
Snoj, L.; Stancar, Z.; Radulovic, V.
2012-07-01
In order to improve the power calibration process and to benchmark the existing computational model of the TRIGA Mark II reactor at the Jožef Stefan Institute (JSI), a bilateral project was started as part of the agreement between the French Commissariat à l'énergie atomique et aux énergies alternatives (CEA) and the Ministry of Higher Education, Science and Technology of Slovenia. One of the objectives of the project was to analyze and improve the power calibration process of the JSI TRIGA reactor (procedural improvement and uncertainty reduction) by using absolutely calibrated CEA fission chambers (FCs). This is one of the few available power density distribution benchmarks for testing not only the fission rate distribution but also the absolute values of the fission rates. Our preliminary calculations indicate that the total experimental uncertainty of the measured reaction rate is sufficiently low that the experiments could be considered benchmark experiments. (authors)
Revisiting Yasinsky and Henry's benchmark using modern nodal codes
DOE Office of Scientific and Technical Information (OSTI.GOV)
Feltus, M.A.; Becker, M.W.
1995-12-31
The numerical experiments analyzed by Yasinsky and Henry are quite trivial by comparison with today's standards because they used the finite difference code WIGLE for their benchmark. Also, this problem is a simple slab (one-dimensional) case with no feedback mechanisms. This research attempts to obtain STAR (Ref. 2) and NEM (Ref. 3) code results in order to produce a more modern kinetics benchmark with results comparable to WIGLE.
Use of integral experiments in support to the validation of JEFF-3.2 nuclear data evaluation
NASA Astrophysics Data System (ADS)
Leclaire, Nicolas; Cochet, Bertrand; Jinaphanh, Alexis; Haeck, Wim
2017-09-01
For many years now, IRSN has developed its own continuous-energy Monte Carlo capability, which allows testing of various nuclear data libraries. To that end, a validation database of 1136 experiments was built from cases used for the validation of the APOLLO2-MORET 5 multigroup route of the CRISTAL V2.0 package. In this paper, the keff values obtained for more than 200 benchmarks using the JEFF-3.1.1 and JEFF-3.2 libraries are compared to the benchmark keff values, and the main discrepancies are analyzed with respect to the neutron spectrum. Special attention is paid to benchmarks for which the results changed significantly between the two JEFF-3 versions.
Validating vignette and conjoint survey experiments against real-world behavior
Hainmueller, Jens; Hangartner, Dominik; Yamamoto, Teppei
2015-01-01
Survey experiments, like vignette and conjoint analyses, are widely used in the social sciences to elicit stated preferences and study how humans make multidimensional choices. However, there is a paucity of research on the external validity of these methods that examines whether the determinants that explain hypothetical choices made by survey respondents match the determinants that explain what subjects actually do when making similar choices in real-world situations. This study compares results from conjoint and vignette analyses on which immigrant attributes generate support for naturalization with closely corresponding behavioral data from a natural experiment in Switzerland, where some municipalities used referendums to decide on the citizenship applications of foreign residents. Using a representative sample from the same population and the official descriptions of applicant characteristics that voters received before each referendum as a behavioral benchmark, we find that the effects of the applicant attributes estimated from the survey experiments perform remarkably well in recovering the effects of the same attributes in the behavioral benchmark. We also find important differences in the relative performances of the different designs. Overall, the paired conjoint design, where respondents evaluate two immigrants side by side, comes closest to the behavioral benchmark; on average, its estimates are within 2 percentage points of the effects in the behavioral benchmark. PMID:25646415
DOE Office of Scientific and Technical Information (OSTI.GOV)
Will, M.E.
1994-01-01
This report presents a standard method for deriving benchmarks for the purpose of "contaminant screening," performed by comparing measured ambient concentrations of chemicals to benchmark concentrations. The work was performed under Work Breakdown Structure 1.4.12.2.3.04.07.02 (Activity Data Sheet 8304). In addition, this report presents sets of data concerning the effects of chemicals in soil on invertebrates and soil microbial processes, benchmarks for chemicals potentially associated with United States Department of Energy sites, and literature describing the experiments from which data were drawn for benchmark derivation.
International land Model Benchmarking (ILAMB) Package v002.00
Collier, Nathaniel [Oak Ridge National Laboratory; Hoffman, Forrest M. [Oak Ridge National Laboratory; Mu, Mingquan [University of California, Irvine; Randerson, James T. [University of California, Irvine; Riley, William J. [Lawrence Berkeley National Laboratory
2016-05-09
As a contribution to International Land Model Benchmarking (ILAMB) Project, we are providing new analysis approaches, benchmarking tools, and science leadership. The goal of ILAMB is to assess and improve the performance of land models through international cooperation and to inform the design of new measurement campaigns and field studies to reduce uncertainties associated with key biogeochemical processes and feedbacks. ILAMB is expected to be a primary analysis tool for CMIP6 and future model-data intercomparison experiments. This team has developed initial prototype benchmarking systems for ILAMB, which will be improved and extended to include ocean model metrics and diagnostics.
International land Model Benchmarking (ILAMB) Package v001.00
Mu, Mingquan [University of California, Irvine; Randerson, James T. [University of California, Irvine; Riley, William J. [Lawrence Berkeley National Laboratory; Hoffman, Forrest M. [Oak Ridge National Laboratory
2016-05-02
As a contribution to International Land Model Benchmarking (ILAMB) Project, we are providing new analysis approaches, benchmarking tools, and science leadership. The goal of ILAMB is to assess and improve the performance of land models through international cooperation and to inform the design of new measurement campaigns and field studies to reduce uncertainties associated with key biogeochemical processes and feedbacks. ILAMB is expected to be a primary analysis tool for CMIP6 and future model-data intercomparison experiments. This team has developed initial prototype benchmarking systems for ILAMB, which will be improved and extended to include ocean model metrics and diagnostics.
Performance of Landslide-HySEA tsunami model for NTHMP benchmarking validation process
NASA Astrophysics Data System (ADS)
Macias, Jorge
2017-04-01
In its FY2009 Strategic Plan, the NTHMP required that all numerical tsunami inundation models be verified as accurate and consistent through a model benchmarking process. This was completed in 2011, but only for seismic tsunami sources and in a limited manner for idealized solid underwater landslides. Recent work by various NTHMP states, however, has shown that landslide tsunami hazard may be dominant along significant parts of the US coastline, as compared to hazards from other tsunamigenic sources. To perform the above-mentioned validation process, a set of candidate benchmarks was proposed. These benchmarks are based on a subset of available laboratory data sets for solid slide experiments and deformable slide experiments, and include both submarine and subaerial slides. A benchmark based on a historic field event (Valdez, AK, 1964) closes the list of proposed benchmarks. The Landslide-HySEA model participated in the workshop organized at Texas A&M University - Galveston, on January 9-11, 2017. The aim of this presentation is to show some of the numerical results obtained with Landslide-HySEA in the framework of this benchmarking validation/verification effort. Acknowledgements. This research has been partially supported by the Junta de Andalucía research project TESELA (P11-RNM7069), the Spanish Government Research project SIMURISK (MTM2015-70490-C02-01-R) and Universidad de Málaga, Campus de Excelencia Internacional Andalucía Tech. The GPU computations were performed at the Unit of Numerical Methods (University of Malaga).
Applying MDA to SDR for Space to Model Real-time Issues
NASA Technical Reports Server (NTRS)
Blaser, Tammy M.
2007-01-01
NASA space communications systems face the challenge of designing software-defined radios (SDRs) with highly constrained Size, Weight and Power (SWaP) resources. A study is being conducted to assess the effectiveness of applying the Model Driven Architecture (MDA) Platform-Independent Model (PIM) and one or more Platform-Specific Models (PSM) specifically to address NASA space domain real-time issues. This paper summarizes our experiences with applying MDA to SDR for Space to model real-time issues. The real-time issues to be examined, measured, and analyzed are: meeting waveform timing requirements, efficiently applying Real-time Operating System (RTOS) scheduling algorithms, applying safety control measures, and SWaP verification. Real-time waveform algorithms benchmarked under worst-case environment conditions and the heaviest workload will drive the SDR for Space real-time PSM design.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Will, M.E.; Suter, G.W. II
1994-09-01
One of the initial stages in ecological risk assessment for hazardous waste sites is screening contaminants to determine which of them are worthy of further consideration as contaminants of potential concern. This process is termed contaminant screening. It is performed by comparing measured ambient concentrations of chemicals to benchmark concentrations. Currently, no standard benchmark concentrations exist for assessing contaminants in soil with respect to their toxicity to plants. This report presents a standard method for deriving benchmarks for this purpose (phytotoxicity benchmarks), a set of data concerning effects of chemicals in soil or soil solution on plants, and a set of phytotoxicity benchmarks for 38 chemicals potentially associated with United States Department of Energy (DOE) sites. In addition, background information on the phytotoxicity and occurrence of the chemicals in soils is presented, and literature describing the experiments from which data were drawn for benchmark derivation is reviewed. Chemicals that are found in soil at concentrations exceeding both the phytotoxicity benchmark and the background concentration for the soil type should be considered contaminants of potential concern.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Suter, G.W. II
1993-01-01
One of the initial stages in ecological risk assessment for hazardous waste sites is screening contaminants to determine which of them are worthy of further consideration as contaminants of potential concern. This process is termed contaminant screening. It is performed by comparing measured ambient concentrations of chemicals to benchmark concentrations. Currently, no standard benchmark concentrations exist for assessing contaminants in soil with respect to their toxicity to plants. This report presents a standard method for deriving benchmarks for this purpose (phytotoxicity benchmarks), a set of data concerning effects of chemicals in soil or soil solution on plants, and a set of phytotoxicity benchmarks for 38 chemicals potentially associated with United States Department of Energy (DOE) sites. In addition, background information on the phytotoxicity and occurrence of the chemicals in soils is presented, and literature describing the experiments from which data were drawn for benchmark derivation is reviewed. Chemicals that are found in soil at concentrations exceeding both the phytotoxicity benchmark and the background concentration for the soil type should be considered contaminants of potential concern.
NASA Technical Reports Server (NTRS)
Krause, David L.; Brewer, Ethan J.; Pawlik, Ralph
2013-01-01
This report provides test methodology details and qualitative results for the first structural benchmark creep test of an Advanced Stirling Convertor (ASC) heater head of ASC-E2 design heritage. The test article was recovered from a flight-like Microcast MarM-247 heater head specimen previously used in helium permeability testing. The test article was utilized for benchmark creep test rig preparation, wall thickness and diametral laser scan hardware metrological developments, and induction heater custom coil experiments. In addition, a benchmark creep test was performed, terminated after one week when through-thickness cracks propagated at thermocouple weld locations. Following this, the test article was used to develop a unique temperature measurement methodology using contact thermocouples, thereby enabling future benchmark testing to be performed without conventional welded thermocouples, which have proven problematic for this alloy. This report includes an overview of heater head structural benchmark creep testing, the origin of this particular test article, test configuration developments accomplished using the test article, creep predictions for its benchmark creep test, qualitative structural benchmark creep test results, and a short summary.
Marshall, Margaret A.
2014-11-04
In the early 1970s Dr. John T. Mihalczo (team leader), J.J. Lynn, and J.R. Taylor performed experiments at the Oak Ridge Critical Experiments Facility (ORCEF) with highly enriched uranium (HEU) metal (called Oak Ridge Alloy or ORALLOY) in an effort to recreate GODIVA I results with greater accuracy than those performed at Los Alamos National Laboratory in the 1950s. The purpose of the Oak Ridge ORALLOY Sphere (ORSphere) experiments was to estimate the unreflected and unmoderated critical mass of an idealized sphere of uranium metal corrected to a density, purity, and enrichment such that it could be compared with the GODIVA I experiments. Additionally, various material reactivity worths, the surface material worth coefficient, the delayed neutron fraction, the prompt neutron decay constant, relative fission density, and relative neutron importance were all measured. The critical assembly, material reactivity worths, the surface material worth coefficient, and the delayed neutron fraction were all evaluated as benchmark experiment measurements. The reactor physics measurements are the focus of this paper, although for clarity the critical assembly benchmark specifications are briefly discussed.
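For readers unfamiliar with how the measured prompt neutron decay constant and the delayed neutron fraction are connected, the standard point-kinetics identity is given below in conventional notation (textbook background, not a formula quoted from the evaluation): reactivity rho, effective delayed neutron fraction beta_eff, and neutron generation time Lambda.

% Point-kinetics background (conventional notation, not from the evaluation):
% alpha is the prompt neutron decay constant; at delayed critical (rho = 0)
% it reduces to the ratio beta_eff / Lambda measured in Rossi-alpha experiments.
\alpha = \frac{\beta_{\mathrm{eff}} - \rho}{\Lambda},
\qquad
\alpha_{\mathrm{delayed\;critical}} = \frac{\beta_{\mathrm{eff}}}{\Lambda}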
Implementation of Programmatic Quality and the Impact on Safety
NASA Astrophysics Data System (ADS)
Huls, Dale T.; Meehan, Kevin M.
2005-12-01
The implementation of an inadequate programmatic quality assurance discipline has the potential to adversely affect safety and mission success. This is best demonstrated in the lessons provided by the Apollo 1, Apollo 13, Challenger, and Columbia accidents; NASA Safety and Mission Assurance (S&MA) benchmarking exchanges; and conclusions reached by the Shuttle Return-to-Flight Task Group established following the Columbia Shuttle accident. Examples from the ISS Program demonstrate continuing issues with programmatic quality. Failure to adequately address programmatic quality assurance issues has a real potential to lead to continued inefficiency, increases in program costs, and additional catastrophic accidents.
A comparison of five benchmarks
NASA Technical Reports Server (NTRS)
Huss, Janice E.; Pennline, James A.
1987-01-01
Five benchmark programs were obtained and run on the NASA Lewis CRAY X-MP/24. A comparison was made between the programs' codes and between the methods for calculating performance figures. Several multitasking jobs were run to gain experience in how parallel performance is measured.
Security in Intelligent Transport Systems for Smart Cities: From Theory to Practice
Javed, Muhammad Awais; Ben Hamida, Elyes; Znaidi, Wassim
2016-01-01
Connecting vehicles securely and reliably is pivotal to the implementation of next generation ITS applications of smart cities. With continuously growing security threats, vehicles could be exposed to a number of service attacks that could put their safety at stake. To address this concern, both US and European ITS standards have selected Elliptic Curve Cryptography (ECC) algorithms to secure vehicular communications. However, there is still a lack of benchmarking studies on existing security standards in real-world settings. In this paper, we first analyze the security architecture of the ETSI ITS standard. We then implement the ECC based digital signature and encryption procedures using an experimental test-bed and conduct an extensive benchmark study to assess their performance which depends on factors such as payload size, processor speed and security levels. Using network simulation models, we further evaluate the impact of standard compliant security procedures in dense and realistic smart cities scenarios. Obtained results suggest that existing security solutions directly impact the achieved quality of service (QoS) and safety awareness of vehicular applications, in terms of increased packet inter-arrival delays, packet and cryptographic losses, and reduced safety awareness in safety applications. Finally, we summarize the insights gained from the simulation results and discuss open research challenges for efficient working of security in ITS applications of smart cities. PMID:27314358
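To make the benchmarking procedure concrete, here is a minimal sketch of the kind of sign/verify micro-benchmark the paper describes, assuming the ECDSA algorithm on the NIST P-256 curve selected by the ITS standards. The payload size, iteration count, and the Python cryptography package are illustrative assumptions, not details taken from the paper.

import time
from cryptography.hazmat.primitives import hashes
from cryptography.hazmat.primitives.asymmetric import ec

ITERATIONS = 1000
payload = b"\x00" * 300  # assumed message size in bytes (hypothetical)

key = ec.generate_private_key(ec.SECP256R1())  # ECDSA-P256 per ITS standards
pub = key.public_key()

# Time the signature generation loop.
t0 = time.perf_counter()
signatures = [key.sign(payload, ec.ECDSA(hashes.SHA256())) for _ in range(ITERATIONS)]
sign_ms = (time.perf_counter() - t0) * 1000 / ITERATIONS

# Time the verification loop (verify() raises InvalidSignature on failure).
t0 = time.perf_counter()
for sig in signatures:
    pub.verify(sig, payload, ec.ECDSA(hashes.SHA256()))
verify_ms = (time.perf_counter() - t0) * 1000 / ITERATIONS

print(f"avg sign:   {sign_ms:.3f} ms")
print(f"avg verify: {verify_ms:.3f} ms")

Varying the payload size and curve in such a harness reproduces the payload/security-level dependence the study measures.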
A new numerical benchmark of a freshwater lens
NASA Astrophysics Data System (ADS)
Stoeckl, L.; Walther, M.; Graf, T.
2016-04-01
A numerical benchmark for 2-D variable-density flow and solute transport in a freshwater lens is presented. The benchmark is based on results of laboratory experiments conducted by Stoeckl and Houben (2012) using a sand tank on the meter scale. This benchmark describes the formation and degradation of a freshwater lens over time as it can be found under real-world islands. An error analysis gave the appropriate spatial and temporal discretization of 1 mm and 8.64 s, respectively. The calibrated parameter set was obtained using the parameter estimation tool PEST. Comparing density-coupled and density-uncoupled results showed that the freshwater-saltwater interface position is strongly dependent on density differences. A benchmark that adequately represents saltwater intrusion and that includes realistic features of coastal aquifers or freshwater lenses was lacking. This new benchmark was thus developed and is demonstrated to be suitable to test variable-density groundwater models applied to saltwater intrusion investigations.
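As background on the density coupling this benchmark exercises, variable-density groundwater models typically augment Darcy's law with a buoyancy term; in conventional notation (not the paper's own equation):

% q: Darcy flux, K: hydraulic conductivity, h_f: equivalent freshwater head,
% rho: fluid density, rho_f: freshwater density, z: elevation.
\mathbf{q} = -K \left( \nabla h_f + \frac{\rho - \rho_f}{\rho_f}\,\nabla z \right)

Running the same model with the buoyancy term switched off is what the density-uncoupled comparison above refers to.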
Patterson, Mark E; Miranda, Derick; Schuman, Greg; Eaton, Christopher; Smith, Andrew; Silver, Brad
2016-01-01
Leveraging "big data" as a means of informing cost-effective care holds potential in triaging high-risk heart failure (HF) patients for interventions within hospitals seeking to reduce 30-day readmissions. The objective was to explore providers' beliefs and perceptions about using an electronic health record (EHR)-based tool that uses unstructured clinical notes to risk-stratify high-risk heart failure patients. Six providers from an inpatient HF clinic within an urban safety net hospital were recruited to participate in a semistructured focus group. A facilitator led a discussion on the feasibility and value of using an EHR tool driven by unstructured clinical notes to help identify high-risk patients. Data collected from transcripts were analyzed using a thematic analysis that facilitated drawing conclusions clustered around categories and themes. From six categories emerged two themes: (1) challenges of finding valid and accurate results, and (2) strategies used to overcome these challenges. Although employing a tool that uses electronic medical record (EMR) unstructured text as the benchmark by which to identify high-risk patients is efficient, choosing appropriate benchmark groups could be challenging given the multiple causes of readmission. Strategies to mitigate these challenges include establishing clear selection criteria to guide benchmark group composition, and quality outcome goals for the hospital. Prior to implementing into practice an innovative EMR-based case-finder driven by unstructured clinical notes, providers are advised to do the following: (1) define patient quality outcome goals, (2) establish criteria by which to guide benchmark selection, and (3) verify the tool's validity and reliability. Achieving consensus on these issues would be necessary for this innovative EHR-based tool to effectively improve clinical decision-making and, in turn, decrease readmissions for high-risk patients.
Dimensions of Safety Climate among Iranian Nurses.
Konjin, Z Naghavi; Shokoohi, Y; Zarei, F; Rahimzadeh, M; Sarsangi, V
2015-10-01
Workplace safety has been a concern of workers and managers for decades. Measuring safety climate is crucial in improving safety performance. It is also a method of benchmarking safety perception. The aim was to develop and validate a psychometric scale for measuring nurses' safety climate. A literature review, subject matter experts, and nurses' judgments were used in developing items. Content validity and reliability for the new tool were tested by the content validity index (CVI) and test-retest analysis, respectively. Exploratory factor analysis (EFA) with varimax rotation was used to improve the interpretation of latent factors. A 40-item scale in 6 factors was developed, which could explain 55% of the observed variance. The 6 factors included employees' involvement in safety and management support, compliance with safety rules, safety training and accessibility to personal protective equipment, hindrance to safe work, safety communication and job pressure, and individual risk perception. The proposed scale can be used in identifying the areas where interventions in the safety climate of nurses are needed.
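A minimal sketch of the analysis pipeline described above (exploratory factor analysis with varimax rotation on a 40-item instrument) might look as follows; the factor_analyzer package, the file name, and fixing six factors up front are our assumptions for illustration, since the paper does not name its software.

import pandas as pd
from factor_analyzer import FactorAnalyzer

# Hypothetical data file: one row per respondent, one column per item.
responses = pd.read_csv("nurse_safety_climate.csv")

fa = FactorAnalyzer(n_factors=6, rotation="varimax")
fa.fit(responses)

loadings = pd.DataFrame(fa.loadings_, index=responses.columns)
cumulative = fa.get_factor_variance()[2][-1]  # cumulative proportion of variance
print(loadings.round(2))
print(f"cumulative variance explained: {cumulative:.0%}")  # paper reports about 55%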
Characterization of addressability by simultaneous randomized benchmarking.
Gambetta, Jay M; Córcoles, A D; Merkel, S T; Johnson, B R; Smolin, John A; Chow, Jerry M; Ryan, Colm A; Rigetti, Chad; Poletto, S; Ohki, Thomas A; Ketchen, Mark B; Steffen, M
2012-12-14
The control and handling of errors arising from cross talk and unwanted interactions in multiqubit systems is an important issue in quantum information processing architectures. We introduce a benchmarking protocol that provides information about the amount of addressability present in the system and implement it on coupled superconducting qubits. The protocol consists of randomized benchmarking experiments run both individually and simultaneously on pairs of qubits. A relevant figure of merit for the addressability is then related to the differences in the measured average gate fidelities in the two experiments. We present results from two similar samples with differing cross talk and unwanted qubit-qubit interactions. The results agree with predictions based on simple models of the classical cross talk and Stark shifts.
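As background, the protocol's figure of merit builds on the standard randomized benchmarking decay of average sequence fidelity with sequence length m; in conventional notation (not necessarily the authors' symbols):

% A and B absorb state-preparation and measurement errors; the decay rate p
% fixes the average gate fidelity. A nonzero gap between the individually and
% simultaneously measured decay rates signals addressability loss from cross
% talk and unwanted interactions.
F(m) = A\,p^{m} + B,
\qquad
\delta = \left| p^{\mathrm{individual}} - p^{\mathrm{simultaneous}} \right|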
DOE Office of Scientific and Technical Information (OSTI.GOV)
John D. Bess; Barbara H. Dolphin; James W. Sterbentz
2013-03-01
In its deployment as a pebble bed reactor (PBR) critical facility from 1992 to 1996, the PROTEUS facility was designated as HTR-PROTEUS. This experimental program was performed as part of an International Atomic Energy Agency (IAEA) Coordinated Research Project (CRP) on the Validation of Safety Related Physics Calculations for Low Enriched HTGRs. Within this project, critical experiments were conducted for graphite moderated LEU systems to determine core reactivity, flux and power profiles, reaction-rate ratios, the worth of control rods, both in-core and reflector based, the worth of burnable poisons, kinetic parameters, and the effects of moisture ingress on these parameters. Four benchmark experiments were evaluated in this report: Cores 1, 1A, 2, and 3. These core configurations represent the hexagonal close packing (HCP) configurations of the HTR-PROTEUS experiment with a moderator-to-fuel pebble ratio of 1:2. Core 1 represents the only configuration utilizing ZEBRA control rods. Cores 1A, 2, and 3 use withdrawable, hollow, stainless steel control rods. Cores 1 and 1A are similar except for the use of different control rods; Core 1A also has one less layer of pebbles (21 layers instead of 22). Core 2 retains the first 16 layers of pebbles from Cores 1 and 1A and has 16 layers of moderator pebbles stacked above the fueled layers. Core 3 retains the first 17 layers of pebbles but has polyethylene rods inserted between pebbles to simulate water ingress. The additional partial pebble layer (layer 18) for Core 3 was not included as it was used for core operations and not the reported critical configuration. Cores 1, 1A, 2, and 3 were determined to be acceptable benchmark experiments.
DOE Office of Scientific and Technical Information (OSTI.GOV)
John D. Bess; Barbara H. Dolphin; James W. Sterbentz
2012-03-01
In its deployment as a pebble bed reactor (PBR) critical facility from 1992 to 1996, the PROTEUS facility was designated as HTR-PROTEUS. This experimental program was performed as part of an International Atomic Energy Agency (IAEA) Coordinated Research Project (CRP) on the Validation of Safety Related Physics Calculations for Low Enriched HTGRs. Within this project, critical experiments were conducted for graphite moderated LEU systems to determine core reactivity, flux and power profiles, reaction-rate ratios, the worth of control rods, both in-core and reflector based, the worth of burnable poisons, kinetic parameters, and the effects of moisture ingress on these parameters. Four benchmark experiments were evaluated in this report: Cores 1, 1A, 2, and 3. These core configurations represent the hexagonal close packing (HCP) configurations of the HTR-PROTEUS experiment with a moderator-to-fuel pebble ratio of 1:2. Core 1 represents the only configuration utilizing ZEBRA control rods. Cores 1A, 2, and 3 use withdrawable, hollow, stainless steel control rods. Cores 1 and 1A are similar except for the use of different control rods; Core 1A also has one less layer of pebbles (21 layers instead of 22). Core 2 retains the first 16 layers of pebbles from Cores 1 and 1A and has 16 layers of moderator pebbles stacked above the fueled layers. Core 3 retains the first 17 layers of pebbles but has polyethylene rods inserted between pebbles to simulate water ingress. The additional partial pebble layer (layer 18) for Core 3 was not included as it was used for core operations and not the reported critical configuration. Cores 1, 1A, 2, and 3 were determined to be acceptable benchmark experiments.
Parallelization of NAS Benchmarks for Shared Memory Multiprocessors
NASA Technical Reports Server (NTRS)
Waheed, Abdul; Yan, Jerry C.; Saini, Subhash (Technical Monitor)
1998-01-01
This paper presents our experiences of parallelizing the sequential implementation of NAS benchmarks using compiler directives on SGI Origin2000 distributed shared memory (DSM) system. Porting existing applications to new high performance parallel and distributed computing platforms is a challenging task. Ideally, a user develops a sequential version of the application, leaving the task of porting to new generations of high performance computing systems to parallelization tools and compilers. Due to the simplicity of programming shared-memory multiprocessors, compiler developers have provided various facilities to allow the users to exploit parallelism. Native compilers on SGI Origin2000 support multiprocessing directives to allow users to exploit loop-level parallelism in their programs. Additionally, supporting tools can accomplish this process automatically and present the results of parallelization to the users. We experimented with these compiler directives and supporting tools by parallelizing sequential implementation of NAS benchmarks. Results reported in this paper indicate that with minimal effort, the performance gain is comparable with the hand-parallelized, carefully optimized, message-passing implementations of the same benchmarks.
Federal Register 2010, 2011, 2012, 2013, 2014
2010-07-19
...), Rockville, Maryland, and from the ADAMS Public Library component on the NRC's Web site, http://www.nrc.gov... initiating a review of nuclear safety culture issues by the corporate nuclear review board, benchmarking SCWE...
NASA Case Sensitive Review and Audit Approach
NASA Astrophysics Data System (ADS)
Lee, Arthur R.; Bacus, Thomas H.; Bowersox, Alexandra M.; Newman, J. Steven
2005-12-01
As an Agency involved in high-risk endeavors, NASA continually reassesses its commitment to engineering excellence and compliance with requirements. As a component of NASA's continual process improvement, the Office of Safety and Mission Assurance (OSMA) established the Review and Assessment Division (RAD) [1] to conduct independent audits to verify compliance with Agency requirements that impact safe and reliable operations. In implementing its responsibilities, RAD benchmarked various approaches for conducting audits, focusing on organizations that, like NASA, operate in high-risk environments - where seemingly inconsequential departures from safety, reliability, and quality requirements can have catastrophic impact on the public, NASA personnel, high-value equipment, and the environment. The approach used by the U.S. Navy Submarine Program [2] was considered the most fruitful framework for the invigorated OSMA audit processes. Additionally, the results of the benchmarking activity revealed that not all audits are conducted using just one approach or even with the same objectives. This led to the concept of discrete, unique "audit cases."
Validation of tungsten cross sections in the neutron energy region up to 100 keV
NASA Astrophysics Data System (ADS)
Pigni, Marco T.; Žerovnik, Gašper; Leal, Luiz C.; Trkov, Andrej
2017-09-01
Following a series of recent cross section evaluations on tungsten isotopes performed at Oak Ridge National Laboratory (ORNL), this paper presents the validation work carried out to test the performance of the evaluated cross sections based on lead-slowing-down (LSD) benchmarks conducted in Grenoble. ORNL completed the resonance parameter evaluation of four tungsten isotopes - 182,183,184,186W - in August 2014 and submitted it as an ENDF-compatible file to be part of the next release of the ENDF/B-VIII.0 nuclear data library. The evaluations were performed with support from the US Nuclear Criticality Safety Program in an effort to provide improved tungsten cross section and covariance data for criticality safety sensitivity analyses. The validation analysis based on the LSD benchmarks showed improved agreement with the experimental response when the ORNL tungsten evaluations were included in the ENDF/B-VII.1 library. Comparisons with results obtained with the JEFF-3.2 nuclear data library are also discussed.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Thrower, A.W.; Patric, J.; Keister, M.
2008-07-01
The purpose of the Office of Civilian Radioactive Waste Management's (OCRWM) Logistics Benchmarking Project is to identify established government and industry practices for the safe transportation of hazardous materials which can serve as a yardstick for design and operation of OCRWM's national transportation system for shipping spent nuclear fuel and high-level radioactive waste to the proposed repository at Yucca Mountain, Nevada. The project will present logistics and transportation practices and develop implementation recommendations for adaptation by the national transportation system. This paper will describe the process used to perform the initial benchmarking study, highlight interim findings, and explain how these findings are being implemented. It will also provide an overview of the next phase of benchmarking studies. The benchmarking effort will remain a high-priority activity throughout the planning and operational phases of the transportation system. The initial phase of the project focused on government transportation programs to identify those practices which are most clearly applicable to OCRWM. These Federal programs have decades of safe transportation experience, strive for excellence in operations, and implement effective stakeholder involvement, all of which parallel OCRWM's transportation mission and vision. The initial benchmarking project focused on four business processes that are critical to OCRWM's mission success, and can be incorporated into OCRWM planning and preparation in the near term. The processes examined were: transportation business model, contract management/out-sourcing, stakeholder relations, and contingency planning. More recently, OCRWM examined logistics operations of AREVA NC's Business Unit Logistics in France. The next phase of benchmarking will focus on integrated domestic and international commercial radioactive logistic operations. The prospective companies represent large scale shippers and have vast experience in safely and efficiently shipping spent nuclear fuel and other radioactive materials. Additional business processes may be examined in this phase. The findings of these benchmarking efforts will help determine the organizational structure and requirements of the national transportation system. (authors)
The skyshine benchmark experiment revisited.
Terry, Ian R
2005-01-01
With the coming renaissance of nuclear power, heralded by new nuclear power plant construction in Finland, the issue of qualifying modern calculation tools becomes prominent. Among the calculations required may be the determination of radiation levels outside the plant owing to skyshine. For example, knowledge of the degree of accuracy in the calculation of gamma skyshine through the turbine hall roof of a BWR plant is important. Modern survey programs which can calculate skyshine dose rates tend to be qualified only by verification against the results of Monte Carlo calculations. However, in the past, exacting experimental work was performed in the field for gamma skyshine, notably the benchmark work in 1981 by Shultis and co-workers, which considered not just the open source case but also the effects of placing a concrete roof above the source enclosure. The latter case is a better reflection of reality, as safety considerations nearly always require the source to be shielded in some way, usually by substantial walls but only by a thinner roof. One of the tools developed since that time, which can both calculate skyshine radiation and accurately model the geometrical set-up of an experiment, is the code RANKERN, which is used by Framatome ANP and other organisations for general shielding design work. The following description concerns the use of this code to re-address the experimental results from 1981. This then provides a realistic gauge to validate, but also to set limits on, the program for future gamma skyshine applications within the applicable licensing procedures for all users of the code.
Safety assessment in the urban park environment in Alborz Province, Iran.
Oostakhan, Morteza; Babaei, Aliakbar
2013-01-01
Urban parks, as one of the recreational and sports sectors, can be the site of serious injuries among users of all ages if safety issues are not considered in their design. These injuries can result from the equipment in the park, including play and sports equipment, or from environmental factors. A lack of safety benchmarks for parks hampers the development of future proposals. This article surveys the important safety factors in urban parks, including playgrounds, fitness equipment, pedestrian surfaces, and environmental factors, within a risk assessment. A checklist of safety factors was used, with a Yes or No descriptor allocated to each factor to determine the safety level. The study also suggests recommendations for designers in future planning to address existing failures. It was found that the safety level differs between the regional and local parks.
Best practices from WisDOT mega and ARRA projects--request for information : benchmarks and metrics.
DOT National Transportation Integrated Search
2012-03-01
Successful highway construction is measured by cost, time, safety, and quality. One further measure of success is the quantity of Requests for Information (RFIs) submitted and their impact. An RFI is a formal written procedure initiated by the contra...
Mitchell, L
1996-01-01
The processes of benchmarking, benchmark data comparative analysis, and study of best practices are distinctly different. The study of best practices is explained with an example based on the Arthur Andersen & Co. 1992 "Study of Best Practices in Ambulatory Surgery". The results of a national best practices study in ambulatory surgery were used to provide our quality improvement team with the goal of improving the turnaround time between surgical cases. The team used a seven-step quality improvement problem-solving process to improve the surgical turnaround time. The national benchmark for turnaround times between surgical cases in 1992 was 13.5 minutes. The initial turnaround time at St. Joseph's Medical Center was 19.9 minutes. After the team implemented solutions, the time was reduced to an average of 16.3 minutes, an 18% improvement. Cost-benefit analysis showed a potential enhanced revenue of approximately $300,000, or a potential savings of $10,119. Applying quality improvement principles to benchmarking, benchmarks, or best practices can improve process performance. Understanding which form of benchmarking the institution wishes to embark on will help focus a team and use appropriate resources. Communicating with professional organizations that have experience in benchmarking will save time and money and help achieve the desired results.
Schutte, K; Prinsen, M K; McNamee, P M; Roggeband, R
2009-08-01
Eye irritation is an important endpoint in the safety evaluation of consumer products and their ingredients. Several in vitro methods have been developed and are used by different industry sectors to assess eye irritation. One such in vitro method in use for some time already is the isolated chicken eye test (ICE). This investigation focuses on assessing the ICE as a method to determine the eye irritation potential of household cleaning products, both for product safety assurance prior to marketing and for classification and labeling decisions. The ICE involves a single application of test substances onto the cornea of isolated chicken eyes. Endpoints are corneal swelling, corneal opacity and fluorescein retention. The ICE results were compared to historic low-volume eye test (LVET) data in this study due to the availability of such in vivo data and the ability to correlate LVET results to human experience data on the outcome of accidental exposures to household cleaning products in general. The results of this study indicate that the ICE test is a useful in vitro method for evaluating the eye irritation/corrosion potential and establishing classification and labeling for household cleaning products. For new product formulations, it is best used as part of a weight-of-evidence approach and benchmarked against data from comparable formulations with known eye irritation/corrosion profiles and market experience.
Nations that develop water quality benchmark values have relied primarily on standard data and methods. However, experience with chemicals such as Se, ammonia, and tributyltin has shown that standard methods do not adequately address some taxa, modes of exposure and effects. Deve...
Benchmarking Academic Libraries: An Australian Case Study.
ERIC Educational Resources Information Center
Robertson, Margaret; Trahn, Isabella
1997-01-01
Discusses experiences and outcomes of benchmarking at the Queensland University of Technology (Australia) library that compared acquisitions, cataloging, document delivery, and research support services with those of the University of New South Wales. Highlights include results as a catalyst for change, and the use of common output and performance…
ERIC Educational Resources Information Center
Leppisaari, Irja; Vainio, Leena; Herrington, Jan; Im, Yeonwook
2011-01-01
More and more, social technologies and virtual work methods are facilitating new ways of crossing boundaries in professional development and international collaborations. This paper examines the peer development of higher education teachers through the experiences of the IVBM project (International Virtual Benchmarking, 2009-2010). The…
DOE Office of Scientific and Technical Information (OSTI.GOV)
Will, M.E.; Suter, G.W. II
1994-09-01
One of the initial stages in ecological risk assessments for hazardous waste sites is the screening of contaminants to determine which of them are worthy of further consideration as "contaminants of potential concern." This process is termed "contaminant screening." It is performed by comparing measured ambient concentrations of chemicals to benchmark concentrations. Currently, no standard benchmark concentrations exist for assessing contaminants in soil with respect to their toxicity to soil- and litter-dwelling invertebrates, including earthworms, other micro- and macroinvertebrates, or heterotrophic bacteria and fungi. This report presents a standard method for deriving benchmarks for this purpose, sets of data concerning effects of chemicals in soil on invertebrates and soil microbial processes, and benchmarks for chemicals potentially associated with United States Department of Energy sites. In addition, literature describing the experiments from which data were drawn for benchmark derivation is reviewed. Chemicals that are found in soil at concentrations exceeding both the benchmarks and the background concentration for the soil type should be considered contaminants of potential concern.
Test One to Test Many: A Unified Approach to Quantum Benchmarks
NASA Astrophysics Data System (ADS)
Bai, Ge; Chiribella, Giulio
2018-04-01
Quantum benchmarks are routinely used to validate the experimental demonstration of quantum information protocols. Many relevant protocols, however, involve an infinite set of input states, of which only a finite subset can be used to test the quality of the implementation. This is a problem, because the benchmark for the finitely many states used in the test can be higher than the original benchmark calculated for infinitely many states. This situation arises in the teleportation and storage of coherent states, for which the benchmark of 50% fidelity is commonly used in experiments, although finite sets of coherent states normally lead to higher benchmarks. Here, we show that the average fidelity over all coherent states can be indirectly probed with a single setup, requiring only two-mode squeezing, a 50-50 beam splitter, and homodyne detection. Our setup enables a rigorous experimental validation of quantum teleportation, storage, amplification, attenuation, and purification of noisy coherent states. More generally, we prove that every quantum benchmark can be tested by preparing a single entangled state and measuring a single observable.
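For context, the 50% benchmark cited above is the classical measure-and-prepare limit on average teleportation fidelity for coherent states; schematically, in conventional notation (not the authors' exact formulation):

% E is the channel under test; p(alpha) is a flat distribution over input
% amplitudes. Any classical (measure-and-prepare) channel is bounded by 1/2.
\bar{F} = \int \frac{d^{2}\alpha}{\pi}\, p(\alpha)\,
\langle \alpha |\, \mathcal{E}\!\left( |\alpha\rangle\langle\alpha| \right) |\alpha \rangle
\;\le\; \tfrac{1}{2}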
Benchmark gamma-ray skyshine experiment
DOE Office of Scientific and Technical Information (OSTI.GOV)
Nason, R.R.; Shultis, J.K.; Faw, R.E.
1982-01-01
A benchmark gamma-ray skyshine experiment is described in which ⁶⁰Co sources were either collimated into an upward 150-deg conical beam or shielded vertically by two different thicknesses of concrete. A NaI(Tl) spectrometer and a high pressure ion chamber were used to measure, respectively, the energy spectrum and the 4π exposure rate of the air-reflected gamma photons up to 700 m from the source. Analyses of the data and comparison to DOT discrete ordinates calculations are presented.
Material Activation Benchmark Experiments at the NuMI Hadron Absorber Hall in Fermilab
NASA Astrophysics Data System (ADS)
Matsumura, H.; Matsuda, N.; Kasugai, Y.; Toyoda, A.; Yashima, H.; Sekimoto, S.; Iwase, H.; Oishi, K.; Sakamoto, Y.; Nakashima, H.; Leveling, A.; Boehnlein, D.; Lauten, G.; Mokhov, N.; Vaziri, K.
2014-06-01
In our previous study, double and mirror symmetric activation peaks found for Al and Au arranged spatially on the back of the hadron absorber of the NuMI beamline in Fermilab were considerably higher than those expected purely from muon-induced reactions. From material activation benchmark experiments, we conclude that this activation is due to hadrons with energy greater than 3 GeV that had passed downstream through small gaps in the hadron absorber.
Safety culture and care: a program to prevent surgical errors.
Hemingway, Maureen White; O'Malley, Catherine; Silvestri, Sandra
2015-04-01
Surgical errors are under scrutiny in health care as part of ensuring a culture of safety in which patients receive quality care. Hospitals use safety measures to compare their performance against industry benchmarks. To understand patient safety issues, health care providers must have processes in place to analyze and evaluate the quality of the care they provide. At one facility, efforts made to improve its quality and safety led to the development of a robust safety program with resources devoted to enhancing the culture of safety in the Perioperative Services department. Improvement initiatives included changing processes for safety reporting and performance improvement plans, adding resources and nurse roles, and creating communication strategies around adverse safety events and how to improve care. One key outcome included a 54% increase in the percentage of personnel who indicated in a survey that they would speak up if they saw something negatively affecting patient care. Copyright © 2015 AORN, Inc. Published by Elsevier Inc. All rights reserved.
Issues in Benchmarking Human Reliability Analysis Methods: A Literature Review
DOE Office of Scientific and Technical Information (OSTI.GOV)
Ronald L. Boring; Stacey M. L. Hendrickson; John A. Forester
There is a diversity of human reliability analysis (HRA) methods available for use in assessing human performance within probabilistic risk assessments (PRA). Due to the significant differences in the methods, including the scope, approach, and underlying models, there is a need for an empirical comparison investigating the validity and reliability of the methods. To accomplish this empirical comparison, a benchmarking study comparing and evaluating HRA methods in assessing operator performance in simulator experiments is currently underway. In order to account for as many effects as possible in the construction of this benchmarking study, a literature review was conducted, reviewing past benchmarking studies in the areas of psychology and risk assessment. A number of lessons learned through these studies are presented in order to aid in the design of future HRA benchmarking endeavors.
XWeB: The XML Warehouse Benchmark
NASA Astrophysics Data System (ADS)
Mahboubi, Hadj; Darmont, Jérôme
With the emergence of XML as a standard for representing business data, new decision support applications are being developed. These XML data warehouses aim at supporting On-Line Analytical Processing (OLAP) operations that manipulate irregular XML data. To ensure feasibility of these new tools, important performance issues must be addressed. Performance is customarily assessed with the help of benchmarks. However, decision support benchmarks do not currently support XML features. In this paper, we introduce the XML Warehouse Benchmark (XWeB), which aims at filling this gap. XWeB derives from the relational decision support benchmark TPC-H. It is mainly composed of a test data warehouse that is based on a unified reference model for XML warehouses and that features XML-specific structures, and its associated XQuery decision support workload. XWeB's usage is illustrated by experiments on several XML database management systems.
Atta Mills, Ebenezer Fiifi Emire; Yan, Dawen; Yu, Bo; Wei, Xinyuan
2016-01-01
We propose a consolidated risk measure based on variance and the safety-first principle in a mean-risk portfolio optimization framework. The safety-first principle as applied to financial portfolio selection strategy is modified and improved. Our proposed models are subjected to norm regularization to seek near-optimal stable and sparse portfolios. We compare the cumulative wealth of our preferred proposed model to a benchmark, the S&P 500 index, for the same period. Our proposed portfolio strategies have better out-of-sample performance than the selected alternative portfolio rules in the literature and control the downside risk of the portfolio returns.
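One plausible reading of the consolidated model (our reconstruction under stated assumptions, not the authors' exact formulation) combines portfolio variance, a safety-first shortfall constraint with disaster level L and tolerance alpha, and an l1 penalty that promotes sparse, stable weights:

% w: portfolio weights, Sigma: return covariance, r: random returns,
% lambda: regularization strength. The shortfall constraint is Roy's
% safety-first principle in its probabilistic form.
\min_{\mathbf{w}}\; \mathbf{w}^{\top}\Sigma\,\mathbf{w}
  + \lambda \lVert \mathbf{w} \rVert_{1}
\quad \text{s.t.} \quad
\Pr\!\left( \mathbf{w}^{\top}\mathbf{r} \le L \right) \le \alpha,
\qquad \mathbf{w}^{\top}\mathbf{1} = 1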
Ferrier-Rembert, Audrey; Drillien, Robert; Meignier, Bernard; Garin, Daniel; Crance, Jean-Marc
2007-11-28
It is now difficult to manufacture the first-generation smallpox vaccine, as the process could not comply with current safety and manufacturing regulations. In this study, a candidate non-clonal second-generation smallpox vaccine developed by Sanofi-Pasteur from the Lister strain has been assessed using a cowpox virus challenge in mice. We have observed similar safety, immunogenicity and protection (from disease and death) after a short or long interval following vaccination, as well as similar virus clearance post-challenge, with the second-generation smallpox vaccine candidate as compared to the traditional vaccine used as a benchmark.
Lecture Notes on Criticality Safety Validation Using MCNP & Whisper
DOE Office of Scientific and Technical Information (OSTI.GOV)
Brown, Forrest B.; Rising, Michael Evan; Alwin, Jennifer Louise
Training classes for nuclear criticality safety; MCNP documentation. The need for, and problems surrounding, validation of computer codes and data are considered first. Then some background for MCNP & Whisper is given--best practices for Monte Carlo criticality calculations, neutron spectra, S(α,β) thermal neutron scattering data, nuclear data sensitivities, covariance data, and correlation coefficients. Whisper is computational software designed to assist the nuclear criticality safety analyst with validation studies with the Monte Carlo radiation transport package MCNP. Whisper's methodology (benchmark selection - c_k values and weights; extreme value theory - bias and bias uncertainty; margin of subcriticality (MOS) for nuclear data uncertainty - GLLS) and usage are discussed.
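The c_k values mentioned above measure nuclear-data-induced similarity between an application a and a benchmark b; the standard sensitivity/covariance definition (as used in TSUNAMI-style analyses) is:

% S_a, S_b: k_eff sensitivity vectors of application and benchmark with
% respect to the nuclear data; C: cross-section covariance matrix.
% c_k = 1 means the same data uncertainties drive both systems.
c_k(a,b) = \frac{ S_a^{\top} C\, S_b }
                { \sqrt{ S_a^{\top} C\, S_a }\;\sqrt{ S_b^{\top} C\, S_b } }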
Statistical Analysis of NAS Parallel Benchmarks and LINPACK Results
NASA Technical Reports Server (NTRS)
Meuer, Hans-Werner; Simon, Horst D.; Strohmeier, Erich; Lasinski, T. A. (Technical Monitor)
1994-01-01
In the last three years extensive performance data have been reported for parallel machines both based on the NAS Parallel Benchmarks, and on LINPACK. In this study we have used the reported benchmark results and performed a number of statistical experiments using factor, cluster, and regression analyses. In addition to the performance results of LINPACK and the eight NAS parallel benchmarks, we have also included peak performance of the machine, and the LINPACK n and n_1/2 values. Some of the results and observations can be summarized as follows: 1) All benchmarks are strongly correlated with peak performance. 2) LINPACK and EP each have a unique signature. 3) The remaining NPB can be grouped into three groups as follows: (CG and IS), (LU and SP), and (MG, FT, and BT). Hence three (or four with EP) benchmarks are sufficient to characterize the overall NPB performance. Our poster presentation will follow a standard poster format and will present the data of our statistical analysis in detail.
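The grouping reported above can be reproduced in spirit with a correlation-plus-clustering pass over a machines-by-benchmarks results table. The sketch below uses synthetic placeholder numbers (the paper's data tables are not reproduced here) and assumes NumPy and SciPy.

import numpy as np
from scipy.cluster.hierarchy import fcluster, linkage

rng = np.random.default_rng(0)
benchmarks = ["CG", "IS", "LU", "SP", "MG", "FT", "BT", "EP", "LINPACK"]
# Rows = machines, columns = benchmarks (synthetic performance figures).
results = rng.lognormal(mean=5.0, sigma=0.5, size=(12, len(benchmarks)))

corr = np.corrcoef(results.T)                    # benchmark-by-benchmark correlation
dist = 1.0 - corr                                # turn correlation into a distance
condensed = dist[np.triu_indices(len(benchmarks), k=1)]
tree = linkage(condensed, method="average")      # hierarchical (average-linkage) clustering
groups = fcluster(tree, t=3, criterion="maxclust")

for name, g in zip(benchmarks, groups):
    print(f"{name}: cluster {g}")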
Verification of MCNP6.2 for Nuclear Criticality Safety Applications
DOE Office of Scientific and Technical Information (OSTI.GOV)
Brown, Forrest B.; Rising, Michael Evan; Alwin, Jennifer Louise
2017-05-10
Several suites of verification/validation benchmark problems were run in early 2017 to verify that the new production release of MCNP6.2 performs correctly for nuclear criticality safety (NCS) applications. MCNP6.2 results for several NCS validation suites were compared to the results from MCNP6.1 [1] and MCNP6.1.1 [2]. MCNP6.1 is the production version of MCNP® released in 2013, and MCNP6.1.1 is the update released in 2014. MCNP6.2 includes all of the standard features for NCS calculations that have been available for the past 15 years, along with new features for sensitivity-uncertainty based methods for NCS validation [3]. Results from the benchmark suites were compared with results from previous verification testing [4-8]. Criticality safety analysts should consider testing MCNP6.2 on their particular problems and validation suites. No further development of MCNP5 is planned. MCNP6.1 is now 4 years old, and MCNP6.1.1 is now 3 years old. In general, released versions of MCNP are supported only for about 5 years, due to resource limitations. All future MCNP improvements, bug fixes, user support, and new capabilities are targeted only to MCNP6.2 and beyond.
Benchmark Evaluation of the HTR-PROTEUS Absorber Rod Worths (Core 4)
DOE Office of Scientific and Technical Information (OSTI.GOV)
John D. Bess; Leland M. Montierth
2014-06-01
PROTEUS was a zero-power research reactor at the Paul Scherrer Institute (PSI) in Switzerland. The critical assembly was constructed from a large graphite annulus surrounding a central cylindrical cavity. Various experimental programs were investigated in PROTEUS; during the years 1992 through 1996, it was configured as a pebble-bed reactor and designated HTR-PROTEUS. Various critical configurations were assembled, each accompanied by an assortment of reactor physics experiments including differential and integral absorber rod measurements, kinetics, reaction rate distributions, water ingress effects, and small sample reactivity effects [1]. Four benchmark reports were previously prepared and included in the March 2013 edition of the International Handbook of Evaluated Reactor Physics Benchmark Experiments (IRPhEP Handbook) [2] evaluating eleven critical configurations. A summary of that effort was previously provided [3], and an analysis of absorber rod worth measurements for Cores 9 and 10 was performed prior to this analysis and included in PROTEUS-GCR-EXP-004 [4]. In the current benchmark effort, absorber rod worths measured for Core Configuration 4, which was the only core with a randomly packed pebble loading, have been evaluated for inclusion as a revision to the HTR-PROTEUS benchmark report PROTEUS-GCR-EXP-002.
CSHM: Web-based safety and health monitoring system for construction management.
Cheung, Sai On; Cheung, Kevin K W; Suen, Henry C H
2004-01-01
This paper describes a web-based system for monitoring and assessing construction safety and health performance, entitled the Construction Safety and Health Monitoring (CSHM) system. The design and development of CSHM integrates internet and database systems, with the intent to create a totally automated safety and health management tool. A list of safety and health performance parameters was devised for the management of safety and health in construction. A conceptual framework of the four key components of CSHM is presented: (a) Web-based Interface (templates); (b) Knowledge Base; (c) Output Data; and (d) Benchmark Group. The combined effect of these components results in a system that enables speedy performance assessment of safety and health activities on construction sites. With the CSHM's built-in functions, important management decisions can theoretically be made and corrective actions can be taken before potential hazards turn into fatal or injurious occupational accidents. As such, the CSHM system will accelerate the monitoring and assessment of safety and health management tasks.
Development and initial validation of an Aviation Safety Climate Scale.
Evans, Bronwyn; Glendon, A Ian; Creed, Peter A
2007-01-01
A need was identified for a consistent set of safety climate factors to provide a basis for aviation industry benchmarking. Six broad safety climate themes were identified from the literature and consultations with industry safety experts. Items representing each of the themes were prepared and administered to 940 Australian commercial pilots. Data from half of the sample (N=468) were used in an exploratory factor analysis that produced a 3-factor model of Management commitment and communication, Safety training and equipment, and Maintenance. A confirmatory factor analysis on the remaining half of the sample showed the 3-factor model to be an adequate fit to the data. The results of this study have produced a scale of safety climate for aviation that is both reliable and valid. This study developed a tool to assess the level of perceived safety climate, specifically of pilots, but may also, with minor modifications, be used to assess other groups' perceptions of safety climate.
Benchmarking and Threshold Standards in Higher Education. Staff and Educational Development Series.
ERIC Educational Resources Information Center
Smith, Helen, Ed.; Armstrong, Michael, Ed.; Brown, Sally, Ed.
This book explores the issues involved in developing standards in higher education, examining the practical issues involved in benchmarking and offering a critical analysis of the problems associated with this developmental tool. The book focuses primarily on experience in the United Kingdom (UK), but looks also at international activity in this…
Internal Quality Assurance Benchmarking. ENQA Workshop Report 20
ERIC Educational Resources Information Center
Blackstock, Douglas; Burquel, Nadine; Comet, Nuria; Kajaste, Matti; dos Santos, Sergio Machado; Marcos, Sandra; Moser, Marion; Ponds, Henri; Scheuthle, Harald; Sixto, Luis Carlos Velon
2012-01-01
The Internal Quality Assurance group of ENQA (IQA Group) has been organising a yearly seminar for its members since 2007. The main objective is to share experiences concerning the internal quality assurance of work processes in the participating agencies. The overarching theme of the 2011 seminar was how to use benchmarking as a tool for…
The SCALE Verified, Archived Library of Inputs and Data - VALID
DOE Office of Scientific and Technical Information (OSTI.GOV)
Marshall, William BJ J; Rearden, Bradley T
The Verified, Archived Library of Inputs and Data (VALID) at ORNL contains high quality, independently reviewed models and results that improve confidence in analysis. VALID is developed and maintained according to a procedure of the SCALE quality assurance (QA) plan. This paper reviews the origins of the procedure and its intended purpose, the philosophy of the procedure, some highlights of its implementation, and the future of the procedure and associated VALID library. The original focus of the procedure was the generation of high-quality models that could be archived at ORNL and applied to many studies. The review process associated with model generation minimized the chances of errors in these archived models. Subsequently, the scope of the library and procedure was expanded to provide high quality, reviewed sensitivity data files for deployment through the International Handbook of Evaluated Criticality Safety Benchmark Experiments (IHECSBE). Sensitivity data files for approximately 400 such models are currently available. The VALID procedure and library continue fulfilling these multiple roles. The VALID procedure is based on the quality assurance principles of ISO 9001 and nuclear safety analysis. Some of these key concepts include: independent generation and review of information, generation and review by qualified individuals, use of appropriate references for design data and documentation, and retrievability of the models, results, and documentation associated with entries in the library. Some highlights of the detailed procedure are discussed to provide background on its implementation and to indicate limitations of data extracted from VALID for use by the broader community. Specifically, external users of data generated within VALID must take responsibility for ensuring that the files are used within the QA framework of their organization and that use is appropriate. The future plans for the VALID library include expansion to include additional experiments from the IHECSBE, to include experiments from areas beyond criticality safety, such as reactor physics and shielding, and to include application models. In the future, external SCALE users may also obtain qualification under the VALID procedure and be involved in expanding the library. The VALID library provides a pathway for the criticality safety community to leverage modeling and analysis expertise at ORNL.
Dismantlement of the TSF-SNAP Reactor Assembly
DOE Office of Scientific and Technical Information (OSTI.GOV)
Peretz, Fred J
2009-01-01
This paper describes the dismantlement of the Tower Shielding Facility (TSF)-Systems for Nuclear Auxiliary Power (SNAP) reactor, a SNAP-10A reactor used to validate radiation source terms and shield performance models at Oak Ridge National Laboratory (ORNL) from 1967 through 1973. After shutdown, it was placed in storage at the Y-12 National Security Complex (Y-12), eventually falling under the auspices of the Highly Enriched Uranium (HEU) Disposition Program. To facilitate downblending of the HEU present in the fuel elements, the TSF-SNAP was moved to ORNL on June 24, 2006. The reactor assembly was removed from its packaging, inspected, and the sodium-potassium (NaK) coolant was drained. A superheated steam process was used to chemically react the residual NaK inside the reactor assembly. The heat exchanger assembly was removed from the top of the reactor vessel, and the criticality safety sleeve was exchanged for a new safety sleeve that allowed for the removal of the vessel lid. A chain-mounted tubing cutter was used to separate the lid from the vessel, and the 36 fuel elements were removed and packaged in four U.S. Department of Transportation 2R/6M containers. The fuel elements were returned to Y-12 on July 13, 2006. The return of the fuel elements and disposal of all other reactor materials accomplished the formal objectives of the dismantlement project. In addition, a project model was established for the handling of a fully fueled liquid-metal-cooled reactor assembly. Current criticality safety codes have been benchmarked against experiments performed by Atomics International in the 1950s and 1960s. Execution of this project provides valuable experience applicable to future projects addressing space and liquid-metal-cooled reactors.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Scaglione, John M; Mueller, Don; Wagner, John C
2011-01-01
One of the most significant remaining challenges associated with expanded implementation of burnup credit in the United States is the validation of depletion and criticality calculations used in the safety evaluation - in particular, the availability and use of applicable measured data to support validation, especially for fission products. Applicants and regulatory reviewers have been constrained by both a scarcity of data and a lack of clear technical basis or approach for use of the data. U.S. Nuclear Regulatory Commission (NRC) staff have noted that the rationale for restricting their Interim Staff Guidance on burnup credit (ISG-8) to actinide-only is based largely on the lack of clear, definitive experiments that can be used to estimate the bias and uncertainty for computational analyses associated with using burnup credit. To address the issue of validation, the NRC initiated a project with the Oak Ridge National Laboratory to (1) develop and establish a technically sound validation approach (both depletion and criticality) for commercial spent nuclear fuel (SNF) criticality safety evaluations based on best-available data and methods and (2) apply the approach for representative SNF storage and transport configurations/conditions to demonstrate its usage and applicability, as well as to provide reference bias results. The purpose of this paper is to describe the criticality (k_eff) validation approach, and resulting observations and recommendations. Validation of the isotopic composition (depletion) calculations is addressed in a companion paper at this conference. For criticality validation, the approach is to utilize (1) available laboratory critical experiment (LCE) data from the International Handbook of Evaluated Criticality Safety Benchmark Experiments and the French Haut Taux de Combustion (HTC) program to support validation of the principal actinides and (2) calculated sensitivities, nuclear data uncertainties, and the limited available fission product LCE data to predict and verify individual biases for relevant minor actinides and fission products. This paper (1) provides a detailed description of the approach and its technical bases, (2) describes the application of the approach for representative pressurized water reactor and boiling water reactor safety analysis models to demonstrate its usage and applicability, (3) provides reference bias results based on the prerelease SCALE 6.1 code package and ENDF/B-VII nuclear cross-section data, and (4) provides recommendations for application of the results and methods to other code and data packages.
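For orientation, the bias such a validation extracts is conventionally defined from a suite of n critical benchmarks, for which the expected k_eff is unity (a summary of standard practice, not a formula quoted from the paper); the bias uncertainty is then estimated from the spread of the individual values about their mean:

% beta < 0 indicates the code/data combination underpredicts k_eff on
% average; trending analyses regress k_calc,i against system parameters.
\beta = \bar{k}_{\mathrm{calc}} - 1,
\qquad
\bar{k}_{\mathrm{calc}} = \frac{1}{n}\sum_{i=1}^{n} k_{\mathrm{calc},i}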
Benchmarking initiatives in the water industry.
Parena, R; Smeets, E
2001-01-01
Customer satisfaction and service care continually push professionals in the water industry to improve their performance, lowering costs while raising service levels. Process Benchmarking is generally recognised as a systematic mechanism of comparing one's own utility with other utilities or businesses with the intent of self-improvement by adopting structures or methods used elsewhere. The IWA Task Force on Benchmarking, operating inside the Statistics and Economics Committee, has been committed to developing a generally accepted concept of Process Benchmarking to support water decision-makers in addressing issues of efficiency. As a first step, the Task Force disseminated among the Committee members a questionnaire soliciting information on the types, maturity and main concepts of benchmarking adopted in the countries represented. A comparison of the guidelines adopted in The Netherlands and Scandinavia has recently led the Task Force to draft a methodology for worldwide process benchmarking in the water industry. The paper provides an overview of the most interesting benchmarking experiences in the water sector and describes in detail both the final results of the survey and the methodology, which focuses on identifying possible improvement areas.
A Report on Girls in San Francisco: Benchmarks for the Future.
ERIC Educational Resources Information Center
Lehman, Ann; Sacco, Carol
This study collected information on girls in San Francisco, California in the areas of demographics, economics, education, health, safety and violence, and criminal justice. Data came from local, state, and national sources (e.g., the U.S. Census Bureau; the California Bureau of Justice and the Criminal Statistics Center; the California Department…
Federal Register 2010, 2011, 2012, 2013, 2014
2010-04-16
... participants and spectators from the dangers associated with the pyrotechnics. Unauthorized persons or vessels... by the pyrotechnics used in these fireworks displays, it would be contrary to the public interest to... delay in the effective date of this rule would expose mariners to the dangers posed by the pyrotechnics...
Winning Strategy: Set Benchmarks of Early Success to Build Momentum for the Long Term
ERIC Educational Resources Information Center
Spiro, Jody
2012-01-01
Change is a highly personal experience. Everyone participating in the effort has different reactions to change, different concerns, and different motivations for being involved. The smart change leader sets benchmarks along the way so there are guideposts and pause points instead of an endless change process. "Early wins"--a term used to describe…
Hospital safety climate surveys: measurement issues.
Jackson, Jeanette; Sarac, Cakil; Flin, Rhona
2010-12-01
Organizational safety culture relates to behavioural norms in the workplace and is usually assessed by safety climate surveys. These can be a diagnostic indicator on the state of safety in a hospital. This review examines recent studies using staff surveys of hospital safety climate, focussing on measurement issues. Four questionnaires (hospital survey on patient safety culture, safety attitudes questionnaire, patient safety climate in healthcare organizations, hospital safety climate scale), with acceptable psychometric properties, are now applied across countries and clinical settings. Comparisons for benchmarking must be made with caution in case of questionnaire modifications. Increasing attention is being paid to the unit and hospital level wherein distinct cultures may be located, as well as to associated measurement and study design issues. Predictive validity of safety climate is tested against safety behaviours/outcomes, with some relationships reported, although effects may be specific to professional groups/units. Few studies test the role of intervening variables that could influence the effect of climate on outcomes. Hospital climate studies are becoming a key component of healthcare safety management systems. Large datasets have established more reliable instruments that allow a more focussed investigation of the role of culture in the improvement and maintenance of staff's safety perceptions within units, as well as within hospitals.
Benchmarking the Physical Therapist Academic Environment to Understand the Student Experience.
Shields, Richard K; Dudley-Javoroski, Shauna; Sass, Kelly J; Becker, Marcie
2018-04-19
Identifying excellence in physical therapist academic environments is complicated by the lack of nationally available benchmarking data. The objective of this study was to compare a physical therapist academic environment to another health care profession (medicine) academic environment using the Association of American Medical Colleges Graduation Questionnaire (GQ) survey. The design consisted of longitudinal benchmarking. Between 2009 and 2017, the GQ was administered to graduates of a physical therapist education program (Department of Physical Therapy and Rehabilitation Science, Carver College of Medicine, The University of Iowa [PTRS]). Their ratings of the educational environment were compared to nationwide data for a peer health care profession (medicine) educational environment. Benchmarking to the GQ capitalizes on a large, psychometrically validated database of academic domains that may be broadly applicable to health care education. The GQ captures critical information about the student experience (eg, faculty professionalism, burnout, student mistreatment) that can be used to characterize the educational environment. This study hypothesized that the ratings provided by 9 consecutive cohorts of PTRS students (n = 316) would reveal educational environment differences from academic medical education. PTRS students reported significantly higher ratings of the educational emotional climate and student-faculty interactions than medical students. PTRS and medical students did not differ on ratings of empathy and tolerance for ambiguity. PTRS students reported significantly lower ratings of burnout than medical students. PTRS students descriptively reported observing greater faculty professionalism and experiencing less mistreatment than medical students. The generalizability of these findings to other physical therapist education environments has not been established. Selected elements of the GQ survey revealed differences in the educational environments experienced by physical therapist students and medical students. All physical therapist academic programs should adopt a universal method to benchmark the educational environment to understand the student experience.
Quality management benchmarking: FDA compliance in pharmaceutical industry.
Jochem, Roland; Landgraf, Katja
2010-01-01
By analyzing and comparing industry and business best practice, processes can be optimized and become more successful, mainly because efficiency and competitiveness increase. This paper focuses on some examples. Case studies are used to show knowledge exchange in the pharmaceutical industry. Best practice solutions were identified in two companies using a benchmarking method and a five-stage model. Even in companies with large administrative structures, there is much potential for improving business process organization. This project makes it possible for participants to fully understand their business processes. The benchmarking method gives an opportunity to critically analyze value chains (a string of companies or players working together to satisfy market demands for a special product). Knowledge exchange is valuable for companies that aim to be global players. Benchmarking supports information exchange and improves competitive ability between different enterprises. Findings suggest that the five-stage model improves efficiency and effectiveness. Furthermore, the model increases the chances of reaching targets. The method gives confidence to partners without prior benchmarking experience. The study identifies new quality management procedures. Process management, and especially benchmarking, is shown to support improvements in the pharmaceutical industry.
Gude, Wouter T; van Engen-Verheul, Mariëtte M; van der Veer, Sabine N; de Keizer, Nicolette F; Peek, Niels
2017-04-01
To identify factors that influence the intentions of health professionals to improve their practice when confronted with clinical performance feedback, which is an essential first step in the audit and feedback mechanism. We conducted a theory-driven laboratory experiment with 41 individual professionals, and a field study in 18 centres in the context of a cluster-randomised trial of electronic audit and feedback in cardiac rehabilitation. Feedback reports were provided through a web-based application, and included performance scores and benchmark comparisons (high, intermediate or low performance) for a set of process and outcome indicators. From each report participants selected indicators for improvement into their action plan. Our unit of observation was an indicator presented in a feedback report (selected yes/no); we considered selecting an indicator to reflect an intention to improve. We analysed 767 observations in the laboratory experiment and 614 in the field study. Each 10% decrease in performance score increased the odds of an indicator being selected by 54% (OR 1.54; 95% CI 1.29 to 1.83) in the laboratory experiment, and 25% (OR 1.25; 95% CI 1.13 to 1.39) in the field study. Also, performance being benchmarked as low or intermediate increased these odds in laboratory settings. Still, participants ignored the benchmarks in 34% (laboratory experiment) and 48% (field study) of their selections. When confronted with clinical performance feedback, performance scores and benchmark comparisons influenced health professionals' intentions to improve practice. However, there was substantial variation in these intentions, because professionals disagreed with benchmarks, deemed improvement unfeasible or did not consider the indicator an essential aspect of care quality. These phenomena impede intentions to improve practice, and are thus likely to dilute the effects of audit and feedback interventions. NTR3251, pre-results. Published by the BMJ Publishing Group Limited. For permission to use (where not already granted under a licence) please go to http://www.bmj.com/company/products-services/rights-and-licensing/.
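A minimal sketch, on synthetic data, of how an odds ratio per 10% score decrease could be estimated for a binary indicator-selection outcome; the variable names and the effect size are illustrative, not taken from the study.

```python
# Logistic regression of indicator selection (0/1) on performance score (%),
# reported as an odds ratio per 10-percentage-point decrease. Synthetic data.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
score = rng.uniform(0, 100, 600)              # performance score (%)
p = 1 / (1 + np.exp(-(2.0 - 0.04 * score)))   # lower scores -> more selections
selected = rng.binomial(1, p)

model = LogisticRegression(C=1e6)             # large C: effectively unpenalized
model.fit(score.reshape(-1, 1), selected)
or_per_10pct_decrease = np.exp(-10 * model.coef_[0][0])
print(f"OR per 10% decrease in score: {or_per_10pct_decrease:.2f}")
```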
DOE Office of Scientific and Technical Information (OSTI.GOV)
Lacey, D.; Bacon, M.L.
The UK fully supports the objective of the Joint Convention on the Safety of Spent Fuel Management and on the Safety of Radioactive Waste Management to achieve and maintain a high level of safety worldwide in spent fuel and radioactive waste management, through the enhancement of national measures and international co-operation, including where appropriate, safety-related co-operation. The UK's Health and Safety Executive, through its Nuclear Safety Directorate (NSD), has been committed to the Convention since the initial negotiations to set up the Convention and provided the president of the first review meeting in 2003. It would be wrong of any nation to believe that they have all the best solutions to managing spent fuel and radioactive waste. The process of compiling reports for the Convention review meetings provides a structured process through which every contracting party can review its provisions against a common set of standards and identify for itself possible areas of improvements. The sharing of reports and the asking and answering of questions then provides a further opportunity for both sharing of experience and learning. The UK was encouraged by the spirit of constructive discussion rather than negative criticism that pervaded the first review meeting that provided an incentive for all to learn and improve. While, as could be expected of the first meeting of such a group, not everything worked as well as could be hoped for, all parties seemed committed to learn from mistakes and to make the process more effective. Lessons were learned from the Nuclear Safety Convention on the process of submitting reports electronically and the UK actively supported aims to use IAEA requirements documents as an additional focus for reports. This should, we hope, provide for even better benchmarking of achievements and provide feedback for improvements of the IAEA requirements where appropriate. In summary, the UK finds the Joint Convention process to be a very positive one that can only improve the worldwide standards of safety in spent fuel and radioactive waste management. (authors)
DOE Office of Scientific and Technical Information (OSTI.GOV)
Miller, Thomas Martin; Celik, Cihangir; McMahan, Kimberly L.
This benchmark experiment was conducted as a joint venture between the US Department of Energy (DOE) and the French Commissariat à l'Energie Atomique (CEA). Staff at the Oak Ridge National Laboratory (ORNL) in the US and the Centre de Valduc in France planned this experiment. The experiment was conducted on October 11, 2010 in the SILENE critical assembly facility at Valduc. Several other organizations contributed to this experiment and the subsequent evaluation, including CEA Saclay, Lawrence Livermore National Laboratory (LLNL), the Y-12 National Security Complex (NSC), Babcock International Group in the United Kingdom, and Los Alamos National Laboratory (LANL). The goal of this experiment was to measure neutron activation and thermoluminescent dosimeter (TLD) doses from a source similar to a fissile solution critical excursion. The resulting benchmark can be used for validation of computer codes and nuclear data libraries as required when performing analysis of criticality accident alarm systems (CAASs). A secondary goal of this experiment was to qualitatively test performance of two CAAS detectors similar to those currently and formerly in use in some US DOE facilities. The detectors tested were the CIDAS MkX and the Rocky Flats NCD-91. These detectors were being evaluated to determine whether they would alarm, so they were not expected to generate benchmark quality data.
A Seafloor Benchmark for 3-dimensional Geodesy
NASA Astrophysics Data System (ADS)
Chadwell, C. D.; Webb, S. C.; Nooner, S. L.
2014-12-01
We have developed an inexpensive, permanent seafloor benchmark to increase the longevity of seafloor geodetic measurements. The benchmark provides a physical tie to the sea floor lasting for decades (perhaps longer) on which geodetic sensors can be repeatedly placed and removed with millimeter resolution. Global coordinates estimated with seafloor geodetic techniques will remain attached to the benchmark allowing for the interchange of sensors as they fail or become obsolete, or for the sensors to be removed and used elsewhere, all the while maintaining a coherent series of positions referenced to the benchmark. The benchmark has been designed to free fall from the sea surface with transponders attached. The transponder can be recalled via an acoustic command sent from the surface to release from the benchmark and freely float to the sea surface for recovery. The duration of the sensor attachment to the benchmark will last from a few days to a few years depending on the specific needs of the experiment. The recovered sensors are then available to be reused at other locations, or again at the same site in the future. Three pins on the sensor frame mate precisely and unambiguously with three grooves on the benchmark. To reoccupy a benchmark a Remotely Operated Vehicle (ROV) uses its manipulator arm to place the sensor pins into the benchmark grooves. In June 2014 we deployed four benchmarks offshore central Oregon. We used the ROV Jason to successfully demonstrate the removal and replacement of packages onto the benchmark. We will show the benchmark design and its operational capabilities. Presently models of megathrust slip within the Cascadia Subduction Zone (CSZ) are mostly constrained by the sub-aerial GPS vectors from the Plate Boundary Observatory, a part of Earthscope. More long-lived seafloor geodetic measures are needed to better understand the earthquake and tsunami risk associated with a large rupture of the thrust fault within the Cascadia subduction zone. Using a ROV to place and remove sensors on the benchmarks will significantly reduce the number of sensors required by the community to monitor offshore strain in subduction zones.
Toward multimodal signal detection of adverse drug reactions.
Harpaz, Rave; DuMouchel, William; Schuemie, Martijn; Bodenreider, Olivier; Friedman, Carol; Horvitz, Eric; Ripple, Anna; Sorbello, Alfred; White, Ryen W; Winnenburg, Rainer; Shah, Nigam H
2017-12-01
Improving mechanisms to detect adverse drug reactions (ADRs) is key to strengthening post-marketing drug safety surveillance. Signal detection is presently unimodal, relying on a single information source. Multimodal signal detection is based on jointly analyzing multiple information sources. Building on and expanding the work done in prior studies, the aim of the article is to further research on multimodal signal detection, explore its potential benefits, and propose methods for its construction and evaluation. Four data sources are investigated: FDA's adverse event reporting system, insurance claims, the MEDLINE citation database, and the logs of major Web search engines. Published methods are used to generate and combine signals from each data source. Two distinct reference benchmarks, corresponding to well-established and recently labeled ADRs respectively, are used to evaluate the performance of multimodal signal detection in terms of area under the ROC curve (AUC) and lead-time-to-detection, with the latter relative to labeling revision dates. Limited to our reference benchmarks, multimodal signal detection provides AUC improvements ranging from 0.04 to 0.09 based on a widely used evaluation benchmark, and a comparative added lead time of 7-22 months relative to labeling revision dates from a time-indexed benchmark. The results support the notion that utilizing and jointly analyzing multiple data sources may lead to improved signal detection. Given certain data and benchmark limitations, the early stage of development, and the complexity of ADRs, it is currently not possible to make definitive statements about the ultimate utility of the concept. Continued development of multimodal signal detection requires a deeper understanding of the data sources used, additional benchmarks, and further research on methods to generate and synthesize signals. Copyright © 2017 Elsevier Inc. All rights reserved.
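As a generic illustration of the multimodal idea (not the specific published combination methods), signals from several sources can be standardized and averaged, then evaluated by AUC against a reference benchmark of known ADRs. Sources and labels below are synthetic stand-ins.

```python
# Combine per-source signal scores into one multimodal score and compare AUCs.
import numpy as np
from sklearn.metrics import roc_auc_score

rng = np.random.default_rng(1)
n_pairs = 500
truth = rng.binomial(1, 0.2, n_pairs)  # 1 = drug-event pair is a known ADR

# Per-source scores (stand-ins for reports, claims, literature, search logs)
sources = [truth * rng.normal(0.8, 1.0, n_pairs) + rng.normal(0, 1, n_pairs)
           for _ in range(4)]

def zscore(x):
    return (x - x.mean()) / x.std()

combined = np.mean([zscore(s) for s in sources], axis=0)

for i, s in enumerate(sources):
    print(f"source {i}: AUC = {roc_auc_score(truth, s):.3f}")
print(f"combined: AUC = {roc_auc_score(truth, combined):.3f}")
```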
Validation of the Child HCAHPS survey to measure pediatric inpatient experience of care in Flanders.
Bruyneel, Luk; Coeckelberghs, Ellen; Buyse, Gunnar; Casteels, Kristina; Lommers, Barbara; Vandersmissen, Jo; Van Eldere, Johan; Van Geet, Chris; Vanhaecht, Kris
2017-07-01
The recently developed Child HCAHPS provides a standard to measure US hospitals' performance on pediatric inpatient experiences of care. We field-tested Child HCAHPS in Belgium to instigate international comparison. In the development stage, forward/backward translation was conducted and patients assessed content validity index as excellent. The draft Flemish Child HCAHPS included 63 items: 38 items for five topics hypothesized to be similar to those proposed in the US (communication with parent, communication with child, attention to safety and comfort, hospital environment, and global rating), 10 screeners, a 14-item demographic and descriptive section, and one open-ended item. A 6-week pilot test was subsequently performed in three pediatric wards (general ward, hematology and oncology ward, infant and toddler ward) at a JCI-accredited university hospital. An overall response rate of 90.99% (303/333) was achieved and was consistent across wards. Confirmatory factor analysis largely confirmed the configuration of the proposed composites. Composite and single-item measures related well to patients' global rating of the hospital. Interpretation of different patient experiences across types of wards merits further investigation. Child HCAHPS provides an opportunity for systematic and cross-national assessment of pediatric inpatient experiences. Sharing and implementing international best practices are the next logical step. What is Known: • Patient experience surveys are increasingly used to reflect on the quality, safety, and centeredness of patient care. • While adult inpatient experience surveys are routinely used across countries around the world, the measurement of pediatric inpatient experiences is a young field of research that is essential to reflect on family-centered care. What is New: • We demonstrate that the US-developed Child HCAHPS provides an opportunity for international benchmarking of pediatric inpatient experiences with care through parents and guardians. • Our study findings show considerable variation in experiences for types of pediatric services. Support to share good practices and launch quality improvement initiatives can be obtained by organizing regular two-way feedback sessions with clinicians to place the findings in context.
Nurse staffing levels and outcomes - mining the UK national data sets for insight.
Leary, Alison; Tomai, Barbara; Swift, Adrian; Woodward, Andrew; Hurst, Keith
2017-04-18
Purpose Despite the generation of mass data by the nursing workforce, determining the impact of the contribution to patient safety remains challenging. Several cross-sectional studies have indicated a relationship between staffing and safety. The purpose of this paper is to uncover possible associations and explore if a deeper understanding of relationships between staffing and other factors such as safety could be revealed within routinely collected national data sets. Design/methodology/approach Two longitudinal routinely collected data sets consisting of 30 years of UK nurse staffing data and seven years of National Health Service (NHS) benchmark data such as survey results, safety and other indicators were used. A correlation matrix was built and a linear correlation operation was applied (Pearson product-moment correlation coefficient). Findings A number of associations were revealed within both the UK staffing data set and the NHS benchmarking data set. However, the challenges of using these data sets soon became apparent. Practical implications Staff time and effort are required to collect these data. The limitations of these data sets include inconsistent data collection and quality. The mode of data collection and the itemset collected should be reviewed to generate a data set with robust clinical application. Originality/value This paper revealed that relationships are likely to be complex and non-linear; however, the main contribution of the paper is the identification of the limitations of routinely collected data. Much time and effort is expended in collecting this data; however, its validity, usefulness and method of routine national data collection appear to require re-examination.
RETRAN03 benchmarks for Beaver Valley plant transients and FSAR analyses
DOE Office of Scientific and Technical Information (OSTI.GOV)
Beaumont, E.T.; Feltus, M.A.
1993-01-01
Any best-estimate code (e.g., RETRAN03) results must be validated against plant data and final safety analysis report (FSAR) predictions. Two independent means of benchmarking are necessary to ensure that the results are not biased toward a particular data set and to establish a certain degree of accuracy. The code results need to be compared with previous results and show improvements over previous code results. Ideally, the two best means of benchmarking a thermal-hydraulic code are comparing results from previous versions of the same code along with actual plant data. This paper describes RETRAN03 benchmarks against RETRAN02 results, actual plant data, and FSAR predictions. RETRAN03, the Electric Power Research Institute's latest version of the RETRAN thermal-hydraulic analysis codes, offers several upgrades over its predecessor, RETRAN02 Mod5. RETRAN03 can use either implicit or semi-implicit numerics, whereas RETRAN02 Mod5 uses only semi-implicit numerics. Another major upgrade deals with slip model options. RETRAN03 added several new models, including a five-equation model for more accurate modeling of two-phase flow. RETRAN02 Mod5 should give similar but slightly more conservative results than RETRAN03 when executed with RETRAN02 Mod5 options.
Arnold, Scott M; Collins, Michael A; Graham, Cynthia; Jolly, Athena T; Parod, Ralph J; Poole, Alan; Schupp, Thomas; Shiotsuka, Ronald N; Woolhiser, Michael R
2012-12-01
Polyurethanes (PU) are polymers made from diisocyanates and polyols for a variety of consumer products. It has been suggested that PU foam may contain trace amounts of residual toluene diisocyanate (TDI) monomers and present a health risk. To address this concern, the exposure scenario and health risks posed by sleeping on a PU foam mattress were evaluated. Toxicity benchmarks for key non-cancer endpoints (i.e., irritation, sensitization, respiratory tract effects) were determined by dividing points of departure by uncertainty factors. The cancer benchmark was derived using the USEPA Benchmark Dose Software. Results of previous migration and emission data of TDI from PU foam were combined with conservative exposure factors to calculate upper-bound dermal and inhalation exposures to TDI as well as a lifetime average daily dose to TDI from dermal exposure. For each non-cancer endpoint, the toxicity benchmark was divided by the calculated exposure to determine the margin of safety (MOS), which ranged from 200 (respiratory tract) to 3×10^6 (irritation). Although available data indicate TDI is not carcinogenic, a theoretical excess cancer risk (1×10^-7) was calculated. We conclude from this assessment that sleeping on a PU foam mattress does not pose TDI-related health risks to consumers. Copyright © 2012 Elsevier Inc. All rights reserved.
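The margin-of-safety arithmetic described above is a simple ratio; a back-of-envelope sketch with placeholder numbers (not values from the assessment):

```python
# Margin of safety = toxicity benchmark / estimated exposure (same units).
toxicity_benchmark = 2.0e-3  # e.g. mg/kg-day: point of departure / uncertainty factors
estimated_exposure = 1.0e-5  # e.g. mg/kg-day: upper-bound exposure estimate

margin_of_safety = toxicity_benchmark / estimated_exposure
print(f"MOS = {margin_of_safety:.0f}")  # MOS much greater than 1 suggests low concern
```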
PSO algorithm enhanced with Lozi Chaotic Map - Tuning experiment
DOE Office of Scientific and Technical Information (OSTI.GOV)
Pluhacek, Michal; Senkerik, Roman; Zelinka, Ivan
2015-03-10
This paper investigates the effect of tuning the control parameters of the Lozi chaotic map employed as a chaotic pseudo-random number generator for the particle swarm optimization (PSO) algorithm. Three different benchmark functions are selected from the IEEE CEC 2013 competition benchmark set. The Lozi map is extensively tuned and the performance of PSO is evaluated.
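The Lozi map is the two-dimensional recurrence x(n+1) = 1 - a|x(n)| + y(n), y(n+1) = b·x(n); to serve as a pseudo-random number generator its output must be rescaled to [0, 1]. A minimal sketch with common parameter choices (not necessarily the tuned values from the paper):

```python
# Lozi chaotic map as a pseudo-random number stream for PSO velocity updates.
import numpy as np

def lozi_stream(n, a=1.7, b=0.5, x=0.1, y=0.1):
    """Return n numbers in [0, 1] by iterating and rescaling the Lozi map."""
    out = np.empty(n)
    lo, hi = -1.5, 1.5  # approximate attractor extent used for rescaling
    for i in range(n):
        x, y = 1.0 - a * abs(x) + y, b * x
        out[i] = min(max((x - lo) / (hi - lo), 0.0), 1.0)
    return out

# In a PSO velocity update, the two uniform draws r1, r2 come from this stream:
r1, r2 = lozi_stream(2)
# v = w*v + c1*r1*(pbest - pos) + c2*r2*(gbest - pos)
```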
Benchmarking infrastructure for mutation text mining.
Klein, Artjom; Riazanov, Alexandre; Hindle, Matthew M; Baker, Christopher Jo
2014-02-25
Experimental research on the automatic extraction of information about mutations from texts is greatly hindered by the lack of consensus evaluation infrastructure for the testing and benchmarking of mutation text mining systems. We propose a community-oriented annotation and benchmarking infrastructure to support development, testing, benchmarking, and comparison of mutation text mining systems. The design is based on semantic standards, where RDF is used to represent annotations, an OWL ontology provides an extensible schema for the data and SPARQL is used to compute various performance metrics, so that in many cases no programming is needed to analyze results from a text mining system. While large benchmark corpora for biological entity and relation extraction are focused mostly on genes, proteins, diseases, and species, our benchmarking infrastructure fills the gap for mutation information. The core infrastructure comprises (1) an ontology for modelling annotations, (2) SPARQL queries for computing performance metrics, and (3) a sizeable collection of manually curated documents, that can support mutation grounding and mutation impact extraction experiments. We have developed the principal infrastructure for the benchmarking of mutation text mining tasks. The use of RDF and OWL as the representation for corpora ensures extensibility. The infrastructure is suitable for out-of-the-box use in several important scenarios and is ready, in its current state, for initial community adoption.
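To make the RDF/SPARQL idea concrete, here is a toy Python sketch using rdflib with a hypothetical vocabulary; the project's actual ontology and queries are richer, but the principle is the same: the metric is computed by a query rather than by custom code.

```python
# Precision of system mutation annotations against gold, computed via SPARQL.
import rdflib

g = rdflib.Graph()
g.parse(data="""
@prefix ex: <http://example.org/> .
ex:doc1 ex:goldMutation "p.V600E" .
ex:doc1 ex:systemMutation "p.V600E" .
ex:doc1 ex:systemMutation "p.K601N" .
""", format="turtle")

tp = g.query("""
    PREFIX ex: <http://example.org/>
    SELECT (COUNT(*) AS ?n) WHERE {
        ?doc ex:goldMutation ?m .
        ?doc ex:systemMutation ?m .
    }""")
total = g.query("""
    PREFIX ex: <http://example.org/>
    SELECT (COUNT(*) AS ?n) WHERE { ?doc ex:systemMutation ?m . }""")

tp_n = int(next(iter(tp))[0])
sys_n = int(next(iter(total))[0])
print(f"precision = {tp_n}/{sys_n} = {tp_n / sys_n:.2f}")
```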
Qualifying for the Green500: Experience with the newest generation of supercomputers at LANL
DOE Office of Scientific and Technical Information (OSTI.GOV)
Yilk, Todd
2018-02-17
The High Performance Computing Division of Los Alamos National Laboratory recently brought four new supercomputing platforms on line: Trinity with separate partitions built around the Haswell and Knights Landing CPU architectures for capability computing and Grizzly, Fire, and Ice for capacity computing applications. The power monitoring infrastructure of these machines is significantly enhanced over previous supercomputing generations at LANL and all were qualified at the highest level of the Green500 benchmark. This paper discusses supercomputing at LANL and the Green500 benchmark, and offers notes on our experience meeting the Green500's reporting requirements.
Gururaj, Anupama E.; Chen, Xiaoling; Pournejati, Saeid; Alter, George; Hersh, William R.; Demner-Fushman, Dina; Ohno-Machado, Lucila
2017-01-01
The rapid proliferation of publicly available biomedical datasets has provided abundant resources that are potentially of value as a means to reproduce prior experiments, and to generate and explore novel hypotheses. However, there are a number of barriers to the re-use of such datasets, which are distributed across a broad array of dataset repositories, focusing on different data types and indexed using different terminologies. New methods are needed to enable biomedical researchers to locate datasets of interest within this rapidly expanding information ecosystem, and new resources are needed for the formal evaluation of these methods as they emerge. In this paper, we describe the design and generation of a benchmark for information retrieval of biomedical datasets, which was developed and used for the 2016 bioCADDIE Dataset Retrieval Challenge. In the tradition of the seminal Cranfield experiments, and as exemplified by the Text Retrieval Conference (TREC), this benchmark includes a corpus (biomedical datasets), a set of queries, and relevance judgments relating these queries to elements of the corpus. This paper describes the process through which each of these elements was derived, with a focus on those aspects that distinguish this benchmark from typical information retrieval reference sets. Specifically, we discuss the origin of our queries in the context of a larger collaborative effort, the biomedical and healthCAre Data Discovery Index Ecosystem (bioCADDIE) consortium, and the distinguishing features of biomedical dataset retrieval as a task. The resulting benchmark set has been made publicly available to advance research in the area of biomedical dataset retrieval. Database URL: https://biocaddie.org/benchmark-data PMID:29220453
DOE Office of Scientific and Technical Information (OSTI.GOV)
Bess, John D.; Montierth, Leland M.; Sterbentz, James W.
2014-03-01
In its deployment as a pebble bed reactor (PBR) critical facility from 1992 to 1996, the PROTEUS facility was designated as HTR-PROTEUS. This experimental program was performed as part of an International Atomic Energy Agency (IAEA) Coordinated Research Project (CRP) on the Validation of Safety Related Physics Calculations for Low Enriched HTGRs. Within this project, critical experiments were conducted for graphite moderated LEU systems to determine core reactivity, flux and power profiles, reaction-rate ratios, the worth of control rods, both in-core and reflector based, the worth of burnable poisons, kinetic parameters, and the effects of moisture ingress on these parameters. One benchmark experiment was evaluated in this report: Core 4. Core 4 represents the only configuration with random pebble packing in the HTR-PROTEUS series of experiments, and has a moderator-to-fuel pebble ratio of 1:1. Three random configurations were performed. The initial configuration, Core 4.1, was rejected by the experimenters because the method for pebble loading, separate delivery tubes for the moderator and fuel pebbles, may not have been completely random. Cores 4.2 and 4.3 were loaded using a single delivery tube, eliminating the possibility for systematic ordering effects. The second and third cores differed slightly in the quantity of pebbles loaded (40 each of moderator and fuel pebbles), stacked height of the pebbles in the core cavity (0.02 m), withdrawn distance of the stainless steel control rods (20 mm), and withdrawn distance of the autorod (30 mm). The 34 coolant channels in the upper axial reflector and the 33 coolant channels in the lower axial reflector were open. Additionally, the axial graphite fillers used in all other HTR-PROTEUS configurations to create a 12-sided core cavity were not used in the randomly packed cores. Instead, graphite fillers were placed on the cavity floor, creating a funnel-like base, to discourage ordering effects during pebble loading. Core 4 was determined to be an acceptable benchmark experiment.
Tompa, Emile; de Boer, Henriette; Macdonald, Sara; Alamgir, Hasanat; Koehoorn, Mieke; Guzman, Jaime
2016-04-01
This study identified and prioritized resources and outcomes that should be considered in more comprehensive and scientifically rigorous health and safety economic evaluations according to healthcare sector stakeholders. A literature review and stakeholder interviews identified candidate resources and outcomes and then a Delphi panel ranked them. According to the panel, the top five resources were (a) health and safety staff time; (b) training workers; (c) program planning, promotion, and evaluation costs; (d) equipment purchases and upgrades; and (e) administration costs. The top five outcomes were (a) number of injuries, illnesses, and general sickness absences; (b) safety climate; (c) days lost due to injuries, illnesses, and general sickness absences; (d) job satisfaction and engagement; and (e) quality of care and patient safety. These findings emphasize stakeholders' stated priorities and are useful as a benchmark for assessing the quality of health and safety economic evaluations and the comprehensiveness of these findings. © 2016 The Author(s).
JENDL-4.0/HE Benchmark Test with Concrete and Iron Shielding Experiments at JAEA/TIARA
NASA Astrophysics Data System (ADS)
Konno, Chikara; Matsuda, Norihiro; Kwon, Saerom; Ohta, Masayuki; Sato, Satoshi
2017-09-01
As a benchmark test of JENDL-4.0/HE, released in 2015, we analyzed the concrete and iron shielding experiments with the quasi-monoenergetic 40 and 65 MeV neutron sources at TIARA in JAEA, using MCNP5 and ACE files processed from JENDL-4.0/HE with NJOY2012. The calculation results with JENDL-4.0/HE agreed well with the measurements in the concrete experiment, but underestimated the measurements in the iron experiment with 65 MeV neutrons, increasingly so for thicker assemblies. A detailed examination of the 56Fe data of JENDL-4.0/HE suggested that its large non-elastic scattering cross sections of 56Fe caused the underestimation in the iron experiment with 65 MeV neutrons.
Efficient Type Representation in TAL
NASA Technical Reports Server (NTRS)
Chen, Juan
2009-01-01
Certifying compilers generate proofs for low-level code that guarantee safety properties of the code. Type information is an essential part of safety proofs. But the size of type information remains a concern for certifying compilers in practice. This paper demonstrates type representation techniques in a large-scale compiler that achieves both concise type information and efficient type checking. In our 200,000-line certifying compiler, the size of type information is about 36% of the size of pure code and data for our benchmarks, the best result reported to our knowledge. The type checking time is about 2% of the compilation time.
Nonparametric estimation of benchmark doses in environmental risk assessment
Piegorsch, Walter W.; Xiong, Hui; Bhattacharya, Rabi N.; Lin, Lizhen
2013-01-01
An important statistical objective in environmental risk analysis is estimation of minimum exposure levels, called benchmark doses (BMDs), that induce a pre-specified benchmark response in a dose-response experiment. In such settings, representations of the risk are traditionally based on a parametric dose-response model. It is a well-known concern, however, that if the chosen parametric form is misspecified, inaccurate and possibly unsafe low-dose inferences can result. We apply a nonparametric approach for calculating benchmark doses, based on an isotonic regression method for dose-response estimation with quantal-response data (Bhattacharya and Kong, 2007). We determine the large-sample properties of the estimator, develop bootstrap-based confidence limits on the BMDs, and explore the confidence limits’ small-sample properties via a short simulation study. An example from cancer risk assessment illustrates the calculations. PMID:23914133
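A rough sketch of the isotonic idea on synthetic quantal data, with sklearn's isotonic regression standing in for the paper's estimator and no bootstrap confidence limits:

```python
# Fit a monotone dose-response curve and invert it at the benchmark response.
import numpy as np
from sklearn.isotonic import IsotonicRegression

doses = np.array([0.0, 0.5, 1.0, 2.0, 4.0, 8.0])
n = np.array([50, 50, 50, 50, 50, 50])       # animals per dose group
affected = np.array([2, 3, 6, 10, 19, 33])   # responders per group

iso = IsotonicRegression(increasing=True)
p_hat = iso.fit_transform(doses, affected / n)

p0 = p_hat[0]                                # background response
extra_risk = (p_hat - p0) / (1 - p0)
bmr = 0.10                                   # 10% benchmark response
bmd = np.interp(bmr, extra_risk, doses)      # dose where extra risk reaches BMR
print(f"BMD(10%) = {bmd:.2f}")
```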
Gadolinia depletion analysis by CASMO-4
DOE Office of Scientific and Technical Information (OSTI.GOV)
Kobayashi, Y.; Saji, E.; Toba, A.
1993-01-01
CASMO-4 is the most recent version of the lattice physics code CASMO introduced by Studsvik. The principal aspects of the CASMO-4 model that differ from the models in previous CASMO versions are as follows: (1) heterogeneous model for two-dimensional transport theory calculations; and (2) microregion depletion model for burnable absorbers, such as gadolinia. Of these aspects, the first has previously been benchmarked against measured data of critical experiments and Monte Carlo calculations, verifying its high degree of accuracy. To proceed with CASMO-4 benchmarking, it is desirable to benchmark the microregion depletion model, which enables CASMO-4 to calculate gadolinium depletion directly without the need for precalculated MICBURN cross-section data. This paper presents the benchmarking results for the microregion depletion model in CASMO-4 using the measured data of depleted gadolinium rods.
HyspIRI Low Latency Concept and Benchmarks
NASA Technical Reports Server (NTRS)
Mandl, Dan
2010-01-01
Topics include HyspIRI low latency data ops concept, HyspIRI data flow, ongoing efforts, experiment with Web Coverage Processing Service (WCPS) approach to injecting new algorithms into SensorWeb, low fidelity HyspIRI IPM testbed, compute cloud testbed, open cloud testbed environment, Global Lambda Integrated Facility (GLIF) and OCC collaboration with Starlight, delay tolerant network (DTN) protocol benchmarking, and EO-1 configuration for preliminary DTN prototype.
RNA-seq mixology: designing realistic control experiments to compare protocols and analysis methods
Holik, Aliaksei Z.; Law, Charity W.; Liu, Ruijie; Wang, Zeya; Wang, Wenyi; Ahn, Jaeil; Asselin-Labat, Marie-Liesse; Smyth, Gordon K.
2017-01-01
Carefully designed control experiments provide a gold standard for benchmarking different genomics research tools. A shortcoming of many gene expression control studies is that replication involves profiling the same reference RNA sample multiple times. This leads to low, pure technical noise that is atypical of regular studies. To achieve a more realistic noise structure, we generated a RNA-sequencing mixture experiment using two cell lines of the same cancer type. Variability was added by extracting RNA from independent cell cultures and degrading particular samples. The systematic gene expression changes induced by this design allowed benchmarking of different library preparation kits (standard poly-A versus total RNA with Ribozero depletion) and analysis pipelines. Data generated using the total RNA kit had more signal for introns and various RNA classes (ncRNA, snRNA, snoRNA) and less variability after degradation. For differential expression analysis, voom with quality weights marginally outperformed other popular methods, while for differential splicing, DEXSeq was simultaneously the most sensitive and the most inconsistent method. For sample deconvolution analysis, DeMix outperformed IsoPure convincingly. Our RNA-sequencing data set provides a valuable resource for benchmarking different protocols and data pre-processing workflows. The extra noise mimics routine lab experiments more closely, ensuring any conclusions are widely applicable. PMID:27899618
featsel: A framework for benchmarking of feature selection algorithms and cost functions
NASA Astrophysics Data System (ADS)
Reis, Marcelo S.; Estrela, Gustavo; Ferreira, Carlos Eduardo; Barrera, Junior
In this paper, we introduce featsel, a framework for benchmarking of feature selection algorithms and cost functions. This framework allows the user to deal with the search space as a Boolean lattice and has its core coded in C++ for computational efficiency purposes. Moreover, featsel includes Perl scripts to add new algorithms and/or cost functions, generate random instances, plot graphs and organize results into tables. In addition, the framework already comes with dozens of algorithms and cost functions for benchmarking experiments. We also provide illustrative examples, in which featsel outperforms the popular Weka workbench in feature selection procedures on data sets from the UCI Machine Learning Repository.
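featsel itself is C++ with Perl tooling; the Python toy below merely illustrates the underlying notion of walking the Boolean lattice of feature subsets and scoring each with a cost function (here, a penalized least-squares error on synthetic data).

```python
# Exhaustive search over the Boolean lattice of feature subsets.
from itertools import combinations
import numpy as np

rng = np.random.default_rng(2)
X = rng.normal(size=(100, 4))
y = X[:, 0] + 0.5 * X[:, 2] + rng.normal(scale=0.1, size=100)

def cost(subset):
    if not subset:                            # empty set: predict the mean
        return np.var(y)
    Xs = X[:, subset]
    beta, *_ = np.linalg.lstsq(Xs, y, rcond=None)
    return np.var(y - Xs @ beta) + 0.01 * len(subset)  # penalize subset size

subsets = (s for k in range(X.shape[1] + 1)
           for s in combinations(range(X.shape[1]), k))
best = min(subsets, key=cost)
print("best subset:", best)
```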
BACT Simulation User Guide (Version 7.0)
NASA Technical Reports Server (NTRS)
Waszak, Martin R.
1997-01-01
This report documents the structure and operation of a simulation model of the Benchmark Active Control Technology (BACT) Wind-Tunnel Model. The BACT system was designed, built, and tested at NASA Langley Research Center as part of the Benchmark Models Program and was developed to perform wind-tunnel experiments to obtain benchmark quality data to validate computational fluid dynamics and computational aeroelasticity codes, to verify the accuracy of current aeroservoelasticity design and analysis tools, and to provide an active controls testbed for evaluating new and innovative control algorithms for flutter suppression and gust load alleviation. The BACT system has been especially valuable as a control system testbed.
Road safety risk evaluation and target setting using data envelopment analysis and its extensions.
Shen, Yongjun; Hermans, Elke; Brijs, Tom; Wets, Geert; Vanhoof, Koen
2012-09-01
Currently, comparison between countries in terms of their road safety performance is widely conducted in order to better understand one's own safety situation and to learn from those best-performing countries by indicating practical targets and formulating action programmes. In this respect, crash data such as the number of road fatalities and casualties are mostly investigated. However, the absolute numbers are not directly comparable between countries. Therefore, the concept of risk, which is defined as the ratio of road safety outcomes and some measure of exposure (e.g., the population size, the number of registered vehicles, or distance travelled), is often used in the context of benchmarking. Nevertheless, these risk indicators are not consistent in most cases. In other words, countries may have different evaluation results or ranking positions using different exposure information. In this study, data envelopment analysis (DEA) as a performance measurement technique is investigated to provide an overall perspective on a country's road safety situation, and further assess whether the road safety outcomes registered in a country correspond to the numbers that can be expected based on the level of exposure. In doing so, three model extensions are considered, which are the DEA based road safety model (DEA-RS), the cross-efficiency method, and the categorical DEA model. Using the measures of exposure to risk as the model's input and the number of road fatalities as output, an overall road safety efficiency score is computed for the 27 European Union (EU) countries based on the DEA-RS model, and the ranking of countries in accordance with their cross-efficiency scores is evaluated. Furthermore, after applying clustering analysis to group countries with inherent similarity in their practices, the categorical DEA-RS model is adopted to identify best-performing and underperforming countries in each cluster, as well as the reference sets or benchmarks for those underperforming ones. More importantly, the extent to which each reference set could be learned from is specified, and practical yet challenging targets are given for each underperforming country, which enables policymakers to recognize the gap with those best-performing countries and further develop their own road safety policy. Copyright © 2012 Elsevier Ltd. All rights reserved.
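At its core, an input-oriented DEA efficiency score is the optimum of a small linear program solved once per decision-making unit (here, a country). The sketch below implements the generic CCR form with scipy; the DEA-RS model and its cross-efficiency and categorical extensions add road-safety-specific constraints not shown here, and the data are made up.

```python
# Input-oriented CCR DEA: minimize theta such that the unit's inputs, scaled by
# theta, can be matched by a nonnegative combination of all units' activities.
import numpy as np
from scipy.optimize import linprog

X = np.array([[10.0, 12.0, 6.0],    # inputs (m x n), e.g. exposure measures
              [ 8.0, 10.0, 9.0]])
Y = np.array([[ 5.0,  4.0, 6.0]])   # outputs (s x n), higher = better

m, n = X.shape
s = Y.shape[0]

def ccr_efficiency(o):
    # decision variables: [theta, lambda_1 .. lambda_n]
    c = np.r_[1.0, np.zeros(n)]
    A_in = np.hstack([-X[:, [o]], X])          # sum(lam*x) <= theta * x_o
    A_out = np.hstack([np.zeros((s, 1)), -Y])  # sum(lam*y) >= y_o
    res = linprog(c, A_ub=np.vstack([A_in, A_out]),
                  b_ub=np.r_[np.zeros(m), -Y[:, o]],
                  bounds=[(None, None)] + [(0, None)] * n)
    return res.fun

for o in range(n):
    print(f"unit {o}: efficiency = {ccr_efficiency(o):.3f}")
```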
How do I know if my forecasts are better? Using benchmarks in hydrological ensemble prediction
NASA Astrophysics Data System (ADS)
Pappenberger, F.; Ramos, M. H.; Cloke, H. L.; Wetterhall, F.; Alfieri, L.; Bogner, K.; Mueller, A.; Salamon, P.
2015-03-01
The skill of a forecast can be assessed by comparing the relative proximity of both the forecast and a benchmark to the observations. Example benchmarks include climatology or a naïve forecast. Hydrological ensemble prediction systems (HEPS) are currently transforming the hydrological forecasting environment but in this new field there is little information to guide researchers and operational forecasters on how benchmarks can be best used to evaluate their probabilistic forecasts. In this study, it is identified that the forecast skill calculated can vary depending on the benchmark selected and that the selection of a benchmark for determining forecasting system skill is sensitive to a number of hydrological and system factors. A benchmark intercomparison experiment is then undertaken using the continuous ranked probability score (CRPS), a reference forecasting system and a suite of 23 different methods to derive benchmarks. The benchmarks are assessed within the operational set-up of the European Flood Awareness System (EFAS) to determine those that are 'toughest to beat' and so give the most robust discrimination of forecast skill, particularly for the spatial average fields that EFAS relies upon. Evaluating against an observed discharge proxy the benchmark that has most utility for EFAS and avoids the most naïve skill across different hydrological situations is found to be meteorological persistency. This benchmark uses the latest meteorological observations of precipitation and temperature to drive the hydrological model. Hydrological long term average benchmarks, which are currently used in EFAS, are very easily beaten by the forecasting system and the use of these produces much naïve skill. When decomposed into seasons, the advanced meteorological benchmarks, which make use of meteorological observations from the past 20 years at the same calendar date, have the most skill discrimination. They are also good at discriminating skill in low flows and for all catchment sizes. Simpler meteorological benchmarks are particularly useful for high flows. Recommendations for EFAS are to move to routine use of meteorological persistency, an advanced meteorological benchmark and a simple meteorological benchmark in order to provide a robust evaluation of forecast skill. This work provides the first comprehensive evidence on how benchmarks can be used in evaluation of skill in probabilistic hydrological forecasts and which benchmarks are most useful for skill discrimination and avoidance of naïve skill in a large scale HEPS. It is recommended that all HEPS use the evidence and methodology provided here to evaluate which benchmarks to employ; so forecasters can have trust in their skill evaluation and will have confidence that their forecasts are indeed better.
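A sketch of the computation at the heart of such an intercomparison: the empirical CRPS of the forecast and of a benchmark ensemble, combined into a skill score where values above zero mean the forecast beats the benchmark. Data are synthetic.

```python
# CRPS skill score of an ensemble forecast relative to a benchmark ensemble.
import numpy as np

def crps_ensemble(members, obs):
    """Empirical CRPS of one ensemble forecast against one observation."""
    members = np.asarray(members, dtype=float)
    term1 = np.mean(np.abs(members - obs))
    term2 = 0.5 * np.mean(np.abs(members[:, None] - members[None, :]))
    return term1 - term2

rng = np.random.default_rng(3)
obs = 3.2                             # observed discharge
forecast = rng.normal(3.0, 0.4, 50)   # HEPS ensemble
benchmark = rng.normal(2.0, 1.2, 50)  # e.g. climatology-based ensemble

crpss = 1.0 - crps_ensemble(forecast, obs) / crps_ensemble(benchmark, obs)
print(f"CRPSS vs benchmark = {crpss:.2f}")
```

As the study emphasizes, the resulting skill value depends strongly on which benchmark fills the denominator; a tough benchmark such as meteorological persistency yields a more honest skill estimate than an easily beaten long-term-average benchmark.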
Anne E. Black; Brooke Baldauf McBride
2013-01-01
In an effort to improve organizational outcomes, including safety, in wildland fire management, researchers and practitioners have turned to a domain of research on organizational performance known as High Reliability Organizing (HRO). The HRO paradigm emerged in the late 1980s in an effort to identify commonalities among organizations that function under hazardous...
Young, Mark S; Birrell, Stewart A; Stanton, Neville A
2011-05-01
Road transport is a significant source of both safety and environmental concerns. With climate change and fuel prices increasingly prominent on social and political agendas, many drivers are turning their thoughts to fuel efficient or 'green' (i.e., environmentally friendly) driving practices. Many vehicle manufacturers are satisfying this demand by offering green driving feedback or advice tools. However, there is a legitimate concern regarding the effects of such devices on road safety--both from the point of view of change in driving styles, as well as potential distraction caused by the in-vehicle feedback. In this paper, we appraise the benchmarks for safe and green driving, concluding that whilst they largely overlap, there are some specific circumstances in which the goals are in conflict. We go on to review current and emerging in-vehicle information systems which purport to affect safe and/or green driving, and discuss some fundamental ergonomics principles for the design of such devices. The results of the review are being used in the Foot-LITE project, aimed at developing a system to encourage 'smart'--that is safe and green--driving. Copyright © 2010 Elsevier Ltd and The Ergonomics Society. All rights reserved.
Protocol for a national blood transfusion data warehouse from donor to recipient
van Hoeven, Loan R; Hooftman, Babette H; Janssen, Mart P; de Bruijne, Martine C; de Vooght, Karen M K; Kemper, Peter; Koopman, Maria M W
2016-01-01
Introduction Blood transfusion has health-related, economic and safety implications. In order to optimise the transfusion chain, comprehensive research data are needed. The Dutch Transfusion Data warehouse (DTD) project aims to establish a data warehouse where data from donors and transfusion recipients are linked. This paper describes the design of the data warehouse, challenges and illustrative applications. Study design and methods Quantitative data on blood donors (eg, age, blood group, antibodies) and products (type of product, processing, storage time) are obtained from the national blood bank. These are linked to data on the transfusion recipients (eg, transfusions administered, patient diagnosis, surgical procedures, laboratory parameters), which are extracted from hospital electronic health records. Applications Expected scientific contributions are illustrated for 4 applications: determine risk factors, predict blood use, benchmark blood use and optimise process efficiency. For each application, examples of research questions are given and analyses planned. Conclusions The DTD project aims to build a national, continuously updated transfusion data warehouse. These data have a wide range of applications, on the donor/production side, recipient studies on blood usage and benchmarking and donor–recipient studies, which ultimately can contribute to the efficiency and safety of blood transfusion. PMID:27491665
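The core linkage described above runs from donor through product to transfused recipient; a toy sketch with hypothetical table and column names:

```python
# Join donor, product, and transfusion tables into one donor-to-recipient view.
import pandas as pd

donors = pd.DataFrame({"donor_id": [1, 2], "blood_group": ["A+", "O-"]})
products = pd.DataFrame({"unit_id": [10, 11, 12],
                         "donor_id": [1, 1, 2],
                         "product_type": ["RBC", "plasma", "RBC"]})
transfusions = pd.DataFrame({"unit_id": [10, 12],
                             "patient_id": ["P1", "P2"],
                             "diagnosis": ["anaemia", "surgery"]})

chain = (transfusions
         .merge(products, on="unit_id", how="left")
         .merge(donors, on="donor_id", how="left"))
print(chain)  # one row per transfused unit, from donor to recipient
```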
Mental models of safety: do managers and employees see eye to eye?
Prussia, Gregory E; Brown, Karen A; Willis, P Geoff
2003-01-01
Disagreements between managers and employees about the causes of accidents and unsafe work behaviors can lead to serious workplace conflicts and distract organizations from the important work of establishing positive safety climate and reducing the incidence of accidents. In this study, the authors examine a model for predicting safe work behaviors and establish the model's consistency across managers and employees in a steel plant setting. Using the model previously described by Brown, Willis, and Prussia (2000), the authors found that when variables influencing safety are considered within a framework of safe work behaviors, managers and employees share a similar mental model. The study then contrasts employees' and managers' specific attributional perceptions. Findings from these more fine-grained analyses suggest the two groups differ in several respects about individual constructs. Most notable were contrasts in attributions based on their perceptions of safety climate. When perceived climate is poor, managers believe employees are responsible and employees believe managers are responsible for workplace safety. However, as perceived safety climate improves, managers and employees converge in their perceptions of who is responsible for safety. It can be concluded from this study that in a highly interdependent work environment, such as a steel mill, where high system reliability is essential and members possess substantial experience working together, managers and employees will share general mental models about the factors that contribute to unsafe behaviors, and, ultimately, to workplace accidents. It is possible that organizations not as tightly coupled as steel mills can use such organizations as benchmarks, seeking ways to create a shared understanding of factors that contribute to a safe work environment. Part of this improvement effort should focus on advancing organizational safety climate. As climate improves, managers and employees are likely to agree more about the causes of safe/unsafe behaviors and workplace accidents, ultimately increasing their ability to work in unison to prevent accidents and to respond appropriately when they do occur. Finally, the survey items included in this study may be useful to organizations wishing to conduct self-assessments.
Benchmarking: a method for continuous quality improvement in health.
Ettorchi-Tardy, Amina; Levif, Marie; Michel, Philippe
2012-05-01
Benchmarking, a management approach for implementing best practices at best cost, is a recent concept in the healthcare system. The objectives of this paper are to better understand the concept and its evolution in the healthcare sector, to propose an operational definition, and to describe some French and international experiences of benchmarking in the healthcare sector. To this end, we reviewed the literature on this approach's emergence in the industrial sector, its evolution, its fields of application and examples of how it has been used in the healthcare sector. Benchmarking is often thought to consist simply of comparing indicators and is not perceived in its entirety, that is, as a tool based on voluntary and active collaboration among several organizations to create a spirit of competition and to apply best practices. The key feature of benchmarking is its integration within a comprehensive and participatory policy of continuous quality improvement (CQI). Conditions for successful benchmarking focus essentially on careful preparation of the process, monitoring of the relevant indicators, staff involvement and inter-organizational visits. Compared to methods previously implemented in France (CQI and collaborative projects), benchmarking has specific features that set it apart as a healthcare innovation. This is especially true for healthcare or medical-social organizations, as the principle of inter-organizational visiting is not part of their culture. Thus, this approach will need to be assessed for feasibility and acceptability before it is more widely promoted.
Benchmarking of venous thromboembolism prophylaxis practice with ENT.UK guidelines.
Al-Qahtani, Ali S
2017-05-01
The aim of this study was to benchmark our guidelines for the prevention of venous thromboembolism (VTE) in the ENT surgical population against ENT.UK guidelines, and to encourage healthcare providers to utilize benchmarking as an effective method of improving performance. The study design is a prospective descriptive analysis. The setting of this study is a tertiary referral centre (Assir Central Hospital, Abha, Saudi Arabia). In this study, we benchmarked our practice guidelines for the prevention of VTE in the ENT surgical population against the ENT.UK guidelines to identify and close any gaps. The ENT.UK guidelines (2010) were downloaded from the ENT.UK website, and our guidelines were compared against them to determine whether our performance meets or falls short of the ENT.UK standard, with immediate corrective action to be taken wherever a quality gap existed between the two. The ENT.UK guidelines are evidence-based and up to date, and may serve as a role model for adoption and benchmarking. Our guidelines were accordingly amended to contain all factors required to provide a quality service to ENT surgical patients. Although not always given appropriate attention, benchmarking is a useful tool for improving the quality of health care: it allows learning from others' practices and experiences, and works towards closing any quality gaps. In addition, benchmarking clinical outcomes is critical for quality improvement and for informing decisions concerning service provision. It is recommended for inclusion among the quality improvement methods of healthcare services.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Greiner, Miles
Radial hydride formation in high-burnup used fuel cladding has the potential to radically reduce its ductility and suitability for long-term storage and eventual transport. To avoid this formation, the maximum post-reactor temperature must remain sufficiently low to limit the cladding hoop stress, and so that hydrogen from the existing circumferential hydrides will not dissolve and become available to re-precipitate into radial hydrides under the slow cooling conditions during drying, transfer and early dry-cask storage. The objective of this research is to develop and experimentally benchmark computational fluid dynamics simulations of heat transfer in post-pool-storage drying operations, when high-burnup fuel cladding is likely to experience its highest temperature. These benchmarked tools can play a key role in evaluating dry cask storage systems for extended storage of high-burnup fuels and post-storage transportation, including fuel retrievability. The benchmarked tools will be used to aid the design of efficient drying processes, as well as to estimate variations of surface temperatures as a means of inferring helium integrity inside the canister or cask. This work will be conducted effectively because the principal investigator has experience developing these types of simulations, and has constructed a test facility that can be used to benchmark them.
Performance Monitoring of Distributed Data Processing Systems
NASA Technical Reports Server (NTRS)
Ojha, Anand K.
2000-01-01
Test and checkout systems are essential components in ensuring safety and reliability of aircraft and related systems for space missions. A variety of systems, developed over several years, are in use at the NASA/KSC. Many of these systems are configured as distributed data processing systems with the functionality spread over several multiprocessor nodes interconnected through networks. To be cost-effective, a system should take the least amount of resources and perform a given testing task in the least amount of time. There are two aspects of performance evaluation: monitoring and benchmarking. While monitoring is valuable to system administrators in operating and maintaining systems, benchmarking is important in designing and upgrading computer-based systems. These two aspects of performance evaluation are the foci of this project. This paper first discusses various issues related to software, hardware, and hybrid performance monitoring as applicable to distributed systems, and specifically to the TCMS (Test Control and Monitoring System). Next, a comparison of several probing instructions is made to show that the hybrid monitoring technique developed by NIST (National Institute of Standards and Technology) is the least intrusive and takes only one-fourth of the time taken by software monitoring probes. In the rest of the paper, issues related to benchmarking a distributed system are discussed, and finally a prescription for developing a micro-benchmark for the TCMS is provided.
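As a rough illustration of why probe intrusiveness matters, the following minimal Python sketch times a toy workload with and without in-process software probes. The workload, probe rate, and all names are hypothetical; a hardware or hybrid monitor of the kind described would avoid most of the overhead measured here.

```python
import time

def workload(n=1_000_000):
    """Stand-in for a distributed-system processing task (hypothetical)."""
    total = 0
    for i in range(n):
        total += i * i
    return total

def instrumented_workload(n=1_000_000, probe_every=1000):
    """Same task, with software probes recording timestamps in-process."""
    trace = []
    total = 0
    for i in range(n):
        total += i * i
        if i % probe_every == 0:
            trace.append(time.perf_counter())  # software probe
    return total, trace

t0 = time.perf_counter()
workload()
base = time.perf_counter() - t0

t0 = time.perf_counter()
instrumented_workload()
probed = time.perf_counter() - t0

print(f"software probe overhead: {(probed - base) / base:.1%}")
```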
DOE Office of Scientific and Technical Information (OSTI.GOV)
Cohen, J; Dossa, D; Gokhale, M
Critical data science applications requiring frequent access to storage perform poorly on today's computing architectures. This project addresses efficient computation of data-intensive problems in national security and basic science by exploring, advancing, and applying a new form of computing called storage-intensive supercomputing (SISC). Our goal is to enable applications that simply cannot run on current systems and, for a broad range of data-intensive problems, to deliver an order of magnitude improvement in price/performance over today's data-intensive architectures. This technical report documents much of the work done under LDRD 07-ERD-063 Storage Intensive Supercomputing during the period 05/07-09/07. The following chapters describe: (1) a new file I/O monitoring tool, iotrace, developed to capture the dynamic I/O profiles of Linux processes; (2) an out-of-core graph benchmark for level-set expansion of scale-free graphs; (3) an entity extraction benchmark consisting of a pipeline of eight components; and (4) an image resampling benchmark drawn from the SWarp program in the LSST data processing pipeline. The performance of the graph and entity extraction benchmarks was measured in three different scenarios: data sets residing on the NFS file server and accessed over the network; data sets stored on local disk; and data sets stored on the Fusion I/O parallel NAND Flash array. The image resampling benchmark compared the performance of a software-only implementation to a GPU-accelerated one. In addition to the work reported here, a text processing application was developed that used an FPGA to accelerate n-gram profiling for language classification; it will be presented at SC07 at the High Performance Reconfigurable Computing Technologies and Applications Workshop. The graph and entity extraction benchmarks were run on a Supermicro server housing the 40 GB NAND Flash parallel disk array, the Fusion-io. The Fusion system specs are as follows: SuperMicro X7DBE Xeon dual-socket Blackford server motherboard; 2 Intel Xeon dual-core 2.66 GHz processors; 1 GB DDR2 PC2-5300 RAM (2 x 512); 80 GB hard drive (Seagate SATA II Barracuda). The Fusion board is presently capable of 4X in a PCIe slot. The image resampling benchmark was run on a dual-Xeon workstation with an NVIDIA graphics card (see Chapter 5 for the full specification). An XtremeData Opteron+FPGA was used for the language classification application. We observed that these benchmarks are not uniformly I/O intensive. The only benchmark that spent more than 50% of its time in I/O was the graph algorithm when it accessed data files over NFS. When local disk was used, the graph benchmark spent at most 40% of its time in I/O. The other benchmarks were CPU dominated. The image resampling and language classification benchmarks showed order-of-magnitude speedups over software by using co-processor technology to offload the CPU-intensive kernels. Our experiments to date suggest that emerging hardware technologies offer significant benefit in boosting the performance of data-intensive algorithms. Using GPU and FPGA co-processors, we were able to improve performance by more than an order of magnitude on the benchmark algorithms, eliminating the processor bottleneck of CPU-bound tasks. Experiments with a prototype solid-state nonvolatile memory available today show 10X better throughput on random reads than disk, with a 2X speedup on a graph processing benchmark when compared to the use of local SATA disk.
A hospital-based child protection programme evaluation instrument: a modified Delphi study.
Wilson, Denise; Koziol-McLain, Jane; Garrett, Nick; Sharma, Pritika
2010-08-01
This study refined an instrument for auditing hospital-based child abuse and neglect violence intervention programmes prior to field-testing, using a modified Delphi study to identify and rate items and domains indicative of an effective, quality child abuse and neglect intervention programme. Experts participated in four Delphi rounds: two surveys, a one-day workshop and the opportunity to comment on the penultimate instrument. The setting was New Zealand, with twenty-four experts in the field of care and protection of children taking part. Items were retained if they achieved panel agreement ≥85% and a mean importance rating ≥4.0 (on a scale from 1, not important, to 5, very important). There was high-level consensus on items across Rounds 1 and 2 (89% and 85%, respectively). In Round 3 an additional domain (safety and security) was agreed upon, and cultural issues, alert systems for children at risk, and collaboration among primary care, community, non-government and government agencies were discussed. The final instrument included nine domains ('policies and procedures', 'safety and security', 'collaboration', 'cultural environment', 'training of providers', 'intervention services', 'documentation', 'evaluation' and 'physical environment') and 64 items. The refined instrument represents the hallmarks of an ideal child abuse and neglect programme given current knowledge and experience. The instrument enables rigorous evaluations of hospital-based child abuse and neglect intervention programmes for quality improvement and benchmarking with other programmes.
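The retention rule described (panel agreement ≥85% and mean importance rating ≥4.0) is compact enough to express directly. The sketch below assumes "agreement" means the fraction of experts rating an item 4 or higher, which is one common convention and not necessarily the study's exact definition; the item names and ratings are invented.

```python
# Hypothetical item ratings on a 1 (not important) to 5 (very important) scale.
items = {
    "written child-protection policy": [5, 5, 4, 5, 4, 5],
    "dedicated safe room":             [4, 3, 5, 4, 3, 4],
}

def retain(ratings, agree_cut=0.85, mean_cut=4.0, agree_level=4):
    """Apply the Delphi retention rule to one item's ratings."""
    agreement = sum(r >= agree_level for r in ratings) / len(ratings)
    mean_rating = sum(ratings) / len(ratings)
    return agreement >= agree_cut and mean_rating >= mean_cut

for name, ratings in items.items():
    print(name, "->", "retain" if retain(ratings) else "drop")
```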
Benchmarking image fusion system design parameters
NASA Astrophysics Data System (ADS)
Howell, Christopher L.
2013-06-01
A clear and absolute method for discriminating between image fusion algorithm performances is presented. This method can effectively be used to assist in the design and modeling of image fusion systems. Specifically, it is postulated that quantifying human task performance using image fusion should be benchmarked to whether the fusion algorithm, at a minimum, retained the performance benefit achievable by each independent spectral band being fused. The established benchmark would then clearly represent the threshold that a fusion system should surpass to be considered beneficial to a particular task. A genetic algorithm is employed to characterize the fused system parameters, using a Matlab® implementation of NVThermIP as the objective function. By setting the problem up as a mixed-integer constrained optimization problem, one can effectively look backwards through the image acquisition process, optimizing fused system parameters by minimizing the difference between the modeled task difficulty measure and the benchmark task difficulty measure. The results of an identification perception experiment, in which human observers were asked to identify a standard set of military targets, are presented and used to demonstrate the effectiveness of the benchmarking process.
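A minimal sketch of the optimization loop described, with a toy surrogate standing in for the NVThermIP objective (which is not scriptable here). The parameter names, bounds, and surrogate difficulty function are all assumptions chosen only to show the genetic-algorithm structure.

```python
import random

def task_difficulty(params):
    """Hypothetical surrogate for an NVThermIP-style difficulty measure."""
    f_number, focal_mm, well_ke = params
    return abs(f_number - 4) + abs(focal_mm - 120) / 50 + abs(well_ke - 30) / 10

BENCHMARK = 0.0  # target difficulty set by the best single-band system (assumed)

def fitness(params):
    # Minimize |modeled difficulty - benchmark difficulty|.
    return -abs(task_difficulty(params) - BENCHMARK)

def mutate(p):
    f, mm, ke = p                      # integer-valued design parameters
    return (max(1, f + random.choice((-1, 0, 1))),
            max(10, mm + random.choice((-10, 0, 10))),
            max(5, ke + random.choice((-5, 0, 5))))

pop = [(random.randint(1, 8), random.randint(50, 300), random.randint(5, 60))
       for _ in range(30)]
for _ in range(100):                   # generations: keep elites, mutate to refill
    pop.sort(key=fitness, reverse=True)
    pop = pop[:10] + [mutate(random.choice(pop[:10])) for _ in range(20)]

print("best parameters:", max(pop, key=fitness))
```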
Importance of inlet boundary conditions for numerical simulation of combustor flows
NASA Technical Reports Server (NTRS)
Sturgess, G. J.; Syed, S. A.; McManus, K. R.
1983-01-01
Fluid dynamic computer codes for the mathematical simulation of problems in gas turbine engine combustion systems are required as design and diagnostic tools. To eventually achieve more than qualitative accuracy with these codes, it is desirable to use benchmark experiments for validation studies. Typical of the fluid dynamic computer codes being developed for combustor simulations is the TEACH (Teaching Elliptic Axisymmetric Characteristics Heuristically) solution procedure. It is difficult to find suitable experiments that satisfy the present definition of benchmark quality; for the majority of the available experiments there is a lack of information concerning the boundary conditions. A standard TEACH-type numerical technique is applied to a number of test-case experiments. It is found that numerical simulations of gas turbine combustor-relevant flows can be sensitive to the plane at which the calculations start and to the spatial distributions of inlet quantities for swirling flows.
The challenges of numerically simulating analogue brittle thrust wedges
NASA Astrophysics Data System (ADS)
Buiter, Susanne; Ellis, Susan
2017-04-01
Fold-and-thrust belts and accretionary wedges form when sedimentary and crustal rocks are compressed into thrusts and folds in the foreland of an orogen or at a subduction trench. For over a century, analogue models have been used to investigate the deformation characteristics of such brittle wedges. These models predict wedge shapes that agree with analytical critical taper theory and internal deformation structures that closely resemble natural observations. A series of comparison experiments for thrust wedges, the GeoMod2004 (1,2) and GeoMod2008 (3,4) experiments, showed that different numerical solution methods can successfully reproduce sandbox thrust wedges. However, the GeoMod2008 benchmark also pointed to the difficulties of representing frictional boundary conditions and sharp velocity discontinuities with continuum numerical methods, in addition to the well-known challenges of numerical plasticity. Here we show how details in the numerical implementation of boundary conditions can substantially impact numerical wedge deformation. We consider experiment 1 of the GeoMod2008 brittle thrust wedge benchmarks. This experiment examines a triangular thrust wedge in the stable field of critical taper theory, which should remain stable, that is, without internal deformation, when sliding over a basal frictional surface. The thrust wedge is translated by lateral displacement of a rigid mobile wall. The corner between the mobile wall and the subsurface is a velocity discontinuity. Using our finite-element code SULEC, we show how different approaches to implementing boundary friction (boundary layer or contact elements) and the velocity discontinuity (various smoothing schemes) can cause the wedge either to translate in a stable manner or to undergo internal deformation (a failed test). We recommend that numerical studies of sandbox setups not only report the details of their implementation of boundary conditions, but also document the modelling attempts that failed. References: 1. Buiter and the GeoMod2004 Team, 2006. The numerical sandbox: comparison of model results for a shortening and an extension experiment. Geol. Soc. Lond. Spec. Publ. 253, 29-64. 2. Schreurs and the GeoMod2004 Team, 2006. Analogue benchmarks of shortening and extension experiments. Geol. Soc. Lond. Spec. Publ. 253, 1-27. 3. Buiter, Schreurs and the GeoMod2008 Team, 2016. Benchmarking numerical models of brittle thrust wedges. J. Struct. Geol. 92, 140-177. 4. Schreurs, Buiter and the GeoMod2008 Team, 2016. Benchmarking analogue models of brittle thrust wedges. J. Struct. Geol. 92, 116-13
Benchmarking of Neutron Production of Heavy-Ion Transport Codes
DOE Office of Scientific and Technical Information (OSTI.GOV)
Remec, Igor; Ronningen, Reginald M.; Heilbronn, Lawrence
Accurate prediction of radiation fields generated by heavy ion interactions is important in medical applications, space missions, and in design and operation of rare isotope research facilities. In recent years, several well-established computer codes in widespread use for particle and radiation transport calculations have been equipped with the capability to simulate heavy ion transport and interactions. To assess and validate these capabilities, we performed simulations of a series of benchmark-quality heavy ion experiments with the computer codes FLUKA, MARS15, MCNPX, and PHITS. We focus on the comparisons of secondary neutron production. Results are encouraging; however, further improvements in models and codes and additional benchmarking are required.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Miller, Thomas Martin; Celik, Cihangir; McMahan, Kimberly L.
This benchmark experiment was conducted as a joint venture between the US Department of Energy (DOE) and the French Commissariat à l'Energie Atomique (CEA). Staff at the Oak Ridge National Laboratory (ORNL) in the US and the Centre de Valduc in France planned this experiment. The experiment was conducted on October 19, 2010 in the SILENE critical assembly facility at Valduc. Several other organizations contributed to this experiment and the subsequent evaluation, including CEA Saclay, Lawrence Livermore National Laboratory (LLNL), the Y-12 National Security Complex (NSC), Babcock International Group in the United Kingdom, and Los Alamos National Laboratory (LANL). The goal of this experiment was to measure neutron activation and thermoluminescent dosimeter (TLD) doses from a source similar to a fissile solution critical excursion. The resulting benchmark can be used for validation of computer codes and nuclear data libraries as required when performing analysis of criticality accident alarm systems (CAASs). A secondary goal of this experiment was to qualitatively test the performance of two CAAS detectors similar to those currently and formerly in use in some US DOE facilities. The detectors tested were the CIDAS MkX and the Rocky Flats NCD-91. The CIDAS detects gammas with a Geiger-Muller tube, and the Rocky Flats detects neutrons via charged particles produced in a thin 6LiF disc depositing energy in a Si solid-state detector. These detectors were being evaluated to determine whether they would alarm, so they were not expected to generate benchmark-quality data.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Miller, Thomas Martin; Celik, Cihangir; Isbell, Kimberly McMahan
This benchmark experiment was conducted as a joint venture between the US Department of Energy (DOE) and the French Commissariat à l'Energie Atomique (CEA). Staff at the Oak Ridge National Laboratory (ORNL) in the US and the Centre de Valduc in France planned this experiment. The experiment was conducted on October 13, 2010 in the SILENE critical assembly facility at Valduc. Several other organizations contributed to this experiment and the subsequent evaluation, including CEA Saclay, Lawrence Livermore National Laboratory (LLNL), the Y-12 National Security Complex (NSC), Babcock International Group in the United Kingdom, and Los Alamos National Laboratory (LANL). The goal of this experiment was to measure neutron activation and thermoluminescent dosimeter (TLD) doses from a source similar to a fissile solution critical excursion. The resulting benchmark can be used for validation of computer codes and nuclear data libraries as required when performing analysis of criticality accident alarm systems (CAASs). A secondary goal of this experiment was to qualitatively test the performance of two CAAS detectors similar to those currently and formerly in use in some US DOE facilities. The detectors tested were the CIDAS MkX and the Rocky Flats NCD-91. The CIDAS detects gammas with a Geiger-Muller tube, and the Rocky Flats detects neutrons via charged particles produced in a thin 6LiF disc, depositing energy in a Si solid-state detector. These detectors were being evaluated to determine whether they would alarm, so they were not expected to generate benchmark-quality data.
AlZhrani, Gmaan; Alotaibi, Fahad; Azarnoush, Hamed; Winkler-Schwartz, Alexander; Sabbagh, Abdulrahman; Bajunaid, Khalid; Lajoie, Susanne P; Del Maestro, Rolando F
2015-01-01
Assessment of neurosurgical technical skills involved in the resection of cerebral tumors in operative environments is complex. Educators emphasize the need to develop and use objective and meaningful assessment tools that are reliable and valid for assessing trainees' progress in acquiring surgical skills. The purpose of this study was to develop proficiency performance benchmarks for a newly proposed set of objective measures (metrics) of neurosurgical technical skills performance during simulated brain tumor resection using a new virtual reality simulator (NeuroTouch). Each participant performed the resection of 18 simulated brain tumors of different complexity using the NeuroTouch platform. Surgical performance was computed using Tier 1 and Tier 2 metrics derived from NeuroTouch simulator data consisting of (1) safety metrics, including (a) volume of surrounding simulated normal brain tissue removed, (b) sum of forces utilized, and (c) maximum force applied during tumor resection; (2) quality of operation metric, which involved the percentage of tumor removed; and (3) efficiency metrics, including (a) instrument total tip path lengths and (b) frequency of pedal activation. All studies were conducted in the Neurosurgical Simulation Research Centre, Montreal Neurological Institute and Hospital, McGill University, Montreal, Canada. A total of 33 participants were recruited, including 17 experts (board-certified neurosurgeons) and 16 novices (7 senior and 9 junior neurosurgery residents). The results demonstrated that "expert" neurosurgeons resected less surrounding simulated normal brain tissue and less tumor tissue than residents. These data are consistent with the concept that "experts" focused more on safety of the surgical procedure compared with novices. By analyzing experts' neurosurgical technical skills performance on these different metrics, we were able to establish benchmarks for goal proficiency performance training of neurosurgery residents. This study furthers our understanding of expert neurosurgical performance during the resection of simulated virtual reality tumors and provides neurosurgical trainees with predefined proficiency performance benchmarks designed to maximize the learning of specific surgical technical skills. Copyright © 2015 Association of Program Directors in Surgery. Published by Elsevier Inc. All rights reserved.
Ang, Darwin; McKenney, Mark; Norwood, Scott; Kurek, Stanley; Kimbrell, Brian; Liu, Huazhi; Ziglar, Michele; Hurst, James
2015-09-01
Improving clinical outcomes of trauma patients is a challenging problem at a statewide level, particularly if data from the state's registry are not publicly available. Promotion of optimal care throughout the state is not possible unless clinical benchmarks are available for comparison. Using publicly available administrative data from the State Department of Health and the Agency for Healthcare Research and Quality (AHRQ) patient safety indicators (PSIs), we sought to create a statewide method for benchmarking trauma mortality while also identifying a pattern of unique complications that have an independent influence on mortality. Data for this study were obtained from the State of Florida Agency for Health Care Administration. Adult trauma patients were identified by International Classification of Diseases, ninth edition codes defined by the state. Multivariate logistic regression was used to create a predictive inpatient expected mortality model. The expected value of PSIs was created using the multivariate model and the beta coefficients provided by the AHRQ. Case-mix-adjusted mortality results were reported as observed-to-expected (O/E) ratios to examine mortality, PSIs, failure to prevent complications, and failure to rescue from death. There were 50,596 trauma patients evaluated during the study period. The overall fit of the expected mortality model was very strong, with a c-statistic of 0.93. Twelve of 25 trauma centers had O/E ratios <1, that is, better than expected. Nine statewide PSIs had failure-to-prevent O/E ratios higher than expected. Five statewide PSIs had failure-to-rescue O/E ratios higher than expected. The PSI that had the strongest influence on trauma mortality for the state was PSI no. 9, perioperative hemorrhage or hematoma. Mortality could be further substratified by PSI complications at the hospital level. AHRQ PSIs can play an integral role in an adjusted benchmarking method that screens at-risk trauma centers in the state for higher-than-expected mortality. Stratifying mortality based on failure-to-prevent PSIs may identify areas of needed improvement at a statewide level. Copyright © 2015 Elsevier Inc. All rights reserved.
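The core O/E calculation reduces to summing each patient's predicted mortality probability from the fitted logistic model and dividing the observed deaths by that sum. A toy sketch follows; the coefficients, covariates, and patient records are hypothetical and stand in for the state model's actual risk adjusters.

```python
import math

# Hypothetical fitted logistic coefficients: intercept, age, injury severity score.
BETA = (-6.0, 0.04, 0.09)

def expected_mortality(age, iss):
    """Predicted death probability from the logistic model."""
    z = BETA[0] + BETA[1] * age + BETA[2] * iss
    return 1.0 / (1.0 + math.exp(-z))

# (age, injury severity score, died) for one centre -- invented records.
patients = [(30, 9, 0), (72, 25, 1), (55, 16, 0), (81, 34, 1), (44, 10, 0)]

observed = sum(died for *_, died in patients)
expected = sum(expected_mortality(age, iss) for age, iss, _ in patients)
print(f"O/E ratio = {observed / expected:.2f}")  # <1 means better than expected
```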
NASA Astrophysics Data System (ADS)
Kang, Fei; Li, Junjie; Ma, Zhenyue
2013-02-01
Determination of the critical slip surface with the minimum factor of safety of a slope is a difficult constrained global optimization problem. In this article, an artificial bee colony algorithm with a multi-slice adjustment method is proposed for locating the critical slip surfaces of soil slopes, and the Spencer method is employed to calculate the factor of safety. Six benchmark examples are presented to illustrate the reliability and efficiency of the proposed technique, and it is also compared with some well-known or recent algorithms for the problem. The results show that the new algorithm is promising in terms of accuracy and efficiency.
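For readers unfamiliar with the artificial bee colony (ABC) heuristic, the sketch below shows its basic food-source/scout structure minimizing a surrogate factor-of-safety function. A real implementation would evaluate Spencer's method over a parameterized slip surface with the slope geometry and soil properties; here the objective, bounds, and merged employed/onlooker phase are simplifications.

```python
import random

def factor_of_safety(x):
    """Surrogate for a Spencer-method FoS over slip-surface parameters."""
    return 1.2 + 0.01 * sum((xi - 3.0) ** 2 for xi in x)

DIM, BEES, LIMIT, CYCLES = 4, 20, 10, 200
food = [[random.uniform(0, 10) for _ in range(DIM)] for _ in range(BEES)]
trials = [0] * BEES

def neighbour(x, partner):
    """Perturb one coordinate towards/away from a random partner source."""
    k = random.randrange(DIM)
    y = x[:]
    y[k] += random.uniform(-1, 1) * (x[k] - partner[k])
    return y

for _ in range(CYCLES):
    for i in range(BEES):                       # employed/onlooker phase (merged)
        cand = neighbour(food[i], random.choice(food))
        if factor_of_safety(cand) < factor_of_safety(food[i]):
            food[i], trials[i] = cand, 0
        else:
            trials[i] += 1
        if trials[i] > LIMIT:                   # scout phase: abandon and restart
            food[i] = [random.uniform(0, 10) for _ in range(DIM)]
            trials[i] = 0

best = min(food, key=factor_of_safety)
print("minimum factor of safety:", round(factor_of_safety(best), 3))
```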
Assessment of cognitive safety in clinical drug development
Roiser, Jonathan P.; Nathan, Pradeep J.; Mander, Adrian P.; Adusei, Gabriel; Zavitz, Kenton H.; Blackwell, Andrew D.
2016-01-01
Cognitive impairment is increasingly recognised as an important potential adverse effect of medication. However, many drug development programmes do not incorporate sensitive cognitive measurements. Here, we review the rationale for cognitive safety assessment, and explain several basic methodological principles for measuring cognition during clinical drug development, including study design and statistical analysis, from Phase I through to postmarketing. The crucial issue of how cognition should be assessed is emphasized, especially the sensitivity of measurement. We also consider how best to interpret the magnitude of any identified effects, including comparison with benchmarks. We conclude by discussing strategies for the effective communication of cognitive risks. PMID:26610416
Commencing Student Experience: New Insights and Implications for Action
ERIC Educational Resources Information Center
Grebennikov, Leonid; Shah, Mahsood
2012-01-01
In many developed countries, including Australia, it is common practice to regularly survey university students in order to assess their experience inside and beyond the classroom. Governments conduct nationwide surveys to assess the quality of student experience, benchmark outcomes nationally and in some cases reward better performing…
Sexton, J Bryan; Sharek, Paul J; Thomas, Eric J; Gould, Jeffrey B; Nisbet, Courtney C; Amspoker, Amber B; Kowalkowski, Mark A; Schwendimann, René; Profit, Jochen
2014-01-01
Background Leadership WalkRounds (WR) are widely used in healthcare organisations to improve patient safety. The relationship between WR and caregiver assessments of patient safety culture, and healthcare worker burnout is unknown. Methods This cross-sectional survey study evaluated the association between receiving feedback about actions taken as a result of WR and healthcare worker assessments of patient safety culture and burnout across 44 neonatal intensive care units (NICUs) actively participating in a structured delivery room management quality improvement initiative. Results Of 3294 administered surveys, 2073 were returned for an overall response rate of 62.9%. More WR feedback was associated with better safety culture results and lower burnout rates in the NICUs. Participation in WR and receiving feedback about WR were less common in NICUs than in a benchmarking comparison of adult clinical areas. Conclusions WR are linked to patient safety and burnout. In NICUs, where they occurred more often, the workplace appears to be a better place to deliver and to receive care. PMID:24825895
A quasi two-dimensional benchmark experiment for the solidification of a tin lead binary alloy
NASA Astrophysics Data System (ADS)
Wang, Xiao Dong; Petitpas, Patrick; Garnier, Christian; Paulin, Jean-Pierre; Fautrelle, Yves
2007-05-01
A horizontal solidification benchmark experiment with pure tin and a binary alloy of Sn-10 wt.%Pb is proposed. The experiment consists of solidifying a rectangular sample using two lateral heat exchangers, which allow the application of a controlled horizontal temperature difference. An array of fifty thermocouples placed on the lateral wall permits the determination of the instantaneous temperature distribution. The cases with a temperature gradient G=0 and cooling rates equal to 0.02 and 0.04 K/s are studied. The time evolution of the interfacial total heat flux and the temperature field are recorded and analyzed. This allows us to evaluate the evolution of heat transfer due to natural convection, as well as its influence on the solidification macrostructure. To cite this article: X.D. Wang et al., C. R. Mecanique 335 (2007).
Comas, J; Rodríguez-Roda, I; Poch, M; Gernaey, K V; Rosen, C; Jeppsson, U
2006-01-01
Wastewater treatment plant operators encounter complex operational problems related to the activated sludge process and usually respond to these by applying their own intuition and by taking advantage of what they have learnt from past experiences of similar problems. However, previous process experiences are not easy to integrate in numerical control, and new tools must be developed to enable re-use of plant operating experience. The aim of this paper is to investigate the usefulness of a case-based reasoning (CBR) approach to apply learning and re-use of knowledge gained during past incidents to confront actual complex problems through the IWA/COST Benchmark protocol. A case study shows that the proposed CBR system achieves a significant improvement of the benchmark plant performance when facing a high-flow event disturbance.
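The "retrieve" step of case-based reasoning is essentially nearest-neighbour search over past operating states. The toy sketch below illustrates that step with hypothetical activated-sludge features; the paper's case representation and similarity measure are richer, and the adapt/revise/retain steps are omitted.

```python
# Nearest-neighbour case retrieval: the "retrieve" step of CBR, sketched
# for activated-sludge operating states (feature names are hypothetical).
cases = [
    {"flow": 1.0, "nh4": 25.0, "do": 2.0, "action": "normal aeration"},
    {"flow": 2.4, "nh4": 40.0, "do": 1.2, "action": "raise DO set-point, step-feed"},
    {"flow": 2.0, "nh4": 22.0, "do": 2.2, "action": "reduce recirculation"},
]

def distance(a, b, keys=("flow", "nh4", "do")):
    """Euclidean distance over the (unscaled, for brevity) state features."""
    return sum((a[k] - b[k]) ** 2 for k in keys) ** 0.5

def retrieve(problem):
    """Return the most similar past case to the current plant state."""
    return min(cases, key=lambda c: distance(c, problem))

# A high-flow event resembling the second stored case:
print(retrieve({"flow": 2.2, "nh4": 38.0, "do": 1.3})["action"])
```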
DOE Office of Scientific and Technical Information (OSTI.GOV)
Hu, Rui; Sumner, Tyler S.
2016-04-17
An advanced system analysis tool, SAM, is being developed for fast-running, improved-fidelity, whole-plant transient analyses at Argonne National Laboratory under DOE-NE's Nuclear Energy Advanced Modeling and Simulation (NEAMS) program. As an important part of code development, companion validation activities are being conducted to ensure the performance and validity of the SAM code. This paper presents benchmark simulations of two EBR-II tests, SHRT-45R and BOP-302R, whose data are available through the support of DOE-NE's Advanced Reactor Technology (ART) program. The code predictions of major primary coolant system parameters are compared with the test results. Additionally, the SAS4A/SASSYS-1 code simulation results are included for a code-to-code comparison.
A benchmark study of the sea-level equation in GIA modelling
NASA Astrophysics Data System (ADS)
Martinec, Zdenek; Klemann, Volker; van der Wal, Wouter; Riva, Riccardo; Spada, Giorgio; Simon, Karen; Blank, Bas; Sun, Yu; Melini, Daniele; James, Tom; Bradley, Sarah
2017-04-01
The sea-level load in glacial isostatic adjustment (GIA) is described by the so-called sea-level equation (SLE), which represents the mass redistribution between ice sheets and oceans on a deforming earth. Various levels of complexity of the SLE have been proposed in the past, ranging from a simple mean global sea level (the so-called eustatic sea level) to a load with a deforming ocean bottom, migrating coastlines and a changing shape of the geoid. Several approaches to solving the SLE have been derived, from purely analytical formulations to fully numerical methods. Although various teams have independently investigated GIA, there has been no systematic intercomparison among the solvers through which the methods may be validated. The goal of this paper is to present a series of benchmark experiments designed for testing and comparing numerical implementations of the SLE. Our approach starts with simple load cases, even though the benchmark will not result in GIA predictions for a realistic loading scenario. In the longer term we aim for a benchmark with a realistic loading scenario, and also for benchmark solutions with rotational feedback. The current benchmark uses an earth model for which Love numbers have been computed and benchmarked in Spada et al. (2011). In spite of the significant differences in the numerical methods employed, the test computations performed so far show a satisfactory agreement between the results provided by the participants. The differences found can often be attributed to the different approximations inherent in the various algorithms. Literature: G. Spada, V. R. Barletta, V. Klemann, R. E. M. Riva, Z. Martinec, P. Gasperini, B. Lund, D. Wolf, L. L. A. Vermeersen, and M. A. King, 2011. A benchmark study for glacial isostatic adjustment codes. Geophys. J. Int. 185: 106-132, doi:10.1111/j.1365-
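At the simplest end of the hierarchy mentioned, the eustatic approximation spreads the melted ice volume uniformly over a fixed ocean area. The short sketch below shows that calculation with assumed round-number inputs; full SLE solvers replace it with a gravitationally self-consistent redistribution on a deforming earth.

```python
# Eustatic sea-level change: melted grounded-ice volume converted to
# water-equivalent and spread uniformly over a fixed ocean area (assumed values).
RHO_ICE, RHO_WATER = 917.0, 1000.0   # densities, kg/m^3
A_OCEAN = 3.61e14                    # modern ocean area, m^2
dV_ice = -1.0e15                     # change in grounded ice volume, m^3 (loss)

dSL = -(RHO_ICE / RHO_WATER) * dV_ice / A_OCEAN
print(f"eustatic sea-level rise: {dSL:.2f} m")
```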
Lopez-Regalado, María Luisa; Martínez-Granados, Luis; González-Utor, Antonio; Ortiz, Nereyda; Iglesias, Miriam; Ardoy, Manuel; Castilla, Jose A
2018-05-24
The Vienna consensus, based on the recommendations of an expert panel, has identified 19 performance indicators for assisted reproductive technology (ART) laboratories. Two levels of reference values are established for these performance indicators: competence and benchmark. For over 10 years, the Spanish embryology association (ASEBIR) has participated in the definition and design of ART performance indicators, seeking to establish specific guidelines for ART laboratories to enhance quality, safety and patient welfare. Four years ago, ASEBIR took part in an initiative by AENOR, the Spanish Association for Standardization and Certification, to develop a national standard in this field (UNE 179007:2013, System of quality management for assisted reproduction laboratories), extending the former requirements, based on ISO 9001, to include performance indicators. Considering the experience acquired, we discuss various aspects of the Vienna consensus, consider certain discrepancies in performance indicators between the consensus and UNE 179007:2013, and analyse the definitions, methodology and reference values used. Copyright © 2018. Published by Elsevier Ltd.
GENOPT 2016: Design of a generalization-based challenge in global optimization
NASA Astrophysics Data System (ADS)
Battiti, Roberto; Sergeyev, Yaroslav; Brunato, Mauro; Kvasov, Dmitri
2016-10-01
While comparing results on benchmark functions is a widely used practice to demonstrate the competitiveness of global optimization algorithms, fixed benchmarks can lead to a negative data mining process. To avoid this effect, the GENOPT contest benchmarks, based on randomized function generators designed for scientific experiments, can be used: the generators have fixed statistical characteristics but individual variation among the generated instances. The generators are available to participants for off-line tests and online tuning schemes, but the final competition is based on random seeds communicated in the last phase through a cooperative process. A brief presentation and discussion of the methods and results obtained in the framework of the GENOPT contest are given in this contribution.
Benchmarking of Decision-Support Tools Used for Tiered Sustainable Remediation Appraisal.
Smith, Jonathan W N; Kerrison, Gavin
2013-01-01
Sustainable remediation comprises soil and groundwater risk-management actions that are selected, designed, and operated to maximize net environmental, social, and economic benefit (while assuring protection of human health and safety). This paper describes a benchmarking exercise to comparatively assess potential differences in environmental management decision making resulting from application of different sustainability appraisal tools ranging from simple (qualitative) to more quantitative (multi-criteria and fully monetized cost-benefit analysis), as outlined in the SuRF-UK framework. The appraisal tools were used to rank remedial options for risk management of a subsurface petroleum release that occurred at a petrol filling station in central England. The remediation options were benchmarked using a consistent set of soil and groundwater data for each tier of sustainability appraisal. The ranking of remedial options was very similar in all three tiers, and an environmental management decision to select the most sustainable options at tier 1 would have been the same decision at tiers 2 and 3. The exercise showed that, for relatively simple remediation projects, a simple sustainability appraisal led to the same remediation option selection as more complex appraisal, and can be used to reliably inform environmental management decisions on other relatively simple land contamination projects.
Benchmarking worker nodes using LHCb productions and comparing with HEPSpec06
NASA Astrophysics Data System (ADS)
Charpentier, P.
2017-10-01
In order to estimate the capabilities of a computing slot with limited processing time, it is necessary to know its "power" with rather good precision. This allows, for example, pilot jobs to match a task for which the required CPU-work is known, or the number of events to be processed to be defined knowing the CPU-work per event; otherwise one always runs the risk that the task is aborted because it exceeds the CPU capabilities of the resource. It also allows a better accounting of the consumed resources. The traditional way CPU power has been estimated in WLCG since 2007 is the HEP-Spec06 (HS06) benchmark suite, which was verified at the time to scale properly with a set of typical HEP applications. However, the hardware architecture of processors has evolved, all WLCG experiments have moved to 64-bit applications, and they use compilation flags different from those advertised for running HS06. It is therefore interesting to check the scaling of HS06 with the HEP applications. For this purpose, we have been using CPU-intensive massive simulation productions from the LHCb experiment and compared their event throughput to the HS06 rating of the worker nodes. We also compared it with a much faster benchmark script used by the DIRAC framework (employed by LHCb) for evaluating the performance of worker nodes at run time. This contribution reports on the findings of these comparisons: the main observation is that the scaling with HS06 no longer holds, while the fast benchmarks scale better but are less precise. One can also clearly see that some hardware or software features, when enabled on the worker nodes, may enhance their performance beyond what either benchmark predicts, depending on external factors.
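Checking the scaling amounts to computing application throughput per HS06 unit for each node and looking at the spread of those ratios. The sketch below shows the bookkeeping with invented node names and numbers; real productions would aggregate many jobs per node type.

```python
# Hypothetical per-node data: HS06 rating and measured simulation throughput.
nodes = {
    "wn-a": {"hs06": 10.2, "events_per_s": 1.9},
    "wn-b": {"hs06": 13.5, "events_per_s": 2.1},
    "wn-c": {"hs06": 18.0, "events_per_s": 3.6},
}

ratios = {name: d["events_per_s"] / d["hs06"] for name, d in nodes.items()}
mean = sum(ratios.values()) / len(ratios)
spread = max(ratios.values()) / min(ratios.values())

print("events/s per HS06 unit:", {n: round(r, 3) for n, r in ratios.items()})
print(f"mean {mean:.3f}, max/min spread {spread:.2f}")
# Perfect scaling would give identical ratios on every node; a large spread
# signals that the benchmark no longer tracks the application throughput.
```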
Boyce, Maria B; Browne, John P; Greenhalgh, Joanne
2014-06-27
The use of patient-reported outcome measures (PROMs) to provide healthcare professionals with peer-benchmarked feedback is growing. However, there is little evidence on the opinions of professionals about the value of this information in practice. The purpose of this research is to explore surgeons' experiences of receiving peer-benchmarked PROMs feedback and to examine whether this information led to changes in their practice. This qualitative research employed a Framework approach. Semi-structured interviews were undertaken with surgeons who received peer-benchmarked PROMs feedback. The participants included eleven consultant orthopaedic surgeons in the Republic of Ireland. Five themes were identified: conceptual, methodological, practical, attitudinal, and impact. A typology was developed based on the attitudinal and impact themes, from which three distinct groups emerged. 'Advocates' had positive attitudes towards PROMs and confirmed that the information promoted a self-reflective process. 'Converts' were uncertain about the value of PROMs, which reduced their inclination to use the data. 'Sceptics' had negative attitudes towards PROMs and claimed that the information had no impact on their behaviour. The conceptual, methodological and practical factors were linked to the typology. Surgeons had mixed opinions on the value of peer-benchmarked PROMs data. Many appreciated the feedback, as it reassured them that their practice was similar to their peers'. However, PROMs information alone was considered insufficient to help identify opportunities for quality improvements. The reasons for the observed reluctance of participants to embrace PROMs can be categorised into conceptual, methodological, and practical factors. Policy makers and researchers need to increase professionals' awareness of the numerous purposes and benefits of using PROMs, challenge the current methods of measuring performance using PROMs, and reduce the burden of data collection and information dissemination on routine practice.
Benchmarking the ATLAS software through the Kit Validation engine
NASA Astrophysics Data System (ADS)
De Salvo, Alessandro; Brasolin, Franco
2010-04-01
The measurement of the experiment software performance is a very important metric in order to choose the most effective resources to be used and to discover the bottlenecks of the code implementation. In this work we present the benchmark techniques used to measure the ATLAS software performance through the ATLAS offline testing engine Kit Validation and the online portal Global Kit Validation. The performance measurements, the data collection, the online analysis and display of the results will be presented. The results of the measurement on different platforms and architectures will be shown, giving a full report on the CPU power and memory consumption of the Monte Carlo generation, simulation, digitization and reconstruction of the most CPU-intensive channels. The impact of the multi-core computing on the ATLAS software performance will also be presented, comparing the behavior of different architectures when increasing the number of concurrent processes. The benchmark techniques described in this paper have been used in the HEPiX group since the beginning of 2008 to help defining the performance metrics for the High Energy Physics applications, based on the real experiment software.
2015/2016 Quality Risk Management Benchmarking Survey.
Waldron, Kelly; Ramnarine, Emma; Hartman, Jeffrey
2017-01-01
This paper investigates the concept of quality risk management (QRM) maturity as it applies to the pharmaceutical and biopharmaceutical industries, using the results and analysis from a QRM benchmarking survey conducted in 2015 and 2016. QRM maturity can be defined as the effectiveness and efficiency of a quality risk management program, moving beyond "check-the-box" compliance with guidelines such as ICH Q9 Quality Risk Management, to explore the value QRM brings to business and quality operations. While significant progress has been made towards full adoption of QRM principles and practices across industry, the benefits of QRM have not yet been fully realized. The results of the QRM Benchmarking Survey indicate that the pharmaceutical and biopharmaceutical industries are approximately halfway along the journey towards full QRM maturity. LAY ABSTRACT: The management of risks associated with medicinal product quality and patient safety is an important focus for the pharmaceutical and biopharmaceutical industries. These risks are identified, analyzed, and controlled through a defined process called quality risk management (QRM), which seeks to protect the patient from potential quality-related risks. This paper summarizes the outcomes of a comprehensive survey of industry practitioners performed in 2015 and 2016 that aimed to benchmark the level of maturity with regard to the application of QRM. The survey results and subsequent analysis revealed that the pharmaceutical and biopharmaceutical industries have made significant progress in the management of quality risks over the last ten years, and they are roughly halfway towards reaching full maturity of QRM. © PDA, Inc. 2017.
Schilling, Lisa; Chase, Alide; Kehrli, Sommer; Liu, Amy Y; Stiefel, Matt; Brentari, Ruth
2010-11-01
By 2004, senior leaders at Kaiser Permanente, the largest not-for-profit health plan in the United States, recognizing variations across service areas in quality, safety, service, and efficiency, began developing a performance improvement (PI) system to realize best-in-class quality performance across all 35 medical centers. MEASURING SYSTEMWIDE PERFORMANCE: In 2005, a Web-based data dashboard, "Big Q," which tracks the performance of each medical center and service area against external benchmarks and internal goals, was created. PLANNING FOR PI AND BENCHMARKING PERFORMANCE: In 2006, Kaiser Permanente's national and regional leaders continued planning the PI system, and in 2007, quality, medical group, operations, and information technology leaders benchmarked five high-performing organizations to identify the capabilities required to achieve consistent best-in-class organizational performance. THE PI SYSTEM: The PI system addresses six capabilities: leadership priority setting, a systems approach to improvement, measurement capability, a learning organization, improvement capacity, and a culture of improvement. PI "deep experts" (mentors) consult with national, regional, and local leaders, and more than 500 improvement advisors are trained to manage portfolios of 90-120 day improvement initiatives at medical centers. Between the second quarter of 2008 and the first quarter of 2009, performance across all Kaiser Permanente medical centers improved on the Big Q metrics. The lessons learned in implementing and sustaining PI as it becomes fully integrated into all levels of Kaiser Permanente can be generalized to other health care systems, hospitals, and other health care organizations.
Introduction to benchmark dose methods and U.S. EPA's benchmark dose software (BMDS) version 2.1.1
DOE Office of Scientific and Technical Information (OSTI.GOV)
Davis, J. Allen, E-mail: davis.allen@epa.gov; Gift, Jeffrey S.; Zhao, Q. Jay
2011-07-15
Traditionally, the No-Observed-Adverse-Effect-Level (NOAEL) approach has been used to determine the point of departure (POD) from animal toxicology data for use in human health risk assessments. However, this approach is subject to substantial limitations that have been well defined, such as strict dependence on the dose selection, dose spacing, and sample size of the study from which the critical effect has been identified. Also, the NOAEL approach fails to take into consideration the shape of the dose-response curve and other related information. The benchmark dose (BMD) method, originally proposed as an alternative to the NOAEL methodology in the 1980s, addresses many of the limitations of the NOAEL method. It is less dependent on dose selection and spacing, and it takes into account the shape of the dose-response curve. In addition, the estimation of a BMD 95% lower bound confidence limit (BMDL) results in a POD that appropriately accounts for study quality (i.e., sample size). With the recent advent of user-friendly BMD software programs, including the U.S. Environmental Protection Agency's (U.S. EPA) Benchmark Dose Software (BMDS), BMD has become the method of choice for many health organizations world-wide. This paper discusses the BMD methods and corresponding software (i.e., BMDS version 2.1.1) that have been developed by the U.S. EPA, and includes a comparison with recently released European Food Safety Authority (EFSA) BMD guidance.
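To make the BMD idea concrete, the sketch below inverts a quantal one-stage (one-hit) dose-response model for a 10% extra-risk benchmark response. The background rate and fitted slope are hypothetical; BMDS would estimate these by maximum likelihood and additionally compute the BMDL as a lower confidence bound on the dose.

```python
import math

# Quantal one-stage (one-hit) model: P(d) = p0 + (1 - p0) * (1 - exp(-b * d)).
p0, b = 0.05, 0.012          # hypothetical background response and fitted slope
BMR = 0.10                   # benchmark response: 10% extra risk

# Extra risk: (P(d) - p0) / (1 - p0) = 1 - exp(-b * d).
# Setting extra risk equal to BMR and solving for the dose d gives the BMD.
bmd = -math.log(1.0 - BMR) / b
print(f"BMD at 10% extra risk = {bmd:.1f} dose units")
```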
Flooding Experiments and Modeling for Improved Reactor Safety
DOE Office of Scientific and Technical Information (OSTI.GOV)
Solmos, M.; Hogan, K. J.; Vierow, K.
2008-09-14
Countercurrent two-phase flow and “flooding” phenomena in light water reactor systems are being investigated experimentally and analytically to improve reactor safety of current and future reactors. The aspects that will be better clarified are the effects of condensation and tube inclination on flooding in large diameter tubes. The current project aims to improve the level of understanding of flooding mechanisms and to develop an analysis model for more accurate evaluations of flooding in the pressurizer surge line of a Pressurized Water Reactor (PWR). Interest in flooding has recently increased because Countercurrent Flow Limitation (CCFL) in the AP600 pressurizer surge line can affect the vessel refill rate following a small break LOCA and because analysis of hypothetical severe accidents with the current flooding models in reactor safety codes shows that these models represent the largest uncertainty in analysis of steam generator tube creep rupture. During a hypothetical station blackout without auxiliary feedwater recovery, should the hot leg become voided, the pressurizer liquid will drain to the hot leg and flooding may occur in the surge line. The flooding model heavily influences the pressurizer emptying rate and the potential for surge line structural failure due to overheating and creep rupture. The air-water test results in vertical tubes are presented in this paper along with a semi-empirical correlation for the onset of flooding. The unique aspects of the study include careful experimentation on large-diameter tubes and an integrated program in which air-water testing provides benchmark knowledge and visualization data from which to conduct steam-water testing.
Predictive Trip Detection for Nuclear Power Plants
NASA Astrophysics Data System (ADS)
Rankin, Drew J.; Jiang, Jin
2016-08-01
This paper investigates the use of a Kalman filter (KF) to predict, within the shutdown system (SDS) of a nuclear power plant (NPP), whether safety parameter measurements have reached a trip set-point. In addition, least squares (LS) estimation compensates for prediction error due to system-model mismatch. The motivation behind predictive shutdown is to reduce the amount of time between the occurrence of a fault or failure and the time of trip detection, referred to as time-to-trip. These reductions in time-to-trip can ultimately lead to increases in safety and productivity margins. The proposed predictive SDS differs from conventional SDSs in that it compares point predictions of the measurements, rather than sensor measurements, against trip set-points. The predictive SDS is validated through simulation and experiments for the steam generator water level safety parameter. Performance of the proposed predictive SDS is compared against a benchmark conventional SDS with respect to time-to-trip. In addition, this paper analyzes prediction uncertainty, as well as the conditions under which it is possible to achieve reduced time-to-trip. Simulation results demonstrate that on average the predictive SDS reduces time-to-trip by an amount of time equal to the length of the prediction horizon, and that the distribution of times-to-trip is approximately Gaussian. Experimental results reveal that a reduced time-to-trip can be achieved in a real-world system with unknown system-model mismatch and that the predictive SDS can be implemented with a scan time of under 100 ms. This paper thus serves as a proof of concept for KF/LS-based predictive trip detection.
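A minimal scalar sketch of the idea, not the paper's SDS implementation: a Kalman filter tracks a drifting level measurement, and an N-step point prediction is compared against the trip set-point, so the trip fires before the raw measurement crosses it. The process model, noise levels, drift, and horizon are all assumptions.

```python
import random

A_Q, R = 1e-4, 4e-2           # assumed process and measurement noise variances
SETPOINT, HORIZON = 9.0, 5    # trip level and prediction steps (assumed)
drift = -0.05                 # assumed known drift per sample, used for prediction

x_hat, P = 10.0, 1.0          # initial state estimate and covariance

for k in range(200):
    z = 10.0 - 0.05 * k + random.gauss(0, R ** 0.5)   # simulated sensor reading
    # Kalman predict step (random walk plus known drift)
    x_minus = x_hat + drift
    P = P + A_Q
    # Kalman update step
    K = P / (P + R)
    x_hat = x_minus + K * (z - x_minus)
    P = (1 - K) * P
    # N-step point prediction compared against the trip set-point
    x_pred = x_hat + HORIZON * drift
    if x_pred <= SETPOINT:
        print(f"predicted trip at sample {k} (measurement still {z:.2f})")
        break
```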
Benchmarking a Visual Basic-based multi-component one-dimensional reactive transport modeling tool
NASA Astrophysics Data System (ADS)
Torlapati, Jagadish; Prabhakar Clement, T.
2013-01-01
We present the details of a comprehensive numerical modeling tool, RT1D, which can be used for simulating biochemical and geochemical reactive transport problems. The code can be run within the standard Microsoft EXCEL Visual Basic platform, and it does not require any additional software tools. The code can be easily adapted by others for simulating different types of laboratory-scale reactive transport experiments. We illustrate the capabilities of the tool by solving five benchmark problems with varying levels of reaction complexity. These literature-derived benchmarks are used to highlight the versatility of the code for solving a variety of practical reactive transport problems. The benchmarks are described in detail to provide a comprehensive database, which can be used by model developers to test other numerical codes. The VBA code presented in the study is a practical tool that can be used by laboratory researchers for analyzing both batch and column datasets within an EXCEL platform.
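As a flavor of what such a one-dimensional reactive transport code solves, the sketch below advances an advection-dispersion equation with first-order decay by explicit finite differences. RT1D itself is a VBA tool with much richer reaction support, so this Python fragment, including every parameter value and boundary choice, is only an illustrative assumption.

```python
import numpy as np

def adr_1d(c0=1.0, L=1.0, nx=101, v=0.5, D=1e-3, k=0.1, t_end=1.0):
    """Explicit upwind/central scheme for dc/dt = -v dc/dx + D d2c/dx2 - k c."""
    dx = L / (nx - 1)
    dt = 0.4 * min(dx / v, dx**2 / (2 * D))   # CFL-type stability limit
    c = np.zeros(nx)
    for _ in range(int(t_end / dt)):
        c_new = c.copy()
        # upwind advection, central dispersion, first-order decay
        c_new[1:-1] = (c[1:-1]
                       - v * dt / dx * (c[1:-1] - c[:-2])
                       + D * dt / dx**2 * (c[2:] - 2 * c[1:-1] + c[:-2])
                       - k * dt * c[1:-1])
        c_new[0] = c0                          # constant-concentration inlet
        c_new[-1] = c_new[-2]                  # zero-gradient outlet
        c = c_new
    return c
```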
Analyzing the BBOB results by means of benchmarking concepts.
Mersmann, O; Preuss, M; Trautmann, H; Bischl, B; Weihs, C
2015-01-01
We present methods to answer two basic questions that arise when benchmarking optimization algorithms. The first is: which algorithm is the "best" one? The second is: which algorithm should I use for my real-world problem? Both are connected and neither is easy to answer. We present a theoretical framework for designing and analyzing the raw data of such benchmark experiments. This represents a first step in answering the aforementioned questions. The 2009 and 2010 BBOB benchmark results are analyzed by means of this framework and we derive insight regarding the answers to the two questions. Furthermore, we discuss how to properly aggregate rankings from algorithm evaluations on individual problems into a consensus, its theoretical background and which common pitfalls should be avoided. Finally, we address the grouping of test problems into sets with similar optimizer rankings and investigate whether these are reflected by already proposed test problem characteristics, finding that this is not always the case.
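One simple baseline for aggregating per-problem rankings into a consensus, in the spirit of the aggregation question the paper examines, is a Borda-style mean-rank ordering; the sketch below shows only that baseline and is not the paper's recommended procedure.

```python
import numpy as np

def consensus_ranking(ranks):
    """ranks: (n_problems, n_algorithms) array of per-problem ranks (1 = best).
    Returns algorithm indices ordered best-first by mean rank (Borda-style)."""
    mean_rank = np.asarray(ranks, dtype=float).mean(axis=0)
    return np.argsort(mean_rank)

# Example: three problems, three algorithms
print(consensus_ranking([[1, 2, 3], [2, 1, 3], [1, 3, 2]]))  # -> [0 1 2]
```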
Validation of Shielding Analysis Capability of SuperMC with SINBAD
NASA Astrophysics Data System (ADS)
Chen, Chaobin; Yang, Qi; Wu, Bin; Han, Yuncheng; Song, Jing
2017-09-01
The shielding analysis capability of SuperMC was validated with the Shielding Integral Benchmark Archive Database (SINBAD). SINBAD, compiled by RSICC and the NEA, includes numerous benchmark experiments performed with the D-T fusion neutron source facilities of OKTAVIAN, FNS, IPPE, etc. The results of SuperMC simulations were compared with experimental data and MCNP results. Very good agreement, with deviations lower than 1%, was achieved, suggesting that SuperMC is reliable for shielding calculations.
Gatemon Benchmarking and Two-Qubit Operation
NASA Astrophysics Data System (ADS)
Casparis, Lucas; Larsen, Thorvald; Olsen, Michael; Petersson, Karl; Kuemmeth, Ferdinand; Krogstrup, Peter; Nygard, Jesper; Marcus, Charles
Recent experiments have demonstrated superconducting transmon qubits with semiconductor nanowire Josephson junctions. These hybrid gatemon qubits utilize the field-effect tunability unique to semiconductors to allow complete qubit control using gate voltages, potentially a technological advantage over conventional flux-controlled transmons. Here, we present experiments with a two-qubit gatemon circuit. We characterize qubit coherence and stability and use randomized benchmarking to demonstrate single-qubit gate errors of ~0.5% for all gates, including voltage-controlled Z rotations. We show coherent capacitive coupling between two gatemons and coherent SWAP operations. Finally, we perform a two-qubit controlled-phase gate with an estimated fidelity of ~91%, demonstrating the potential of gatemon qubits for building scalable quantum processors. We acknowledge financial support from Microsoft Project Q and the Danish National Research Foundation.
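Gate errors of the size quoted (~0.5%) are typically extracted from randomized-benchmarking data by fitting an exponential decay of sequence fidelity versus sequence length; the sketch below shows that standard fit on synthetic data, with all names and numbers assumed for illustration.

```python
import numpy as np
from scipy.optimize import curve_fit

def rb_gate_error(m, F):
    """Fit F(m) = A * p**m + B; return r = (1 - p)(d - 1)/d with d = 2."""
    popt, _ = curve_fit(lambda m, A, p, B: A * p**m + B, m, F,
                        p0=(0.5, 0.99, 0.5))
    _, p, _ = popt
    return (1.0 - p) / 2.0            # average error per Clifford, one qubit

# Synthetic decay data with depolarizing parameter p = 0.99
m = np.arange(1, 200, 10)
F = 0.5 * 0.99**m + 0.5 + 0.002 * np.random.randn(m.size)
print(f"estimated gate error: {rb_gate_error(m, F):.4f}")   # ~0.005, i.e. ~0.5%
```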
FDNS CFD Code Benchmark for RBCC Ejector Mode Operation
NASA Technical Reports Server (NTRS)
Holt, James B.; Ruf, Joe
1999-01-01
Computational Fluid Dynamics (CFD) analysis results are compared with benchmark quality test data from the Propulsion Engineering Research Center's (PERC) Rocket Based Combined Cycle (RBCC) experiments to verify fluid dynamic code and application procedures. RBCC engine flowpath development will rely on CFD applications to capture the multi-dimensional fluid dynamic interactions and to quantify their effect on the RBCC system performance. Therefore, the accuracy of these CFD codes must be determined through detailed comparisons with test data. The PERC experiments build upon the well-known 1968 rocket-ejector experiments of Odegaard and Stroup by employing advanced optical and laser based diagnostics to evaluate mixing and secondary combustion. The Finite Difference Navier Stokes (FDNS) code was used to model the fluid dynamics of the PERC RBCC ejector mode configuration. Analyses were performed for both Diffusion and Afterburning (DAB) and Simultaneous Mixing and Combustion (SMC) test conditions. Results from both the 2D and the 3D models are presented.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Lell, R.; Grimm, K.; McKnight, R.
The Zero Power Physics Reactor (ZPPR) fast critical facility was built at the Argonne National Laboratory-West (ANL-W) site in Idaho in 1969 to obtain neutron physics information necessary for the design of fast breeder reactors. The ZPPR-20D Benchmark Assembly was part of a series of cores built in Assembly 20 (References 1 through 3) of the ZPPR facility to provide data for developing a nuclear power source for space applications (SP-100). The assemblies were beryllium oxide reflected and had core fuel compositions containing enriched uranium fuel, niobium and rhenium. ZPPR-20 Phase C (HEU-MET-FAST-075) was built as the reference flight configuration. Two other configurations, Phases D and E, simulated accident scenarios. Phase D modeled the water immersion scenario during a launch accident, and Phase E (SUB-HEU-MET-FAST-001) modeled the earth burial scenario during a launch accident. Two configurations were recorded for the simulated water immersion accident scenario (Phase D): the critical configuration, documented here, and the subcritical configuration (SUB-HEU-MET-MIXED-001). Experiments in Assembly 20 Phases 20A through 20F were performed in 1988. The reference water immersion configuration for the ZPPR-20D assembly was obtained as reactor loading 129 on October 7, 1988, with a fissile mass of 167.477 kg and a reactivity of -4.626 ± 0.044 ¢ (k ≈ 0.9997). The SP-100 core was to be constructed of highly enriched uranium nitride, niobium, rhenium and depleted lithium. The core design called for two enrichment zones with niobium-1% zirconium alloy fuel cladding and core structure. Rhenium was to be used as a fuel pin liner to provide shutdown in the event of water immersion and flooding. The core coolant was to be depleted lithium metal (⁷Li). The core was to be surrounded radially with a niobium reactor vessel and bypass which would carry the lithium coolant to the forward inlet plenum. Immediately inside the reactor vessel was a rhenium baffle which would act as a neutron curtain in the event of water immersion. A fission gas plenum and coolant inlet plenum were located axially forward of the core. Some material substitutions had to be made in mocking up the SP-100 design. The ZPPR-20 critical assemblies were fueled by 93% enriched uranium metal because uranium nitride, the SP-100 fuel type, was not available. ZPPR Assembly 20D was designed to simulate a water immersion accident. The water was simulated by polyethylene (CH₂), which contains a similar amount of hydrogen and has a similar density. A very accurate transformation to a simplified model is needed to make any of the ZPPR assemblies a practical criticality-safety benchmark. There is simply too much geometric detail in an exact model of a ZPPR assembly, particularly as complicated an assembly as ZPPR-20D. The transformation must reduce the detail to a practical level without masking any of the important features of the critical experiment, and it must do this without increasing the total uncertainty far beyond that of the original experiment. Such a transformation will be described in a later section. First, Assembly 20D was modeled in full detail--every plate, drawer, matrix tube, and air gap was modeled explicitly. Then the regionwise compositions and volumes from this model were converted to an RZ model. ZPPR Assembly 20D has been determined to be an acceptable criticality-safety benchmark experiment.
Revisiting the PLUMBER Experiments from a Process-Diagnostics Perspective
NASA Astrophysics Data System (ADS)
Nearing, G. S.; Ruddell, B. L.; Clark, M. P.; Nijssen, B.; Peters-Lidard, C. D.
2017-12-01
The PLUMBER benchmarking experiments [1] showed that some of the most sophisticated land models (CABLE, CH-TESSEL, COLA-SSiB, ISBA-SURFEX, JULES, Mosaic, Noah, ORCHIDEE) were outperformed - in simulations of half-hourly surface energy fluxes - by instantaneous, out-of-sample, and globally-stationary regressions with no state memory. One criticism of PLUMBER is that the benchmarking methodology was not derived formally, so that applying a similar methodology with different performance metrics can result in qualitatively different results. Another common criticism of model intercomparison projects in general is that they offer little insight into process-level deficiencies in the models, and therefore are of marginal value for helping to improve the models. We address both of these issues by proposing a formal benchmarking methodology that also yields a formal and quantitative method for process-level diagnostics. We apply this to the PLUMBER experiments to show that (1) the PLUMBER conclusions were generally correct - the models use only a fraction of the information available to them from met forcing data (<50% by our analysis), and (2) all of the land models investigated by PLUMBER have similar process-level error structures, and therefore together do not represent a meaningful sample of structural or epistemic uncertainty. We conclude by suggesting two ways to improve the experimental design of model intercomparison and/or model benchmarking studies like PLUMBER. First, PLUMBER did not report model parameter values, and it is necessary to know these values to separate parameter uncertainty from structural uncertainty. This is a first order requirement if we want to use intercomparison studies to provide feedback to model development. Second, technical documentation of land models is inadequate. Future model intercomparison projects should begin with a collaborative effort by model developers to document specific differences between model structures. This could be done in a reproducible way using a unified, process-flexible system like SUMMA [2]. [1] Best, M.J. et al. (2015) 'The plumbing of land surface models: benchmarking model performance', J. Hydrometeor. [2] Clark, M.P. et al. (2015) 'A unified approach for process-based hydrologic modeling: 1. Modeling concept', Water Resour. Res.
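A rough sketch of the information-use diagnostic idea: compare the mutual information a model's simulated flux shares with observations against what a benchmark regression extracts from the same forcings. The histogram estimator, bin count, and ratio definition below are illustrative assumptions, not the paper's formal methodology.

```python
import numpy as np

def mutual_information(x, y, bins=20):
    """Histogram estimate of I(X; Y) in nats."""
    pxy, _, _ = np.histogram2d(x, y, bins=bins)
    pxy = pxy / pxy.sum()                         # joint distribution
    px, py = pxy.sum(axis=1), pxy.sum(axis=0)     # marginals
    nz = pxy > 0
    return float((pxy[nz] * np.log(pxy[nz] / np.outer(px, py)[nz])).sum())

def fraction_of_information_used(model_flux, obs_flux, benchmark_flux):
    # < 1 means the model captures less of the forcing-borne information
    # about the observations than the benchmark regression does
    return (mutual_information(model_flux, obs_flux)
            / mutual_information(benchmark_flux, obs_flux))
```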
Adaptive unified continuum FEM modeling of a 3D FSI benchmark problem.
Jansson, Johan; Degirmenci, Niyazi Cem; Hoffman, Johan
2017-09-01
In this paper, we address a 3D fluid-structure interaction benchmark problem that represents important characteristics of biomedical modeling. We present a goal-oriented adaptive finite element methodology for incompressible fluid-structure interaction based on a streamline diffusion-type stabilization of the balance equations for mass and momentum for the entire continuum in the domain, which is implemented in the Unicorn/FEniCS software framework. A phase marker function and its corresponding transport equation are introduced to select the constitutive law, where the mesh tracks the discontinuous fluid-structure interface. This results in a unified simulation method for fluids and structures. We present detailed results for the benchmark problem compared with experiments, together with a mesh convergence study. Copyright © 2016 John Wiley & Sons, Ltd.
Model Prediction Results for 2007 Ultrasonic Benchmark Problems
NASA Astrophysics Data System (ADS)
Kim, Hak-Joon; Song, Sung-Jin
2008-02-01
The World Federation of NDE Centers (WFNDEC) has addressed two types of problems for the 2007 ultrasonic benchmark problems: prediction of side-drilled hole responses with 45° and 60° refracted shear waves, and the effects of surface curvature on the ultrasonic response of a flat-bottomed hole. To solve this year's ultrasonic benchmark problems, we applied multi-Gaussian beam models for calculation of ultrasonic beam fields, and the Kirchhoff approximation and the separation of variables method for calculation of far-field scattering amplitudes of flat-bottomed holes and side-drilled holes, respectively. In this paper, we present comparisons of model predictions to experiments for side-drilled holes and discuss the effect of interface curvature on ultrasonic responses by comparing peak-to-peak amplitudes of flat-bottomed hole responses with different sizes and interface curvatures.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Marshall, Margaret A.
In the early 1970s Dr. John T. Mihalczo (team leader), J.J. Lynn, and J.R. Taylor performed experiments at the Oak Ridge Critical Experiments Facility (ORCEF) with highly enriched uranium (HEU) metal (called Oak Ridge Alloy or ORALLOY) in an effort to recreate GODIVA I results with greater accuracy than those performed at Los Alamos National Laboratory in the 1950s. The purpose of the Oak Ridge ORALLOY Sphere (ORSphere) experiments was to estimate the unreflected and unmoderated critical mass of an idealized sphere of uranium metal corrected to a density, purity, and enrichment such that it could be compared with the GODIVA I experiments. Additionally, various material reactivity worths, the surface material worth coefficient, the delayed neutron fraction, the prompt neutron decay constant, relative fission density, and relative neutron importance were all measured. The critical assembly, material reactivity worths, the surface material worth coefficient, and the delayed neutron fraction were all evaluated as benchmark experiment measurements. The reactor physics measurements are the focus of this paper, although for clarity the critical assembly benchmark specifications are briefly discussed.
Clinical audit of leg ulceration prevalence in a community area: a case study of good practice.
Hindley, Jenny
2014-09-01
This article presents the findings of an audit on venous leg ulceration prevalence in a community area as a framework for discussing the concept and importance of audit as a tool to inform practice and as a means to benchmark care against national or international standards. It is hoped that the discussed audit will practically demonstrate how such procedures can be implemented in practice by those who have not yet undertaken them, as well as highlighting the extra benefits of this type of qualitative data collection, which can often unexpectedly inform practice and influence change. Audit can be used to measure, monitor and disseminate evidence-based practice across community localities, facilitating the identification of learning needs and the instigation of clinical change, thereby prioritising patient needs by ensuring safety through the benchmarking of clinical practice.
Outage management and health physics issue, 2009
DOE Office of Scientific and Technical Information (OSTI.GOV)
Agnihotri, Newal
2009-05-15
The focus of the May-June issue is on outage management and health physics. Major articles include the following: Planning and scheduling to minimize refueling outage, by Pat McKenna, AmerenUE; Prioritizing safety, quality and schedule, by Tom Sharkey, Dominion; Benchmarking to high standards, by Margie Jepson, Energy Nuclear; Benchmarking against U.S. standards, by Magnox North, United Kingdom; Enabling suppliers for new build activity, by Marcus Harrington, GE Hitachi Nuclear Energy; Identifying, cultivating and qualifying suppliers, by Thomas E. Silva, AREVA NP; Creating new U.S. jobs, by Francois Martineau, AREVA NP. Industry innovation articles include: MSL Acoustic source load reduction, by Amir Shahkarami, Exelon Nuclear; Dual Methodology NDE of CRDM nozzles, by Michael Stark, Dominion Nuclear; and Electronic circuit board testing, by James Amundsen, FirstEnergy Nuclear Operating Company. The plant profile article is titled The future is now, by Julia Milstead, Progress Energy Service Company, LLC.
Experimental Criticality Benchmarks for SNAP 10A/2 Reactor Cores
DOE Office of Scientific and Technical Information (OSTI.GOV)
Krass, A.W.
2005-12-19
This report describes computational benchmark models for nuclear criticality derived from descriptions of the Systems for Nuclear Auxiliary Power (SNAP) Critical Assembly (SCA)-4B experimental criticality program conducted by Atomics International during the early 1960s. The selected experimental configurations consist of fueled SNAP 10A/2-type reactor cores subject to varied conditions of water immersion and reflection under experimental control to measure neutron multiplication. SNAP 10A/2-type reactor cores are compact volumes fueled and moderated with the hydride of highly enriched uranium-zirconium alloy. Specifications for the materials and geometry needed to describe a given experimental configuration for a model using MCNP5 are provided. The material and geometry specifications are adequate to permit user development of input for alternative nuclear safety codes, such as KENO. A total of 73 distinct experimental configurations are described.
Clinical Assessment of Risk Management: an INtegrated Approach (CARMINA).
Tricarico, Pierfrancesco; Tardivo, Stefano; Sotgiu, Giovanni; Moretti, Francesca; Poletti, Piera; Fiore, Alberto; Monturano, Massimo; Mura, Ida; Privitera, Gaetano; Brusaferro, Silvio
2016-08-08
Purpose - The European Union recommendations for patient safety call for shared clinical risk management (CRM) safety standards able to guide organizations in CRM implementation. The purpose of this paper is to develop a self-evaluation tool to measure healthcare organization performance on CRM and guide improvements over time. Design/methodology/approach - A multi-step approach was implemented, including: a systematic literature review; consensus meetings with an expert panel from eight Italian leader organizations to reach agreement on the first version; field testing to assess instrument feasibility and flexibility; and a Delphi strategy with a second expert panel for content validation and development of a balanced scoring system. Findings - The self-assessment tool - Clinical Assessment of Risk Management: an INtegrated Approach - includes seven areas (governance, communication, knowledge and skills, safe environment, care processes, adverse event management, learning from experience) and 52 standards. Each standard is evaluated according to four performance levels: minimum; monitoring; outcomes; and improvement actions. The result is a feasible, flexible and valid instrument to be used throughout different organizations. Practical implications - This tool allows practitioners to assess their CRM activities against minimum levels, monitor performance, benchmark with other institutions and communicate results to different stakeholders. Originality/value - The multi-step approach allowed us to identify core minimum CRM levels in a field where no consensus had been reached. Most standards may be easily adopted in other countries.
40% Whole-House Energy Savings in the Hot-Humid Climate
DOE Office of Scientific and Technical Information (OSTI.GOV)
none,
This guidebook is a resource to help builders design and construct highly energy-efficient homes, while addressing building durability, indoor air quality, and occupant health, safety, and comfort. With the measures described in this guide, builders in the hot-humid climate can build homes that achieve whole-house energy savings of 40% over the Building America benchmark (the 1993 Model Energy Code) with no added overall costs for consumers.
40% Whole-House Energy Savings in the Mixed-Humid Climate
DOE Office of Scientific and Technical Information (OSTI.GOV)
Baechler, Michael C.; Gilbride, T. L.; Hefty, M. G.
2011-09-01
This guidebook is a resource to help builders design and construct highly energy-efficient homes, while addressing building durability, indoor air quality, and occupant health, safety, and comfort. With the measures described in this guide, builders in the mixed-humid climate can build homes that achieve whole-house energy savings of 40% over the Building America benchmark (the 1993 Model Energy Code) with no added overall costs for consumers.
Interior Head Impact Protective Components and Materials for Use in US Army Vehicles
2015-08-01
benchmarked the automotive industry to identify potential commercial-off-the-shelf (COTS) materials. TARDEC initially tested the energy attenuating...this effort leverages the performance criterion used in the automotive industry according to SAE TP201U-01, FMVSS (Federal Motor Vehicle Safety...of the core material not being fully engaged on the Ancra tract. The backing of material ID 14 was reinforced with steel; this resulted in the
ERIC Educational Resources Information Center
Center for the Study of Social Policy, 2009
2009-01-01
The "Policy Matters" project provides coherent, comprehensive information regarding the strength and adequacy of state policies affecting children, families, and communities. The project seeks to establish consensus among policy experts and state leaders regarding the mix of policies believed to offer the best opportunity for improving…
Drugs in pregnancy--the issues for 2010.
Davis, Donald B
2010-01-01
A Motherisk symposium on establishing benchmarks for the evaluation of medications during pregnancy was held on May 10, 2006, under the auspices of the Canadian Society of Pharmacology and Therapeutics. From that symposium came a consensus on the need for collection and analysis of data on fetal safety and ongoing post-marketing surveillance, which in turn led to the establishment of CaseMed-Pregnancy--the Canadian Alliance for Safe and Effective Medication During Pregnancy and Breastfeeding.
Wheeler, Matthew W; Bailer, A John
2007-06-01
Model averaging (MA) has been proposed as a method of accounting for model uncertainty in benchmark dose (BMD) estimation. The technique has been used to average BMD estimates derived from dichotomous dose-response experiments, microbial dose-response experiments, as well as observational epidemiological studies. While MA is a promising tool for the risk assessor, a previous study suggested that the simple strategy of averaging individual models' BMD lower limits did not yield interval estimators that met nominal coverage levels in certain situations, and this performance was very sensitive to the underlying model space chosen. We present a different, more computationally intensive, approach in which the BMD is estimated using the average dose-response model and the corresponding benchmark dose lower bound (BMDL) is computed by bootstrapping. This method is illustrated with TiO₂ dose-response rat lung cancer data, and then systematically studied through an extensive Monte Carlo simulation. The results of this study suggest that the MA-BMD, estimated using this technique, performs better, in terms of bias and coverage, than the previous MA methodology. Further, the MA-BMDL achieves nominal coverage in most cases, and is superior to picking the "best fitting model" when estimating the benchmark dose. Although these results show the utility of MA for benchmark dose risk estimation, they continue to highlight the importance of choosing an adequate model space as well as proper model fit diagnostics.
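Conceptually, the approach averages the fitted dose-response curves (rather than the individual BMDs) and inverts the averaged curve at the benchmark response, with the BMDL then obtained from bootstrap resamples. The sketch below shows only the averaging-and-inversion step; the models, weights, and the assumption of a monotone averaged curve are placeholders, and the bootstrap loop is omitted.

```python
import numpy as np

def ma_bmd(models, weights, bmr=0.10, doses=np.linspace(0.0, 10.0, 1001)):
    """models: fitted callables P(dose) -> response probability (vectorized);
    weights: model-averaging weights (e.g., AIC-based) summing to 1."""
    p = sum(w * m(doses) for w, m in zip(weights, models))
    extra = (p - p[0]) / (1.0 - p[0])       # extra risk over background
    i = np.searchsorted(extra, bmr)         # first dose with extra risk >= BMR
    return doses[min(i, doses.size - 1)]

# Example with two toy logistic models and equal weights:
f1 = lambda d: 1 / (1 + np.exp(-(0.4 * d - 3)))
f2 = lambda d: 1 / (1 + np.exp(-(0.6 * d - 4)))
print(ma_bmd([f1, f2], [0.5, 0.5]))
```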
Nonlinear viscoplasticity in ASPECT: benchmarking and applications to subduction
NASA Astrophysics Data System (ADS)
Glerum, Anne; Thieulot, Cedric; Fraters, Menno; Blom, Constantijn; Spakman, Wim
2018-03-01
ASPECT (Advanced Solver for Problems in Earth's ConvecTion) is a massively parallel finite element code originally designed for modeling thermal convection in the mantle with a Newtonian rheology. The code is characterized by modern numerical methods, high-performance parallelism and extensibility. This last characteristic is illustrated in this work: we have extended the use of ASPECT from global thermal convection modeling to upper-mantle-scale applications of subduction. Subduction modeling generally requires the tracking of multiple materials with different properties and with nonlinear viscous and viscoplastic rheologies. To this end, we implemented a frictional plasticity criterion that is combined with a viscous diffusion and dislocation creep rheology. Because ASPECT uses compositional fields to represent different materials, all material parameters are made dependent on a user-specified number of fields. The goal of this paper is primarily to describe and verify our implementations of complex, multi-material rheology by reproducing the results of four well-known two-dimensional benchmarks: the indentor benchmark, the brick experiment, the sandbox experiment and the slab detachment benchmark. Furthermore, we aim to provide hands-on examples for prospective users by demonstrating the use of multi-material viscoplasticity with three-dimensional, thermomechanical models of oceanic subduction, putting ASPECT on the map as a community code for high-resolution, nonlinear rheology subduction modeling.
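As an illustration of the composite rheology described, the sketch below combines diffusion and dislocation creep viscosities harmonically and caps the stress at a Drucker-Prager yield envelope. The functional forms are standard in geodynamics, but all parameter values and the function itself are illustrative assumptions, not ASPECT source code.

```python
import numpy as np

def effective_viscosity(strain_rate, pressure, A_diff=1e-21, A_disl=1e-16,
                        n=3.5, cohesion=20e6, friction_angle=np.radians(20)):
    eta_diff = 1.0 / (2.0 * A_diff)                       # Newtonian creep
    eta_disl = 0.5 * A_disl**(-1.0 / n) * strain_rate**((1.0 - n) / n)
    eta_creep = 1.0 / (1.0 / eta_diff + 1.0 / eta_disl)   # harmonic average
    # Drucker-Prager yield stress, enforced by capping the viscosity
    tau_yield = (pressure * np.sin(friction_angle)
                 + cohesion * np.cos(friction_angle))
    eta_plastic = tau_yield / (2.0 * strain_rate)
    return min(eta_creep, eta_plastic)

print(f"{effective_viscosity(1e-15, 1e9):.3e}")  # Pa s, illustrative values
```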
van Rijssen, Fredrika W Jansen; Morris, E Jane; Eloff, Jacobus N
2013-09-04
The importance of food composition in safety assessments of genetically modified (GM) food is described for cassava (Manihot esculenta Crantz), which naturally contains significant levels of cyanogenic glycoside (CG) toxicants in roots and leaves. The assessment of the safety of GM cassava would logically require comparison with a non-GM crop with a proven "history of safe use". This study investigates this statement for cassava. A non-GM comparator that qualifies would be a processed product with a CG level below the approved maximum level in food that also satisfies a "worst case" of total dietary consumption. Although acute and chronic toxicity benchmark CG values for humans have been determined, intake data are scarce. Therefore, the non-GM cassava comparator is defined on the "best available knowledge". We consider nutritional values for cassava and conclude that CG residues in food should be a priority topic for research.
Adsorption structures and energetics of molecules on metal surfaces: Bridging experiment and theory
NASA Astrophysics Data System (ADS)
Maurer, Reinhard J.; Ruiz, Victor G.; Camarillo-Cisneros, Javier; Liu, Wei; Ferri, Nicola; Reuter, Karsten; Tkatchenko, Alexandre
2016-05-01
Adsorption geometry and stability of organic molecules on surfaces are key parameters that determine the observable properties and functions of hybrid inorganic/organic systems (HIOSs). Despite many recent advances in precise experimental characterization and improvements in first-principles electronic structure methods, reliable databases of structures and energetics for large adsorbed molecules are largely absent. In this review, we present such a database for a range of molecules adsorbed on metal single-crystal surfaces. The systems we analyze include noble-gas atoms, conjugated aromatic molecules, carbon nanostructures, and heteroaromatic compounds adsorbed on five different metal surfaces. The overall objective is to establish a diverse benchmark dataset that enables an assessment of current and future electronic structure methods, and motivates further experimental studies that provide ever more reliable data. Specifically, the benchmark structures and energetics from experiment are here compared with the recently developed van der Waals (vdW) inclusive density-functional theory (DFT) method, DFT + vdWsurf. In comparison to 23 adsorption heights and 17 adsorption energies from experiment, we find a mean average deviation of 0.06 Å and 0.16 eV, respectively. This confirms the DFT + vdWsurf method as an accurate and efficient approach to treat HIOSs. A detailed discussion identifies remaining challenges to be addressed in future development of electronic structure methods, for which the benchmark database presented here may serve as an important reference.
Applicability of a Bonner Sphere technique for pulsed neutrons in a 120 GeV proton facility
DOE Office of Scientific and Technical Information (OSTI.GOV)
Sanami, T.; Hagiwara, M.; Iwase, H.
2008-02-01
The data on neutron spectra and intensity behind shielding are important for the radiation safety design of high-energy accelerators, since neutrons are capable of penetrating thick shielding and activating materials. Corresponding particle transport codes--which involve physics models of neutron and other particle production, transport, and interaction--have been developed and used worldwide [1-8]. The results of these codes have been validated through numerous comparisons with experimental results taken in simple geometries. For neutron generation and transport, several related experiments have been performed to measure neutron spectra, attenuation lengths and reaction rates behind shielding walls of various thicknesses and materials in the energy range up to several hundred MeV [9-11]. The data have been used to benchmark--and modify if needed--the simulation models and parameters in the codes, and serve as reference data for radiation safety design. To obtain such data above several hundred MeV, a Japan-Fermi National Accelerator Laboratory (FNAL) collaboration for shielding experiments was started in 2007, based on a suggestion from the specialist meeting on shielding, Shielding Aspects of Accelerators, Targets and Irradiation Facilities (SATIF), because of the very limited data available in the high-energy region (see, for example, [12]). As part of this shielding experiment, a set of Bonner spheres (BS) was tested at the antiproton production target facility (pbar target station) at FNAL to obtain neutron spectra induced by a 120-GeV proton beam in concrete and iron shielding. Generally, utilization of an active detector around high-energy accelerators requires an improvement of its readout to overcome bursts of secondary radiation, since the accelerator delivers an intense beam to a target in a short period after a relatively long acceleration period. In this paper, we employ BS for a spectrum measurement of neutrons that penetrate the shielding wall of the pbar target station at FNAL.
Energy benchmarking of commercial buildings: a low-cost pathway toward urban sustainability
NASA Astrophysics Data System (ADS)
Cox, Matt; Brown, Marilyn A.; Sun, Xiaojing
2013-09-01
US cities are beginning to experiment with a regulatory approach to address information failures in the real estate market by mandating the energy benchmarking of commercial buildings. Understanding how a commercial building uses energy has many benefits; for example, it helps building owners and tenants identify poor-performing buildings and subsystems and it enables high-performing buildings to achieve greater occupancy rates, rents, and property values. This paper estimates the possible impacts of a national energy benchmarking mandate through analysis chiefly utilizing the Georgia Tech version of the National Energy Modeling System (GT-NEMS). Correcting input discount rates results in a 4.0% reduction in projected energy consumption for seven major classes of equipment relative to the reference case forecast in 2020, rising to 8.7% in 2035. Thus, the official US energy forecasts appear to overestimate future energy consumption by underestimating investments in energy-efficient equipment. Further discount rate reductions spurred by benchmarking policies yield another 1.3-1.4% in energy savings in 2020, increasing to 2.2-2.4% in 2035. Benchmarking would increase the purchase of energy-efficient equipment, reducing energy bills, CO2 emissions, and conventional air pollution. Achieving comparable CO2 savings would require more than tripling existing US solar capacity. Our analysis suggests that nearly 90% of the energy saved by a national benchmarking policy would benefit metropolitan areas, and the policy’s benefits would outweigh its costs, both to the private sector and society broadly.
Benchmark study on glyphosate-resistant crop systems in the United States. Part 2: Perspectives.
Owen, Micheal D K; Young, Bryan G; Shaw, David R; Wilson, Robert G; Jordan, David L; Dixon, Philip M; Weller, Stephen C
2011-07-01
A six-state, 5 year field project was initiated in 2006 to study weed management methods that foster the sustainability of genetically engineered (GE) glyphosate-resistant (GR) crop systems. The benchmark study field-scale experiments were initiated following a survey, conducted in the winter of 2005-2006, of farmer opinions on weed management practices and their views on GR weeds and management tactics. The main survey findings supported the premise that growers were generally less aware of the significance of evolved herbicide resistance and did not have a high recognition of the strong selection pressure from herbicides on the evolution of herbicide-resistant (HR) weeds. The results of the benchmark study survey indicated that there are educational challenges to implement sustainable GR-based crop systems and helped guide the development of the field-scale benchmark study. Paramount is the need to develop consistent and clearly articulated science-based management recommendations that enable farmers to reduce the potential for HR weeds. This paper provides background perspectives about the use of GR crops, the impact of these crops and an overview of different opinions about the use of GR crops on agriculture and society, as well as defining how the benchmark study will address these issues. Copyright © 2011 Society of Chemical Industry.
TRIPOLI-4® - MCNP5 ITER A-lite neutronic model benchmarking
NASA Astrophysics Data System (ADS)
Jaboulay, J.-C.; Cayla, P.-Y.; Fausser, C.; Lee, Y.-K.; Trama, J.-C.; Li-Puma, A.
2014-06-01
The aim of this paper is to present the capability of TRIPOLI-4®, the CEA Monte Carlo code, to model a large-scale fusion reactor with a complex neutron source and geometry. In the past, numerous benchmarks were conducted to assess TRIPOLI-4® for fusion applications. Analyses of experiments (KANT, OKTAVIAN, FNG) and numerical benchmarks (between TRIPOLI-4® and MCNP5) on the HCLL DEMO2007 and ITER models were carried out successively. In this previous ITER benchmark, nevertheless, only the neutron wall loading was analyzed; its main purpose was to present the MCAM (the FDS Team CAD import tool) extension for TRIPOLI-4®. Starting from this work, a more extended benchmark has been performed on the estimation of neutron flux, nuclear heating in the shielding blankets and tritium production rate in the European TBMs (HCLL and HCPB), and it is presented in this paper. The methodology to build the TRIPOLI-4® A-lite model is based on MCAM and the MCNP A-lite model (version 4.1). Simplified TBMs (from KIT) have been integrated in the equatorial port. Comparisons of neutron wall loading, flux, nuclear heating and tritium production rate show a good agreement between the two codes. Discrepancies mainly fall within the statistical errors of the two Monte Carlo codes.
Hogan, Bridget; Keating, Matthew; Chambers, Neil A; von Ungern-Sternberg, Britta
2016-05-01
There are no internationally accepted guidelines about what constitutes adequate clinical exposure during pediatric anesthetic training. In Australia, no data have been published on the level of experience obtained by anesthetic trainees in pediatric anesthesia. There is, however, a new ANZCA (Australian and New Zealand College of Anaesthetists) curriculum that quantifies new training requirements. Our aims were to quantify our trainees' exposure to clinical work in order to assess compliance with the new curriculum and to provide other institutions with a benchmark for pediatric anesthetic training. We performed a prospective audit to estimate and quantify our anesthetic registrars' exposure to pediatric anesthesia during their 6-month rotation at our institution, a tertiary pediatric hospital in Perth, Western Australia. Our data suggest that trainees at our institution will comfortably achieve the new ANZCA training standards in terms of the required volume and breadth of exposure. Exposure to some advanced pediatric anesthetic procedures, however, appears limited. Experience gained at our hospital easily meets the new College requirements, but experience of fiber-optic intubation and regional blocks appears insufficient to develop adequate skills or confidence. The study provides other institutions with information to benchmark against their own trainee experience. © 2016 John Wiley & Sons Ltd.
Open wide: looking into the safety culture of dental school clinics.
Ramoni, Rachel; Walji, Muhammad F; Tavares, Anamaria; White, Joel; Tokede, Oluwabunmi; Vaderhobli, Ram; Kalenderian, Elsbeth
2014-05-01
Although dentists perform highly technical procedures in complex environments, patient safety has not received the same focus in dentistry as in medicine. Cultivating a robust patient safety culture is foundational to minimizing patient harm, but little is known about how dental teams view patient safety or the patient safety culture within their practice. As a step toward rectifying that omission, the goals of this study were to benchmark the patient safety culture in three U.S. dental schools, identifying areas for improvement. The extensively validated Medical Office Survey on Patient Safety Culture (MOSOPS), developed by the Agency for Healthcare Research and Quality, was administered to dental faculty, dental hygienists, dental students, and staff at the three schools. Forty-seven percent of the 328 invited individuals completed the survey. The "Teamwork" category received the highest marks and "Patient Care Tracking and Follow-Up" and "Leadership Support for Patient Safety" the lowest. Only 48 percent of the respondents rated systems and processes in place to prevent/catch patient problems as good/excellent. All patient safety dimensions received lower marks than in medical practices. These findings and the inherent risk associated with dental procedures lead to the conclusion that dentistry in general, and academic dental clinics in particular, stands to benefit from an increased focus on patient safety. This first published use of the MOSOPS in a dental clinic setting highlights both clinical and educational priorities for improving the safety of care in dental school clinics.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Brown, Nicholas; Burns, Joseph R.
The aftermath of the Tōhoku earthquake and the Fukushima accident has led to a global push to improve the safety of existing light water reactors. A key component of this initiative is the development of nuclear fuel and cladding materials with potentially enhanced accident tolerance, also known as accident-tolerant fuels (ATF). These materials are intended to improve core fuel and cladding integrity under beyond-design-basis accident conditions while maintaining or enhancing reactor performance and safety characteristics during normal operation. To complement research that has already been carried out to characterize ATF neutronics, the present study provides an initial investigation of the sensitivity and uncertainty of ATF system responses to nuclear cross section data. ATF concepts incorporate novel materials, including SiC and FeCrAl cladding and high-density uranium silicide composite fuels, in turn introducing new cross section sensitivities and uncertainties which may behave differently from traditional fuel and cladding materials. In this paper, we conducted sensitivity and uncertainty analysis using the TSUNAMI-2D sequence of SCALE with infinite lattice models of ATF assemblies. Of all the ATF materials considered, radiative capture in ⁵⁶Fe in FeCrAl cladding is found to be the most significant contributor to eigenvalue uncertainty; this is by far the largest ATF-specific uncertainty found in these cases, exceeding even those of uranium. We found that while significant new sensitivities indeed arise, the general sensitivity behavior of ATF assemblies does not markedly differ from traditional UO2/zirconium-based fuel/cladding systems, especially with regard to uncertainties associated with uranium. We used TSUNAMI-IP to calculate similarity indices between the application models with FeCrAl cladding and the IPEN/MB-01 reactor benchmark model. This benchmark was selected for its use of SS304 as a cladding and structural material, with significant ⁵⁶Fe content. The similarity indices suggest that while many differences in reactor physics arise from design differences, the sensitivity to and behavior of ⁵⁶Fe absorption is comparable between systems, indicating the potential for this benchmark to reduce uncertainties in ⁵⁶Fe radiative capture cross sections.
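Sensitivity/uncertainty tools of this kind propagate cross-section covariances to the eigenvalue via the first-order "sandwich rule"; the toy Python sketch below illustrates it with a two-group sensitivity vector and covariance matrix whose values are invented for illustration, not taken from the study.

```python
import numpy as np

# Hypothetical two-group illustration: relative sensitivities of k-eff to the
# ⁵⁶Fe radiative capture cross section, and a relative covariance matrix for
# that cross section (4% and 6% uncertainties, correlation 0.5).
S = np.array([-0.02, -0.05])                 # (dk/k) / (dsigma/sigma) per group
C = np.array([[0.04**2, 0.5 * 0.04 * 0.06],
              [0.5 * 0.04 * 0.06, 0.06**2]])

rel_var_k = S @ C @ S                        # sandwich rule: S^T C S
print(f"k-eff relative uncertainty from capture: {np.sqrt(rel_var_k):.3%}")
```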
ORSphere: Physics Measurements for Bare, HEU(93.2)-Metal Sphere
DOE Office of Scientific and Technical Information (OSTI.GOV)
Marshall, Margaret A.; Bess, John D.; Briggs, J. Blair
In the early 1970s Dr. John T. Mihalczo (team leader), J.J. Lynn, and J.R. Taylor performed experiments at the Oak Ridge Critical Experiments Facility (ORCEF) with highly enriched uranium (HEU) metal (called Oak Ridge Alloy or ORALLOY) in an attempt to recreate GODIVA I results with greater accuracy than those performed at Los Alamos National Laboratory in the 1950s (HEU-MET-FAST-001). The purpose of the Oak Ridge ORALLOY Sphere (ORSphere) experiments was to estimate the unreflected and unmoderated critical mass of an idealized sphere of uranium metal corrected to a density, purity, and enrichment such that it could be compared with the GODIVA I experiments. “The very accurate description of this sphere, as assembled, establishes it as an ideal benchmark for calculational methods and cross-section data files” (Reference 1). While performing the ORSphere experiments, care was taken to accurately document component dimensions (±0.0001 inches), masses (±0.01 g), and material data. The experiment was also set up to minimize the amount of structural material in the sphere proximity. Two correlated spheres were evaluated and judged to be acceptable as criticality benchmark experiments. This evaluation is given in HEU-MET-FAST-100. The second, smaller sphere was used for additional reactor physics measurements. Worth measurements (References 1, 2, 3 and 4), the delayed neutron fraction (References 3, 4 and 5), and the surface material worth coefficient (References 1 and 2) were all measured and judged to be acceptable as benchmark data. The prompt neutron decay (Reference 6), relative fission density (Reference 7), and relative neutron importance (Reference 7) were measured, but are not evaluated. Information for the evaluation was compiled from References 1 through 7, the experimental logbooks (References 8 and 9), additional drawings and notes provided by the experimenter, and communication with the lead experimenter, John T. Mihalczo.
Potential of mean force for electrical conductivity of dense plasmas
NASA Astrophysics Data System (ADS)
Starrett, C. E.
2017-12-01
The electrical conductivity in dense plasmas can be calculated with the relaxation-time approximation provided that the interaction potential between the scattering electron and the ion is known. To date there has been considerable uncertainty as to the best way to define this interaction potential so that it correctly includes the effects of ionic structure, screening by electrons and partial ionization. Current approximations lead to significantly different results with varying levels of agreement when compared to bench-mark calculations and experiments. We present a new way to define this potential, drawing on ideas from classical fluid theory to define a potential of mean force. This new potential results in significantly improved agreement with experiments and bench-mark calculations, and includes all the aforementioned physics self-consistently.
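For reference, the classical-fluid-theory quantity the abstract invokes relates a potential of mean force to a pair distribution function; the standard relation is shown below, with the adaptation to electron-ion scattering in dense plasmas left to the paper itself.

```latex
% Classical potential of mean force between two particles, obtained from
% their pair distribution function g(r):
\[
  V_{\mathrm{MF}}(r) = -\,k_{B}\,T\,\ln g(r)
\]
```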
Benchmark studies of induced radioactivity produced in LHC materials, Part II: Remanent dose rates.
Brugger, M; Khater, H; Mayer, S; Prinz, A; Roesler, S; Ulrici, L; Vincke, H
2005-01-01
A new method to estimate remanent dose rates, to be used with the Monte Carlo code FLUKA, was benchmarked against measurements from an experiment performed at the CERN-EU high-energy reference field facility. An extensive collection of samples of different materials was placed downstream of, and laterally to, a copper target intercepting a positively charged mixed hadron beam with a momentum of 120 GeV/c. Emphasis was put on the reduction of uncertainties by taking measures such as careful monitoring of the irradiation parameters, using different instruments to measure dose rates, adopting detailed elemental analyses of the irradiated materials and making detailed simulations of the irradiation experiment. The measured and calculated dose rates are in good agreement.
Integral Full Core Multi-Physics PWR Benchmark with Measured Data
DOE Office of Scientific and Technical Information (OSTI.GOV)
Forget, Benoit; Smith, Kord; Kumar, Shikhar
In recent years, the importance of modeling and simulation has been highlighted extensively in the DOE research portfolio, with concrete examples in nuclear engineering in the CASL and NEAMS programs. These research efforts and similar efforts worldwide aim at the development of high-fidelity multi-physics analysis tools for the simulation of current and next-generation nuclear power reactors. Like all analysis tools, verification and validation are essential to guarantee proper functioning of the software and methods employed. The current approach relies mainly on the validation of single-physics phenomena (e.g., critical experiments, flow loops) and there is a lack of relevant multi-physics benchmark measurements that are necessary to validate the high-fidelity methods being developed today. This work introduces a new multi-cycle full-core Pressurized Water Reactor (PWR) depletion benchmark based on two operational cycles of a commercial nuclear power plant, providing a detailed description of fuel assemblies, burnable absorbers, in-core fission detectors, core loading and re-loading patterns. This benchmark enables analysts to develop extremely detailed reactor core models that can be used for testing and validation of coupled neutron transport, thermal-hydraulics, and fuel isotopic depletion. The benchmark also provides measured reactor data for Hot Zero Power (HZP) physics tests, boron letdown curves, and three-dimensional in-core flux maps from 58 instrumented assemblies. The benchmark description is now available online and has been used by many groups. However, much work remains to be done on the quantification of uncertainties and modeling sensitivities. This work aims to address these deficiencies and make this benchmark a true non-proprietary international benchmark for the validation of high-fidelity tools. This report details the BEAVRS uncertainty quantification for the first two cycles of operation and serves as the final report of the project.
Benchmark tests of JENDL-3.2 for thermal and fast reactors
DOE Office of Scientific and Technical Information (OSTI.GOV)
Takano, Hideki; Akie, Hiroshi; Kikuchi, Yasuyuki
1994-12-31
Benchmark calculations for a variety of thermal and fast reactors have been performed using the newly evaluated JENDL-3 Version 2 (JENDL-3.2) file. In the thermal reactor calculations for the uranium- and plutonium-fueled cores of TRX and TCA, the k_eff and lattice parameters were well predicted. The fast reactor calculations for ZPPR-9 and FCA assemblies showed that the k_eff, the reactivity worths of Doppler, sodium void and control rod, and the reaction rate distributions were in very good agreement with the experiments.
The National Practice Benchmark for oncology, 2014 report on 2013 data.
Towle, Elaine L; Barr, Thomas R; Senese, James L
2014-11-01
The National Practice Benchmark (NPB) is a unique tool to measure oncology practices against others across the country in a way that allows meaningful comparisons despite differences in practice size or setting. In today's economic environment every oncology practice, regardless of business structure or affiliation, should be able to produce, monitor, and benchmark basic metrics to meet current business pressures for increased efficiency and efficacy of care. Although we recognize that the NPB survey results do not capture the experience of all oncology practices, practices that can and do participate demonstrate exceptional managerial capability, and this year those practices are recognized for their participation. In this report, we continue to emphasize the methodology introduced last year in which we reported medical revenue net of the cost of the drugs as net medical revenue for the hematology/oncology product line. The effect of this is to capture only the gross margin attributable to drugs as revenue. New this year, we introduce six measures of clinical data density and expand the radiation oncology benchmarks. Copyright © 2014 by American Society of Clinical Oncology.
Dong, Nianbo; Lipsey, Mark W
2017-01-01
It is unclear whether propensity score analysis (PSA) based on pretest and demographic covariates will meet the ignorability assumption for replicating the results of randomized experiments. This study applies within-study comparisons to assess whether pre-Kindergarten (pre-K) treatment effects on achievement outcomes estimated using PSA based on a pretest and demographic covariates can approximate those found in a randomized experiment. Data: Four studies with samples of pre-K children each provided data on two math achievement outcome measures with baseline pretests and child demographic variables that included race, gender, age, language spoken at home, and mother's highest education. Research Design and Data Analysis: A randomized study of a pre-K math curriculum provided benchmark estimates of effects on achievement measures. Comparison samples from other pre-K studies were then substituted for the original randomized control and the effects were reestimated using PSA. The correspondence was evaluated using multiple criteria. Results: The effect estimates using PSA were in the same direction as the benchmark estimates, had similar but not identical statistical significance, and did not differ from the benchmarks at statistically significant levels. However, the magnitude of the effect sizes differed and displayed both absolute and relative bias larger than required to show statistical equivalence with formal tests, but those results were not definitive because of the limited statistical power. We conclude that treatment effect estimates based on a single pretest and demographic covariates in PSA correspond to those from a randomized experiment on the most general criteria for equivalence.
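A bare-bones version of the estimation strategy, assuming a covariate matrix of pretest plus demographics: fit a propensity model, match each treated child to the nearest-score comparison child, and difference the outcome means. The function name, matching rule, and data layout are illustrative assumptions, not the study's procedure.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

def psa_att(X, treated, y):
    """X: (n, p) pretest + demographic covariates; treated: 0/1 array; y: outcomes.
    Returns an ATT estimate via 1:1 nearest-neighbor propensity matching."""
    ps = LogisticRegression(max_iter=1000).fit(X, treated).predict_proba(X)[:, 1]
    t_idx = np.where(treated == 1)[0]
    c_idx = np.where(treated == 0)[0]
    # nearest comparison unit (with replacement) on the propensity score
    matches = c_idx[np.abs(ps[c_idx][None, :] - ps[t_idx][:, None]).argmin(axis=1)]
    return y[t_idx].mean() - y[matches].mean()
```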
Dietterich, Hannah; Lev, Einat; Chen, Jiangzhi; Richardson, Jacob A.; Cashman, Katharine V.
2017-01-01
Numerical simulations of lava flow emplacement are valuable for assessing lava flow hazards, forecasting active flows, designing flow mitigation measures, interpreting past eruptions, and understanding the controls on lava flow behavior. Existing lava flow models vary in simplifying assumptions, physics, dimensionality, and the degree to which they have been validated against analytical solutions, experiments, and natural observations. In order to assess existing models and guide the development of new codes, we conduct a benchmarking study of computational fluid dynamics (CFD) models for lava flow emplacement, including VolcFlow, OpenFOAM, FLOW-3D, COMSOL, and MOLASSES. We model viscous, cooling, and solidifying flows over horizontal planes, sloping surfaces, and into topographic obstacles. We compare model results to physical observations made during well-controlled analogue and molten basalt experiments, and to analytical theory when available. Overall, the models accurately simulate viscous flow with some variability in flow thickness where flows intersect obstacles. OpenFOAM, COMSOL, and FLOW-3D can each reproduce experimental measurements of cooling viscous flows, and OpenFOAM and FLOW-3D simulations with temperature-dependent rheology match results from molten basalt experiments. We assess the goodness-of-fit of the simulation results and the computational cost. Our results guide the selection of numerical simulation codes for different applications, including inferring emplacement conditions of past lava flows, modeling the temporal evolution of ongoing flows during eruption, and probabilistic assessment of lava flow hazard prior to eruption. Finally, we outline potential experiments and desired key observational data from future flows that would extend existing benchmarking data sets.
"Best practice" in inflammatory bowel disease: an international survey and audit.
Van Der Eijk, Ingrid; Verheggen, Frank W.; Russel, Maurice G.; Buckley, Martin; Katsanos, Kostas; Munkholm, Pia; Engdahl, Ingemar; Politi, Patrizia; Odes, Selwyn; Fossen, Jan; Stockbrügger, Reinhold W.
2004-04-01
Background: An observational study was conducted at eight university and four district hospitals in eight countries collaborating in clinical and epidemiological research in inflammatory bowel disease (IBD) to compare European health care facilities and to define current "best practice" with regard to IBD. Methods: The approach used in this multi-national survey was unique. Existing quality norms, developed for total hospital care by a specialized organization, were restricted to IBD-specific care and adapted to the frame of reference of the study group. These norms were then surveyed by means of questionnaires and professional audits in all participating centers. The collected data were reported back to each center, compared to data from the other hospitals, and used for benchmarking. Group consensus was reached with regard to defining current "best practice". Results: The observations in each center involved patient-oriented processes, technical and patient safety, and quality of the medical standard. Several findings (benchmarks) could be directly implemented to improve IBD care in another hospital. These included a confidential relationship between health care worker(s) and patients, and availability of patient data. Conclusions: The observed benchmarks, in combination with other subjectively chosen "positive" procedures, have been defined as current "best practice in IBD", representing practical guidelines towards better quality of care in IBD.
Psychophysiological Sensing and State Classification for Attention Management in Commercial Aviation
NASA Technical Reports Server (NTRS)
Harrivel, Angela R.; Liles, Charles; Stephens, Chad L.; Ellis, Kyle K.; Prinzel, Lawrence J.; Pope, Alan T.
2016-01-01
Attention-related human performance limiting states (AHPLS) can cause pilots to lose airplane state awareness (ASA), and their detection is important to improving commercial aviation safety. The Commercial Aviation Safety Team found that the majority of recent international commercial aviation accidents attributable to loss of control inflight involved flight crew loss of airplane state awareness, and that distraction of various forms was involved in all of them. Research on AHPLS, including channelized attention, diverted attention, startle/surprise, and confirmation bias, has been recommended in a Safety Enhancement (SE) entitled "Training for Attention Management." To accomplish the detection of such cognitive and psychophysiological states, a broad suite of sensors has been implemented to simultaneously measure their physiological markers during high fidelity flight simulation human subject studies. Pilot participants were asked to perform benchmark tasks and experimental flight scenarios designed to induce AHPLS. Pattern classification was employed to distinguish the AHPLS induced by the benchmark tasks. Unimodal classification using pre-processed electroencephalography (EEG) signals as input features to extreme gradient boosting, random forest and deep neural network multiclass classifiers was implemented. Multimodal classification using galvanic skin response (GSR) in addition to the same EEG signals and using the same types of classifiers produced increased accuracy with respect to the unimodal case (90 percent vs. 86 percent), although only via the deep neural network classifier. These initial results are a first step toward the goal of demonstrating simultaneous real-time classification of multiple states using multiple sensing modalities in high-fidelity flight simulators. This detection is intended to support and inform training methods under development to mitigate the loss of ASA and thus reduce accidents and incidents.
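The uni- versus multi-modal comparison reported above can be sketched as follows; the feature dimensions, the synthetic data, and the classifier settings are placeholders rather than the study's pipeline:

```python
# Sketch of unimodal (EEG) vs. multimodal (EEG + GSR) state classification
# on synthetic stand-in features; not the study's actual data or models.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(1)
n, n_eeg, n_gsr = 600, 32, 4
state = rng.integers(0, 4, size=n)                 # 4 attention-related states
eeg = rng.normal(size=(n, n_eeg)) + state[:, None] * 0.3
gsr = rng.normal(size=(n, n_gsr)) + state[:, None] * 0.2

clf = RandomForestClassifier(n_estimators=200, random_state=0)
acc_uni = cross_val_score(clf, eeg, state, cv=5).mean()
acc_multi = cross_val_score(clf, np.hstack([eeg, gsr]), state, cv=5).mean()
print(f"EEG only:  {acc_uni:.2f}")
print(f"EEG + GSR: {acc_multi:.2f}  # adding a second modality typically helps")
```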
The safety and efficacy of lorcaserin in the management of obesity.
Hess, Rick; Cross, L Brian
2013-11-01
Lorcaserin represents a new serotonergic medication used as an adjunct to a reduced-calorie diet and increased physical activity treatment plan for chronic weight management in adult patients with an initial body mass index ≥ 30 kg/m² or in adult patients with an initial body mass index ≥ 27 kg/m² who have ≥ 1 comorbid condition associated with weight (eg, hypertension, dyslipidemia, or type 2 diabetes mellitus). In 2012, lorcaserin became the first obesity treatment medication to gain US Food and Drug Administration (FDA) approval since 1999. Lorcaserin is a centrally acting, selective serotonin 2C (5-HT2C) receptor full agonist that is associated with increased satiety and decreased food consumption in patients. The selectivity of lorcaserin for 5-HT2C receptors should reduce patient risk for the serious adverse complications that are associated with nonselective 5-HT agonist therapies, such as cardiac valvulopathy and pulmonary hypertension. The safety and efficacy of lorcaserin (10 mg twice daily) for ≥ 52 weeks has been evaluated in 3 separate Phase 3 trials. The primary outcome of patient weight loss in the 3 trials satisfied the FDA categorical benchmark, but the trials failed to achieve the FDA mean benchmark for patient weight loss. Secondary patient outcomes after lorcaserin therapy were favorable. Lorcaserin appears to be well tolerated in patients and the most common adverse events reported did not include serious complications. The incidence of FDA-defined valvulopathy in patients after 1 year of treatment was low and nonsignificant, but the statistical analysis of this safety endpoint was limited due to the small size of the study populations and high patient dropout rates. Continued post-marketing surveillance of patients taking lorcaserin is warranted.
Plutonium Critical Mass Curve Comparison to Mass at Upper Subcritical Limit (USL) Using Whisper
DOE Office of Scientific and Technical Information (OSTI.GOV)
Alwin, Jennifer Louise; Zhang, Ning
Whisper is computational software designed to assist the nuclear criticality safety analyst in performing validation studies with the MCNP® Monte Carlo radiation transport package. Standard approaches to validation rely on the selection of benchmarks based upon expert judgment. Whisper instead uses sensitivity/uncertainty (S/U) methods to select the benchmarks most relevant to a particular application or set of applications being analyzed. Using these benchmarks, Whisper computes a calculational margin. Whisper also attempts to quantify the margin of subcriticality (MOS) arising from errors in software and uncertainties in nuclear data. The combination of the Whisper-derived calculational margin and MOS comprises the baseline upper subcritical limit (USL), to which an additional margin may be applied by the nuclear criticality safety analyst as appropriate to ensure subcriticality. A series of critical mass curves for plutonium, similar to those found in Figure 31 of LA-10860-MS, have been generated using MCNP6.1.1 and the iterative parameter study software, WORM_Solver. The baseline USL for each of the data points of the curves was then computed using Whisper 1.1. The USL was then used to determine the equivalent mass for the plutonium metal-water system. ANSI/ANS-8.1 states that it is acceptable to use handbook data, such as the data directly from LA-10860-MS, as it is already considered validated (Section 4.3(4): "Use of subcritical limit data provided in ANSI/ANS standards or accepted reference publications does not require further validation."). This paper takes a novel approach to visualizing traditional critical mass curves and allows comparison with the mass for which keff is equal to the USL (calculational margin + margin of subcriticality). However, the intent is to plot the critical mass data along with the USL, not to suggest that already accepted handbook data should have new and more rigorous requirements for validation.
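For readers unfamiliar with how a calculational margin and MOS roll up into a USL, the sketch below shows the classical, trending-free validation arithmetic with invented numbers; it is not Whisper's sensitivity/uncertainty methodology:

```python
# Simplified USL arithmetic with hypothetical k-effective values;
# NOT Whisper's S/U-based benchmark selection or margin derivation.
import numpy as np

k_calc = np.array([0.9991, 1.0003, 0.9978, 1.0012, 0.9969])   # code results (invented)
k_bench = np.array([1.0000, 1.0000, 0.9995, 1.0005, 0.9990])  # benchmark keff (invented)

diff = k_calc - k_bench
bias = diff.mean()                        # negative bias = code under-predicts keff
bias_unc = diff.std(ddof=1)
calc_margin = -min(bias, 0.0) + bias_unc  # penalize negative bias; no credit for positive
mos = 0.02                                # margin of subcriticality (assumed)

usl = 1.0 - calc_margin - mos
print(f"bias = {bias:+.4f}, calculational margin = {calc_margin:.4f}")
print(f"baseline USL = {usl:.4f}  # applications must show keff <= USL")
```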
Advanced propulsion engine assessment based on a cermet reactor
NASA Technical Reports Server (NTRS)
Parsley, Randy C.
1993-01-01
A preferred Pratt & Whitney conceptual Nuclear Thermal Rocket Engine (NTRE) has been designed based on the fundamental NASA priorities of safety, reliability, cost, and performance. The basic philosophy underlying the design of the XNR2000 is the utilization of the most reliable form of ultrahigh temperature nuclear fuel and development of a core configuration which is optimized for uniform power distribution, operational flexibility, power maneuverability, weight, and robustness. The P&W NTRE system employs a fast spectrum, cermet fueled reactor configured in an expander cycle to ensure maximum operational safety. The cermet fuel form provides retention of fuel and fission products as well as high strength. A high level of confidence is provided by benchmark analysis and independent evaluations.
CALiPER Exploratory Study. Recessed Troffer Lighting
DOE Office of Scientific and Technical Information (OSTI.GOV)
Miller, N. J.; Royer, M. P.; Poplawski, M. E.
This CALiPER study examines the problems and benefits likely to be encountered with LED products intended to replace linear fluorescent lamps. LED dedicated troffers, replacement tubes, and non-tube retrofit kits were evaluated against fluorescent benchmark troffers in a simulated office space for photometric distribution, uniformity of light on the task surface, suitability of light output, flicker, dimming performance, color quality, power quality, safety and certification issues, ease of installation, energy efficiency, and life-cycle cost.
ASIS healthcare security benchmarking study.
2001-01-01
Effective security has aligned itself into the everyday operations of a healthcare organization. This is evident in every regional market segment, regardless of size, location, and provider clinical expertise or organizational growth. This research addresses key security issues from acute care providers to freestanding facilities, and from rural hospitals and community hospitals to large urban teaching hospitals. Security issues and concerns are identified and addressed daily by senior and middle management. As provider campuses become larger and more diverse, the hospitals surveyed have identified critical changes and improvements that are proposed or pending. Mitigating liabilities and improving patient, visitor, and/or employee safety are consequential to the performance and viability of all healthcare providers. Healthcare organizations have identified the requirement to compete for patient volume and revenue. The facility that can deliver high-quality healthcare in a comfortable, safe, secure, and efficient atmosphere will have a significant competitive advantage over a facility where patient or visitor security and safety is deficient. Continuing changes in healthcare organizations' operating structure and healthcare geographic layout mean changes in leadership and direction. These changes have led to higher levels of corporate responsibility. As a result, each organization participating in this benchmark study has added value and will derive value for the overall benefit of the healthcare providers throughout the nation. This study provides a better understanding of how the fundamental security needs of healthcare organizations are being addressed, and how solutions are identified and implemented.
Modernization at the Y-12 National Security Complex: A Case for Additional Experimental Benchmarks
DOE Office of Scientific and Technical Information (OSTI.GOV)
Thornbury, M. L.; Juarez, C.; Krass, A. W.
Efforts are underway at the Y-12 National Security Complex (Y-12) to modernize the recovery, purification, and consolidation of un-irradiated, highly enriched uranium metal. Successful integration of advanced technology such as Electrorefining (ER) eliminates many of the intermediate chemistry systems and processes that are the current and historical basis of the nuclear fuel cycle at Y-12. The cost of operations, the inventory of hazardous chemicals, and the volume of waste are significantly reduced by ER. It also introduces unique material forms and compositions related to the chemistry of chloride salts for further consideration in safety analysis and engineering. The work herein briefly describes recent investigations of nuclear criticality for 235UO2Cl2 (uranyl chloride) and 6LiCl (lithium chloride) in aqueous solution. Of particular interest is the minimum critical mass of highly enriched uranium as a function of the molar ratio of 6Li to 235U. The work herein also briefly describes recent investigations of nuclear criticality for 235U metal reflected by salt mixtures of 6LiCl or 7LiCl (lithium chloride), KCl (potassium chloride), and 235UCl3 or 238UCl3 (uranium tri-chloride). Computational methods for analysis of nuclear criticality safety and published nuclear data are employed in the absence of directly relevant experimental criticality benchmarks.
Protocol for a national blood transfusion data warehouse from donor to recipient.
van Hoeven, Loan R; Hooftman, Babette H; Janssen, Mart P; de Bruijne, Martine C; de Vooght, Karen M K; Kemper, Peter; Koopman, Maria M W
2016-08-04
Blood transfusion has health-related, economic and safety implications. In order to optimise the transfusion chain, comprehensive research data are needed. The Dutch Transfusion Data warehouse (DTD) project aims to establish a data warehouse where data from donors and transfusion recipients are linked. This paper describes the design of the data warehouse, challenges and illustrative applications. Quantitative data on blood donors (eg, age, blood group, antibodies) and products (type of product, processing, storage time) are obtained from the national blood bank. These are linked to data on the transfusion recipients (eg, transfusions administered, patient diagnosis, surgical procedures, laboratory parameters), which are extracted from hospital electronic health records. Expected scientific contributions are illustrated for 4 applications: determining risk factors, predicting blood use, benchmarking blood use and optimising process efficiency. For each application, examples of research questions are given and the planned analyses are outlined. The DTD project aims to build a national, continuously updated transfusion data warehouse. These data have a wide range of applications, on the donor/production side, in recipient studies on blood usage and benchmarking, and in donor-recipient studies, which ultimately can contribute to the efficiency and safety of blood transfusion. Published by the BMJ Publishing Group Limited. For permission to use (where not already granted under a licence) please go to http://www.bmj.com/company/products-services/rights-and-licensing/
Vlayen, Annemie; Hellings, Johan; Claes, Neree; Peleman, Hilde; Schrooten, Ward
2012-09-01
To measure patient safety culture in Belgian hospitals and to examine the homogeneous grouping of underlying safety culture dimensions. The Hospital Survey on Patient Safety Culture was distributed organisation-wide in 180 Belgian hospitals participating in the federal program on quality and safety between 2007 and 2009. Participating hospitals were invited to submit their data to a comparative database. Homogeneous groups of underlying safety culture dimensions were sought by hierarchical cluster analysis. 90 acute, 42 psychiatric and 11 long-term care hospitals submitted their data for comparison to other hospitals. The benchmark database included 55 225 completed questionnaires (53.7% response rate). Overall dimensional scores were low, although scores were found to be higher for psychiatric and long-term care hospitals than for acute hospitals. The overall perception of patient safety was lower in French-speaking hospitals. Hierarchical clustering of dimensions resulted in two distinct clusters. Cluster I grouped supervisor/manager expectations and actions promoting safety, organisational learning-continuous improvement, teamwork within units and communication openness, while Cluster II included feedback and communication about error, overall perceptions of patient safety, non-punitive response to error, frequency of events reported, teamwork across units, handoffs and transitions, staffing and management support for patient safety. The nationwide safety culture assessment confirms the need for a long-term national initiative to improve patient safety culture and provides each hospital with a baseline patient safety culture profile to direct an intervention plan. The identification of clusters of safety culture dimensions indicates the need for a different approach and context towards the implementation of interventions aimed at improving the safety culture. Certain clusters require unit level improvements, whereas others demand a hospital-wide policy.
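The dimension-clustering step can be illustrated with a minimal sketch; the twelve dimension labels follow the abstract, while the hospital-level scores are synthetic:

```python
# Hierarchical clustering of safety-culture dimensions, as in the study design.
# Dimension names follow the abstract; the 143 hospital score rows are synthetic.
import numpy as np
from scipy.cluster.hierarchy import linkage, fcluster

dims = ["supervisor_expectations", "organisational_learning",
        "teamwork_within_units", "communication_openness",
        "feedback_about_error", "overall_perceptions",
        "nonpunitive_response", "events_reported",
        "teamwork_across_units", "handoffs_transitions",
        "staffing", "management_support"]
rng = np.random.default_rng(2)
scores = rng.uniform(0.3, 0.8, size=(143, len(dims)))  # hospitals x dimensions

# Cluster the dimensions (columns), not the hospitals, into two groups.
Z = linkage(scores.T, method="ward")
labels = fcluster(Z, t=2, criterion="maxclust")
for dim, cluster in zip(dims, labels):
    print(f"cluster {cluster}: {dim}")
```

With real survey data, the two resulting groups would correspond to Clusters I and II described above.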
DOE Office of Scientific and Technical Information (OSTI.GOV)
Plaschy, M.; Murphy, M.; Jatuff, F.
2006-07-01
The PROTEUS research reactor at the Paul Scherrer Institute (PSI) has been operating since the 1960s and, owing to its high flexibility, has already permitted investigation of a wide range of very different nuclear systems. The ongoing experimental programme, LWR-PROTEUS, was started in 1997 and concerns large-scale investigations of advanced light water reactor (LWR) fuels. To date, the different LWR-PROTEUS phases have permitted the study of more than fifteen different configurations, each of which had to be demonstrated to be operationally safe, in particular for the Swiss safety authorities. In this context, recent expansions of the PSI computing capabilities have made possible the use of full-scale 3D-heterogeneous MCNPX models to accurately calculate different safety-related parameters (e.g. the critical driver loading and the shutdown rod worth). The current paper presents the MCNPX predictions of these operational characteristics for seven different LWR-PROTEUS configurations using a large number of nuclear data libraries. More specifically, this significant benchmarking exercise is based on the ENDF/B6v2, ENDF/B6v8, JEF2.2, JEFF3.0, JENDL3.2, and JENDL3.3 libraries. The results highlight certain library-specific trends in the prediction of the multiplication factor keff (e.g. the systematically larger reactivity calculated with JEF2.2 and the smaller reactivity associated with JEFF3.0). They also confirm the satisfactory determination of reactivity variations, for instance due to the introduction of a safety rod pair, by all calculational schemes, these calculations having been compared with experiments. (authors)
NASA Astrophysics Data System (ADS)
Pierazzo, E.; Artemieva, N.; Asphaug, E.; Baldwin, E. C.; Cazamias, J.; Coker, R.; Collins, G. S.; Crawford, D. A.; Davison, T.; Elbeshausen, D.; Holsapple, K. A.; Housen, K. R.; Korycansky, D. G.; Wünnemann, K.
2008-12-01
Over the last few decades, rapid improvement of computer capabilities has allowed impact cratering to be modeled with increasing complexity and realism, and has paved the way for a new era of numerical modeling of the impact process, including full, three-dimensional (3D) simulations. When properly benchmarked and validated against observation, computer models offer a powerful tool for understanding the mechanics of impact crater formation. This work presents results from the first phase of a project to benchmark and validate shock codes. A variety of 2D and 3D codes were used in this study, from commercial products like AUTODYN, to codes developed within the scientific community like SOVA, SPH, ZEUS-MP, iSALE, and codes developed at U.S. National Laboratories like CTH, SAGE/RAGE, and ALE3D. Benchmark calculations of shock wave propagation in aluminum-on-aluminum impacts were performed to examine the agreement between codes for simple idealized problems. The benchmark simulations show that variability in code results is to be expected due to differences in the underlying solution algorithm of each code, artificial stability parameters, spatial and temporal resolution, and material models. Overall, the inter-code variability in peak shock pressure as a function of distance is around 10 to 20%. In general, if the impactor is resolved by at least 20 cells across its radius, the underestimation of peak shock pressure due to spatial resolution is less than 10%. In addition to the benchmark tests, three validation tests were performed to examine the ability of the codes to reproduce the time evolution of crater radius and depth observed in vertical laboratory impacts in water and two well-characterized aluminum alloys. Results from these calculations are in good agreement with experiments. There appears to be a general tendency of shock physics codes to underestimate the radius of the forming crater. Overall, the discrepancy between the model and experiment results is between 10 and 20%, similar to the inter-code variability.
Estimating the value of life and injury for pedestrians using a stated preference framework.
Niroomand, Naghmeh; Jenkins, Glenn P
2017-09-01
The incidence of pedestrian death per 100,000 population over the period 2010 to 2014 in North Cyprus is about 2.5 times that of the EU, with 10.5 times more pedestrian road injuries than deaths. With the prospect of North Cyprus entering the EU, many investments need to be undertaken to improve road safety in order to reach EU benchmarks. We conducted a stated choice experiment to identify the preferences and tradeoffs of pedestrians in North Cyprus for improved walking times, pedestrian costs, and safety. The choice of route was examined using mixed logit models to obtain the marginal utilities associated with each attribute of the routes that consumers chose. These were used to estimate the individuals' willingness to pay (WTP) to save walking time and to avoid pedestrian fatalities and injuries. We then used the results to obtain community-wide estimates of the value of a statistical life (VSL) saved, the value of an injury (VI) prevented, and the value per hour of walking time saved. The estimate of the VSL was €699,434 and the estimate of VI was €20,077. These values are consistent, after adjusting for differences in incomes, with the median results of similar studies done for EU countries. The estimated value of time to pedestrians is €7.20 per person hour. The ratio of deaths to injuries is much higher for pedestrians than for road accidents, and this is completely consistent with the higher estimated WTP to avoid a pedestrian accident than to avoid a car accident. The value of time of €7.20 is quite high relative to the wages earned. Findings provide a set of information on the VRR for fatalities and injuries and the value of pedestrian time that is critical for conducting ex ante appraisals of investments to improve pedestrian safety. Copyright © 2017 National Safety Council and Elsevier Ltd. All rights reserved.
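The willingness-to-pay arithmetic behind these figures is a ratio of estimated marginal utilities. The sketch below uses invented coefficients, scaled so the ratios land near the magnitudes reported above; they are not the paper's mixed logit estimates:

```python
# WTP and VSL from discrete-choice coefficients (invented for illustration).
beta_cost = -0.9      # utility per euro of route cost
beta_risk = -6.3      # utility per additional fatality per 100,000 pedestrians
beta_time = -0.108    # utility per minute of walking time

# WTP for a marginal change in an attribute = ratio of marginal utilities.
vsl = (beta_risk / beta_cost) * 100_000       # rescale risk units to probability 1
value_of_time = (beta_time / beta_cost) * 60  # euros per hour

print(f"VSL = {vsl:,.0f} euros")                         # -> 700,000
print(f"value of walking time = {value_of_time:.2f} euros/hour")  # -> 7.20
```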
ERIC Educational Resources Information Center
McConeghy, Kevin; Wing, Coady; Wong, Vivian C.
2015-01-01
Randomized experiments have long been established as the gold standard for addressing causal questions. However, experiments are not always feasible or desired, so observational methods are also needed. When multiple observations on the same variable are available, a repeated measures design may be used to assess whether a treatment administered…
Benchmark Eye Movement Effects during Natural Reading in Autism Spectrum Disorder
ERIC Educational Resources Information Center
Howard, Philippa L.; Liversedge, Simon P.; Benson, Valerie
2017-01-01
In 2 experiments, eye tracking methodology was used to assess on-line lexical, syntactic and semantic processing in autism spectrum disorder (ASD). In Experiment 1, lexical identification was examined by manipulating the frequency of target words. Both typically developed (TD) and ASD readers showed normal frequency effects, suggesting that the…
Etchegaray, Jason M; Thomas, Eric J
2012-06-01
To examine the reliability and predictive validity of two patient safety culture surveys, the Safety Attitudes Questionnaire (SAQ) and the Hospital Survey on Patient Safety Culture (HSOPS), when administered to the same participants, and to determine the ability to convert HSOPS scores to SAQ scores. Employees working in intensive care units in 12 hospitals within a large hospital system in the southern United States were invited to anonymously complete both safety culture surveys electronically. All safety culture dimensions from both surveys (with the exception of HSOPS's Staffing) had adequate levels of reliability. Three of HSOPS's outcomes (frequency of event reporting, overall perceptions of patient safety, and overall patient safety grade) were significantly correlated with SAQ and HSOPS dimensions of culture at the individual level, with correlations ranging from r=0.41 to 0.65 for the SAQ dimensions and from r=0.22 to 0.72 for the HSOPS dimensions. Neither the SAQ dimensions nor the HSOPS dimensions predicted the fourth HSOPS outcome, the number of events reported within the last 12 months. Regression analyses indicated that HSOPS safety culture dimensions were the best predictors of frequency of event reporting and overall perceptions of patient safety, while SAQ and HSOPS dimensions both predicted patient safety grade. Unit-level analyses were not conducted because indices did not indicate that aggregation was appropriate. Scores were converted between the surveys, although much variance remained unexplained. Given that the SAQ and HSOPS had similar reliability and predictive validity, investigators and quality and safety leaders should consider survey length, content, sensitivity to change and the ability to benchmark when selecting a patient safety culture survey.
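The score-conversion idea can be sketched as a simple regression of one instrument on the other; the data below are synthetic and the fitted equation is not the paper's:

```python
# Converting between survey instruments by regression, on synthetic data.
# The dimension count, scales, and fitted coefficients are all hypothetical.
import numpy as np
from sklearn.linear_model import LinearRegression

rng = np.random.default_rng(3)
n = 300
hsops = rng.uniform(1, 5, size=(n, 3))                # 3 HSOPS dimension scores
saq = 20 * hsops.mean(axis=1) + rng.normal(0, 8, n)   # noisy SAQ dimension score

model = LinearRegression().fit(hsops, saq)
r2 = model.score(hsops, saq)
print(f"R^2 = {r2:.2f}  # 'much variance remained unexplained' when R^2 is low")
print("converted SAQ score for HSOPS=(4,3,5):", model.predict([[4, 3, 5]])[0])
```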
FDNS CFD Code Benchmark for RBCC Ejector Mode Operation: Continuing Toward Dual Rocket Effects
NASA Technical Reports Server (NTRS)
West, Jeff; Ruf, Joseph H.; Turner, James E. (Technical Monitor)
2000-01-01
Computational Fluid Dynamics (CFD) analysis results are compared with benchmark quality test data from the Propulsion Engineering Research Center's (PERC) Rocket Based Combined Cycle (RBCC) experiments to verify fluid dynamic code and application procedures. RBCC engine flowpath development will rely on CFD applications to capture the multi-dimensional fluid dynamic interactions and to quantify their effect on the RBCC system performance. Therefore, the accuracy of these CFD codes must be determined through detailed comparisons with test data. The PERC experiments build upon the well-known 1968 rocket-ejector experiments of Odegaard and Stroup by employing advanced optical and laser based diagnostics to evaluate mixing and secondary combustion. The Finite Difference Navier Stokes (FDNS) code [2] was used to model the fluid dynamics of the PERC RBCC ejector mode configuration. Analyses were performed for the Diffusion and Afterburning (DAB) test conditions at the 200-psia thruster operation point. Results with and without downstream fuel injection are presented.
The MCUCN simulation code for ultracold neutron physics
NASA Astrophysics Data System (ADS)
Zsigmond, G.
2018-02-01
Ultracold neutrons (UCN) have very low kinetic energies of 0-300 neV and can therefore be stored in material or magnetic confinements for many hundreds of seconds. This makes them a very useful tool for probing fundamental symmetries of nature (for instance, charge-parity violation, via neutron electric dipole moment experiments) and for providing important parameters for Big Bang nucleosynthesis (neutron lifetime measurements). Improved precision experiments are under construction at new and planned UCN sources around the world. Monte Carlo (MC) simulations play an important role in the optimization of such systems with a large number of parameters, but also in the estimation of systematic effects, in the benchmarking of analysis codes, and as part of the analysis. The MCUCN code written at PSI has been extensively used for the optimization of the UCN source optics and in the optimization and analysis of (test) experiments within the nEDM project based at PSI. In this paper we present the main features of MCUCN and interesting benchmark and application examples.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Maekawa, Fujio; Meigo, Shin-ichiro; Kasugai, Yoshimi
2005-05-15
A neutronic benchmark experiment on a simulated spallation neutron target assembly was conducted using the Alternating Gradient Synchrotron at Brookhaven National Laboratory and was analyzed to investigate the prediction capability of Monte Carlo simulation codes used in neutronic designs of spallation neutron sources. The target assembly, consisting of a mercury target, a light water moderator, and a lead reflector, was bombarded by 1.94-, 12-, and 24-GeV protons, and the fast neutron flux distributions around the target and the spectra of thermal neutrons leaking from the moderator were measured in the experiment. In this study, the Monte Carlo particle transport simulation codes NMTC/JAM, MCNPX, and MCNP-4A, with associated cross-section data in JENDL and LA-150, were verified based on benchmark analysis of the experiment. As a result, all the calculations predicted the measured quantities adequately; calculated integral fluxes of fast and thermal neutrons agreed with the experiments to within approximately ±40%, although the overall energy range encompassed more than 12 orders of magnitude. Accordingly, it was concluded that these simulation codes and cross-section data are adequate for neutronics designs of spallation neutron sources.
Barry, Heather E; Campbell, John L; Asprey, Anthea; Richards, Suzanne H
2016-11-01
English National Quality Requirements mandate out-of-hours primary care services to routinely audit patient experience, but do not state how it should be done. We explored how providers collect patient feedback data and use it to inform service provision. We also explored staff views on the utility of out-of-hours questions from the English General Practice Patient Survey (GPPS). A qualitative study was conducted with 31 staff (comprising service managers, general practitioners and administrators) from 11 out-of-hours primary care providers in England, UK. Staff responsible for patient experience audits within their service were sampled and data collected via face-to-face semistructured interviews. Although most providers regularly audited their patients' experiences by using patient surveys, many participants expressed a strong preference for additional qualitative feedback. Staff provided examples of small changes to service delivery resulting from patient feedback, but service-wide changes were not instigated. Perceptions that patients lacked sufficient understanding of the urgent care system in which out-of-hours primary care services operate were common and a barrier to using feedback to enable change. Participants recognised the value of using patient experience feedback to benchmark services, but perceived weaknesses in the out-of-hours items from the GPPS led them to question the validity of using these data for benchmarking in its current form. The lack of clarity around how out-of-hours providers should audit patient experience hinders the utility of the National Quality Requirements. Although surveys were common, patient feedback data had only a limited role in service change. Data derived from the GPPS may be used to benchmark service providers, but refinement of the out-of-hours items is needed. Published by the BMJ Publishing Group Limited. For permission to use (where not already granted under a licence) please go to http://www.bmj.com/company/products-services/rights-and-licensing/.
Constable, A; Jonas, D; Cockburn, A; Davi, A; Edwards, G; Hepburn, P; Herouet-Guicheney, C; Knowles, M; Moseley, B; Oberdörfer, R; Samuels, F
2007-12-01
Very few traditional foods that are consumed have been subjected to systematic toxicological and nutritional assessment, yet because of their long history and customary preparation and use, and the absence of evidence of harm, they are generally regarded as safe to eat. This 'history of safe use' of traditional foods forms the benchmark for the comparative safety assessment of novel foods, and of foods derived from genetically modified organisms. However, the concept is hard to define, since it relates to an existing body of information which describes the safety profile of a food, rather than a precise checklist of criteria. The term should be regarded as a working concept used to assist the safety assessment of a food product. Important factors in establishing a history of safe use include: the period over which the traditional food has been consumed; the way in which it has been prepared and used and at what intake levels; its composition; and the results of animal studies and observations from human exposure. This paper aims to assist food safety professionals in the safety evaluation and regulation of novel foods and foods derived from genetically modified organisms, by describing the practical application and use of the concept of 'history of safe use'.
A novel hybrid meta-heuristic technique applied to the well-known benchmark optimization problems
NASA Astrophysics Data System (ADS)
Abtahi, Amir-Reza; Bijari, Afsane
2017-03-01
In this paper, a hybrid meta-heuristic algorithm based on the imperialist competitive algorithm (ICA), harmony search (HS), and simulated annealing (SA) is presented. The body of the proposed hybrid algorithm is based on ICA. The proposed hybrid algorithm inherits the advantages of the harmony-creation process of the HS algorithm to improve the exploitation phase of ICA. In addition, the proposed hybrid algorithm uses SA to balance the exploration and exploitation phases. The proposed hybrid algorithm is compared with several meta-heuristic methods, including the genetic algorithm (GA), HS, and ICA, on several well-known benchmark instances. Comprehensive experiments and statistical analysis on standard benchmark functions certify the superiority of the proposed method over the other algorithms. The efficacy of the proposed hybrid algorithm is promising, and it can be used in several real-life engineering and management problems.
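A toy rendering of such a hybrid's control flow is sketched below on the sphere function; each component is compressed to a single step and all parameters are invented, so this illustrates the ICA/HS/SA combination in general rather than the authors' algorithm:

```python
# Toy ICA + HS + SA hybrid on the sphere benchmark function.
# Parameters and update rules are simplified stand-ins, not the paper's method.
import numpy as np

rng = np.random.default_rng(4)
f = lambda x: np.sum(x**2)                  # benchmark objective (sphere)
dim, n_pop, n_imp, iters = 10, 30, 3, 500
pop = rng.uniform(-5, 5, size=(n_pop, dim))
T = 1.0                                     # SA temperature

for _ in range(iters):
    costs = np.array([f(x) for x in pop])
    order = np.argsort(costs)
    imps, colonies = pop[order[:n_imp]], order[n_imp:]
    for i in colonies:
        imp = imps[rng.integers(n_imp)]
        cand = pop[i] + 2.0 * rng.random(dim) * (imp - pop[i])      # ICA assimilation
        j = rng.integers(dim)                                       # HS: pick one variable,
        cand[j] = pop[rng.integers(n_pop), j] + rng.normal(0, 0.1)  # memory + pitch adjust
        delta = f(cand) - f(pop[i])
        if delta < 0 or rng.random() < np.exp(-delta / T):          # SA acceptance
            pop[i] = cand
    T *= 0.99                                                       # cooling schedule

print("best cost found:", min(f(x) for x in pop))
```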
A note on bound constraints handling for the IEEE CEC'05 benchmark function suite.
Liao, Tianjun; Molina, Daniel; de Oca, Marco A Montes; Stützle, Thomas
2014-01-01
The benchmark functions and some of the algorithms proposed for the special session on real parameter optimization of the 2005 IEEE Congress on Evolutionary Computation (CEC'05) have played and still play an important role in the assessment of the state of the art in continuous optimization. In this article, we show that if bound constraints are not enforced for the final reported solutions, state-of-the-art algorithms produce infeasible best candidate solutions for the majority of functions of the IEEE CEC'05 benchmark function suite. This occurs even though the optima of the CEC'05 functions are within the specified bounds. This phenomenon has important implications on algorithm comparisons, and therefore on algorithm designs. This article's goal is to draw the attention of the community to the fact that some authors might have drawn wrong conclusions from experiments using the CEC'05 problems.
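The remedy the article argues for is mechanical: enforce the box constraints on every candidate before it is evaluated or reported. Clamping, sketched below, is one common repair (reflection or random reinitialization are alternatives); the bound values are illustrative:

```python
# Enforcing box constraints on candidate solutions by clamping.
import numpy as np

def clamp(x, lower, upper):
    """Project a candidate solution back into the feasible box."""
    return np.minimum(np.maximum(x, lower), upper)

lower, upper = np.full(10, -100.0), np.full(10, 100.0)  # illustrative search box
x = np.array([150.0, -3.0, 99.0, -250.0] + [0.0] * 6)   # infeasible candidate
x_feasible = clamp(x, lower, upper)
assert np.all((x_feasible >= lower) & (x_feasible <= upper))
print(x_feasible[:4])   # -> [ 100.   -3.   99. -100.]
```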
Ergonomics: CTD management evaluation tool.
Ostendorf, J S; Rogers, B; Bertsche, P K
2000-01-01
Cumulative trauma disorder (CTD) occurrences peaked in number in 1994 and, although decreasing in 1995, still accounted for 62% of all illness cases reported. A CTD Management Evaluation Tool was developed to assist Compliance Safety and Health Officers (CSHOs) in program evaluation and documentation of the occupational health management component and the need for an ergonomics program. Occupational and environmental health nurses may use the tool not only to reduce and prevent CTD occurrences, but also as a benchmark for program evaluation.
General Aviation Aircraft Reliability Study
NASA Technical Reports Server (NTRS)
Pettit, Duane; Turnbull, Andrew; Roelant, Henk A. (Technical Monitor)
2001-01-01
This reliability study was performed in order to provide the aviation community with an estimate of Complex General Aviation (GA) Aircraft System reliability. To successfully improve the safety and reliability for the next generation of GA aircraft, a study of current GA aircraft attributes was prudent. This was accomplished by benchmarking the reliability of operational Complex GA Aircraft Systems. Specifically, Complex GA Aircraft System reliability was estimated using data obtained from the logbooks of a random sample of the Complex GA Aircraft population.
NASA Software Engineering Benchmarking Study
NASA Technical Reports Server (NTRS)
Rarick, Heather L.; Godfrey, Sara H.; Kelly, John C.; Crumbley, Robert T.; Wifl, Joel M.
2013-01-01
To identify best practices for the improvement of software engineering on projects, NASA's Offices of Chief Engineer (OCE) and Safety and Mission Assurance (OSMA) formed a team led by Heather Rarick and Sally Godfrey to conduct this benchmarking study. The primary goals of the study are to identify best practices that: Improve the management and technical development of software intensive systems; Have a track record of successful deployment by aerospace industries, universities [including research and development (R&D) laboratories], and defense services, as well as NASA's own component Centers; and Identify candidate solutions for NASA's software issues. Beginning in the late fall of 2010, focus topics were chosen and interview questions were developed, based on the NASA top software challenges. Between February 2011 and November 2011, the Benchmark Team interviewed a total of 18 organizations, consisting of five NASA Centers, five industry organizations, four defense services organizations, and four university or university R&D laboratory organizations. A software assurance representative also participated in each of the interviews to focus on assurance and software safety best practices. Interviewees provided a wealth of information on each topic area that included: software policy, software acquisition, software assurance, testing, training, maintaining rigor in small projects, metrics, and use of the Capability Maturity Model Integration (CMMI) framework, as well as a number of special topics that came up in the discussions. NASA's software engineering practices compared favorably with the external organizations in most benchmark areas, but in every topic, there were ways in which NASA could improve its practices. Compared to defense services organizations and some of the industry organizations, one of NASA's notable weaknesses involved communication with contractors regarding its policies and requirements for acquired software. One of NASA's strengths was its software assurance practices, which seemed to rate well in comparison to the other organizational groups and also seemed to include a larger scope of activities. An unexpected benefit of the software benchmarking study was the identification of many opportunities for collaboration in areas including metrics, training, sharing of CMMI experiences and resources such as instructors and CMMI Lead Appraisers, and even sharing of assets such as documented processes. A further unexpected benefit of the study was the feedback on NASA practices that was received from some of the organizations interviewed. From that feedback, other potential areas where NASA could improve were highlighted, such as accuracy of software cost estimation and budgetary practices. The detailed report contains discussion of the practices noted in each of the topic areas, as well as a summary of observations and recommendations from each of the topic areas. The resulting 24 recommendations from the topic areas were then consolidated to eliminate duplication and culled into a set of 14 suggested actionable recommendations. This final set of actionable recommendations, listed below, comprises items that can be implemented to improve NASA's software engineering practices and to help address many of the items that were listed in the NASA top software engineering issues.
1. Develop and implement standard contract language for software procurements.
2. Advance accurate and trusted software cost estimates for both procured and in-house software and improve the capture of actual cost data to facilitate further improvements.
3. Establish a consistent set of objectives and expectations, specifically types of metrics at the Agency level, so key trends and models can be identified and used to continuously improve software processes and each software development effort.
4. Maintain the CMMI Maturity Level requirement for critical NASA projects and use CMMI to measure organizations developing software for NASA.
5. Consolidate, collect and, if needed, develop common processes, principles, and other assets across the Agency in order to provide more consistency in software development and acquisition practices and to reduce the overall cost of maintaining or increasing current NASA CMMI maturity levels.
6. Provide additional support for small projects that includes: (a) guidance for appropriate tailoring of requirements for small projects, (b) availability of suitable tools, including support tool set-up and training, and (c) training for small project personnel, assurance personnel and technical authorities on the acceptable options for tailoring requirements and performing assurance on small projects.
7. Develop software training classes for the more experienced software engineers using on-line training, videos, or small separate modules of training that can be accommodated as needed throughout a project.
8. Create guidelines to structure non-classroom training opportunities such as mentoring, peer reviews, lessons learned sessions, and on-the-job training.
9. Develop a set of predictive software defect data and a process for assessing software testing metric data against it.
10. Assess Agency-wide licenses for commonly used software tools.
11. Fill the knowledge gap in common software engineering practices for new hires and co-ops.
12. Work through the Science, Technology, Engineering and Mathematics (STEM) program with universities in strengthening education in the use of common software engineering practices and standards.
13. Follow up this benchmark study with a deeper look into what both internal and external organizations perceive as the scope of software assurance, the value they expect to obtain from it, and the shortcomings they experience in the current practice.
14. Continue interactions with the external software engineering environment through collaborations, knowledge sharing, and benchmarking.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Margaret A. Marshall
In the early 1970s Dr. John T. Mihalczo (team leader), J. J. Lynn, and J. R. Taylor performed experiments at the Oak Ridge Critical Experiments Facility (ORCEF) with highly enriched uranium (HEU) metal (called Oak Ridge Alloy or ORALLOY) in an attempt to recreate GODIVA I results with greater accuracy than those performed at Los Alamos National Laboratory in the 1950s (HEU-MET-FAST-001). The purpose of the Oak Ridge ORALLOY Sphere (ORSphere) experiments was to estimate the unreflected and unmoderated critical mass of an idealized sphere of uranium metal, corrected to a density, purity, and enrichment such that it could be compared with the GODIVA I experiments. "The very accurate description of this sphere, as assembled, establishes it as an ideal benchmark for calculational methods and cross-section data files." (Reference 1) While performing the ORSphere experiments, care was taken to accurately document component dimensions (±0.0001 in. for non-spherical parts), masses (±0.01 g), and material data. The experiment was also set up to minimize the amount of structural material in the sphere proximity. A three-part sphere was initially assembled with an average radius of 3.4665 in. and was then machined down to an average radius of 3.4420 in. (3.4425 in. nominal). These two spherical configurations were evaluated and judged to be acceptable benchmark experiments; however, the two experiments are highly correlated.
Benchmark gas core critical experiment.
NASA Technical Reports Server (NTRS)
Kunze, J. F.; Lofthouse, J. H.; Cooper, C. G.; Hyland, R. E.
1972-01-01
A critical experiment with spherical symmetry has been conducted on the gas core nuclear reactor concept. The nonspherical perturbations in the experiment were evaluated experimentally and produce corrections to the observed eigenvalue of approximately 1% delta k. The reactor consisted of a low density, central uranium hexafluoride gaseous core, surrounded by an annulus of void or low density hydrocarbon, which in turn was surrounded with a 97-cm-thick heavy water reflector.
Lim, Keah-Ying; Jiang, Sunny C
2013-12-15
Health risk concerns associated with household use of rooftop-harvested rainwater (HRW) constitute one of the main impediments to exploiting the benefits of rainwater harvesting in the United States. However, the benchmark based on the U.S. EPA acceptable annual infection risk level of ≤1 case per 10,000 persons per year (≤10^-4 pppy), developed to aid drinking water regulations, may be unnecessarily stringent for sustainable water practice. In this study, we challenge the current risk benchmark by quantifying the potential microbial risk associated with consumption of HRW-irrigated home produce and comparing it against the current risk benchmark. Microbial pathogen data for HRW and exposure rates reported in the literature are applied to assess the potential microbial risk posed to household consumers of their homegrown produce. A Quantitative Microbial Risk Assessment (QMRA) model based on a worst-case scenario (e.g. overhead irrigation, no pathogen inactivation) is applied to three crops that are most popular among home gardeners (lettuce, cucumbers, and tomatoes) and commonly consumed raw. The infection risks of household consumers attributed to consumption of these home produce vary with the type of produce. Lettuce presents the highest risk, followed by tomato and cucumber, respectively. Results show that the 95th percentile values of infection risk per intake event of home produce are one to three orders of magnitude (10^-7 to 10^-5) lower than the U.S. EPA risk benchmark (≤10^-4 pppy). However, annual infection risks under the same scenario (multiple intake events in a year) are very likely to exceed the risk benchmark by one order of magnitude in some cases. Estimated 95th percentile values of the annual risk are in the 10^-4 to 10^-3 pppy range, which is still lower than the 10^-3 to 10^-1 pppy risk range estimated in comparable studies for produce irrigated with reclaimed water. We further discuss the desirability of HRW for irrigating home produce based on the relative risk of HRW to reclaimed wastewater for irrigation of food crops. The appropriateness of the ≤10^-4 pppy risk benchmark for assessing the safety level of HRW-irrigated fresh produce is questioned by considering the assumptions made for the QMRA model. Consequently, an updated approach to assessing the appropriateness of sustainable water practice for making guidelines and policies is proposed. Copyright © 2013 Elsevier Ltd. All rights reserved.
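The per-event-to-annual compounding that drives this result is worth making explicit. The sketch below uses an exponential dose-response model with placeholder parameter values, not the paper's pathogen-specific inputs:

```python
# QMRA compounding sketch: a per-event infection probability is compounded
# over n intake events per year. All parameter values are hypothetical.
import numpy as np

r = 0.02                  # exponential dose-response parameter (hypothetical)
dose_per_event = 5e-4     # ingested organisms per intake event (hypothetical)
n_events = 100            # intake events per year

p_event = 1 - np.exp(-r * dose_per_event)   # P(infection | one event) ~ 1e-5
p_annual = 1 - (1 - p_event) ** n_events    # compounded annual risk ~ 1e-3

print(f"per-event risk: {p_event:.2e}")
print(f"annual risk:    {p_annual:.2e}  (benchmark: 1e-4 pppy)")
```

This mirrors the pattern reported above: a per-event risk well under the 10^-4 benchmark can still exceed it once compounded over a year of intake events.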
DOE Office of Scientific and Technical Information (OSTI.GOV)
Bezler, P.; Hartzman, M.; Reich, M.
1980-08-01
A set of benchmark problems and solutions has been developed for verifying the adequacy of computer programs used for the dynamic analysis and design of nuclear piping systems by the Response Spectrum Method. The problems range from simple to complex configurations, which are assumed to experience linear elastic behavior. The dynamic loading is represented by uniform support motion, assumed to be induced by seismic excitation in three spatial directions. The solutions consist of frequencies, participation factors, nodal displacement components, and internal force and moment components. Solutions to associated anchor-point-motion static problems are not included.
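As a reminder of the bookkeeping such benchmark solutions exercise, the sketch below scales per-mode responses by participation factors and spectral accelerations and combines the modal peaks by the square root of the sum of squares (SRSS); all numbers are invented:

```python
# Response Spectrum Method sketch: modal peaks combined by SRSS.
# Frequencies, participation factors, spectral values, and mode shapes are invented.
import numpy as np

freqs = np.array([4.2, 11.7, 23.5])        # modal frequencies (Hz)
gamma = np.array([1.31, 0.42, 0.11])       # modal participation factors
sa = np.array([2.5, 3.1, 1.8])             # spectral acceleration at each mode (m/s^2)
phi_node = np.array([0.8, -0.35, 0.12])    # mode-shape values at the node of interest

# Peak modal displacement: u_i = gamma_i * phi_i * Sa_i / omega_i^2
omega = 2 * np.pi * freqs
u_modal = gamma * phi_node * sa / omega**2

u_srss = np.sqrt(np.sum(u_modal**2))       # SRSS combination of modal peaks
print(f"modal peaks (m): {u_modal}")
print(f"SRSS peak displacement: {u_srss:.4e} m")
```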
Turbofan forced mixer-nozzle internal flowfield. Volume 1: A benchmark experimental study
NASA Technical Reports Server (NTRS)
Paterson, R. W.
1982-01-01
An experimental investigation of the flow field within a model turbofan forced mixer nozzle is described. Velocity and thermodynamic state variable data are provided for use in assessing the accuracy of, and assisting the further development of, computational procedures for predicting the flow field within mixer nozzles. Velocity and temperature data suggested that the nozzle mixing process was dominated by circulations (secondary flows) of a length scale on the order of the lobe dimensions, which were associated with strong radial velocities observed near the lobe exit plane. The 'benchmark' model mixer experiment conducted for code assessment purposes is discussed.
Multitasking and microtasking experience on the NAS Cray-2 and ACF Cray X-MP
NASA Technical Reports Server (NTRS)
Raiszadeh, Farhad
1987-01-01
The fast Fourier transform (FFT) kernel of the NAS benchmark program has been utilized to experiment with the multitasking library on the Cray-2 and Cray X-MP/48, and microtasking directives on the Cray X-MP. Some performance figures are shown, and the state of multitasking software is described.
ERIC Educational Resources Information Center
Brandt Brecheisen, Shannon M.
2014-01-01
The purpose of this national, quantitative study was to (1) provide psychometrics for the ACUHO-I/EBI RA Survey, a joint project between Educational Benchmarking, Inc (EBI) and The Association of College and University Housing Officers--International (ACUHO-I), and (2) explore the sophomore resident assistant (RA) experience. This study…
DOE Office of Scientific and Technical Information (OSTI.GOV)
Dewald, E; Kozioziemski, B; Moody, J
2008-06-26
We use x-ray phase contrast imaging to characterize the inner surface roughness of DT ice layers in capsules planned for future ignition experiments. It is therefore important to quantify how well the x-ray data correlates with the actual ice roughness. We benchmarked the accuracy of our system using surrogates with fabricated roughness characterized with high precision standard techniques. Cylindrical artifacts with azimuthally uniform sinusoidal perturbations with 100 um period and 1 um amplitude demonstrated 0.02 um accuracy, limited by the resolution of the imager and the source size of our phase contrast system. Spherical surrogates with random roughness close to that required for the DT ice for a successful ignition experiment were used to correlate the actual surface roughness to that obtained from the x-ray measurements. When comparing average power spectra of individual measurements, the accuracy mode number limits of the x-ray phase contrast system benchmarked against surface characterization performed by Atomic Force Microscopy are 60 and 90 for surrogates smoother and rougher, respectively, than the required roughness for the ice. These agreement mode number limits are >100 when comparing matching individual measurements. We will discuss the implications for interpreting DT ice roughness data derived from phase-contrast x-ray imaging.
NASA Astrophysics Data System (ADS)
Koscheev, Vladimir; Manturov, Gennady; Pronyaev, Vladimir; Rozhikhin, Evgeny; Semenov, Mikhail; Tsibulya, Anatoly
2017-09-01
Several k∞ experiments were performed on the KBR critical facility at the Institute of Physics and Power Engineering (IPPE), Obninsk, Russia during the 1970s and 80s to study the neutron absorption properties of Cr, Mn, Fe, Ni, Zr, and Mo. Calculations of these benchmarks with almost any modern evaluated nuclear data library demonstrate poor agreement with the experiments. Neutron capture cross sections of the odd isotopes of Cr, Mn, Fe, and Ni in the ROSFOND-2010 library have been reevaluated, and another evaluation of the Zr nuclear data has been adopted. Use of the modified nuclear data for Cr, Mn, Fe, Ni, and Zr leads to significant improvement of the C/E ratio for the KBR assemblies. A significant improvement in agreement between calculated and evaluated values for benchmarks with Fe reflectors was also observed. C/E results obtained with the modified ROSFOND library for complex benchmark models that are highly sensitive to the cross sections of structural materials are no worse than results obtained with other major evaluated data libraries. A possible further improvement in results by decreasing the capture cross sections of Zr and Mo at energies above 1 keV is indicated.
Sintered Cathodes for All-Solid-State Structural Lithium-Ion Batteries
NASA Technical Reports Server (NTRS)
Huddleston, William; Dynys, Frederick; Sehirlioglu, Alp
2017-01-01
All-solid-state structural lithium ion batteries serve as both structural load-bearing components and as electrical energy storage devices to achieve system level weight savings in aerospace and other transportation applications. This multifunctional design goal is critical for the realization of next generation hybrid or all-electric propulsion systems. Additionally, transitioning to solid state technology improves upon battery safety from previous volatile architectures. This research established baseline solid state processing conditions and performance benchmarks for intercalation-type layered oxide materials for multifunctional application. Under consideration were lithium cobalt oxide and lithium nickel manganese cobalt oxide. Pertinent characteristics such as electrical conductivity, strength, chemical stability, and microstructure were characterized for future application in all-solid-state structural battery cathodes. The study includes characterization by XRD, ICP, SEM, ring-on-ring mechanical testing, and electrical impedance spectroscopy to elucidate optimal processing parameters, material characteristics, and multifunctional performance benchmarks. These findings provide initial conditions for implementing existing cathode materials in load bearing applications.
Performance Evaluation and Benchmarking of Intelligent Systems
DOE Office of Scientific and Technical Information (OSTI.GOV)
del Pobil, Angel; Madhavan, Raj; Bonsignorio, Fabio
Performance Evaluation and Benchmarking of Intelligent Systems presents research dedicated to the subject of performance evaluation and benchmarking of intelligent systems by drawing from the experiences and insights of leading experts gained both through theoretical development and practical implementation of intelligent systems in a variety of diverse application domains. This contributed volume offers a detailed and coherent picture of state-of-the-art, recent developments, and further research areas in intelligent systems. The chapters cover a broad range of applications, such as assistive robotics, planetary surveying, urban search and rescue, and line tracking for automotive assembly. Subsystems or components described in this book include human-robot interaction, multi-robot coordination, communications, perception, and mapping. Chapters are also devoted to simulation support and open source software for cognitive platforms, providing examples of the type of enabling underlying technologies that can help intelligent systems to propagate and increase in capabilities. Performance Evaluation and Benchmarking of Intelligent Systems serves as a professional reference for researchers and practitioners in the field. This book is also applicable to advanced courses for graduate level students and robotics professionals in a wide range of engineering and related disciplines including computer science, automotive, healthcare, manufacturing, and service robotics.
Dodd, Lori E; Proschan, Michael A; Neuhaus, Jacqueline; Koopmeiners, Joseph S; Neaton, James; Beigel, John D; Barrett, Kevin; Lane, Henry Clifford; Davey, Richard T
2016-06-15
Unique challenges posed by emerging infectious diseases often expose inadequacies in the conventional phased investigational therapeutic development paradigm. The recent Ebola outbreak in West Africa presents a critical case-study highlighting barriers to faster development. During the outbreak, clinical trials were implemented with unprecedented speed. Yet, in most cases, this fast-tracked approach proved too slow for the rapidly evolving epidemic. Controversy abounded as to the most appropriate study designs to yield safety and efficacy data, potentially causing delays in pivotal studies. Preparation for research during future outbreaks may require acceptance of a paradigm that circumvents, accelerates, or reorders traditional phases, without losing sight of the traditional benchmarks by which drug candidates must be assessed for activity, safety and efficacy. We present the design of an adaptive, parent protocol, ongoing in West Africa until January 2016. The exigent circumstances of the outbreak and limited prior clinical experience with experimental treatments, led to more direct bridging from preclinical studies to human trials than the conventional paradigm would typically have sanctioned, and required considerable design flexibility. Preliminary evaluation of the "barely Bayesian" design was provided through computer simulation studies. The understanding and public discussion of the study design will help its future implementation. Published by Oxford University Press for the Infectious Diseases Society of America 2016. This work is written by (a) US Government employee(s) and is in the public domain in the US.
Wang, Huan-Huan; Liu, Shu-Ming; Jiang, Shuaiz; Meng, Fan-Lin; Bai, Lu
2013-01-01
In the last few decades, the anti-negative-pressure facility (ANPF) has emerged as a revolutionary approach for solving pollution in the Second Water Supply System (SWSS) in China. This study analyzed the safety implications of ANPFs in the SWSS using a water distribution network hydraulic model. A method of hydraulic simulation and security assessment was presented that identifies the number and location of nodes at which ANPFs can be installed. Benchmark results on two example networks showed that 67% and 89% of the nodes, respectively, were unsuitable for ANPF installation. This simple and practical algorithm is recommended for water distribution network design and planning in order to increase the security of the SWSS.
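The node-screening idea lends itself to a simple illustration. The sketch below flags nodes whose modeled service pressure falls below a minimum residual head; the node names, pressures, and the 0.16 MPa floor are invented for illustration and are not values from the study.

```python
# Hedged sketch: screening water-network nodes for ANPF suitability.
# Node names and the 0.16 MPa service-pressure floor are illustrative
# assumptions, not values from the paper.
node_pressure_mpa = {  # steady-state pressures from a hydraulic model run
    "J1": 0.32, "J2": 0.21, "J3": 0.14, "J4": 0.38, "J5": 0.12,
}
MIN_RESIDUAL_MPA = 0.16  # pressure floor the network must keep after ANPF draw

suitable = [n for n, p in node_pressure_mpa.items() if p >= MIN_RESIDUAL_MPA]
unsuitable = [n for n in node_pressure_mpa if n not in suitable]
share_unfit = len(unsuitable) / len(node_pressure_mpa)
print(f"unfit nodes: {unsuitable} ({share_unfit:.0%})")
```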
FDA Benchmark Medical Device Flow Models for CFD Validation.
Malinauskas, Richard A; Hariharan, Prasanna; Day, Steven W; Herbertson, Luke H; Buesen, Martin; Steinseifer, Ulrich; Aycock, Kenneth I; Good, Bryan C; Deutsch, Steven; Manning, Keefe B; Craven, Brent A
Computational fluid dynamics (CFD) is increasingly being used to develop blood-contacting medical devices. However, the lack of standardized methods for validating CFD simulations and blood damage predictions limits its use in the safety evaluation of devices. Through a U.S. Food and Drug Administration (FDA) initiative, two benchmark models of typical device flow geometries (nozzle and centrifugal blood pump) were tested in multiple laboratories to provide experimental velocities, pressures, and hemolysis data to support CFD validation. In addition, computational simulations were performed by more than 20 independent groups to assess current CFD techniques. The primary goal of this article is to summarize the FDA initiative and to report recent findings from the benchmark blood pump model study. Discrepancies between CFD predicted velocities and those measured using particle image velocimetry most often occurred in regions of flow separation (e.g., downstream of the nozzle throat, and in the pump exit diffuser). For the six pump test conditions, 57% of the CFD predictions of pressure head were within one standard deviation of the mean measured values. Notably, only 37% of all CFD submissions contained hemolysis predictions. This project aided in the development of an FDA Guidance Document on factors to consider when reporting computational studies in medical device regulatory submissions. There is an accompanying podcast available for this article. Please visit the journal's Web site (www.asaiojournal.com) to listen.
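As a rough illustration of the 1-SD comparison reported above (not the study's actual data or analysis pipeline), the following sketch counts how many CFD pressure-head predictions fall within one standard deviation of the measured means for six hypothetical test conditions.

```python
import numpy as np

# Hedged sketch of the 1-sigma comparison described above; the arrays are
# made-up stand-ins for measured pressure heads and CFD submissions.
measured_mean = np.array([2.10, 3.40, 4.80, 6.10, 7.50, 9.00])  # m, six conditions
measured_sd   = np.array([0.15, 0.20, 0.25, 0.30, 0.35, 0.40])
cfd_pred      = np.array([2.00, 3.70, 4.70, 6.60, 7.40, 9.30])

within = np.abs(cfd_pred - measured_mean) <= measured_sd
print(f"{within.mean():.0%} of predictions within 1 SD")
```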
Watkinson, William; Raison, Nicholas; Abe, Takashige; Harrison, Patrick; Khan, Shamim; Van der Poel, Henk; Dasgupta, Prokar; Ahmed, Kamran
2018-05-01
To establish objective benchmarks at the level of a competent robotic surgeon across different exercises and metrics for the RobotiX Mentor virtual reality (VR) simulator, suitable for use within a robotic surgical training curriculum. This retrospective observational study analysed results from multiple data sources, all of which used the RobotiX Mentor VR simulator. 123 participants with experience ranging from novice to expert completed the exercises. Competency was defined as the 25th centile of the mean advanced-intermediate score. Three basic skill exercises and two advanced skill exercises were used. King's College London. 84 novices, 26 beginner intermediates, 9 advanced intermediates and 4 experts took part in this retrospective observational study. Objective benchmarks derived from the 25th centile of the mean scores of the advanced intermediates provided suitably challenging yet achievable targets for training surgeons. The disparity in scores was greatest for the advanced exercises. Novice surgeons were able to achieve the benchmarks in the majority of metrics across all exercises. We have successfully completed this proof-of-concept study, which requires validation in a larger cohort. Objective benchmarks obtained from the 25th centile of the mean scores of advanced intermediates provide clinically relevant benchmarks, at the standard of a competent robotic surgeon, that are challenging yet attainable. They can be used within a VR training curriculum, allowing participants to track and monitor their progress in a structured, progressive manner through five exercises, and providing clearly defined targets that ensure a universal training standard across training surgeons. © Article author(s) (or their employer(s) unless otherwise stated in the text of the article) 2018. All rights reserved. No commercial use is permitted unless otherwise expressly granted.
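A minimal sketch of the benchmark-setting step, assuming per-exercise score lists for the advanced-intermediate group; the exercise names and scores below are invented.

```python
import numpy as np

# Hedged sketch: per-exercise benchmarks as the 25th centile of the
# advanced-intermediate group's scores (scores below are invented).
scores = {
    "camera_targeting": [78, 82, 85, 88, 90, 91, 93, 95, 97],
    "suturing":         [55, 60, 62, 66, 70, 74, 78, 81, 84],
}
benchmarks = {ex: np.percentile(s, 25) for ex, s in scores.items()}
print(benchmarks)  # a trainee "passes" an exercise at or above this score
```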
Arthur, Jennifer; Bahran, Rian; Hutchinson, Jesson; ...
2018-06-14
Historically, radiation transport codes have treated fission emissions as uncorrelated. In reality, the particles emitted by both spontaneous and induced fissions are correlated in time, energy, angle, and multiplicity. This work validates the performance of various current Monte Carlo codes that take into account the underlying correlated physics of fission neutrons, specifically neutron multiplicity distributions. The performance of four Monte Carlo codes - MCNP®6.2, MCNP®6.2/FREYA, MCNP®6.2/CGMF, and PoliMi - was assessed using neutron multiplicity benchmark experiments. In addition, MCNP®6.2 simulations were run using JEFF-3.2 and JENDL-4.0, rather than ENDF/B-VII.1, data for 239Pu and 240Pu. The sensitive benchmark parameters that in this work represent the performance of each correlated fission multiplicity Monte Carlo code include the singles rate, the doubles rate, leakage multiplication, and Feynman histograms. Although it is difficult to determine which radiation transport code shows the best overall performance in simulating subcritical neutron multiplication inference benchmark measurements, it is clear that correlations exist between the underlying nuclear data utilized by (or generated by) the various codes and the correlated neutron observables of interest. This could prove useful in nuclear data validation and evaluation applications in which a particular moment of the neutron multiplicity distribution is of more interest than the others. It is also quite clear that, because transport is handled by MCNP®6.2 in three of the four codes, with the fourth (PoliMi) being based on an older version of MCNP®, the differences in the correlated neutron observables of interest are most likely due to the treatment of fission event generation in each of the codes, rather than to the radiation transport.
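One of the benchmark parameters named above, the Feynman histogram, reduces in its simplest form to the Feynman-Y statistic: the excess of the variance-to-mean ratio of gated counts over the Poisson value of one. A minimal sketch, with a synthetic Poisson stand-in for real list-mode detector data:

```python
import numpy as np

# Hedged sketch of a Feynman-Y point: counts per time gate would come from
# list-mode detector data; the Poisson-like array here is illustrative.
rng = np.random.default_rng(0)
counts = rng.poisson(lam=4.2, size=100_000)  # neutron counts in equal gates

y = counts.var(ddof=1) / counts.mean() - 1.0  # excess variance over Poisson
print(f"Feynman-Y = {y:.4f}")  # ~0 for an uncorrelated source, >0 with fission chains
```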
Boulesteix, Anne-Laure; Wilson, Rory; Hapfelmeier, Alexander
2017-09-09
The goal of medical research is to develop interventions that are in some sense superior, with respect to patient outcome, to interventions currently in use. Similarly, the goal of research in methodological computational statistics is to develop data analysis tools that are themselves superior to the existing tools. The methodology of the evaluation of medical interventions continues to be discussed extensively in the literature and it is now well accepted that medicine should be at least partly "evidence-based". Although we statisticians are convinced of the importance of unbiased, well-thought-out study designs and evidence-based approaches in the context of clinical research, we tend to ignore these principles when designing our own studies for evaluating statistical methods in the context of our methodological research. In this paper, we draw an analogy between clinical trials and real-data-based benchmarking experiments in methodological statistical science, with datasets playing the role of patients and methods playing the role of medical interventions. Through this analogy, we suggest directions for improvement in the design and interpretation of studies which use real data to evaluate statistical methods, in particular with respect to dataset inclusion criteria and the reduction of various forms of bias. More generally, we discuss the concept of "evidence-based" statistical research, its limitations and its impact on the design and interpretation of real-data-based benchmark experiments. We suggest that benchmark studies-a method of assessment of statistical methods using real-world datasets-might benefit from adopting (some) concepts from evidence-based medicine towards the goal of more evidence-based statistical research.
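In the datasets-as-patients analogy, a benchmark study is a loop over pre-specified datasets and methods with identical resampling for every method. A minimal sketch using scikit-learn; the dataset and method choices are illustrative, not those of the paper.

```python
# Hedged sketch of a real-data benchmark in the "datasets as patients" spirit:
# pre-registered inclusion of datasets, identical resampling for every method.
from sklearn.datasets import load_breast_cancer, load_wine
from sklearn.ensemble import RandomForestClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score

datasets = {"breast_cancer": load_breast_cancer(return_X_y=True),
            "wine": load_wine(return_X_y=True)}
methods = {"logreg": LogisticRegression(max_iter=5000),
           "forest": RandomForestClassifier(random_state=0)}

for dname, (X, y) in datasets.items():
    for mname, est in methods.items():
        acc = cross_val_score(est, X, y, cv=5).mean()  # same folds per dataset
        print(f"{dname:>14} {mname:>7}: {acc:.3f}")
```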
Food Recognition: A New Dataset, Experiments, and Results.
Ciocca, Gianluigi; Napoletano, Paolo; Schettini, Raimondo
2017-05-01
We propose a new dataset for the evaluation of food recognition algorithms that can be used in dietary monitoring applications. Each image depicts a real canteen tray with dishes and foods arranged in different ways. Each tray contains multiple instances of food classes. The dataset contains 1027 canteen trays for a total of 3616 food instances belonging to 73 food classes. The food on the tray images has been manually segmented using carefully drawn polygonal boundaries. We have benchmarked the dataset by designing an automatic tray analysis pipeline that takes a tray image as input, finds the regions of interest, and predicts for each region the corresponding food class. We have experimented with three different classification strategies using also several visual descriptors. We achieve about 79% of food and tray recognition accuracy using convolutional-neural-networks-based features. The dataset, as well as the benchmark framework, are available to the research community.
All Entangled States can Demonstrate Nonclassical Teleportation.
Cavalcanti, Daniel; Skrzypczyk, Paul; Šupić, Ivan
2017-09-15
Quantum teleportation, the process by which Alice can transfer an unknown quantum state to Bob by using preshared entanglement and classical communication, is one of the cornerstones of quantum information. The standard benchmark for certifying quantum teleportation consists in surpassing the maximum average fidelity between the teleported and the target states that can be achieved classically. According to this figure of merit, not all entangled states are useful for teleportation. Here we propose a new benchmark that uses the full information available in a teleportation experiment and prove that all entangled states can implement a quantum channel which cannot be reproduced classically. We introduce the idea of nonclassical teleportation witness to certify if a teleportation experiment is genuinely quantum and discuss how to quantify this phenomenon. Our work provides new techniques for studying teleportation that can be immediately applied to certify the quality of quantum technologies.
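For context, the standard fidelity benchmark mentioned above has a closed form: the best classical measure-and-prepare strategy attains an average fidelity of 2/(d+1) in dimension d, i.e. 2/3 for qubits, so an experiment is conventionally certified by exceeding

```latex
\bar{F}_{\mathrm{cl}} \;=\; \frac{2}{d+1} \;\xrightarrow{\;d=2\;}\; \frac{2}{3},
\qquad
\bar{F}_{\mathrm{exp}} > \bar{F}_{\mathrm{cl}}
\;\Rightarrow\; \text{nonclassical teleportation (standard benchmark)}.
```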
CFD validation experiments for hypersonic flows
NASA Technical Reports Server (NTRS)
Marvin, Joseph G.
1992-01-01
A roadmap for CFD code validation is introduced. The elements of the roadmap are consistent with air-breathing vehicle design requirements and related to the important flow path components: forebody, inlet, combustor, and nozzle. Building-block and benchmark validation experiments are identified along with their test conditions and measurements. Based on evaluation criteria, recommendations for an initial CFD validation database are given, and gaps are identified where future experiments could provide new validation data.
NASA Technical Reports Server (NTRS)
Radovcich, N. A.
1984-01-01
The design experience associated with a benchmark aeroelastic design of an out of production transport aircraft is discussed. Current work being performed on a high aspect ratio wing design is reported. The Preliminary Aeroelastic Design of Structures (PADS) system is briefly summarized and some operational aspects of generating the design in an automated aeroelastic design environment are discussed.
Robust performance of multiple tasks by a mobile robot
NASA Technical Reports Server (NTRS)
Beckerman, Martin; Barnett, Deanna L.; Dickens, Mike; Weisbin, Charles R.
1989-01-01
While there have been many successful mobile robot experiments, only a few papers have addressed issues pertaining to the range of applicability, or robustness, of robotic systems. The purpose of this paper is to report results of a series of benchmark experiments done to determine and quantify the robustness of an integrated hardware and software system of a mobile robot.
What can one learn from experiments about the elusive transition state?
Chang, Iksoo; Cieplak, Marek; Banavar, Jayanth R.; Maritan, Amos
2004-01-01
We present the results of an exact analysis of a model energy landscape of a protein to clarify the idea of the transition state and the physical meaning of the φ values determined in protein engineering experiments. We benchmark our findings against various theoretical approaches proposed in the literature for the identification and characterization of the transition state. PMID:15295118
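For readers outside the field: the φ value referred to above is, in the usual protein-engineering definition, the ratio of the mutational change in transition-state stability to the change in native-state stability, both measured relative to the unfolded state,

```latex
\phi \;=\; \frac{\Delta\Delta G_{\ddagger-\mathrm{U}}}{\Delta\Delta G_{\mathrm{N}-\mathrm{U}}},
```

with φ ≈ 1 conventionally read as native-like structure at the mutated site in the transition state and φ ≈ 0 as unfolded-like.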
Present Status and Extensions of the Monte Carlo Performance Benchmark
NASA Astrophysics Data System (ADS)
Hoogenboom, J. Eduard; Petrovic, Bojan; Martin, William R.
2014-06-01
The NEA Monte Carlo Performance benchmark started in 2011 with the aim of monitoring, over the years, the ability to perform a full-size Monte Carlo reactor core calculation with detailed power production for each fuel pin, including its axial distribution. This paper gives an overview of the results contributed thus far. It shows that reaching a statistical accuracy of 1% for most of the small fuel zones requires about 100 billion neutron histories. The parallel efficiency of Monte Carlo codes on large numbers of processor cores shows clear limitations on computer clusters with commodity nodes, whereas on true supercomputers the parallel speedup continues to increase up to large core counts. More experience is needed with calculations on true supercomputers using large numbers of processors in order to predict whether the requested calculations can be done in a short time. As the specifications of the reactor geometry for this benchmark are well suited for further investigations of full-core Monte Carlo calculations, and a need is felt for testing issues other than computational performance, proposals are presented for extending the benchmark to a suite of benchmark problems: evaluating fission source convergence for a system with a high dominance ratio, coupling with thermal-hydraulics calculations to evaluate the use of different temperatures and coolant densities, and studying the correctness and effectiveness of burnup calculations. Moreover, other contemporary proposals for a full-core calculation with realistic geometry and material composition will be discussed.
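The 100-billion-history figure follows from simple counting statistics: if a small tally zone collects a fraction f of all scores, its relative error scales roughly as 1/sqrt(N·f). The value of f below is an illustrative guess, not a number from the benchmark specification.

```python
# Hedged back-of-envelope: relative statistical error of a small tally zone
# ~ 1/sqrt(N*f), where f is the fraction of histories scoring in that zone.
# f is an illustrative assumption, not taken from the benchmark spec.
f = 1e-7              # fraction of scores landing in one small axial pin zone
target_rel_err = 0.01 # 1% statistical accuracy

N = 1.0 / (f * target_rel_err**2)    # from N*f = 1/err^2
print(f"histories needed ~ {N:.1e}") # ~1e11, i.e. on the order of 100 billion
```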
A new numerical benchmark for variably saturated variable-density flow and transport in porous media
NASA Astrophysics Data System (ADS)
Guevara, Carlos; Graf, Thomas
2016-04-01
In subsurface hydrological systems, spatial and temporal variations in solute concentration and/or temperature may affect fluid density and viscosity. These variations can lead to potentially unstable situations in which a dense fluid overlies a less dense fluid. Such situations can produce instabilities that appear as dense plume fingers migrating downwards, counteracted by vertical upward flow of freshwater (Simmons et al., Transp. Porous Medium, 2002). As a result of unstable variable-density flow, solute transport rates are increased over large distances and times compared to constant-density flow. The numerical simulation of variable-density flow in saturated and unsaturated media requires corresponding benchmark problems against which a computer model can be validated (Diersch and Kolditz, Adv. Water Resour., 2002). Recorded data from a laboratory-scale experiment of variable-density flow and solute transport in saturated and unsaturated porous media (Simmons et al., Transp. Porous Medium, 2002) are used to define a new numerical benchmark. The HydroGeoSphere code (Therrien et al., 2004) coupled with PEST (www.pesthomepage.org) is used to obtain an optimized parameter set capable of adequately representing the data set of Simmons et al. (2002). Fingering in the numerical model is triggered using random hydraulic conductivity fields. Due to the inherent randomness, a large number of simulations were conducted in this study. The optimized benchmark model adequately predicts the plume behavior and the fate of solutes. This benchmark is useful for model verification of variable-density flow problems in saturated and/or unsaturated media.
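A minimal sketch of the triggering idea: generating a log-normal random hydraulic conductivity field on a rectangular grid. The geometric mean, variance, and grid size are invented; the study itself used HydroGeoSphere with PEST-calibrated parameters.

```python
import numpy as np

# Hedged sketch: a spatially uncorrelated log-normal conductivity field of the
# kind used to trigger fingering; mean, variance, and grid size are illustrative.
rng = np.random.default_rng(42)
nz, nx = 60, 120                                  # grid cells (vertical, horizontal)
ln_k = rng.normal(loc=np.log(1e-4), scale=0.5, size=(nz, nx))
k_field = np.exp(ln_k)                            # hydraulic conductivity, m/s

print(k_field.mean(), k_field.std())              # each realization seeds one run
```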
Sexton, J Bryan; Schwartz, Stephanie P; Chadwick, Whitney A; Rehder, Kyle J; Bae, Jonathan; Bokovoy, Joanna; Doram, Keith; Sotile, Wayne; Adair, Kathryn C; Profit, Jochen
2017-08-01
Improving the resiliency of healthcare workers is a national imperative, driven in part by healthcare workers having minimal exposure to the skills and culture needed to achieve work-life balance (WLB). Regardless of current policies, healthcare workers feel compelled to work more and take less time to recover from work. Satisfaction with WLB has been measured, as has work-life conflict, but how frequently healthcare workers engage in specific WLB behaviours is rarely assessed. Measurement of behaviours may have advantages over measurement of perceptions; behaviours more accurately reflect WLB and can be targeted by leaders for improvement. The aims were: (1) to describe a novel survey scale for evaluating work-life climate based on specific behavioural frequencies in healthcare workers; (2) to evaluate the scale's psychometric properties and provide benchmarking data from a large healthcare system; and (3) to investigate associations between work-life climate, teamwork climate and safety climate. Cross-sectional survey study of US healthcare workers within a large healthcare system. 7923 of 9199 eligible healthcare workers across 325 work settings within 16 hospitals completed the survey in 2009 (86% response rate). The overall work-life climate scale internal consistency was Cronbach α=0.790. t-Tests of top versus bottom quartile work settings revealed that positive work-life climate was associated with better teamwork climate, safety climate and increased participation in safety leadership WalkRounds with feedback (p<0.001). Univariate analysis of variance demonstrated significant differences in WLB across healthcare worker roles, hospitals and work settings. The work-life climate scale exhibits strong psychometric properties, elicits results that vary widely by work setting, discriminates between positive and negative workplace norms, and aligns well with other culture constructs that have been found to correlate with clinical outcomes. Published by the BMJ Publishing Group Limited. For permission to use (where not already granted under a licence) please go to http://www.bmj.com/company/products-services/rights-and-licensing/.
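The internal-consistency figure quoted above can be reproduced for any item matrix with a few lines of code. A minimal sketch of Cronbach's alpha on simulated Likert responses; the data are synthetic, not the study's.

```python
import numpy as np

def cronbach_alpha(items: np.ndarray) -> float:
    """items: respondents x items matrix of scale responses."""
    k = items.shape[1]
    item_vars = items.var(axis=0, ddof=1).sum()
    total_var = items.sum(axis=1).var(ddof=1)
    return k / (k - 1) * (1 - item_vars / total_var)

# Illustrative 1-5 Likert responses with a shared latent factor; not study data.
rng = np.random.default_rng(1)
base = rng.normal(3, 1, size=(200, 1))
responses = np.clip(np.round(base + rng.normal(0, 0.7, size=(200, 8))), 1, 5)
print(f"alpha = {cronbach_alpha(responses):.3f}")
```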
Information System Implementation: Benchmarking the Stages.
ERIC Educational Resources Information Center
Calbos, Dennis P.
1984-01-01
The evolution of administrative data processing systems at the University of Georgia is summarized. Nolan's revised stage model was used as a framework to present the university's experience and relate it to the growing body of system implementation research. (Author/MLW)
Code of Federal Regulations, 2014 CFR
2014-10-01
... adjustments made pursuant to the benchmark standards described in § 156.110 of this subchapter. Benefit design... this subchapter. Enrollee satisfaction survey vendor means an organization that has relevant survey administration experience (for example, CAHPS® surveys), organizational survey capacity, and quality control...
NASA Astrophysics Data System (ADS)
Hess, Alexander Jay
Science and agriculture professional organizations have argued for agricultural literacy as a goal for K-12 public education. Due to the complexity of our modern agri-food system, with social, economic, and environmental concerns embedded, an agriculturally literate society is needed for informed decision making, democratic participation, and system reform. While grade-span-specific benchmarks for gauging agri-food system literacy have been developed, little attention has been paid to the existing ideas individuals hold about the agri-food system, how these ideas relate to benchmarks, how experience shapes such ideas, or how ideas change over time. Developing a body of knowledge on students' agri-food system understandings as they develop across the K-12 grades can ground efforts to promote a learning progression toward agricultural literacy. This study compares existing perceptions held by 18 upper elementary students from a large urban center in California to agri-food system literacy benchmarks and examines the perceptions against student background and experiences. Data were collected via semi-structured interviews and analyzed using the constant comparative method. Constructivist theoretical perspectives framed the study. No student had ever grown their own food, raised a plant, or cared for an animal. Participation in school field trips to farms or visits to a relative's garden were the agricultural experiences most frequently mentioned. Students were able to identify common food items but could not elaborate on their origins, especially those that were highly processed. Students' understanding of post-production activities (i.e., food processing, manufacturing, or food marketing) was not apparent. Students' understanding of farms reflected the 1900s subsistence farming operation commonly found in literature written for the primary grades. Students were unaware that plants and animals are selected for production based on desired genetic traits. Obtaining food from areas with favorable growing conditions and supporting technology (such as transportation and refrigeration) was an understanding lacking in the group. Furthermore, most spoilage-prevention technologies employed today were not an expressed part of students' schemas. Students' backgrounds and experiences did not appear to support the development of a robust agri-food system schema. An agricultural science and technology schema appears poorly developed in each of the students.
Sensitivity and Uncertainty Analysis of the GFR MOX Fuel Subassembly
NASA Astrophysics Data System (ADS)
Lüley, J.; Vrban, B.; Čerba, Š.; Haščík, J.; Nečas, V.; Pelloni, S.
2014-04-01
We performed sensitivity and uncertainty analysis as well as benchmark similarity assessment of the MOX fuel subassembly designed for the Gas-Cooled Fast Reactor (GFR) as a representative material of the core. Material composition was defined for each assembly ring separately allowing us to decompose the sensitivities not only for isotopes and reactions but also for spatial regions. This approach was confirmed by direct perturbation calculations for chosen materials and isotopes. Similarity assessment identified only ten partly comparable benchmark experiments that can be utilized in the field of GFR development. Based on the determined uncertainties, we also identified main contributors to the calculation bias.
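For reference, the relative sensitivity coefficients being decomposed here are conventionally defined as

```latex
S_{k,\sigma_x} \;=\; \frac{\sigma_x}{k_{\mathrm{eff}}}\,
\frac{\partial k_{\mathrm{eff}}}{\partial \sigma_x}
\;\approx\; \frac{\Delta k / k_{\mathrm{eff}}}{\Delta \sigma_x / \sigma_x},
```

and the direct perturbation calculations mentioned above estimate the right-hand side by rerunning the model with the cross section perturbed.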
NASA Astrophysics Data System (ADS)
Zhirkin, A. V.; Alekseev, P. N.; Batyaev, V. F.; Gurevich, M. I.; Dudnikov, A. A.; Kuteev, B. V.; Pavlov, K. V.; Titarenko, Yu. E.; Titarenko, A. Yu.
2017-06-01
In this report, the calculation accuracy requirements for the main parameters of the fusion neutron source and of thermonuclear blankets with a DT fusion power of more than 10 MW are formulated. To conduct the benchmark experiments, technical documentation and calculation models were developed for two blanket micro-models: the molten-salt and the heavy-water solid-state blankets. The calculations of the neutron spectra and of 37 dosimetric reaction rates that are widely used for the registration of thermal, resonance and threshold (0.25-13.45 MeV) neutrons were performed for each blanket micro-model. The MCNP code and the neutron data library ENDF/B-VII were used for the calculations. All the calculations were performed for two kinds of neutron source: source I is the fusion source; source II is the source of neutrons generated by a 7Li target irradiated by protons with energy 24.6 MeV. Spectral index ratios were calculated to describe the spectrum variations between the two neutron sources. The obtained results demonstrate the advantage of using the fusion neutron source in future experiments.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Green, Timothy F. G., E-mail: tim.green@materials.ox.ac.uk; Yates, Jonathan R., E-mail: jonathan.yates@materials.ox.ac.uk
2014-06-21
We present a method for the first-principles calculation of nuclear magnetic resonance (NMR) J-coupling in extended systems using state-of-the-art ultrasoft pseudopotentials and including scalar-relativistic effects. The use of ultrasoft pseudopotentials is allowed by extending the projector augmented wave (PAW) method of Joyce et al. [J. Chem. Phys. 127, 204107 (2007)]. We benchmark it against existing local-orbital quantum chemical calculations and experiments for small molecules containing light elements, with good agreement. Scalar-relativistic effects are included at the zeroth-order regular approximation level of theory and benchmarked against existing local-orbital quantum chemical calculations and experiments for a number of small molecules containing the heavy row-six elements W, Pt, Hg, Tl, and Pb, with good agreement. Finally, ¹J(P-Ag) and ²J(P-Ag-P) couplings are calculated in some larger molecular crystals and compared against solid-state NMR experiments. Some remarks are also made as to improving the numerical stability of dipole perturbations using PAW.
Paxton, Elizabeth W; Inacio, Maria Cs; Kiley, Mary-Lou
2012-01-01
Considering the high cost, volume, and patient safety issues associated with medical devices, monitoring of medical device performance is critical to ensure patient safety and quality of care. The purpose of this article is to describe the Kaiser Permanente (KP) implant registries and to highlight the benefits of these implant registries on patient safety, quality, cost effectiveness, and research. Eight KP implant registries leverage the integrated health care system's administrative databases and electronic health records system. Registry data collected undergo quality control and validation as well as statistical analysis. Patient safety has been enhanced through identification of affected patients during major recalls, identification of risk factors associated with outcomes of interest, development of risk calculators, and surveillance programs for infections and adverse events. Effective quality improvement activities included medical center- and surgeon-specific profiles for use in benchmarking reports, and changes in practice related to registry information output. Among the cost-effectiveness strategies employed were collaborations with sourcing and contracting groups, and assistance in adherence to formulary device guidelines. Research studies using registry data included postoperative complications, resource utilization, infection risk factors, thromboembolic prophylaxis, effects of surgical delay on concurrent injuries, and sports injury patterns. The unique KP implant registries provide important information and affect several areas of our organization, including patient safety, quality improvement, cost-effectiveness, and research.
Kirkwood, R. K.; Michel, P.; London, R.; ...
2011-05-26
To optimize the coupling to indirect drive targets in the National Ignition Campaign (NIC) at the National Ignition Facility, a model of stimulated scattering produced by multiple laser beams is used. The model has shown that scatter of the 351 nm beams can be significantly enhanced over single-beam predictions in ignition-relevant targets by the interaction of the multiple crossing beams with a millimeter scale length, 2.5 keV, 0.02-0.05 × critical density plasma. The model uses a suite of simulation capabilities and its key aspects are benchmarked with experiments at smaller laser facilities. The model has also influenced the design of the initial targets used for NIC by showing that both the stimulated Brillouin scattering (SBS) and stimulated Raman scattering (SRS) can be reduced by the reduction of the plasma density in the beam intersection volume that is caused by an increase in the diameter of the laser entrance hole (LEH). In this model, a linear wave response leads to a small gain exponent produced by each crossing quad of beams (≲1 per quad), which amplifies the scattering that originates in the target interior, where the individual beams are separated, and crosses many or all other beams near the LEH as it exits the target. As a result, all 23 crossing quads of beams produce a total gain exponent of several or greater for seeds of light with wavelengths in the range expected for scattering from the interior (480 to 580 nm for SRS). This means that, in the absence of wave saturation, the overall multi-beam scatter will be significantly larger than the expectations for single beams. The potential for non-linear saturation of the Langmuir waves amplifying SRS light is also analyzed with a two-dimensional, vectorized, particle-in-cell code (2D VPIC) that is benchmarked by amplification experiments in a plasma with normalized parameters similar to ignition targets. The physics of cumulative scattering by multiple crossing beams that simultaneously amplify the same SBS light wave is further demonstrated in experiments that benchmark the linear models for the ion waves amplifying SBS. Here, the expectation from this model and its experimental benchmarks is shown to be consistent with observations of stimulated Raman scatter in the first series of energetic experiments with ignition targets, confirming the importance of the multi-beam scattering model for optimizing coupling.
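The cumulative-gain argument can be stated compactly: in the linear regime, a seed that successively crosses N_q quads, each contributing a small gain exponent G_q, is amplified as

```latex
I_{\mathrm{out}} \;=\; I_{0}\,\exp\!\Big(\sum_{q=1}^{N_q} G_q\Big),
\qquad G_q \lesssim 1,\quad N_q = 23,
```

so per-quad exponents below one can still combine into a total exponent of several or more, which is why multi-beam scatter can greatly exceed single-beam expectations absent wave saturation.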
Skill Assessment in the Interpretation of 3D Fracture Patterns from Radiographs
Rojas-Murillo, Salvador; Hanley, Jessica M; Kreiter, Clarence D; Karam, Matthew D; Anderson, Donald D
2016-01-01
Background Interpreting two-dimensional radiographs to ascertain the three-dimensional (3D) position and orientation of fracture planes and bone fragments is an important component of orthopedic diagnosis and clinical management. This skill, however, has not been thoroughly explored and measured. Our primary research question is to determine if 3D radiographic image interpretation can be reliably assessed, and whether this assessment varies by level of training. A test designed to measure this skill among orthopedic surgeons would provide a quantitative benchmark for skill assessment and training research. Methods Two tests consisting of a series of online exercises were developed to measure this skill. Each exercise displayed a pair of musculoskeletal radiographs. Participants selected one of three CT slices of the same or similar fracture patterns that best matched the radiographs. In experiment 1, 10 orthopedic residents and staff responded to nine questions. In experiment 2, 52 residents from both orthopedics and radiology responded to 12 questions. Results Experiment 1 yielded a Cronbach alpha of 0.47. Performance correlated with experience; r(8) = 0.87, p<0.01, suggesting that the test could be both valid and reliable with a slight increase in test length. In experiment 2, after removing three non-discriminating items, the Cronbach coefficient alpha was 0.28 and performance correlated with experience; r(50) = 0.25, p<0.10. Conclusions Although evidence for reliability and validity was more compelling with the first experiment, the analyses suggest motivation and test duration are important determinants of test efficacy. The interpretation of radiographs to discern 3D information is a promising and relatively unexplored area for surgical skill education and assessment. The online test was useful and reliable. Further test development is likely to increase test effectiveness. Clinical Relevance Accurately interpreting radiographic images is an essential clinical skill. Quantitative, repeatable techniques to measure this skill can improve resident training and improve patient safety. PMID:27528827
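The claim that the test "could be both valid and reliable with a slight increase in test length" is usually checked with the Spearman-Brown prophecy formula. A minimal sketch, projecting the experiment-1 alpha of 0.47 for several lengthening factors; the factors are illustrative.

```python
# Hedged sketch: Spearman-Brown projection of reliability when a test is
# lengthened by a factor k with matched items (alpha from experiment 1 above).
def spearman_brown(rho: float, k: float) -> float:
    return k * rho / (1 + (k - 1) * rho)

alpha = 0.47                  # experiment 1 reliability
for k in (1.5, 2.0, 3.0):     # illustrative lengthening factors
    print(f"k={k}: projected alpha = {spearman_brown(alpha, k):.2f}")
```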
NRL 1989 Beam Propagation Studies in Support of the ATA Multi-Pulse Propagation Experiment
1990-08-31
The papers presented here were all written prior to the completion of the experiment. The first of these papers presents simulation results which modeled... beam stability and channel evolution for an entire five-pulse burst. The second paper describes a new air chemistry model used in the SARLAC... Experiment: A new air chemistry model for use in the propagation codes simulating the MPPE was developed by making analytic fits to benchmark runs with...
De Brún, Aoife; Heavey, Emily; Waring, Justin; Dawson, Pamela; Scott, Jason
2017-08-01
The importance of involving patients in reporting on safety is increasingly recognized. Whilst studies have identified barriers to clinician incident reporting, few have explored barriers and facilitators to patient reporting of safety experiences. This paper explores patient perspectives on providing feedback on safety experiences. Patients (n=28) were invited to take part in semi-structured interviews when given a survey about their experiences of safety following hospital discharge. Transcripts were thematically analysed using NVivo10. Patients were recruited from four hospitals in the UK. Three themes were identified as barriers and facilitators to patient involvement in providing feedback on their safety experiences. The first, cognitive-cultural, found that whilst safety was a priority for most, some felt the term was not relevant to them because safety was the "default" position, and/or because safety could not be disentangled from the overall experience of care. The structural-procedural theme indicated that reporting was facilitated when patients saw the process as straightforward, but that disinclination or perceived inability to provide feedback was a barrier. Finally, learning and change illustrated that perception of the impact of feedback could facilitate or inhibit reporting. When collecting patient feedback on experiences of safety, it is important to consider what may help or hinder this process, beyond the process alone. We present a staged model of prerequisite barriers and facilitators and hypothesize that each stage needs to be achieved for patients to provide feedback on safety experiences. Implications for collecting meaningful data on patients' safety experiences are considered. © 2016 The Authors. Health Expectations Published by John Wiley & Sons Ltd.
Occupational health and safety in the Moroccan construction sites: preliminary diagnosis
NASA Astrophysics Data System (ADS)
Tarik, Bakeli; Adil, Hafidi Alaoui
2018-05-01
Managing occupational health and safety in the Moroccan construction sector represents the first step toward project success. In fact, by avoiding accidents, all the related direct and indirect costs and delays can be prevented. That leads to important questions asked by any project manager: what factors are responsible for accidents, and how can they be avoided? Through this research, the aim is to work through these questions, to contribute to the understanding of occupational health and safety principles, to identify construction accidentology and risk management opportunities, and to approach the case of Moroccan construction sites through an accurate diagnosis. The intent is to make researchers, managers, stakeholders and decision makers aware of the criticality of the health and safety situation on construction sites, and to take the first step in a scientific research project on health and safety in the Moroccan construction sector. To this end, the paper reviews the related state of the art, notably construction-site accident causation, and focuses on Reason's `Swiss cheese' model and its use for diagnosing health and safety on Moroccan construction sites. The research ends with an estimate of the accident fatality rate in the Moroccan construction sector and a benchmarking against international rates. Finally, conclusions are presented about the necessity of implementing an Occupational Health and Safety Management System (OHSMS), which shall cover all risk levels and ensure, at the same time, that the necessary defenses against accidents are in place.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Tokuhiro, Akira; Potirniche, Gabriel; Cogliati, Joshua
2014-07-08
An experimental and computational study, consisting of modeling and simulation (M&S), of key thermal-mechanical issues affecting the design and safety of pebble-bed (PB) reactors was conducted. The objective was to broaden understanding and experimentally validate thermal-mechanical phenomena of nuclear-grade graphite, specifically spheres in frictional contact as anticipated in the bed under reactor-relevant pressures and temperatures. The contact generates graphite dust particulates that can subsequently be transported into the flowing gaseous coolant. Under postulated depressurization transients, and with the potential for leaked fission products to be adsorbed onto graphite 'dust', there is the potential for fission products to escape from the primary volume. This is a design safety concern. Furthermore, an earlier safety assessment identified the distinct possibility for the dispersed dust to combust in contact with air if sufficient conditions are met. Both of these phenomena were noted as important to design review and as carrying enough uncertainty to warrant study. The team designed and conducted two separate-effects tests to study and benchmark the potential dust-generation rate, as well as the conditions under which a dust explosion may occur in a standardized, instrumented explosion chamber.
Efficacy and Safety of Pomegranate Medicinal Products for Cancer
Vlachojannis, Christian
2015-01-01
Preclinical in vitro and in vivo studies demonstrate potent effects of pomegranate preparations in cancer cell lines and animal models with chemically induced cancers. We have carried out one systematic review of the effectiveness of pomegranate products in the treatment of cancer and another on their safety. The PubMed search provided 162 references for pomegranate and cancer and 122 references for pomegranate and safety/toxicity. We identified 4 clinical studies investigating 3 pomegranate products, of which one was inappropriate because of the low polyphenol content. The evidence of clinical effectiveness was poor because the quality of the studies was poor. Although there is no concern over safety with the doses used in the clinical studies, pomegranate preparations may be harmful by inducing synthetic drug metabolism through activation of liver enzymes. We have analysed various pomegranate products for their content of anthocyanins, punicalagin, and ellagic acid in order to compare them with the benchmark doses from published data. If the amount of coactive constituents is not declared, patients risk not benefiting from the putative pomegranate effects. Moreover, pomegranate end products are affected by many determinants. Their declaration should be incorporated into the regulatory guidance and controlled before pomegranate products enter the market. PMID:25815026
DOE Office of Scientific and Technical Information (OSTI.GOV)
Harms, Gary A.; Ford, John T.; Barber, Allison Delo
2010-11-01
Sandia National Laboratories (SNL) has conducted radiation effects testing for the Department of Energy (DOE) and other contractors supporting the DOE since the 1960s. Over this period, the research reactor facilities at Sandia have had a primary mission to provide appropriate nuclear radiation environments for radiation testing and qualification of electronic components and other devices. The current generation of reactors includes the Annular Core Research Reactor (ACRR), a water-moderated pool-type reactor fueled by elements constructed from UO2-BeO ceramic fuel pellets, and the Sandia Pulse Reactor III (SPR-III), a bare-metal fast burst reactor utilizing a uranium-molybdenum alloy fuel. The SPR-III is currently defueled. The SPR Facility (SPRF) has hosted a series of critical experiments. A purpose-built critical experiment was first operated at the SPRF in the late 1980s. This experiment, called the Space Nuclear Thermal Propulsion Critical Experiment (CX), was designed to explore the reactor physics of a nuclear thermal rocket motor. This experiment was fueled with highly-enriched uranium carbide fuel in annular water-moderated fuel elements. The experiment program was completed and the fuel for the experiment was moved off-site. A second critical experiment, the Burnup Credit Critical Experiment (BUCCX), was operated at Sandia in 2002. The critical assembly for this experiment was based on the assembly used in the CX, modified to accommodate low-enriched pin-type fuel in water moderator. This experiment was designed as a platform in which the reactivity effects of specific fission product poisons could be measured. Experiments were carried out on rhodium, an important fission product poison. The fuel and assembly hardware for the BUCCX remain at Sandia and are available for future experimentation. The critical experiment currently in operation at the SPRF is the Seven Percent Critical Experiment (7uPCX). This experiment is designed to provide benchmark reactor physics data to support validation of the reactor physics codes used to design commercial reactor fuel elements in an enrichment range above the current 5% enrichment cap. A first set of critical experiments in the 7uPCX has been completed. More experiments are planned in the 7uPCX series. The critical experiments at Sandia National Laboratories are currently funded by the US Department of Energy Nuclear Criticality Safety Program (NCSP). The NCSP has committed to maintain the critical experiment capability at Sandia and to support the development of a critical experiments training course at the facility. The training course is intended to provide hands-on experiment experience for the training of new and re-training of practicing Nuclear Criticality Safety Engineers. The current plans are for the development of the course to continue through the first part of fiscal year 2011, with the development culminating in the delivery of a prototype of the course in the latter part of the fiscal year. The course will be available in fiscal year 2012.
MPI, HPF or OpenMP: A Study with the NAS Benchmarks
NASA Technical Reports Server (NTRS)
Jin, Hao-Qiang; Frumkin, Michael; Hribar, Michelle; Waheed, Abdul; Yan, Jerry; Saini, Subhash (Technical Monitor)
1999-01-01
Porting applications to new high performance parallel and distributed platforms is a challenging task. Writing parallel code by hand is time consuming and costly, but the task can be simplified by high-level languages and could be further automated by parallelizing tools and compilers. The definition of the HPF (High Performance Fortran, based on the data parallel model) and OpenMP (based on the shared memory parallel model) standards has offered great opportunity in this respect. Both provide simple and clear interfaces to languages like FORTRAN and simplify many tedious tasks encountered in writing message passing programs. In our study, we implemented parallel versions of the NAS Benchmarks with HPF and OpenMP directives. Comparison of their performance with the MPI implementation and the pros and cons of the different approaches will be discussed, along with experience of using computer-aided tools to help parallelize these benchmarks. Based on the study, the potential of applying some of these techniques to realistic aerospace applications will be presented.
Benchmarking comparison and validation of MCNP photon interaction data
NASA Astrophysics Data System (ADS)
Colling, Bethany; Kodeli, I.; Lilley, S.; Packer, L. W.
2017-09-01
The objective of the research was to test available photoatomic data libraries for fusion-relevant applications, comparing against experimental and computational neutronics benchmarks. Photon flux and heating were compared using the photon interaction data libraries mcplib 04p, 05t, 84p and 12p. Suitable benchmark experiments (iron and water) were selected from the SINBAD database and analysed to compare experimental values with MCNP calculations using mcplib 04p, 84p and 12p. In both the computational and experimental comparisons, the majority of results with the 04p, 84p and 12p photon data libraries were within 1σ of the mean MCNP statistical uncertainty. Larger differences were observed when comparing computational results with the 05t test photon library. The Doppler broadening sampling bug in MCNP-5 is shown to be corrected for fusion-relevant problems through use of the 84p photon data library. The recommended libraries for fusion neutronics are 84p (or 04p) with MCNP6, and 84p if using MCNP-5.
How to Use Benchmark and Cross-section Studies to Improve Data Libraries and Models
NASA Astrophysics Data System (ADS)
Wagner, V.; Suchopár, M.; Vrzalová, J.; Chudoba, P.; Svoboda, O.; Tichý, P.; Krása, A.; Majerle, M.; Kugler, A.; Adam, J.; Baldin, A.; Furman, W.; Kadykov, M.; Solnyshkin, A.; Tsoupko-Sitnikov, S.; Tyutyunikov, S.; Vladimirovna, N.; Závorka, L.
2016-06-01
Improvements of Monte Carlo transport codes and cross-section libraries are very important steps towards the use of accelerator-driven transmutation systems. We have conducted many benchmark experiments with different set-ups consisting of lead, natural uranium and moderator irradiated by relativistic protons and deuterons within the framework of the collaboration “Energy and Transmutation of Radioactive Waste”. Unfortunately, knowledge of the total and partial cross-sections of important reactions is insufficient. For this reason, we have started extensive studies of different reaction cross-sections. We measure cross-sections of important neutron reactions by means of quasi-monoenergetic neutron sources based on the cyclotrons at the Nuclear Physics Institute in Řež and at The Svedberg Laboratory in Uppsala. Measurement of partial cross-sections of relativistic deuteron reactions was the second direction of our studies. The new results obtained in recent years will be shown. Possible use of these data for the improvement of libraries, models and benchmark studies will be discussed.
Openness to experience, work experience and patient safety.
Chang, Hao-Yuan; Friesner, Daniel; Lee, I-Chen; Chu, Tsung-Lan; Chen, Hui-Ling; Wu, Wan-Er; Teng, Ching-I
2016-11-01
The purpose of this study is to examine how the interaction between nurse openness and work experience is related to patient safety. No study has yet examined the interactions between these, and how openness and work experience jointly impact patient safety. This study adopts a cross-sectional design, using self-reported work experience, perceived time pressure and measures of patient safety, and was conducted in a major medical centre. The sample consisted of 421 full-time nurses from all available units in the centre. Proportionate random sampling was used. Patient safety was measured using the self-reported frequency of common adverse events. Openness was self-rated using items identified in the relevant literature. Nurse openness is positively related to the patient safety construct (B = 0.08, P = 0.03). Moreover, work experience reduces the relation between openness and patient safety (B = -0.12, P < 0.01). The relationship between openness, work experience and patient safety suggests a new means of improving patient care in a health system setting. Nurse managers may enhance patient safety by assessing nurse openness and assigning highly open nurses to duties that make maximum use of that trait. © 2016 John Wiley & Sons Ltd.
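A minimal sketch of the moderation model implied above, fit by ordinary least squares on simulated data. The data are synthetic; only the signs of the coefficients echo the reported findings, hypothetically.

```python
import numpy as np

# Hedged sketch of the moderation model:
# safety ~ b0 + b1*openness + b2*experience + b3*openness*experience.
# All data below are simulated, not the study's.
rng = np.random.default_rng(7)
n = 421
openness = rng.normal(0, 1, n)
experience = rng.normal(0, 1, n)
safety = 0.08 * openness - 0.12 * openness * experience + rng.normal(0, 1, n)

X = np.column_stack([np.ones(n), openness, experience, openness * experience])
beta, *_ = np.linalg.lstsq(X, safety, rcond=None)
print(dict(zip(["b0", "openness", "experience", "interaction"], beta.round(3))))
```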
Scott, Jason; Waring, Justin; Heavey, Emily; Dawson, Pamela
2014-01-01
Background It is increasingly recognised that patients can play a role in reporting safety incidents. Studies have tended to focus on patients within hospital settings, and on the reporting of patient safety incidents as defined within a medical model of safety. This study aims to determine the feasibility of collecting and using patient experiences of safety as a proactive approach to identifying latent conditions of safety as patients undergo organisational care transfers. Methods and analysis The study comprises three components: (1) patients’ experiences of safety relating to a care transfer, (2) patients’ receptiveness to reporting experiences of safety, (3) quality improvement using patient experiences of safety. (1) A safety survey and evaluation form will be distributed to patients discharged from 15 wards across four clinical areas (cardiac, care of older people, orthopaedics and stroke) over 1 year. Healthcare professionals involved in the care transfer will be provided with a regular summary of patient feedback. (2) Patients (n=36) who return an evaluation form will be sampled representatively based on the four clinical areas and interviewed about their experiences of healthcare and safety and completing the survey. (3) Healthcare professionals (n=75) will be invited to participate in semistructured interviews and focus groups to discuss their experiences with and perceptions of receiving and using patient feedback. Data analysis will explore the relationship between patient experiences of safety and other indicators and measures of quality and safety. Interview and focus group data will be thematically analysed and triangulated with all other data sources using a convergence coding matrix. Ethics and dissemination The study has been granted National Health Service (NHS) Research Ethics Committee approval. Patient experiences of safety will be disseminated to healthcare teams for the purpose of organisational development and quality improvement. Results will be disseminated to study participants as well as through peer-reviewed outputs. PMID:24833698
An Investigation of Health and Safety Measures in a Hydroelectric Power Plant.
Acakpovi, Amevi; Dzamikumah, Lucky
2016-12-01
Occupational risk management is known as a catalyst in generating superior returns for all stakeholders on a sustainable basis. A number of companies in Ghana have implemented health and safety measures adopted from international companies to ensure the safety of their employees. However, great threats to employees' safety remain in these companies. The purpose of this paper is to investigate the level of compliance with Occupational Health and Safety management systems and standards set by international and local legislation in power-producing companies in Ghana. The study was conducted by administering questionnaires and in-depth interviews as measuring instruments. A random sampling technique was applied to 60 respondents; only 50 returned their responses. The questionnaire was developed from a literature review and contained questions and items relevant to the initial research problem. A factor analysis was also carried out to investigate the influence of selected variables on safety in general. Results showed that the significant factors that influence the safety of employees at the hydroelectric power plant stations are: lack of training and supervision, non-observance of safe work procedures, lack of management commitment, and lack of periodic checks on machine operations. The study pointed out safety loopholes and therefore helped improve the health and safety measures for employees in the selected company by providing effective recommendations. The implementation of the proposed recommendations would lead to the prevention of work-related injuries and illnesses of employees, as well as property damage and incidents, in hydroelectric power plants. The recommendations may equally be considered a benchmark for aligning the Safety and Health Management System with international standards.
Initial development of a practical safety audit tool to assess fleet safety management practices.
Mitchell, Rebecca; Friswell, Rena; Mooren, Lori
2012-07-01
Work-related vehicle crashes are a common cause of occupational injury. Yet, there are few studies that investigate management practices used for light vehicle fleets (i.e. vehicles less than 4.5 tonnes). One of the impediments to obtaining and sharing information on effective fleet safety management is the lack of an evidence-based, standardised measurement tool. This article describes the initial development of an audit tool to assess fleet safety management practices in light vehicle fleets. The audit tool was developed by triangulating information from a review of the literature on fleet safety management practices and from semi-structured interviews with 15 fleet managers and 21 fleet drivers. A preliminary useability assessment was conducted with 5 organisations. The audit tool assesses the management of fleet safety against five core categories: (1) management, systems and processes; (2) monitoring and assessment; (3) employee recruitment, training and education; (4) vehicle technology, selection and maintenance; and (5) vehicle journeys. Each of these core categories has between 1 and 3 sub-categories. Organisations are rated at one of 4 levels on each sub-category. The fleet safety management audit tool is designed to identify the extent to which fleet safety is managed in an organisation against best practice. It is intended that the audit tool be used to conduct audits within an organisation to provide an indicator of progress in managing fleet safety and to consistently benchmark performance against other organisations. Application of the tool by fleet safety researchers is now needed to inform its further development and refinement and to permit psychometric evaluation. Copyright © 2012 Elsevier Ltd. All rights reserved.
Frontal Polymerization in Microgravity
NASA Technical Reports Server (NTRS)
Pojman, John A.
1999-01-01
Frontal polymerization systems, with their inherent large thermal and compositional gradients, are greatly affected by buoyancy-driven convection. Sounding rocket experiments allowed the preparation of benchmark materials and demonstrated that methods to suppress the Rayleigh-Taylor instability in ground-based research did not significantly affect the molecular weight of the polymer. Experiments under weightlessness show clearly that bubbles produced during the reaction interact very differently than under 1 g.
ERIC Educational Resources Information Center
Brandon, Paul R.; Harrison, George M.; Lawton, Brian E.
2013-01-01
When evaluators plan site-randomized experiments, they must conduct the appropriate statistical power analyses. These analyses are most likely to be valid when they are based on data from the jurisdictions in which the studies are to be conducted. In this method note, we provide software code, in the form of a SAS macro, for producing statistical…
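The method note above refers to a SAS macro; as a language-neutral illustration of the underlying computation, here is a minimal Python sketch of the standard normal-approximation power formula for a two-level site-randomized design with equal allocation. All parameter values are hypothetical.

```python
# A minimal sketch of power for a site-randomized trial under a standard
# two-level model; parameters (sites, site size, ICC, effect) are illustrative.
from math import sqrt
from scipy.stats import norm

def site_randomized_power(n_sites, n_per_site, icc, effect_size, alpha=0.05):
    """Approximate power for a standardized effect, sites split 1:1 across arms."""
    # Variance of the estimated treatment effect on the standardized scale.
    var_effect = 4.0 * (icc + (1.0 - icc) / n_per_site) / n_sites
    z_crit = norm.ppf(1.0 - alpha / 2.0)
    return norm.cdf(effect_size / sqrt(var_effect) - z_crit)

print(f"power = {site_randomized_power(40, 25, 0.10, 0.25):.2f}")
```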
Benchmarking Gas Path Diagnostic Methods: A Public Approach
NASA Technical Reports Server (NTRS)
Simon, Donald L.; Bird, Jeff; Davison, Craig; Volponi, Al; Iverson, R. Eugene
2008-01-01
Recent technology reviews have identified the need for objective assessments of engine health management (EHM) technology. The need is two-fold: technology developers require relevant data and problems to design and validate new algorithms and techniques while engine system integrators and operators need practical tools to direct development and then evaluate the effectiveness of proposed solutions. This paper presents a publicly available gas path diagnostic benchmark problem that has been developed by the Propulsion and Power Systems Panel of The Technical Cooperation Program (TTCP) to help address these needs. The problem is coded in MATLAB (The MathWorks, Inc.) and coupled with a non-linear turbofan engine simulation to produce "snap-shot" measurements, with relevant noise levels, as if collected from a fleet of engines over their lifetime of use. Each engine within the fleet will experience unique operating and deterioration profiles, and may encounter randomly occurring relevant gas path faults including sensor, actuator and component faults. The challenge to the EHM community is to develop gas path diagnostic algorithms to reliably perform fault detection and isolation. An example solution to the benchmark problem is provided along with associated evaluation metrics. A plan is presented to disseminate this benchmark problem to the engine health management technical community and invite technology solutions.
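As a toy illustration of what a benchmark entry must do, the sketch below flags a fault when any standardized sensor residual against a healthy-fleet baseline exceeds a threshold; the data are simulated, and this simple limit check stands in for the far more capable detection and isolation algorithms the benchmark is meant to exercise.

```python
# A minimal residual-threshold fault detector on simulated "snap-shot" data.
import numpy as np

rng = np.random.default_rng(1)
fleet = rng.normal(0.0, 1.0, size=(100, 8))        # healthy snapshots, 8 sensors
mu, sigma = fleet.mean(axis=0), fleet.std(axis=0, ddof=1)

snapshot = rng.normal(0.0, 1.0, 8)                 # new engine snapshot
snapshot[3] += 4.0                                 # inject a sensor fault

z = (snapshot - mu) / sigma                        # standardized residuals
faulty = np.abs(z) > 3.0
print("fault detected:", bool(faulty.any()),
      "| isolated sensor(s):", np.flatnonzero(faulty))
```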
Lance, Blake W.; Smith, Barton L.
2016-06-23
Transient convection has been investigated experimentally for the purpose of providing Computational Fluid Dynamics (CFD) validation benchmark data. A specialized facility for validation benchmark experiments, the Rotatable Buoyancy Tunnel, was used to acquire thermal and velocity measurements of flow over a smooth, vertical heated plate. The initial condition was forced convection downward, with subsequent transition to mixed convection, ending with natural convection upward after a flow reversal. Data acquisition through the transient was repeated for ensemble-averaged results. With simple flow geometry, validation data were acquired at the benchmark level. All boundary conditions (BCs) were measured and their uncertainties quantified. Temperature profiles on all four walls and the inlet were measured, as well as the as-built test section geometry. Inlet velocity profiles and turbulence levels were quantified using Particle Image Velocimetry. System Response Quantities (SRQs) were measured for comparison with CFD outputs and include velocity profiles, wall heat flux, and wall shear stress. Extra effort was invested in documenting and preserving the validation data: details about the experimental facility, instrumentation, experimental procedure, materials, BCs, and SRQs are made available through this paper, and the BCs and SRQs are available for download.
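A minimal sketch of the ensemble-averaging step described above, assuming the repeated runs already share a trigger-synchronized time base; the signals are synthetic stand-ins for the measured transients.

```python
# Ensemble-averaging repeated transient measurements on a common time base.
import numpy as np

t = np.linspace(0.0, 10.0, 501)                    # s, shared time base
rng = np.random.default_rng(2)
runs = [1.0 - np.exp(-t / 3.0) + 0.05 * rng.standard_normal(t.size)
        for _ in range(20)]                        # 20 repeated transients

ensemble = np.vstack(runs)
mean = ensemble.mean(axis=0)                       # ensemble average
sem = ensemble.std(axis=0, ddof=1) / np.sqrt(len(runs))  # standard error
print(f"value at t = 10 s: {mean[-1]:.3f} +/- {sem[-1]:.3f}")
```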
Evaluation of the Pool Critical Assembly Benchmark with Explicitly-Modeled Geometry using MCNP6
Kulesza, Joel A.; Martz, Roger Lee
2017-03-01
Despite being one of the most widely used benchmarks for qualifying light water reactor (LWR) radiation transport methods and data, no benchmark calculation of the Oak Ridge National Laboratory (ORNL) Pool Critical Assembly (PCA) pressure vessel wall benchmark facility (PVWBF) using MCNP6 with explicitly modeled core geometry exists. As such, this paper provides results for such an analysis. First, a criticality calculation is used to construct the fixed source term. Next, ADVANTG-generated variance reduction parameters are used within the final MCNP6 fixed source calculations. These calculations provide unadjusted dosimetry results using three sets of dosimetry reaction cross sections of varying ages (those packaged with MCNP6, from the IRDF-2002 multi-group library, and from the ACE-formatted IRDFF v1.05 library). These results are then compared to two different sets of measured reaction rates. The comparison agrees in an overall sense within 2%, and on a specific reaction- and dosimetry-location basis within 5%. Except for the neptunium dosimetry, the individual foil raw calculation-to-experiment comparisons usually agree within 10% but are typically greater than unity. Finally, in the course of developing these calculations, geometry that has previously not been completely specified is provided herein for the convenience of future analysts.
New evaluation of thermal neutron scattering libraries for light and heavy water
NASA Astrophysics Data System (ADS)
Marquez Damian, Jose Ignacio; Granada, Jose Rolando; Cantargi, Florencia; Roubtsov, Danila
2017-09-01
To improve the design and safety of thermal nuclear reactors, and to verify criticality safety conditions in systems with significant amounts of fissile material and water, it is necessary to perform high-precision neutron transport calculations and to estimate the uncertainties of the results. These calculations are based on neutron interaction data distributed in evaluated nuclear data libraries. To improve the evaluations of thermal scattering sub-libraries, we developed a set of thermal neutron scattering cross sections (scattering kernels) for hydrogen bound in light water, and for deuterium and oxygen bound in heavy water, in the ENDF-6 format from room temperature up to the critical temperatures of the molecular liquids. The new evaluations were generated and are processable with NJOY99, with NJOY-2012 with minor modifications (updates), and with the new version, NJOY-2016. The new TSL libraries are based on molecular dynamics simulations with GROMACS and on recent experimental data, and they improve the calculation of single neutron scattering quantities. In this work, we discuss the importance of taking self-diffusion in liquids into account to accurately describe neutron scattering at low neutron energies (the quasi-elastic peak problem). To improve the modeling of heavy water, it is important to take into account temperature-dependent static structure factors and to apply the Sköld approximation to the coherent inelastic components of the scattering matrix. The use of the new set of scattering matrices and cross sections improves the calculation of thermal critical systems moderated and/or reflected with light/heavy water obtained from the International Criticality Safety Benchmark Evaluation Project (ICSBEP) handbook. For example, the use of the new thermal scattering library for heavy water, combined with the ROSFOND-2010 evaluation of the cross sections for deuterium, improves the C/E ratio in 48 out of 65 international benchmark cases calculated with the Monte Carlo code MCNP5, in comparison with the existing library based on the ENDF/B-VII.0 evaluation.
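The improvement quoted above is bookkeeping over C/E (calculation-to-experiment) ratios across benchmark cases. The sketch below shows that bookkeeping with synthetic k-eff values; it is not the authors' analysis.

```python
# Counting benchmark cases whose C/E ratio moves closer to unity with a new
# library. The k-eff values are synthetic stand-ins for MCNP5 results.
import numpy as np

rng = np.random.default_rng(3)
k_exp = np.ones(65)                                # normalized benchmark k-eff
k_old = 1.0 + rng.normal(0.0, 400e-5, 65)          # old library, ~400 pcm spread
k_new = 1.0 + rng.normal(0.0, 250e-5, 65)          # new library, tighter spread

improved = np.abs(k_new / k_exp - 1.0) < np.abs(k_old / k_exp - 1.0)
print(f"improved C/E in {improved.sum()} of {improved.size} cases")
```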
Maier, Andrew; Vincent, Melissa J; Parker, Ann; Gadagbui, Bernard K; Jayjock, Michael
2015-12-01
Asthma is a complex syndrome with significant consequences for those affected. The number of individuals affected is growing, although the reasons for the increase are uncertain. Ensuring the effective management of potential exposures follows from substantial evidence that exposure to some chemicals can increase the likelihood of asthma responses. We have developed a safety assessment approach tailored to the screening of asthma risks from residential consumer product ingredients as a proactive risk management tool. Several key features of the proposed approach advance the assessment resources often used for asthma issues. First, a quantitative health benchmark for asthma or related endpoints (irritation and sensitization) is provided that extends qualitative hazard classification methods. Second, a parallel structure is employed to include dose-response methods for asthma endpoints and methods for scenario specific exposure estimation. The two parallel tracks are integrated in a risk characterization step. Third, a tiered assessment structure is provided to accommodate different amounts of data for both the dose-response assessment (i.e., use of existing benchmarks, hazard banding, or the threshold of toxicological concern) and exposure estimation (i.e., use of empirical data, model estimates, or exposure categories). Tools building from traditional methods and resources have been adapted to address specific issues pertinent to asthma toxicology (e.g., mode-of-action and dose-response features) and the nature of residential consumer product use scenarios (e.g., product use patterns and exposure durations). A case study for acetic acid as used in various sentinel products and residential cleaning scenarios was developed to test the safety assessment methodology. In particular, the results were used to refine and verify relationships among tiered approaches such that each lower data tier in the approach provides a similar or greater margin of safety for a given scenario. Copyright © 2015 The Authors. Published by Elsevier Inc. All rights reserved.
CERN Computing in Commercial Clouds
NASA Astrophysics Data System (ADS)
Cordeiro, C.; Field, L.; Garrido Bear, B.; Giordano, D.; Jones, B.; Keeble, O.; Manzi, A.; Martelli, E.; McCance, G.; Moreno-García, D.; Traylen, S.
2017-10-01
By the end of 2016, more than 10 million core-hours of computing resources had been delivered by several commercial cloud providers to the four LHC experiments to run their production workloads, from simulation to full-chain processing. In this paper we describe the experience gained at CERN in procuring and exploiting commercial cloud resources for the computing needs of the LHC experiments. The mechanisms used for provisioning, monitoring, accounting, alarming and benchmarking are discussed, as well as the involvement of the LHC collaborations in terms of managing the workflows of the experiments within a multicloud environment.
The Safety Attitudes Questionnaire as a Tool for Benchmarking Safety Culture in the NICU
Profit, Jochen; Etchegaray, Jason; Petersen, Laura A; Sexton, J Bryan; Hysong, Sylvia J; Mei, Minghua; Thomas, Eric J
2014-01-01
Background: NICU safety culture, as measured by the Safety Attitudes Questionnaire (SAQ), varies widely. Associations with clinical outcomes in the adult ICU setting make the SAQ an attractive tool for comparing clinical performance between hospitals. Little information is available on the use of the SAQ for this purpose in the NICU setting. Objectives: To determine whether the dimensions of safety culture measured by the SAQ give consistent results when used as a NICU performance measure. Methods: Cross-sectional survey of caregivers in twelve NICUs, using the six scales of the SAQ: teamwork climate, safety climate, job satisfaction, stress recognition, perceptions of management, and working conditions. NICUs were ranked by quantifying their contribution to overall risk-adjusted variation across the scales. Spearman rank correlation coefficients were used to test for consistency in scale performance. We then examined whether performance in the top four NICUs in one scale predicted top-four performance in others. Results: There were 547 respondents in twelve NICUs. Of fifteen NICU-level correlations in performance ranking, two were greater than 0.7, seven were between 0.4 and 0.69, and the six remaining were less than 0.4. We found a trend towards significance in comparing the distribution of top-four performance across domains with a binomial distribution (p = .051), indicating generally consistent performance across dimensions of safety culture. Conclusion: A culture of safety permeates many aspects of patient care and organizational functioning. The SAQ may be a useful tool for comparative performance assessments among NICUs. PMID:22337935
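A sketch of the consistency test described in the methods: Spearman rank correlations over all pairs of scale-specific NICU rankings. The rankings below are randomly generated placeholders.

```python
# Pairwise Spearman rank correlations between hypothetical NICU rankings
# on the six SAQ scales (15 pairs for 6 scales).
from itertools import combinations
import numpy as np
from scipy.stats import spearmanr

rng = np.random.default_rng(4)
scales = ["teamwork", "safety", "job satisfaction",
          "stress recognition", "management", "working conditions"]
ranks = {s: rng.permutation(np.arange(1, 13)) for s in scales}  # 12 NICUs

for a, b in combinations(scales, 2):
    rho, p = spearmanr(ranks[a], ranks[b])
    print(f"{a:18s} vs {b:18s} rho = {rho:+.2f} (p = {p:.2f})")
```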
Student Learning: Education's Field of Dreams.
ERIC Educational Resources Information Center
Blackwell, Peggy L.
2003-01-01
Discusses seven research-based benchmarks providing a framework for the student-learning-focused reform of teacher education: knowledge and understanding based on previous experience, usable content knowledge, transfer of learning/the learning context, strategic thinking, motivation and affect, development and individual differences, and standards…
The development and psychometric evaluation of a safety climate measure for primary care.
de Wet, C; Spence, W; Mash, R; Johnson, P; Bowie, P
2010-12-01
Building a safety culture is an important part of improving patient care, and measuring perceptions of safety climate among healthcare teams and organisations is a key element of this process. Existing measurement instruments are largely developed for secondary care settings in North America, and many lack adequate psychometric testing. Our aim was to develop and test an instrument to measure perceptions of safety climate among primary care teams in the National Health Service in Scotland. Questionnaire development was facilitated through a steering group, a literature review, semi-structured interviews with primary care team members, a modified Delphi process, and completion of a content validity index by experts. A cross-sectional postal survey using the questionnaire was undertaken in a random sample of west of Scotland general practices to facilitate psychometric evaluation. Statistical analysis included exploratory and confirmatory factor analysis and calculation of Cronbach and Raykov reliability coefficients. Of the 667 primary care team members based in 49 general practices surveyed, 563 returned completed questionnaires (84.4%). Psychometric evaluation resulted in the development of a 30-item questionnaire with five safety climate factors: leadership, teamwork, communication, workload and safety systems. Retained items have strong factor loadings to only one factor. Reliability coefficients were satisfactory (α = 0.94 and ρ = 0.93). This study is the first stage in the development of an appropriately valid and reliable safety climate measure for primary care. Measuring safety climate perceptions has the potential to help primary care organisations and teams focus attention on safety-related issues and target improvement through educational interventions. Further research is required to explore acceptability and feasibility issues for primary care teams and the potential for organisational benchmarking.
Association between poor sleep, fatigue, and safety outcomes in Emergency Medical Services providers
Patterson, P. Daniel; Weaver, Matthew D.; Frank, Rachel C.; Warner, Charles W.; Martin-Gill, Christian; Guyette, Francis X.; Fairbanks, Rollin J.; Hubble, Michael W.; Songer, Thomas J.; Callaway, Clifton W.; Kelsey, Sheryl F.; Hostler, David
2011-01-01
Objective: To determine the association between poor sleep quality, fatigue, and self-reported safety outcomes among Emergency Medical Services (EMS) workers. Methods: We used convenience sampling of EMS agencies and a cross-sectional survey design. We administered the 19-item Pittsburgh Sleep Quality Index (PSQI), the 11-item Chalder Fatigue Questionnaire (CFQ), and the 44-item EMS Safety Inventory (EMS-SI) to measure sleep quality, fatigue, and safety outcomes, respectively. We used a consensus process to develop the EMS-SI, which was designed to capture three composite measurements of EMS worker injury, medical errors and adverse events (AEs), and safety-compromising behaviors. We used hierarchical logistic regression to test the association between poor sleep quality, fatigue, and the three composite measures of EMS worker safety outcomes. Results: We received 547 surveys from 30 EMS agencies (a 35.6% mean agency response rate). The mean PSQI score exceeded the benchmark for poor sleep (6.9, 95% CI 6.6, 7.2). More than half of respondents were classified as fatigued (55%, 95% CI 50.7, 59.3). Eighteen percent of respondents reported an injury (17.8%, 95% CI 13.5, 22.1), forty-one percent a medical error or AE (41.1%, 95% CI 36.8, 45.4), and 89% (95% CI 87, 92) safety-compromising behaviors. After controlling for confounding, we identified 1.9 greater odds of injury (95% CI 1.1, 3.3), 2.2 greater odds of medical error or AE (95% CI 1.4, 3.3), and 3.6 greater odds of safety-compromising behavior (95% CI 1.5, 8.3) among fatigued respondents versus non-fatigued respondents. Conclusions: In this sample of EMS workers, poor sleep quality and fatigue are common. We provide preliminary evidence of an association between sleep quality, fatigue, and safety outcomes. PMID:22023164
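A sketch of the kind of adjusted odds ratio reported above, fitted on simulated data with an ordinary logistic regression; the confounder and coefficients are hypothetical, and the study's hierarchical (agency-level) structure is omitted for brevity.

```python
# Estimating an adjusted odds ratio for fatigue vs. an injury outcome.
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(5)
n = 547
fatigued = rng.integers(0, 2, n)                   # exposure of interest
years_ems = rng.uniform(1, 30, n)                  # hypothetical confounder
logit = -2.0 + 0.8 * fatigued + 0.01 * years_ems   # assumed true model
injury = (rng.random(n) < 1.0 / (1.0 + np.exp(-logit))).astype(float)

X = sm.add_constant(np.column_stack([fatigued, years_ems]))
fit = sm.Logit(injury, X).fit(disp=False)
print(f"adjusted OR for fatigue: {np.exp(fit.params[1]):.2f}")
```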
[Operating Room Nurses' Experiences of Securing for Patient Safety].
Park, Kwang Ok; Kim, Jong Kyung; Kim, Myoung Sook
2015-10-01
This study was done to evaluate the experience of securing patient safety in hospital operating rooms. Experiential data were collected from 15 operating room nurses through in-depth interviews. The main question was "Could you describe your experience with patient safety in the operating room?". Qualitative data from the field and transcribed notes were analyzed using Strauss and Corbin's grounded theory methodology. The core category of experience with patient safety in the operating room was 'trying to maintain principles of patient safety during high-risk surgical procedures'. The participants used two interactional strategies: 'attempt continuous improvement', 'immersion in operation with sharing issues of patient safety'. The results indicate that the important factors for ensuring the safety of patients in the operating room are manpower, education, and a system for patient safety. Successful and safe surgery requires communication, teamwork and recognition of the importance of patient safety by the surgical team.
Podar, Mircea; Shakya, Migun; D'Amore, Rosalinda; ...
2016-01-14
In the last 5 years, the rapid pace of innovations and improvements in sequencing technologies has completely changed the landscape of metagenomic and metagenetic experiments. It is therefore critical to benchmark the various methodologies for interrogating the composition of microbial communities so that we can assess their strengths and limitations. The most common phylogenetic marker for microbial community diversity studies is the 16S ribosomal RNA gene, and in the last 10 years the field has moved from sequencing a small number of amplicons and samples to more complex studies in which thousands of samples and multiple different gene regions are interrogated.
A Benchmark Problem for Development of Autonomous Structural Modal Identification
NASA Technical Reports Server (NTRS)
Pappa, Richard S.; Woodard, Stanley E.; Juang, Jer-Nan
1996-01-01
This paper summarizes modal identification results obtained using an autonomous version of the Eigensystem Realization Algorithm on a dynamically complex, laboratory structure. The benchmark problem uses 48 of 768 free-decay responses measured in a complete modal survey test. The true modal parameters of the structure are well known from two previous, independent investigations. Without user involvement, the autonomous data analysis identified 24 to 33 structural modes with good to excellent accuracy in 62 seconds of CPU time (on a DEC Alpha 4000 computer). The modal identification technique described in the paper is the baseline algorithm for NASA's Autonomous Dynamics Determination (ADD) experiment scheduled to fly on International Space Station assembly flights in 1997-1999.
Benchmarking and performance analysis of the CM-2. [SIMD computer
NASA Technical Reports Server (NTRS)
Myers, David W.; Adams, George B., II
1988-01-01
A suite of benchmarking routines testing communication, basic arithmetic operations, and selected kernel algorithms written in LISP and PARIS was developed for the CM-2. Experiment runs are automated via a software framework that sequences individual tests, allowing for unattended overnight operation. Multiple measurements are made and treated statistically to generate well-characterized results from the noisy values given by cm:time. The results obtained provide a comparison with similar, but less extensive, testing done on a CM-1. Tests were chosen to aid the algorithmist in constructing fast, efficient, and correct code on the CM-2, as well as gain insight into what performance criteria are needed when evaluating parallel processing machines.
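A minimal sketch of the measurement pattern described above, in Python rather than Lisp/Paris: repeat each timed kernel many times and summarize the samples statistically so that noisy timer readings yield well-characterized results.

```python
# Repeated timings summarized statistically; the kernel is a placeholder.
import statistics
import time

def benchmark(kernel, repeats=30):
    samples = []
    for _ in range(repeats):
        start = time.perf_counter()
        kernel()
        samples.append(time.perf_counter() - start)
    return statistics.median(samples), statistics.stdev(samples)

med, sd = benchmark(lambda: sum(i * i for i in range(100_000)))
print(f"median {med * 1e3:.2f} ms, stdev {sd * 1e3:.2f} ms over 30 runs")
```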
Little Boy replication: justification and construction
DOE Office of Scientific and Technical Information (OSTI.GOV)
Malenfant, R.E.
A reconstruction of the Little Boy weapon allowed experiments to evaluate yield, leakage measurements for comparison with calculations, and phenomenological measurements to evaluate various in-situ dosimeters. The reconstructed weapon was operated at sustained delayed critical at the Los Alamos Critical Assembly Facility. The present experiments provide a wealth of information to benchmark calculations and demonstrate that the 1965 measurements on the Ichiban assembly (a spherical mockup of Little Boy) were in error.
Benchmarking the minimum Electron Beam (eBeam) dose required for the sterilization of space foods
NASA Astrophysics Data System (ADS)
Bhatia, Sohini S.; Wall, Kayley R.; Kerth, Chris R.; Pillai, Suresh D.
2018-02-01
As manned space missions extend in length, the safety, nutrition, acceptability, and shelf life of space foods are of paramount importance to NASA. Since food and mealtimes play a key role in reducing the stress and boredom of prolonged missions, the quality of food in terms of appearance, flavor, texture, and aroma can have significant psychological ramifications for astronaut performance. The FDA, which oversees space foods, currently requires a minimum dose of 44 kGy for irradiated space foods. The underlying hypothesis was that commercial sterility of space foods could be achieved at a significantly lower dose, and that this lowered dose would positively affect the shelf life of the product. Electron beam processed beef fajitas were used as an example NASA space food to benchmark the minimum eBeam dose required for sterility. A 15 kGy dose achieved approximately a 10 log reduction in Shiga-toxin-producing Escherichia coli bacteria and a 5 log reduction in Clostridium sporogenes spores. Furthermore, accelerated shelf life testing (ASLT) was conducted to determine sensory and quality characteristics under various conditions. Using multidimensional gas chromatography-olfactometry-mass spectrometry (MDGC-O-MS), numerous volatiles were shown to depend on the dose applied to the product, and concentrations of off-flavor aroma compounds such as dimethyl sulfide were decreased at the reduced 15 kGy dose. The results suggest that the combination of conventional cooking and eBeam processing (15 kGy) can achieve the safety and shelf-life objectives needed for long-duration space foods.
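A back-of-the-envelope sketch of the dose arithmetic above: if inactivation is log-linear in dose (an assumption made here for illustration), the reported reductions at 15 kGy imply approximate D10 values, from which reductions at other doses can be extrapolated.

```python
# D10 = dose per log10 of inactivation, inferred from the reported reductions.
def log_reduction(dose_kgy, d10_kgy):
    return dose_kgy / d10_kgy

d10_stec = 15.0 / 10.0      # ~10 logs at 15 kGy -> D10 ~ 1.5 kGy
d10_spores = 15.0 / 5.0     # ~5 logs at 15 kGy  -> D10 ~ 3.0 kGy
for dose in (15.0, 44.0):
    print(f"{dose:4.0f} kGy: STEC {log_reduction(dose, d10_stec):.0f} logs, "
          f"C. sporogenes {log_reduction(dose, d10_spores):.0f} logs")
```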
Race, gender, and risk perceptions of the legal consequences of drinking and driving.
Sloan, Frank A; Chepke, Lindsey M; Davis, Dontrell V
2013-06-01
This study investigated whether subjective beliefs about the consequences of driving while intoxicated (DWI) differ by race/gender. Beliefs affect driving behaviors and views of police/judicial fairness. The researchers compared risk perceptions of DWI using a survey of drinkers in eight cities in four states with actual arrest and conviction rates and fines from court data in the same cities. With state arrest data as a benchmark, Black males were overly pessimistic about being stopped, whether or not actual drinking occurred, and attributed higher jail penalties to DWI conviction. That Black males overestimated jail sentences incurred by the general population suggests that they did not attribute higher jail penalties to racial bias. Arrest data did not reveal disparities in judicial outcomes following DWI arrest. Blacks' subjective beliefs about DWI consequences may reflect social experiences, which are not jurisdiction- or crime-specific; this is a challenge to policymakers aiming to deter DWI by changing statutes and enforcement. If perception of bias exists despite no actual bias, a change in enforcement policy would not be effective, but a public relations campaign would be helpful in realigning beliefs. Copyright © 2013 National Safety Council and Elsevier Ltd. All rights reserved.
Human resource development for nuclear generation - from the perspective of a utility company
NASA Astrophysics Data System (ADS)
Kahar, Wan Shakirah Wan Abdul; Mostafa, Nor Azlan; Salim, Mohd Faiz
2017-01-01
Malaysia is currently in the planning phase of its nuclear power program, with the first unit targeted to be operational in 2030. Training of nuclear power plant (NPP) staff is usually long and rigorous due to the complexity and safety aspects of nuclear power. As the sole electricity utility in the country, Tenaga Nasional Berhad (TNB) must therefore prepare early in developing its human resources and nuclear expertise as a potential NPP owner-operator. A utility also has to be prudent in managing its workforce efficiently and effectively, while ensuring that adequate preparations are made to acquire the necessary nuclear knowledge with sufficient training lead time. There are several approaches to training that can be taken by a utility company with no experience in nuclear power. These include conducting feasibility studies and benchmarking exercises, preparing long-term human resource development plans, increasing exposure to nuclear power technology for both top management and general staff, and employing the assistance of relevant agencies locally and abroad. This paper discusses the activities carried out and the steps taken by TNB in its human resource development for Malaysia's nuclear power program.
Amponsah-Tawiah, Kwesi; Jain, Aditya; Leka, Stavroula; Hollis, David; Cox, Tom
2013-06-01
In addition to hazardous conditions that are prevalent in mines, there are various physical and psychosocial risk factors that can affect mine workers' safety and health. Without due diligence to mine safety, these risk factors can affect workers' safety experience, in terms of near misses, disabling injuries and accidents experienced or witnessed by workers. This study sets out to examine the effects of physical and psychosocial risk factors on workers' safety experience in a sample of Ghanaian miners. 307 participants from five mining companies responded to a cross sectional survey examining physical and psychosocial hazards and their implications for employees' safety experience. Zero-inflated Poisson regression models indicated that mining conditions, equipment, ambient conditions, support and security, and work demands and control are significant predictors of near misses, disabling injuries, and accidents experienced or witnessed by workers. The type of mine had important implications for workers' safety experience. Copyright © 2013 Elsevier Ltd and National Safety Council. All rights reserved.
Guida, Hilka Flavia Saldanha; Brito, Jussara; Alvarez, Denise
2013-11-01
This article presents the labor management changes, and their implications for occupational health and safety, that occurred after two thermoelectric plants were acquired by a government-owned, joint-stock energy corporation with private investors. The changes led part of these workers to question their own professional abilities, as previously experienced workers were suddenly considered unqualified under the new organizational model and restructuring implemented in their units. Lack of professional recognition in the workplace was seen to lead to negative health and safety consequences for workers, with numerous cases of psychic anguish, emotional disorders, musculoskeletal problems, gastrointestinal disorders, etc. It was also seen that it is now possible to introduce a series of measures that can contribute to improving working conditions and, consequently, the lives of the workers. The theoretical benchmark used was Ergology, together with aspects of the Psychodynamics of Work and the Ergonomics of the Activity. The methodology included a bibliographical survey of the theme, document analysis, semi-structured interviews, systematic observation of activities, and validation of the results with the research subjects.
Rethinking key–value store for parallel I/O optimization
DOE Office of Scientific and Technical Information (OSTI.GOV)
Kougkas, Anthony; Eslami, Hassan; Sun, Xian-He
2015-01-26
Key-value stores are widely used as the storage system for large-scale internet services and cloud storage systems. However, they are rarely used in HPC systems, where parallel file systems are the dominant storage solution. In this study, we examine the architectural differences and performance characteristics of parallel file systems and key-value stores. We propose using key-value stores to optimize overall input/output (I/O) performance, especially for workloads that parallel file systems cannot handle well, such as cases with intense data synchronization or heavy metadata operations. We conducted experiments with several synthetic benchmarks, an I/O benchmark, and a real application. We modeled the performance of these two systems using data collected from our experiments, and we provide a predictive method to identify which system offers better I/O performance given a specific workload. The results show that we can optimize I/O performance in HPC systems by utilizing key-value stores.
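As a toy illustration of the trade-off's flavor, using standard-library stand-ins only: many small records written as individual files versus rows batched into a SQLite key-value table. Neither is a real parallel file system nor a production key-value store; the point is only the qualitative cost of many small, metadata-heavy operations versus batched key-value inserts.

```python
# Toy microbenchmark: per-file small writes vs. batched key-value inserts.
import os, sqlite3, tempfile, time

records = {f"key{i:05d}": b"x" * 256 for i in range(2000)}

with tempfile.TemporaryDirectory() as d:
    t0 = time.perf_counter()
    for k, v in records.items():                   # one file per record
        with open(os.path.join(d, k), "wb") as f:
            f.write(v)
    t_files = time.perf_counter() - t0

    db = sqlite3.connect(os.path.join(d, "kv.db"))
    db.execute("CREATE TABLE kv (k TEXT PRIMARY KEY, v BLOB)")
    t0 = time.perf_counter()
    with db:                                       # single batched transaction
        db.executemany("INSERT INTO kv VALUES (?, ?)", records.items())
    t_kv = time.perf_counter() - t0
    db.close()

print(f"per-file writes: {t_files:.3f} s, key-value inserts: {t_kv:.3f} s")
```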
Investigation of the transient fuel preburner manifold and combustor
NASA Technical Reports Server (NTRS)
Wang, Ten-See; Chen, Yen-Sen; Farmer, Richard C.
1989-01-01
A computational fluid dynamics (CFD) model with finite rate reactions, FDNS, was developed to study the start transient of the Space Shuttle Main Engine (SSME) fuel preburner (FPB). FDNS is a time-accurate, pressure-based CFD code. An upwind scheme was employed for spatial discretization. The upwind scheme was based on second- and fourth-order central differencing with adaptive artificial dissipation. A state-of-the-art two-equation k-epsilon (T) turbulence model was employed for the turbulence calculation. A Padé Rational Solution (PARASOL) chemistry algorithm was coupled with the point implicit procedure. FDNS was benchmarked with three well-documented experiments: a confined swirling coaxial jet, a non-reactive ramjet dump combustor, and a reactive ramjet dump combustor. Excellent comparisons were obtained for the benchmark cases. The code was then used to study the start transient of an axisymmetric SSME fuel preburner. Predicted transient operation of the preburner agrees well with experiment. Furthermore, it was also found that an appreciable amount of unburned oxygen entered the turbine stages.
Liao, Peilin; Carter, Emily A
2011-09-07
Quantitative characterization of low-lying excited electronic states in materials is critical for the development of solar energy conversion materials. The many-body Green's function method known as the GW approximation (GWA) directly probes states corresponding to photoemission and inverse photoemission experiments, thereby determining the associated band structure. Several versions of the GW approximation with different levels of self-consistency exist in the field. While the GWA based on density functional theory (DFT) works well for conventional semiconductors, less is known about its reliability for strongly correlated semiconducting materials. Here we present a systematic study of the GWA using hematite (α-Fe2O3) as the benchmark material. We analyze its performance in terms of the calculated photoemission/inverse photoemission band gaps, densities of states, and dielectric functions. Overall, a non-self-consistent G0W0 calculation using input from DFT+U theory produces physical observables in best agreement with experiments. This journal is © the Owner Societies 2011
Benchmarking reference services: step by step.
Buchanan, H S; Marshall, J G
1996-01-01
This article is a companion to an introductory article on benchmarking published in an earlier issue of Medical Reference Services Quarterly. Librarians interested in benchmarking often ask the following questions: How do I determine what to benchmark; how do I form a benchmarking team; how do I identify benchmarking partners; what's the best way to collect and analyze benchmarking information; and what will I do with the data? Careful planning is a critical success factor of any benchmarking project, and these questions must be answered before embarking on a benchmarking study. This article summarizes the steps necessary to conduct benchmarking research. Relevant examples of each benchmarking step are provided.
Ahangari, Hamed; Atkinson-Palombo, Carol; Garrick, Norman W
2016-06-01
In January 2015, the United States Department of Transportation (USDOT) announced that the official target of the federal government's transportation safety policy was zero deaths. Having a better understanding of traffic fatality trends for various age cohorts, and of the extent to which the US lags other countries, is a crucial first step toward identifying policies that may help the USDOT achieve its goal. In this paper we analyze fatality rates for different age cohorts in developed countries to better understand how road traffic fatality patterns vary across countries by age cohort. Using benchmarking analysis and comparative index analysis based on panel data modelling and data for selected years between 1990 and 2010, we compare changes in the rate of road traffic fatality over time, as well as the absolute level of road traffic fatality, for six age groups in the US and 15 other developed countries. Our findings illustrate tremendous variations in road fatality rates (both in terms of absolute values and rates of improvement over time) among different age cohorts in all 16 countries. Looking specifically at the US, our analysis shows that safety improvement for Youngsters (15-17 years old) was much better than for other age groups and closely tracked peer countries. In sharp contrast, Children (0-14 years old) and Seniors (65+ years old) in the US fare very poorly when compared to peer countries. For example, in 2010, Children in the US were a stunning five times more likely to experience a road traffic fatality than Children in the UK. This startling statistic suggests an immediate need to explore further the causes of, and potential solutions to, these disparities. This is especially important if countries, including the US, are to achieve the ambitious goals set out in Vision Zero initiatives. Copyright © 2016 Elsevier Ltd and National Safety Council. All rights reserved.
Zwijnenberg, Nicolien C; Hendriks, Michelle; Delnoij, Diana M J; de Veer, Anke J E; Spreeuwenberg, Peter; Wagner, Cordula
2016-12-01
Objective: To examine how information presentation affects the understanding and use of information for quality improvement. Design: An experimental design, testing 22 formats and showing information on patient safety culture. Formats differed in visualization, outcomes and benchmark information. Respondents viewed three randomly selected presentation formats in an online survey, completing several tasks per format. Setting: The hospital sector in the Netherlands. Participants: A volunteer sample of healthcare professionals, mainly nurses, working in hospitals. Main Outcome Measure(s): The degree to which information is understandable and usable (accurate choice for quality improvement, sense of urgency to change and appraisal of one's own performance). Results: About 115 healthcare professionals participated (response rate 25%), resulting in 345 reviews. Understanding: Information in tables (P = 0.007) and bar charts (P < 0.0001) was better understood than radars. Presenting outcomes on a 5-point scale (P < 0.001) or as '% positive responders' (P < 0.001) was better understood than '% negative responders'. Formats without benchmarks were better understood than formats with benchmarks. Use: Bar charts resulted in more accurate choices than tables (P = 0.003) and radars (P < 0.001). Outcomes on a 5-point scale resulted in more accurate choices than '% negative responders' (P = 0.007). Presenting '% positive responders' resulted in a higher sense of urgency to change than outcomes on a 5-point scale (P = 0.002). Benchmark information had inconsistent effects on the appraisal of one's own performance. Conclusions: Information presentation affects healthcare professionals' understanding and use of quality information. Our findings add to the understanding of how quality information can best be communicated to healthcare professionals for realizing quality improvements. © The Author 2016. Published by Oxford University Press in association with the International Society for Quality in Health Care. All rights reserved. For permissions, please e-mail: journals.permissions@oup.com
Errors in laboratory medicine: practical lessons to improve patient safety.
Howanitz, Peter J
2005-10-01
Patient safety is influenced by the frequency and seriousness of errors that occur in the health care system. Error rates in laboratory practices are collected routinely for a variety of performance measures in all clinical pathology laboratories in the United States, but a list of critical performance measures has not yet been recommended. The most extensive databases describing error rates in pathology were developed and are maintained by the College of American Pathologists (CAP). These databases include the CAP's Q-Probes and Q-Tracks programs, which provide information on error rates from more than 130 interlaboratory studies. To define critical performance measures in laboratory medicine, describe error rates of these measures, and provide suggestions to decrease these errors, thereby ultimately improving patient safety, we reviewed experiences from Q-Probes and Q-Tracks studies, supplemented with other studies cited in the literature. Q-Probes studies are carried out as time-limited studies lasting 1 to 4 months and have been conducted since 1989. In contrast, Q-Tracks investigations are ongoing studies performed on a yearly basis and have been conducted only since 1998. Participants from institutions throughout the world simultaneously conducted these studies according to specified scientific designs. The CAP has collected and summarized data for participants about these performance measures, including the significance of errors, the magnitude of error rates, tactics for error reduction, and willingness to implement each of these performance measures. The outcome is a list of recommended performance measures, the frequency of errors when these performance measures were studied, and suggestions to improve patient safety by reducing these errors. Error rates for preanalytic and postanalytic performance measures were higher than for analytic measures. Eight performance measures were identified, including customer satisfaction, test turnaround times, patient identification, specimen acceptability, proficiency testing, critical value reporting, blood product wastage, and blood culture contamination. Error rate benchmarks for these performance measures were cited and recommendations for improving patient safety presented. Each of the 8 performance measures has proven practical, useful, and important for patient care, and taken together they also fulfill regulatory requirements. All laboratories should consider implementing these performance measures and standardizing their own scientific designs, data analysis, and error reduction strategies according to findings from these published studies.
Comparing safety climate in naval aviation and hospitals: implications for improving patient safety.
Singer, Sara J; Rosen, Amy; Zhao, Shibei; Ciavarelli, Anthony P; Gaba, David M
2010-01-01
Evidence of variation in safety climate suggests the need for improvement among at least some hospitals. However, comparisons only among hospitals may underestimate the improvement required. Comparison of hospitals with analogous industries may provide a broader perspective on the safety status of our nation's hospitals. The purpose of this study was to compare safety climate among hospital workers with personnel from naval aviation, an organization that operates with high reliability despite intrinsically hazardous conditions. We surveyed a random sample of health care workers in 67 U.S. hospitals and, for generalizability, 30 Veterans Affairs hospitals, using questions comparable with those posed at approximately the same time (2007) to a census of personnel from 35 squadrons of U.S. naval aviators. We received 13,841 (41%) completed surveys in U.S. hospitals, 5,511 (50%) in Veterans Affairs hospitals, and 14,854 (82%) among naval aviators. We examined differences in respondents' perceptions of safety climate at their institution overall and for 16 individual items. Safety climate was three times better on average among naval aviators than among hospital personnel. Naval aviators perceived a safer climate (up to seven times safer) than hospital personnel with respect to each of the 16 survey items. Compared with hospital managers, naval commanders perceived climate more like frontline personnel did. When contrasting naval aviators with hospital personnel working in comparably hazardous areas, safety climate discrepancies increased rather than decreased. One individual hospital performed as well as naval aviation on average, and at least one hospital outperformed the Navy benchmark for all but three individual survey items. Results suggest that hospitals have not sufficiently created a uniform priority of safety. However, if each hospital performed as well as the top-performing hospital in each area measured, hospitals could achieve safety climate levels comparable with naval aviation. Major interventions to bolster hospital safety climate continue to be required to improve patient safety.
Road safety performance indicators for the interurban road network.
Yannis, George; Weijermars, Wendy; Gitelman, Victoria; Vis, Martijn; Chaziris, Antonis; Papadimitriou, Eleonora; Azevedo, Carlos Lima
2013-11-01
Various road safety performance indicators (SPIs) have been proposed for different road safety research areas, mainly as regards driver behaviour (e.g. seat belt use, alcohol, drugs, etc.) and vehicles (e.g. passive safety); however, no SPIs for the road network and design have been developed. The objective of this research is the development of an SPI for the road network, to be used as a benchmark for cross-region comparisons. The developed SPI essentially makes a comparison of the existing road network to the theoretically required one, defined as one which meets some minimum requirements with respect to road safety. This paper presents a theoretical concept for the determination of this SPI as well as a translation of this theory into a practical method. Also, the method is applied in a number of pilot countries namely the Netherlands, Portugal, Greece and Israel. The results show that the SPI could be efficiently calculated in all countries, despite some differences in the data sources. In general, the calculated overall SPI scores were realistic and ranged from 81 to 94%, with the exception of Greece where the SPI was relatively lower (67%). However, the SPI should be considered as a first attempt to determine the safety level of the road network. The proposed method has some limitations and could be further improved. The paper presents directions for further research to further develop the SPI. Copyright © 2012 Elsevier Ltd. All rights reserved.
Niskanen, Toivo; Lehtelä, Jouni; Länsikallio, Riina
2014-01-01
Employers and workers need concrete guidance for planning and implementing changes in the ergonomics of computer workstations. The Näppärä method is a screening tool for identifying problems that require further assessment and corrective actions. The aim of this study was to assess the work of occupational safety and health (OSH) government inspectors who used Näppärä as part of their OSH enforcement inspections (430 assessments) related to computer work. The modifications in workstation ergonomics involved mainly adjustments to the screen, mouse, keyboard, forearm supports, and chair. One output of the assessment is an index indicating the percentage of compliant items. The method can be considered an exposure assessment and ergonomics intervention whose index serves as a benchmark for the level of ergonomics. Future research can examine whether the effectiveness of participatory ergonomics interventions should be investigated with Näppärä.
Safety and control of accelerator-driven subcritical systems
DOE Office of Scientific and Technical Information (OSTI.GOV)
Rief, H.; Takahashi, H.
1995-10-01
To study the control and safety of accelerator-driven nuclear systems, a one-point kinetic model was developed and programmed. It deals with fast transients as a function of reactivity insertion, Doppler feedback, and the intensity of an external neutron source. The model allows for a simultaneous calculation of an equivalent critical reactor. It was validated by comparison with a benchmark specified by the Nuclear Energy Agency Committee on Reactor Physics. Additional features are the possibility of inserting a linear or quadratic time-dependent reactivity ramp, which may account for gravity-induced accidents such as earthquakes; the possibility of shutting down the external neutron source by an exponential decay law of the form exp(−t/τ); and a graphical display of the power and reactivity changes. The calculations revealed that such boosters behave quite benignly even if they are only slightly subcritical.
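A minimal sketch of a one-point kinetics model of the kind described, with one delayed-neutron group, an external source, a linear reactivity ramp, and a crude adiabatic Doppler feedback; all parameter values are illustrative, not taken from the benchmark.

```python
# One-point kinetics of a source-driven subcritical system (one delayed group).
import numpy as np
from scipy.integrate import solve_ivp

beta, lam, Lam = 0.0065, 0.08, 1e-5     # delayed fraction, decay const., gen. time
rho0, ramp, alpha_d = -0.02, 1e-4, 1e-5 # subcriticality, ramp rate, Doppler coeff.
S, kappa, T0 = 1e8, 1e-4, 300.0         # source strength, heating const., temp.

def rhs(t, y):
    n, c, temp = y                      # neutron density, precursors, temperature
    rho = rho0 + ramp * t - alpha_d * (temp - T0)
    dn = (rho - beta) / Lam * n + lam * c + S
    dc = beta / Lam * n - lam * c
    dT = kappa * n                      # adiabatic heat-up drives Doppler feedback
    return [dn, dc, dT]

n0 = S * Lam / -rho0                    # source-driven equilibrium level
sol = solve_ivp(rhs, (0.0, 60.0), [n0, beta * n0 / (lam * Lam), T0],
                method="LSODA", max_step=0.5)
print(f"power multiplication after 60 s of ramp: {sol.y[0, -1] / n0:.2f}")
```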
Benchmark Shock Tube Experiments for Radiative Heating Relevant to Earth Re-Entry
NASA Technical Reports Server (NTRS)
Brandis, A. M.; Cruden, B. A.
2017-01-01
Detailed spectrally and spatially resolved radiance has been measured in the Electric Arc Shock Tube (EAST) facility for conditions relevant to high-speed entry into a variety of atmospheres, including Earth, Venus, Titan, Mars and the Outer Planets. The tests that measured radiation relevant to Earth re-entry are the focus of this work and are taken from campaigns 47, 50, 52 and 57. These tests covered conditions from 8 km/s to 15.5 km/s at initial pressures ranging from 0.05 Torr to 1 Torr, of which shots at 0.1 and 0.2 Torr are analyzed in this paper. These conditions cover a range of points of interest for potential flight missions, including return from Low Earth Orbit, the Moon and Mars. The large volume of testing available from EAST is useful for statistical analysis of radiation data, but is problematic for identifying representative experiments for detailed analysis. Therefore, the intent of this paper is to select a subset of benchmark test data that can be considered for further detailed study. These benchmark shots are intended to provide more accessible data sets for future code validation studies and facility-to-facility comparisons. The shots selected as benchmark data are those in closest agreement with a line of best fit through all of the EAST results, while also showing the best experimental characteristics, such as test time and convergence to equilibrium. The EAST data are presented in different formats for analysis. These data include the spectral radiance at equilibrium, the spatial dependence of radiance over defined wavelength ranges, and the mean non-equilibrium spectral radiance (the so-called 'spectral non-equilibrium metric'). All the information needed to simulate each experimental trace, including free-stream conditions, the shock time-of-arrival (i.e. x-t) relation, and the spectral and spatial resolution functions, is provided.
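A sketch of the stated selection rule: fit a trend through all shots (here, a power law of radiance versus shock speed) and keep the shots closest to the fit. The values are synthetic, and the real selection also weighed experimental characteristics such as test time and convergence to equilibrium.

```python
# Selecting "benchmark" shots as those nearest a line of best fit.
import numpy as np

rng = np.random.default_rng(6)
speed = rng.uniform(8.0, 15.5, 40)                       # km/s, 40 synthetic shots
radiance = 0.9 * speed ** 3 * (1.0 + 0.1 * rng.standard_normal(40))

coeffs = np.polyfit(np.log(speed), np.log(radiance), 1)  # power-law fit
residual = np.abs(np.log(radiance) - np.polyval(coeffs, np.log(speed)))
benchmark_shots = np.argsort(residual)[:5]               # 5 closest to the trend
print("candidate benchmark shots:", benchmark_shots)
```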
Physics of reactor safety. Quarterly report, January--March 1977. [LMFBR
DOE Office of Scientific and Technical Information (OSTI.GOV)
None
1977-06-01
This report summarizes work done on reactor safety, Monte Carlo analysis of safety-related critical assembly experiments, and planning of DEMI safety-related critical experiments. Work on reactor core thermal-hydraulics is also included.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Risner, J.M.; Wiarda, D.; Miller, T.M.
2011-07-01
The U.S. Nuclear Regulatory Commission's Regulatory Guide 1.190 states that calculational methods used to estimate reactor pressure vessel (RPV) fluence should use the latest version of the evaluated nuclear data file (ENDF). The VITAMIN-B6 fine-group library and BUGLE-96 broad-group library, which are widely used for RPV fluence calculations, were generated using ENDF/B-VI.3 data, which was the most current data when Regulatory Guide 1.190 was issued. We have developed new fine-group (VITAMIN-B7) and broad-group (BUGLE-B7) libraries based on ENDF/B-VII.0. These new libraries, which were processed using the AMPX code system, maintain the same group structures as the VITAMIN-B6 and BUGLE-96 libraries. Verification and validation of the new libraries were accomplished using diagnostic checks in AMPX, 'unit tests' for each element in VITAMIN-B7, and a diverse set of benchmark experiments including critical evaluations for fast and thermal systems, a set of experimental benchmarks that are used for SCALE regression tests, and three RPV fluence benchmarks. The benchmark evaluation results demonstrate that VITAMIN-B7 and BUGLE-B7 are appropriate for use in RPV fluence calculations and meet the calculational uncertainty criterion in Regulatory Guide 1.190.
NASA Astrophysics Data System (ADS)
Rodriguez, Tony F.; Cushman, David A.
2003-06-01
With the growing commercialization of watermarking techniques in various application scenarios it has become increasingly important to quantify the performance of watermarking products. The quantification of relative merits of various products is not only essential in enabling further adoption of the technology by society as a whole, but will also drive the industry to develop testing plans/methodologies to ensure quality and minimize cost (to both vendors & customers.) While the research community understands the theoretical need for a publicly available benchmarking system to quantify performance, there has been less discussion on the practical application of these systems. By providing a standard set of acceptance criteria, benchmarking systems can dramatically increase the quality of a particular watermarking solution, validating the product performances if they are used efficiently and frequently during the design process. In this paper we describe how to leverage specific design of experiments techniques to increase the quality of a watermarking scheme, to be used with the benchmark tools being developed by the Ad-Hoc Watermark Verification Group. A Taguchi Loss Function is proposed for an application and orthogonal arrays used to isolate optimal levels for a multi-factor experimental situation. Finally, the results are generalized to a population of cover works and validated through an exhaustive test.
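A sketch of the proposal's core arithmetic: a "target is best" quadratic Taguchi loss averaged over repeated measurements for each run of a small orthogonal array, followed by a main-effects comparison. The factors, levels, loss constant, and measurements are hypothetical placeholders.

```python
# Taguchi loss L = k * (y - m)^2 averaged per run of an L4 orthogonal array.
import numpy as np

k, target = 4.0, 1.0                     # loss constant, target metric value
L4 = np.array([[0, 0, 0],                # L4 array: 3 two-level factors, 4 runs
               [0, 1, 1],
               [1, 0, 1],
               [1, 1, 0]])
y = np.array([[0.93, 0.95, 0.94],        # repeated quality measurements per run
              [0.97, 0.99, 0.98],
              [0.88, 0.90, 0.91],
              [0.99, 1.00, 0.98]])

loss = (k * (y - target) ** 2).mean(axis=1)
for factor in range(L4.shape[1]):        # main effect of each factor on mean loss
    low = loss[L4[:, factor] == 0].mean()
    high = loss[L4[:, factor] == 1].mean()
    print(f"factor {factor}: mean loss low = {low:.4f}, high = {high:.4f}")
print(f"best run: {int(np.argmin(loss))} (loss {loss.min():.4f})")
```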
Learning Communities: An Untapped Sustainable Competitive Advantage for Higher Education
ERIC Educational Resources Information Center
Dawson, Shane; Burnett, Bruce; O'Donohue, Mark
2006-01-01
Purpose: This paper demonstrates the need for the higher education sector to develop and implement scaleable, quantitative measures that evaluate community and establish organisational benchmarks in order to guide the development of future practices designed to enhance the student learning experience. Design/methodology/approach: Literature…
Desperately Seeking Standards: Bridging the Gap from Concept to Reality.
ERIC Educational Resources Information Center
Jones, A. James; Gardner, Carrie; Zaenglein, Judith L.
1998-01-01
Discussion of national standards for information-and-technology literacy focuses on experiences at one school where national standards were synthesized by library media specialists to develop local standards as well as a series of benchmarks by which student achievement could be measured. (Author/LRW)
[Assessment of the patient-safety culture in a healthcare district].
Pozo Muñoz, F; Padilla Marín, V
2013-01-01
1) To describe the frequency of positive attitudes and behaviours, in terms of patient safety, among the healthcare providers working in a healthcare district; 2) to determine whether the level of safety-related culture differs from other studies; and 3) to analyse negatively valued dimensions and establish areas for their improvement. A descriptive, cross-sectional study based on the results of an evaluation of the safety-related culture was conducted on a randomly selected sample of 247 healthcare providers, using the Spanish adaptation of the Hospital Survey on Patient Safety Culture (HSOPSC) designed by the Agency for Healthcare Research and Quality (AHRQ) as the evaluation tool. Positive and negative responses were analysed, as well as the global score. Results were compared with international and national results. A total of 176 completed survey questionnaires were analysed (response rate: 71.26%); 50% of responders described the safety climate as very good, 37% as acceptable, and 7% as excellent. Strong points were «Teamwork within the units» (80.82%) and «Supervisor/manager expectations and actions» (80.54%). Dimensions identified for potential improvement included «Staffing» (37.93%), «Non-punitive response to error» (41.67%), and «Frequency of event reporting» (49.05%). Strong and weak points were identified in the safety-related culture of the healthcare district studied, together with potential improvement areas. Benchmarking at the international level showed that our safety-related culture was within the average of hospitals, while at the national level our results were above the average of hospitals. Copyright © 2013 SECA. Published by Elsevier España. All rights reserved.
Electric load shape benchmarking for small- and medium-sized commercial buildings
DOE Office of Scientific and Technical Information (OSTI.GOV)
Luo, Xuan; Hong, Tianzhen; Chen, Yixing
2017-07-28
Small- and medium-sized commercial building owners and utility managers often look for opportunities for energy cost savings through energy efficiency and energy waste minimization. However, they currently lack easy access to low-cost tools that help interpret the massive amount of data needed to improve understanding of their energy use behaviors. Benchmarking is one of the techniques used in energy audits to identify which buildings are priorities for an energy analysis. Traditional energy performance indicators, such as energy use intensity (annual energy per unit of floor area), consider only the total annual energy consumption, lacking consideration of the fluctuation of energy use behavior over time, which reveals time-of-use information and represents distinct energy use behaviors during different time spans. To fill the gap, this study developed a general statistical method using 24-hour electric load shape benchmarking to compare a building or business/tenant space against peers. Specifically, the study developed new forms of benchmarking metrics and data analysis methods to infer the energy performance of a building based on its load shape. We first performed a data experiment with collected smart meter data using over 2,000 small- and medium-sized businesses in California. We then conducted a cluster analysis of the source data, and determined and interpreted the load shape features and parameters with peer group analysis. Finally, we implemented the load shape benchmarking feature in an open-access web-based toolkit (the Commercial Building Energy Saver) to provide straightforward and practical recommendations to users. The analysis techniques are generic and flexible for future datasets of other building types and in other utility territories.
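The core of the approach, clustering normalized 24-hour load shapes and benchmarking a building against its cluster's average, can be sketched in a few lines. The data, cluster count, and deviation metric below are illustrative stand-ins, not the paper's actual features or parameters.

```python
# A minimal load-shape benchmarking sketch, assuming synthetic daily profiles.
import numpy as np
from sklearn.cluster import KMeans

rng = np.random.default_rng(3)
n_buildings, hours = 200, 24
base = np.sin(np.linspace(0, np.pi, hours))            # generic daytime-peaking shape
profiles = np.clip(base + rng.normal(0, 0.15, (n_buildings, hours)), 0, None)

# Normalize to unit daily energy so clustering sees shape, not magnitude.
shapes = profiles / profiles.sum(axis=1, keepdims=True)

km = KMeans(n_clusters=4, n_init=10, random_state=0).fit(shapes)
peer_means = km.cluster_centers_

# Benchmark one building against its peer group's average shape.
b = 0
peer = peer_means[km.labels_[b]]
deviation = np.abs(shapes[b] - peer).sum()
print(f"building {b}: cluster {km.labels_[b]}, shape deviation {deviation:.3f}")
```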
A chemical EOR benchmark study of different reservoir simulators
NASA Astrophysics Data System (ADS)
Goudarzi, Ali; Delshad, Mojdeh; Sepehrnoori, Kamy
2016-09-01
Interest in chemical EOR processes has intensified in recent years due to advancements in chemical formulations and injection techniques. Injecting polymer (P), surfactant/polymer (SP), and alkaline/surfactant/polymer (ASP) are techniques for improving sweep and displacement efficiencies with the aim of improving oil production in both secondary and tertiary floods. Chemical flooding has attracted great interest recently for challenging situations, including high-temperature reservoirs, formations with extreme salinity and hardness, naturally fractured carbonates, and sandstone reservoirs with heavy and viscous crude oils. More oil reservoirs are reaching maturity, where secondary polymer floods and tertiary surfactant methods have become increasingly important. This significance has added to the industry's interest in using reservoir simulators as tools for reservoir evaluation and management to minimize costs and increase process efficiency. Reservoir simulators with special features are needed to represent the coupled chemical and physical processes present in chemical EOR. The simulators first need to be validated against well-controlled lab- and pilot-scale experiments before they can reliably predict full-field implementations. The available laboratory-scale data include (1) phase behavior and rheological data, and (2) results of secondary and tertiary coreflood experiments for P, SP, and ASP floods under reservoir conditions, i.e., chemical retentions, pressure drop, and oil recovery. Data collected from corefloods are used as benchmark tests to compare numerical reservoir simulators with chemical EOR modeling capabilities, such as CMG's STARS, Schlumberger's ECLIPSE-100, and Petroleum Experts' REVEAL. The research UTCHEM simulator from The University of Texas at Austin is also included, since it has been the benchmark for chemical flooding simulation for over 25 years. The results of this benchmark comparison will be utilized to improve chemical design for field-scale studies using commercial simulators. The benchmark tests illustrate the potential of commercial simulators for chemical flooding projects and provide a comprehensive table of the strengths and limitations of each simulator for a given chemical EOR process. Mechanistic simulations of chemical EOR processes will provide predictive capability and can aid in optimization of field injection projects. The objective of this paper is not to compare computational efficiency and solution algorithms; it focuses only on the process modeling comparison.
Benchmarking and audit of breast units improves quality of care
van Dam, P.A.; Verkinderen, L.; Hauspy, J.; Vermeulen, P.; Dirix, L.; Huizing, M.; Altintas, S.; Papadimitriou, K.; Peeters, M.; Tjalma, W.
2013-01-01
Quality Indicators (QIs) are measures of health care quality that make use of readily available hospital inpatient administrative data. Assessment of quality of care can be performed at different levels: national, regional, per hospital, or per individual. It can be a mandatory or voluntary system. In all cases, the development of an adequate database for data extraction, and feedback of the findings, is of paramount importance. In the present paper we performed a Medline search on “QIs and breast cancer” and “benchmarking and breast cancer care”, and we have added some data from personal experience. The current data clearly show that the use of QIs for breast cancer care, regular internal and external audit of the performance of breast units, and benchmarking are effective in improving quality of care. Adherence to guidelines improves markedly (particularly regarding adjuvant treatment), and there are emerging data showing that this results in a better outcome. As quality assurance benefits patients, it will be a challenge for the medical and hospital community to develop affordable quality control systems that do not lead to excessive workload. PMID:24753926
Donahue, Suzanne; DiBlasi, Robert M; Thomas, Karen
2018-02-02
To examine the practice of nebulizer cool mist blow-by oxygen administered to spontaneously breathing postanesthesia care unit (PACU) pediatric patients during Phase one recovery. Existing evidence was evaluated. Informal benchmarking documented practices in peer organizations. An in vitro study was then conducted to simulate clinical practice and determine depth and amount of airway humidity delivery with blow-by oxygen. Informal benchmarking information was obtained by telephone interview. Using a three-dimensional printed simulation model of the head connected to a breathing lung simulator, depth and amount of moisture delivery in the respiratory tree were measured. Evidence specific to PACU administration of cool mist blow-by oxygen was limited. Informal benchmarking revealed that routine cool mist oxygenated blow-by administration was not widely practiced. The laboratory experiment revealed minimal moisture reaching the mid-tracheal area of the simulated airway model. Routine use of oxygenated cool mist in spontaneously breathing pediatric PACU patients is not supported. Copyright © 2017 American Society of PeriAnesthesia Nurses. Published by Elsevier Inc. All rights reserved.
Stanford, Robert E
2004-05-01
This paper uses a non-parametric frontier model and adaptations of the concepts of cross-efficiency and peer-appraisal to develop a formal methodology for benchmarking provider performance in the treatment of Acute Myocardial Infarction (AMI). Parameters used in the benchmarking process are the rates of proper recognition of indications of six standard treatment processes for AMI; the decision making units (DMUs) to be compared are the Medicare eligible hospitals of a particular state; the analysis produces an ordinal ranking of individual hospital performance scores. The cross-efficiency/peer-appraisal calculation process is constructed to accommodate DMUs that experience no patients in some of the treatment categories. While continuing to rate highly the performances of DMUs which are efficient in the Pareto-optimal sense, our model produces individual DMU performance scores that correlate significantly with good overall performance, as determined by a comparison of the sums of the individual DMU recognition rates for the six standard treatment processes. The methodology is applied to data collected from 107 state Medicare hospitals.
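The cross-efficiency/peer-appraisal calculation can be sketched with a standard input-oriented CCR model: each DMU's optimal weights are found by linear programming, those weights are then applied to every other DMU, and the peer-appraised score is the mean of each column. The sketch below uses a unit input and random recognition rates; the paper's exact formulation, including its handling of DMUs with no patients in some treatment categories, is not reproduced here.

```python
# A minimal DEA cross-efficiency sketch under a standard CCR multiplier model.
import numpy as np
from scipy.optimize import linprog

def ccr_weights(X, Y, k):
    """Optimal input/output weights for DMU k (input-oriented CCR, multiplier form)."""
    n, m = X.shape                                     # n DMUs, m inputs
    s = Y.shape[1]                                     # s outputs
    c = np.concatenate([np.zeros(m), -Y[k]])           # maximize u . y_k
    A_ub = np.hstack([-X, Y])                          # u . y_j - v . x_j <= 0
    A_eq = np.concatenate([X[k], np.zeros(s)])[None]   # v . x_k = 1
    res = linprog(c, A_ub=A_ub, b_ub=np.zeros(n),
                  A_eq=A_eq, b_eq=np.array([1.0]), bounds=(0, None))
    return res.x[:m], res.x[m:]

def cross_efficiency(X, Y):
    """Mean peer-appraised efficiency of each DMU under every DMU's optimal weights."""
    n = X.shape[0]
    E = np.zeros((n, n))
    for k in range(n):
        v, u = ccr_weights(X, Y, k)
        E[k] = (Y @ u) / (X @ v)                       # E[k, j]: DMU j scored by DMU k
    return E.mean(axis=0)

# Example: 5 hypothetical hospitals, a unit input, and recognition rates for
# 6 AMI treatment processes as outputs.
rng = np.random.default_rng(0)
X = np.ones((5, 1))                  # unit input ("benefit of the doubt" style)
Y = rng.uniform(0.5, 1.0, (5, 6))    # proper-recognition rates
print(cross_efficiency(X, Y))
```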
NASA Astrophysics Data System (ADS)
Moriarty, Patrick; Sanz Rodrigo, Javier; Gancarski, Pawel; Churchfield, Matthew; Naughton, Jonathan W.; Hansen, Kurt S.; Machefaux, Ewan; Maguire, Eoghan; Castellani, Francesco; Terzi, Ludovico; Breton, Simon-Philippe; Ueda, Yuko
2014-06-01
Researchers within the International Energy Agency (IEA) Task 31: Wakebench have created a framework for the evaluation of wind farm flow models operating at the microscale level. The framework consists of a model evaluation protocol integrated with a web-based portal for model benchmarking (www.windbench.net). This paper provides an overview of the building-block validation approach applied to wind farm wake models, including best practices for the benchmarking and data processing procedures for validation datasets from wind farm SCADA and meteorological databases. A hierarchy of test cases has been proposed for wake model evaluation, from similarity theory of the axisymmetric wake and the idealized infinite wind farm, to single-wake wind tunnel (UMN-EPFL) and field experiments (Sexbierum), to wind farm arrays in offshore (Horns Rev, Lillgrund) and complex terrain conditions (San Gregorio). A summary of results from the axisymmetric wake, Sexbierum, Horns Rev and Lillgrund benchmarks is used to discuss the state of the art of wake model validation and highlight the most relevant issues for future development.
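As a concrete example of the lowest rung of such a hierarchy, the classical Jensen (Park) single-wake model predicts the axial velocity deficit from just a thrust coefficient and a linear wake-expansion rate. The sketch below uses that well-known closed form; the rotor diameter, Ct, and k values are illustrative, and the benchmarks discussed in the paper exercise far more sophisticated models.

```python
# A minimal single-wake calculation using the classical Jensen (Park) model.
import numpy as np

def jensen_deficit(x, D=80.0, Ct=0.8, k=0.05):
    """Fractional axial velocity deficit at downstream distance x (m)."""
    return (1.0 - np.sqrt(1.0 - Ct)) / (1.0 + 2.0 * k * x / D) ** 2

for x in [200, 400, 800]:   # downstream distances in metres
    print(f"x = {x:4d} m: deficit = {jensen_deficit(x):.3f}")
```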
On the predictability of land surface fluxes from meteorological variables
NASA Astrophysics Data System (ADS)
Haughton, Ned; Abramowitz, Gab; Pitman, Andy J.
2018-01-01
Previous research has shown that land surface models (LSMs) perform poorly when compared with relatively simple empirical models over a wide range of metrics and environments. Atmospheric driving data appear to provide information about land surface fluxes that LSMs are not fully utilising. Here, we further quantify the information available in the meteorological forcing data that are used by LSMs for predicting land surface fluxes, by interrogating FLUXNET data and extending the benchmarking methodology used in previous experiments. We show that substantial performance improvement is possible for empirical models using meteorological data alone, with no explicit vegetation or soil properties, thus setting lower bounds on a priori expectations of LSM performance. The process also identifies the key meteorological variables that provide predictive power. We provide an ensemble of empirical benchmarks that are simple to reproduce and span a range of behaviours and predictive performance, acting as a baseline benchmark set for future studies. We reanalyse previously published LSM simulations and show that there is more diversity between LSMs than previously indicated, although it remains unclear why LSMs broadly perform so much worse than simple empirical models.
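The kind of empirical benchmark the paper describes amounts to regressing an observed flux on meteorological forcing alone and measuring out-of-sample skill, which then sets a lower bound on what an LSM, seeing the same forcing, ought to achieve. The sketch below uses a plain linear regression on synthetic data; the variable choices and the regressor are illustrative, not the authors' exact benchmark ensemble.

```python
# A minimal empirical-benchmark sketch: predict a flux from met forcing alone.
import numpy as np
from sklearn.linear_model import LinearRegression
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(42)
n = 1000
met = np.column_stack([
    rng.uniform(0, 1000, n),   # downwelling shortwave radiation (W/m^2)
    rng.uniform(270, 310, n),  # air temperature (K)
    rng.uniform(0, 0.02, n),   # specific humidity (kg/kg)
])
# Synthetic stand-in for an observed latent heat flux (W/m^2).
qle = 0.3 * met[:, 0] + 2.0 * (met[:, 1] - 280) + rng.normal(0, 20, n)

# Cross-validated skill of the empirical benchmark is the baseline an LSM
# driven by the same forcing should at least match.
scores = cross_val_score(LinearRegression(), met, qle, cv=5, scoring="r2")
print("benchmark R^2 per fold:", np.round(scores, 3))
```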
Limitations of Community College Benchmarking and Benchmarks
ERIC Educational Resources Information Center
Bers, Trudy H.
2006-01-01
This chapter distinguishes between benchmarks and benchmarking, describes a number of data and cultural limitations to benchmarking projects, and suggests that external demands for accountability are the dominant reason for growing interest in benchmarking among community colleges.
Teaching cultural safety in a New Zealand nursing education program.
Richardson, Fran; Carryer, Jenny
2005-05-01
Cultural safety education is a concept unique to nursing in New Zealand. It involves teaching nursing students to recognize and understand the dynamics of cultural, personal, and professional power and how these shape nursing and health care relationships. This article describes the findings of a research study on the experience of teaching cultural safety. As a teacher of cultural safety, the first author was interested in exploring the experience of teaching the topic with other cultural safety teachers. A qualitative approach situated in a critical theory paradigm was used for the study. The study was informed by the ideas of Foucault and feminist theory. Fourteen women between the ages of 20 and 60 were interviewed about their experience of teaching cultural safety. Five women were Maori (the indigenous people of New Zealand), and nine were Pakeha (the Maori name for New Zealanders of European descent). Following data analysis, three major themes were identified: that the Treaty of Waitangi provides for an examination of power in cultural safety education; that the broad concept of difference influences the experience of teaching cultural safety; and that the experience of teaching cultural safety has personal, professional, and political dimensions. These dimensions are experienced differently by Maori and Pakeha teachers.
Griesbach, Sara; Lustig, Adam; Malsin, Luanne; Carley, Blake; Westrich, Kimberly D; Dubois, Robert W
2015-04-01
The accountable care organization (ACO), one of the most promising and talked about new models of care, focuses on improving communication and care transitions by tying potential shared savings to specific clinical and financial benchmarks. An important factor in meeting these benchmarks is an ACO's ability to manage medications in an environment where medical and pharmacy care has been integrated. The program described in this article highlights the critical components of Marshfield Clinic's Drug Safety Alert Program (DSAP), which focuses on prioritizing and communicating safety issues related to medications with the goal of reducing potential adverse drug events. Once the medication safety concern is identified, it is reviewed to evaluate whether an alert warrants sending prescribers a communication that identifies individual patients or a general communication to all physicians describing the safety concern. Instead of basing its decisions regarding clinician notification about drug alerts on subjective criteria, the Marshfield Clinic's DSAP uses an internally developed scoring system. The scoring system includes criteria developed from previous drug alerts, such as level of evidence, size of population affected, severity of adverse event identified or targeted, litigation risk, available alternatives, and potential for duration of medication use. Each of the 6 criteria is assigned a weight and is scored based upon the content and severity of the alert received. In its first 12 months, the program targeted 6 medication safety concerns involving the following medications: topiramate, glyburide, simvastatin, citalopram, pioglitazone, and lovastatin. Baseline and follow-up prescribing data were gathered on the targeted medications. Follow-up review of prescribing data demonstrated that the DSAP provided quality up-to-date safety information that led to changes in drug therapy and to decreases in potential adverse drug events. In aggregate, nearly 10,000 total potential adverse drug events were identified with baseline data from the DSAP initiatives, and nearly 8,000 were resolved by changes in prescribing. Implications and additional thoughts from The Working Group on Optimizing Medication Therapy in Value-Based Healthcare were provided for the following categories: leveraging electronic health records, importance of data collection and reassessment, preventing alert fatigue utilizing various techniques, relevance to ACO quality measurement, and limitations of a retrospective system. While health information technologies have been recognized as a cornerstone for an ACO's success, additional research is needed on comparing these types of technological innovations. Future research should focus on reviewing comparable scoring criteria and alert systems utilized in a variety of ACOs. In addition, an examination of different data mining procedures used within different electronic health record platforms would prove useful to ACOs looking to improve the care of not only the subpopulations with specific metrics associated with them, but their patient population as a whole. The authors also highlight the need for additional research on health information exchanges, including the cost and resource requirements needed to successfully participate in these types of networks.
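The weighted scoring system described above, six criteria, each weighted and rated per alert, can be sketched directly. The weights, rating scales, and routing threshold below are hypothetical placeholders, not Marshfield Clinic's internal values.

```python
# A minimal sketch of a weighted drug-safety-alert scoring scheme.
CRITERIA_WEIGHTS = {
    "level_of_evidence": 3.0,       # hypothetical weights, not the DSAP's
    "population_size": 2.0,
    "event_severity": 3.0,
    "litigation_risk": 1.0,
    "available_alternatives": 1.5,
    "duration_of_use": 1.5,
}

def alert_score(ratings):
    """Weighted sum of per-criterion ratings (each rated, say, 0-5)."""
    return sum(CRITERIA_WEIGHTS[c] * ratings[c] for c in CRITERIA_WEIGHTS)

ratings = {"level_of_evidence": 4, "population_size": 3, "event_severity": 5,
           "litigation_risk": 2, "available_alternatives": 1, "duration_of_use": 3}
score = alert_score(ratings)
# Hypothetical routing rule: high scores trigger patient-level communication,
# lower scores a general communication to all physicians.
print("patient-level alert" if score >= 40 else "general physician communication", score)
```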
Using Toyota's A3 Thinking for Analyzing MBA Business Cases
ERIC Educational Resources Information Center
Anderson, Joe S.; Morgan, James N.; Williams, Susan K.
2011-01-01
A3 Thinking is fundamental to Toyota's benchmark management philosophy and to their lean production system. It is used to solve problems, gain agreement, mentor team members, and lead organizational improvements. A structured problem-solving approach, A3 Thinking builds improvement opportunities through experience. We used "The Toyota…
Comparing Community College Student and Faculty Perceptions of Student Engagement
ERIC Educational Resources Information Center
Senn-Carter, Darian
2017-01-01
The purpose of this quantitative study was to compare faculty and student perceptions of "student engagement" at a mid-Atlantic community college to determine the level of correlation between student experiences and faculty practices in five benchmark areas of student engagement: "academic challenge, student-faculty interaction,…
Canada First: The 2009 Survey of International Students
ERIC Educational Resources Information Center
Humphries, Jennifer, Ed.; Knight-Grofe, Janine, Ed.; Klabunde, Niels, Ed.
2009-01-01
The Canadian Bureau for International Education (CBIE) regularly evaluates the experience of international students in Canada through a benchmarking survey. Canada First 2009 represents the fourth time CBIE has conducted this research. Previous editions appeared in 1988, 1999 and 2004. This year's survey used a revised questionnaire similar to…
ERIC Educational Resources Information Center
Slover-Linett, Cheryl; Stoner, Michael
2010-01-01
Earlier this year, CASE formed a social media task force to explore what educational institutions are trying to achieve with social media presence and learn about social media engagements at member institutions. CASE, in partnership with mStoner and Slover Linett Strategies, in June launched a benchmarking survey on social media in advancement by…
Informing New String Programmes: Lessons Learned from an Australian Experience
ERIC Educational Resources Information Center
Murphy, Fintan; Rickard, Nikki; Gill, Anneliese; Grimmett, Helen
2011-01-01
Although there are many examples of notable string programmes there has been relatively little comparative analysis of these programmes. This paper examines three benchmark string programmes (The University of Illinois String Project, The Tower Hamlets String Teaching Project and Colourstrings) alongside Music4All, an innovative string programme…
Reflective Field Experiences for Success in Teaching Elementary Mathematics
ERIC Educational Resources Information Center
Robards, Shirley N.
2009-01-01
In this paper, the author discusses the major components of a junior level pedagogy course for elementary education majors learning to teach mathematics. The course reviews content and knowledge of the teacher candidates and introduces methods and materials for teaching elementary mathematics using the Standards or benchmarks from the National…
Using Clouds for MapReduce Measurement Assignments
ERIC Educational Resources Information Center
Rabkin, Ariel; Reiss, Charles; Katz, Randy; Patterson, David
2013-01-01
We describe our experiences teaching MapReduce in a large undergraduate lecture course using public cloud services and the standard Hadoop API. Using the standard API, students directly experienced the quality of industrial big-data tools. Using the cloud, every student could carry out scalability benchmarking assignments on realistic hardware,…
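For readers unfamiliar with the assignments' substrate, the canonical MapReduce word count is sketched below in plain Python for consistency with the other sketches in this document; the course itself used the Java Hadoop API on cloud clusters, and the grouping step here stands in for the framework's shuffle phase.

```python
# A minimal MapReduce word-count sketch; the shuffle is simulated locally.
from collections import defaultdict
from itertools import chain

def mapper(line):
    """Emit (word, 1) pairs for each word in a line of input."""
    for word in line.lower().split():
        yield word, 1

def reducer(word, counts):
    """Sum all counts emitted for one word."""
    return word, sum(counts)

lines = ["the quick brown fox", "the lazy dog", "the fox"]

# Shuffle phase: group intermediate pairs by key, as the framework would.
groups = defaultdict(list)
for word, count in chain.from_iterable(mapper(l) for l in lines):
    groups[word].append(count)

print(dict(reducer(w, c) for w, c in groups.items()))   # {'the': 3, 'quick': 1, ...}
```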
Dual linear structured support vector machine tracking method via scale correlation filter
NASA Astrophysics Data System (ADS)
Li, Weisheng; Chen, Yanquan; Xiao, Bin; Feng, Chen
2018-01-01
Adaptive tracking-by-detection methods based on structured support vector machines (SVMs) have performed well on recent visual tracking benchmarks. However, these methods do not adopt an effective strategy for object scale estimation, which limits overall tracking performance. We present a tracking method based on a dual linear structured support vector machine (DLSSVM) with a discriminative scale correlation filter. The collaborative tracker, comprising a DLSSVM model and a scale correlation filter, obtains good results in tracking target position and estimating scale. The fast Fourier transform is applied for detection. Extensive experiments show that our tracking approach outperforms many popular top-ranking trackers. On a benchmark of 100 challenging video sequences, the average precision of the proposed method is 82.8%.
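The precision figure quoted here follows the standard tracking-benchmark convention: the fraction of frames whose predicted target center lies within a pixel threshold (conventionally 20 px) of the ground-truth center. The sketch below computes that metric on synthetic trajectories; the arrays are stand-ins, not benchmark data.

```python
# A minimal sketch of the center-error precision metric used by tracking benchmarks.
import numpy as np

def precision_at(pred_centers, gt_centers, threshold=20.0):
    """Fraction of frames with center location error below `threshold` pixels."""
    errors = np.linalg.norm(pred_centers - gt_centers, axis=1)
    return float(np.mean(errors <= threshold))

rng = np.random.default_rng(1)
gt = rng.uniform(0, 480, (500, 2))         # ground-truth centers per frame
pred = gt + rng.normal(0, 12, gt.shape)    # simulated tracker output with noise
print(f"precision@20px = {precision_at(pred, gt):.3f}")
```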
NASA Astrophysics Data System (ADS)
Khachatryan, V.; Sirunyan, A. M.; Tumasyan, A.; Adam, W.; Asilar, E.; Bergauer, T.; Brandstetter, J.; Brondolin, E.; Dragicevic, M.; Erö, J.; Flechl, M.; Friedl, M.; Frühwirth, R.; Ghete, V. M.; Hartl, C.; Hörmann, N.; Hrubec, J.; Jeitler, M.; König, A.; Krätschmer, I.; Liko, D.; Matsushita, T.; Mikulec, I.; Rabady, D.; Rad, N.; Rahbaran, B.; Rohringer, H.; Schieck, J.; Strauss, J.; Treberer-Treberspurg, W.; Waltenberger, W.; Wulz, C.-E.; Mossolov, V.; Shumeiko, N.; Suarez Gonzalez, J.; Alderweireldt, S.; De Wolf, E. A.; Janssen, X.; Lauwers, J.; Van De Klundert, M.; Van Haevermaet, H.; Van Mechelen, P.; Van Remortel, N.; Van Spilbeeck, A.; Abu Zeid, S.; Blekman, F.; D'Hondt, J.; Daci, N.; De Bruyn, I.; Deroover, K.; Heracleous, N.; Lowette, S.; Moortgat, S.; Moreels, L.; Olbrechts, A.; Python, Q.; Tavernier, S.; Van Doninck, W.; Van Mulders, P.; Van Parijs, I.; Brun, H.; Caillol, C.; Clerbaux, B.; De Lentdecker, G.; Delannoy, H.; Fasanella, G.; Favart, L.; Goldouzian, R.; Grebenyuk, A.; Karapostoli, G.; Lenzi, T.; Léonard, A.; Luetic, J.; Maerschalk, T.; Marinov, A.; Randle-conde, A.; Seva, T.; Vander Velde, C.; Vanlaer, P.; Yonamine, R.; Zenoni, F.; Zhang, F.; Cimmino, A.; Cornelis, T.; Dobur, D.; Fagot, A.; Garcia, G.; Gul, M.; Poyraz, D.; Salva, S.; Schöfbeck, R.; Sharma, A.; Tytgat, M.; Van Driessche, W.; Yazgan, E.; Zaganidis, N.; Bakhshiansohi, H.; Beluffi, C.; Bondu, O.; Brochet, S.; Bruno, G.; Caudron, A.; De Visscher, S.; Delaere, C.; Delcourt, M.; Francois, B.; Giammanco, A.; Jafari, A.; Jez, P.; Komm, M.; Lemaitre, V.; Magitteri, A.; Mertens, A.; Musich, M.; Nuttens, C.; Piotrzkowski, K.; Quertenmont, L.; Selvaggi, M.; Vidal Marono, M.; Wertz, S.; Beliy, N.; Aldá Júnior, W. L.; Alves, F. L.; Alves, G. A.; Brito, L.; Hensel, C.; Moraes, A.; Pol, M. E.; Rebello Teles, P.; Belchior Batista Das Chagas, E.; Carvalho, W.; Chinellato, J.; Custódio, A.; Da Costa, E. M.; Da Silveira, G. G.; De Jesus Damiao, D.; De Oliveira Martins, C.; Fonseca De Souza, S.; Huertas Guativa, L. M.; Malbouisson, H.; Matos Figueiredo, D.; Mora Herrera, C.; Mundim, L.; Nogima, H.; Prado Da Silva, W. L.; Santoro, A.; Sznajder, A.; Tonelli Manganote, E. J.; Vilela Pereira, A.; Ahuja, S.; Bernardes, C. A.; Dogra, S.; Fernandez Perez Tomei, T. R.; Gregores, E. M.; Mercadante, P. G.; Moon, C. S.; Novaes, S. F.; Padula, Sandra S.; Romero Abad, D.; Ruiz Vargas, J. C.; Aleksandrov, A.; Hadjiiska, R.; Iaydjiev, P.; Rodozov, M.; Stoykova, S.; Sultanov, G.; Vutova, M.; Dimitrov, A.; Glushkov, I.; Litov, L.; Pavlov, B.; Petkov, P.; Fang, W.; Ahmad, M.; Bian, J. G.; Chen, G. M.; Chen, H. S.; Chen, M.; Chen, Y.; Cheng, T.; Jiang, C. H.; Leggat, D.; Liu, Z.; Romeo, F.; Shaheen, S. M.; Spiezia, A.; Tao, J.; Wang, C.; Wang, Z.; Zhang, H.; Zhao, J.; Ban, Y.; Chen, G.; Li, Q.; Liu, S.; Mao, Y.; Qian, S. J.; Wang, D.; Xu, Z.; Avila, C.; Cabrera, A.; Chaparro Sierra, L. F.; Florez, C.; Gomez, J. P.; González Hernández, C. F.; Ruiz Alvarez, J. D.; Sanabria, J. C.; Godinovic, N.; Lelas, D.; Puljak, I.; Ribeiro Cipriano, P. M.; Sculac, T.; Antunovic, Z.; Kovac, M.; Brigljevic, V.; Ferencek, D.; Kadija, K.; Micanovic, S.; Sudic, L.; Susa, T.; Attikis, A.; Mavromanolakis, G.; Mousa, J.; Nicolaou, C.; Ptochos, F.; Razis, P. A.; Rykaczewski, H.; Finger, M.; Finger, M.; Carrera Jarrin, E.; Abdelalim, A. 
A.; Mohammed, Y.; Salama, E.; Calpas, B.; Kadastik, M.; Murumaa, M.; Perrini, L.; Raidal, M.; Tiko, A.; Veelken, C.; Eerola, P.; Pekkanen, J.; Voutilainen, M.; Härkönen, J.; Karimäki, V.; Kinnunen, R.; Lampén, T.; Lassila-Perini, K.; Lehti, S.; Lindén, T.; Luukka, P.; Tuominiemi, J.; Tuovinen, E.; Wendland, L.; Talvitie, J.; Tuuva, T.; Besancon, M.; Couderc, F.; Dejardin, M.; Denegri, D.; Fabbro, B.; Faure, J. L.; Favaro, C.; Ferri, F.; Ganjour, S.; Ghosh, S.; Givernaud, A.; Gras, P.; Hamel de Monchenault, G.; Jarry, P.; Kucher, I.; Locci, E.; Machet, M.; Malcles, J.; Rander, J.; Rosowsky, A.; Titov, M.; Zghiche, A.; Abdulsalam, A.; Antropov, I.; Baffioni, S.; Beaudette, F.; Busson, P.; Cadamuro, L.; Chapon, E.; Charlot, C.; Davignon, O.; Granier de Cassagnac, R.; Jo, M.; Lisniak, S.; Miné, P.; Nguyen, M.; Ochando, C.; Ortona, G.; Paganini, P.; Pigard, P.; Regnard, S.; Salerno, R.; Sirois, Y.; Strebler, T.; Yilmaz, Y.; Zabi, A.; Agram, J.-L.; Andrea, J.; Aubin, A.; Bloch, D.; Brom, J.-M.; Buttignol, M.; Chabert, E. C.; Chanon, N.; Collard, C.; Conte, E.; Coubez, X.; Fontaine, J.-C.; Gelé, D.; Goerlach, U.; Le Bihan, A.-C.; Skovpen, K.; Van Hove, P.; Gadrat, S.; Beauceron, S.; Bernet, C.; Boudoul, G.; Bouvier, E.; Carrillo Montoya, C. A.; Chierici, R.; Contardo, D.; Courbon, B.; Depasse, P.; El Mamouni, H.; Fan, J.; Fay, J.; Gascon, S.; Gouzevitch, M.; Grenier, G.; Ille, B.; Lagarde, F.; Laktineh, I. B.; Lethuillier, M.; Mirabito, L.; Pequegnot, A. L.; Perries, S.; Popov, A.; Sabes, D.; Sordini, V.; Vander Donckt, M.; Verdier, P.; Viret, S.; Toriashvili, T.; Tsamalaidze, Z.; Autermann, C.; Beranek, S.; Feld, L.; Heister, A.; Kiesel, M. K.; Klein, K.; Lipinski, M.; Ostapchuk, A.; Preuten, M.; Raupach, F.; Schael, S.; Schomakers, C.; Schulte, J. F.; Schulz, J.; Verlage, T.; Weber, H.; Zhukov, V.; Albert, A.; Brodski, M.; Dietz-Laursonn, E.; Duchardt, D.; Endres, M.; Erdmann, M.; Erdweg, S.; Esch, T.; Fischer, R.; Güth, A.; Hamer, M.; Hebbeker, T.; Heidemann, C.; Hoepfner, K.; Knutzen, S.; Merschmeyer, M.; Meyer, A.; Millet, P.; Mukherjee, S.; Olschewski, M.; Padeken, K.; Pook, T.; Radziej, M.; Reithler, H.; Rieger, M.; Scheuch, F.; Sonnenschein, L.; Teyssier, D.; Thüer, S.; Cherepanov, V.; Flügge, G.; Haj Ahmad, W.; Hoehle, F.; Kargoll, B.; Kress, T.; Künsken, A.; Lingemann, J.; Müller, T.; Nehrkorn, A.; Nowack, A.; Nugent, I. M.; Pistone, C.; Pooth, O.; Stahl, A.; Aldaya Martin, M.; Asawatangtrakuldee, C.; Beernaert, K.; Behnke, O.; Behrens, U.; Bin Anuar, A. A.; Borras, K.; Campbell, A.; Connor, P.; Contreras-Campana, C.; Costanza, F.; Diez Pardos, C.; Dolinska, G.; Eckerlin, G.; Eckstein, D.; Eichhorn, T.; Eren, E.; Gallo, E.; Garay Garcia, J.; Geiser, A.; Gizhko, A.; Grados Luyando, J. M.; Gunnellini, P.; Harb, A.; Hauk, J.; Hempel, M.; Jung, H.; Kalogeropoulos, A.; Karacheban, O.; Kasemann, M.; Keaveney, J.; Kleinwort, C.; Korol, I.; Krücker, D.; Lange, W.; Lelek, A.; Leonard, J.; Lipka, K.; Lobanov, A.; Lohmann, W.; Mankel, R.; Melzer-Pellmann, I.-A.; Meyer, A. B.; Mittag, G.; Mnich, J.; Mussgiller, A.; Ntomari, E.; Pitzl, D.; Placakyte, R.; Raspereza, A.; Roland, B.; Sahin, M. Ö.; Saxena, P.; Schoerner-Sadenius, T.; Seitz, C.; Spannagel, S.; Stefaniuk, N.; Van Onsem, G. P.; Walsh, R.; Wissing, C.; Blobel, V.; Centis Vignali, M.; Draeger, A. 
R.; Dreyer, T.; Garutti, E.; Gonzalez, D.; Haller, J.; Hoffmann, M.; Junkes, A.; Klanner, R.; Kogler, R.; Kovalchuk, N.; Lapsien, T.; Lenz, T.; Marchesini, I.; Marconi, D.; Meyer, M.; Niedziela, M.; Nowatschin, D.; Pantaleo, F.; Peiffer, T.; Perieanu, A.; Poehlsen, J.; Sander, C.; Scharf, C.; Schleper, P.; Schmidt, A.; Schumann, S.; Schwandt, J.; Stadie, H.; Steinbrück, G.; Stober, F. M.; Stöver, M.; Tholen, H.; Troendle, D.; Usai, E.; Vanelderen, L.; Vanhoefer, A.; Vormwald, B.; Barth, C.; Baus, C.; Berger, J.; Butz, E.; Chwalek, T.; Colombo, F.; De Boer, W.; Dierlamm, A.; Fink, S.; Friese, R.; Giffels, M.; Gilbert, A.; Goldenzweig, P.; Haitz, D.; Hartmann, F.; Heindl, S. M.; Husemann, U.; Katkov, I.; Lobelle Pardo, P.; Maier, B.; Mildner, H.; Mozer, M. U.; Müller, Th.; Plagge, M.; Quast, G.; Rabbertz, K.; Röcker, S.; Roscher, F.; Schröder, M.; Shvetsov, I.; Sieber, G.; Simonis, H. J.; Ulrich, R.; Wagner-Kuhr, J.; Wayand, S.; Weber, M.; Weiler, T.; Williamson, S.; Wöhrmann, C.; Wolf, R.; Anagnostou, G.; Daskalakis, G.; Geralis, T.; Giakoumopoulou, V. A.; Kyriakis, A.; Loukas, D.; Topsis-Giotis, I.; Kesisoglou, S.; Panagiotou, A.; Saoulidou, N.; Tziaferi, E.; Evangelou, I.; Flouris, G.; Foudas, C.; Kokkas, P.; Loukas, N.; Manthos, N.; Papadopoulos, I.; Paradas, E.; Filipovic, N.; Bencze, G.; Hajdu, C.; Hidas, P.; Horvath, D.; Sikler, F.; Veszpremi, V.; Vesztergombi, G.; Zsigmond, A. J.; Beni, N.; Czellar, S.; Karancsi, J.; Makovec, A.; Molnar, J.; Szillasi, Z.; Bartók, M.; Raics, P.; Trocsanyi, Z. L.; Ujvari, B.; Bahinipati, S.; Choudhury, S.; Mal, P.; Mandal, K.; Nayak, A.; Sahoo, D. K.; Sahoo, N.; Swain, S. K.; Bansal, S.; Beri, S. B.; Bhatnagar, V.; Chawla, R.; Bhawandeep, U.; Kalsi, A. K.; Kaur, A.; Kaur, M.; Kumar, R.; Kumari, P.; Mehta, A.; Mittal, M.; Singh, J. B.; Walia, G.; Kumar, Ashok; Bhardwaj, A.; Choudhary, B. C.; Garg, R. B.; Keshri, S.; Malhotra, S.; Naimuddin, M.; Nishu, N.; Ranjan, K.; Sharma, R.; Sharma, V.; Bhattacharya, R.; Bhattacharya, S.; Chatterjee, K.; Dey, S.; Dutt, S.; Dutta, S.; Ghosh, S.; Majumdar, N.; Modak, A.; Mondal, K.; Mukhopadhyay, S.; Nandan, S.; Purohit, A.; Roy, A.; Roy, D.; Roy Chowdhury, S.; Sarkar, S.; Sharan, M.; Thakur, S.; Behera, P. K.; Chudasama, R.; Dutta, D.; Jha, V.; Kumar, V.; Mohanty, A. K.; Netrakanti, P. K.; Pant, L. M.; Shukla, P.; Topkar, A.; Aziz, T.; Dugad, S.; Kole, G.; Mahakud, B.; Mitra, S.; Mohanty, G. B.; Parida, B.; Sur, N.; Sutar, B.; Banerjee, S.; Bhowmik, S.; Dewanjee, R. K.; Ganguly, S.; Guchait, M.; Jain, Sa.; Kumar, S.; Maity, M.; Majumder, G.; Mazumdar, K.; Sarkar, T.; Wickramage, N.; Chauhan, S.; Dube, S.; Hegde, V.; Kapoor, A.; Kothekar, K.; Rane, A.; Sharma, S.; Behnamian, H.; Chenarani, S.; Eskandari Tadavani, E.; Etesami, S. M.; Fahim, A.; Khakzad, M.; Mohammadi Najafabadi, M.; Naseri, M.; Paktinat Mehdiabadi, S.; Rezaei Hosseinabadi, F.; Safarzadeh, B.; Zeinali, M.; Felcini, M.; Grunewald, M.; Abbrescia, M.; Calabria, C.; Caputo, C.; Colaleo, A.; Creanza, D.; Cristella, L.; De Filippis, N.; De Palma, M.; Fiore, L.; Iaselli, G.; Maggi, G.; Maggi, M.; Miniello, G.; My, S.; Nuzzo, S.; Pompili, A.; Pugliese, G.; Radogna, R.; Ranieri, A.; Selvaggi, G.; Silvestris, L.; Venditti, R.; Verwilligen, P.; Abbiendi, G.; Battilana, C.; Bonacorsi, D.; Braibant-Giacomelli, S.; Brigliadori, L.; Campanini, R.; Capiluppi, P.; Castro, A.; Cavallo, F. R.; Chhibra, S. S.; Codispoti, G.; Cuffiani, M.; Dallavalle, G. 
M.; Fabbri, F.; Fanfani, A.; Fasanella, D.; Giacomelli, P.; Grandi, C.; Guiducci, L.; Marcellini, S.; Masetti, G.; Montanari, A.; Navarria, F. L.; Perrotta, A.; Rossi, A. M.; Rovelli, T.; Siroli, G. P.; Tosi, N.; Albergo, S.; Chiorboli, M.; Costa, S.; Di Mattia, A.; Giordano, F.; Potenza, R.; Tricomi, A.; Tuve, C.; Barbagli, G.; Ciulli, V.; Civinini, C.; D'Alessandro, R.; Focardi, E.; Gori, V.; Lenzi, P.; Meschini, M.; Paoletti, S.; Sguazzoni, G.; Viliani, L.; Benussi, L.; Bianco, S.; Fabbri, F.; Piccolo, D.; Primavera, F.; Calvelli, V.; Ferro, F.; Lo Vetere, M.; Monge, M. R.; Robutti, E.; Tosi, S.; Brianza, L.; Dinardo, M. E.; Fiorendi, S.; Gennai, S.; Ghezzi, A.; Govoni, P.; Malberti, M.; Malvezzi, S.; Manzoni, R. A.; Marzocchi, B.; Menasce, D.; Moroni, L.; Paganoni, M.; Pedrini, D.; Pigazzini, S.; Ragazzi, S.; Tabarelli de Fatis, T.; Buontempo, S.; Cavallo, N.; De Nardo, G.; Di Guida, S.; Esposito, M.; Fabozzi, F.; Iorio, A. O. M.; Lanza, G.; Lista, L.; Meola, S.; Paolucci, P.; Sciacca, C.; Thyssen, F.; Azzi, P.; Bacchetta, N.; Benato, L.; Bisello, D.; Boletti, A.; Carlin, R.; Carvalho Antunes De Oliveira, A.; Checchia, P.; Dall'Osso, M.; De Castro Manzano, P.; Dorigo, T.; Dosselli, U.; Gasparini, F.; Gasparini, U.; Gozzelino, A.; Lacaprara, S.; Margoni, M.; Meneguzzo, A. T.; Pazzini, J.; Pozzobon, N.; Ronchese, P.; Simonetto, F.; Torassa, E.; Zanetti, M.; Zotto, P.; Zucchetta, A.; Zumerle, G.; Braghieri, A.; Magnani, A.; Montagna, P.; Ratti, S. P.; Re, V.; Riccardi, C.; Salvini, P.; Vai, I.; Vitulo, P.; Alunni Solestizi, L.; Bilei, G. M.; Ciangottini, D.; Fanò, L.; Lariccia, P.; Leonardi, R.; Mantovani, G.; Menichelli, M.; Saha, A.; Santocchia, A.; Androsov, K.; Azzurri, P.; Bagliesi, G.; Bernardini, J.; Boccali, T.; Castaldi, R.; Ciocci, M. A.; Dell'Orso, R.; Donato, S.; Fedi, G.; Giassi, A.; Grippo, M. T.; Ligabue, F.; Lomtadze, T.; Martini, L.; Messineo, A.; Palla, F.; Rizzi, A.; Savoy-Navarro, A.; Spagnolo, P.; Tenchini, R.; Tonelli, G.; Venturi, A.; Verdini, P. G.; Barone, L.; Cavallari, F.; Cipriani, M.; D'imperio, G.; Del Re, D.; Diemoz, M.; Gelli, S.; Longo, E.; Margaroli, F.; Meridiani, P.; Organtini, G.; Paramatti, R.; Preiato, F.; Rahatlou, S.; Rovelli, C.; Santanastasio, F.; Amapane, N.; Arcidiacono, R.; Argiro, S.; Arneodo, M.; Bartosik, N.; Bellan, R.; Biino, C.; Cartiglia, N.; Costa, M.; Covarelli, R.; Degano, A.; Demaria, N.; Finco, L.; Kiani, B.; Mariotti, C.; Maselli, S.; Mazza, G.; Migliore, E.; Monaco, V.; Monteil, E.; Obertino, M. M.; Pacher, L.; Pastrone, N.; Pelliccioni, M.; Pinna Angioni, G. L.; Ravera, F.; Romero, A.; Rotondo, F.; Ruspa, M.; Sacchi, R.; Sola, V.; Solano, A.; Staiano, A.; Traczyk, P.; Belforte, S.; Casarsa, M.; Cossutti, F.; Della Ricca, G.; La Licata, C.; Schizzi, A.; Zanetti, A.; Kim, D. H.; Kim, G. N.; Kim, M. S.; Lee, S.; Lee, S. W.; Oh, Y. D.; Sekmen, S.; Son, D. C.; Yang, Y. C.; Lee, A.; Kim, H.; Brochero Cifuentes, J. A.; Kim, T. J.; Cho, S.; Choi, S.; Go, Y.; Gyun, D.; Ha, S.; Hong, B.; Jo, Y.; Kim, Y.; Lee, B.; Lee, K.; Lee, K. S.; Lee, S.; Lim, J.; Park, S. K.; Roh, Y.; Almond, J.; Kim, J.; Lee, H.; Oh, S. B.; Radburn-Smith, B. C.; Seo, S. h.; Yang, U. K.; Yoo, H. D.; Yu, G. B.; Choi, M.; Kim, H.; Kim, J. H.; Lee, J. S. H.; Park, I. C.; Ryu, G.; Ryu, M. S.; Choi, Y.; Goh, J.; Hwang, C.; Lee, J.; Yu, I.; Dudenas, V.; Juodagalvis, A.; Vaitkus, J.; Ahmed, I.; Ibrahim, Z. A.; Komaragiri, J. R.; Md Ali, M. A. B.; Mohamad Idris, F.; Wan Abdullah, W. A. T.; Yusli, M. 
N.; Zolkapli, Z.; Castilla-Valdez, H.; De La Cruz-Burelo, E.; Heredia-De La Cruz, I.; Hernandez-Almada, A.; Lopez-Fernandez, R.; Magaña Villalba, R.; Mejia Guisao, J.; Sanchez-Hernandez, A.; Carrillo Moreno, S.; Oropeza Barrera, C.; Vazquez Valencia, F.; Carpinteyro, S.; Pedraza, I.; Salazar Ibarguen, H. A.; Uribe Estrada, C.; Morelos Pineda, A.; Krofcheck, D.; Butler, P. H.; Ahmad, A.; Ahmad, M.; Hassan, Q.; Hoorani, H. R.; Khan, W. A.; Saddique, A.; Shah, M. A.; Shoaib, M.; Waqas, M.; Bialkowska, H.; Bluj, M.; Boimska, B.; Frueboes, T.; Górski, M.; Kazana, M.; Nawrocki, K.; Romanowska-Rybinska, K.; Szleper, M.; Zalewski, P.; Bunkowski, K.; Byszuk, A.; Doroba, K.; Kalinowski, A.; Konecki, M.; Krolikowski, J.; Misiura, M.; Olszewski, M.; Walczak, M.; Bargassa, P.; Beirão Da Cruz E Silva, C.; Di Francesco, A.; Faccioli, P.; Ferreira Parracho, P. G.; Gallinaro, M.; Hollar, J.; Leonardo, N.; Lloret Iglesias, L.; Nemallapudi, M. V.; Rodrigues Antunes, J.; Seixas, J.; Toldaiev, O.; Vadruccio, D.; Varela, J.; Vischia, P.; Belotelov, I.; Bunin, P.; Golutvin, I.; Gorbunov, I.; Karjavin, V.; Kozlov, G.; Lanev, A.; Malakhov, A.; Matveev, V.; Moisenz, P.; Palichik, V.; Perelygin, V.; Savina, M.; Shmatov, S.; Shulha, S.; Skatchkov, N.; Smirnov, V.; Voytishin, N.; Zarubin, A.; Chtchipounov, L.; Golovtsov, V.; Ivanov, Y.; Kim, V.; Kuznetsova, E.; Murzin, V.; Oreshkin, V.; Sulimov, V.; Vorobyev, A.; Andreev, Yu.; Dermenev, A.; Gninenko, S.; Golubev, N.; Karneyeu, A.; Kirsanov, M.; Krasnikov, N.; Pashenkov, A.; Tlisov, D.; Toropin, A.; Epshteyn, V.; Gavrilov, V.; Lychkovskaya, N.; Popov, V.; Pozdnyakov, I.; Safronov, G.; Spiridonov, A.; Toms, M.; Vlasov, E.; Zhokin, A.; Bylinkin, A.; Chistov, R.; Danilov, M.; Rusinov, V.; Andreev, V.; Azarkin, M.; Dremin, I.; Kirakosyan, M.; Leonidov, A.; Rusakov, S. V.; Terkulov, A.; Baskakov, A.; Belyaev, A.; Boos, E.; Bunichev, V.; Dubinin, M.; Dudko, L.; Klyukhin, V.; Kodolova, O.; Lokhtin, I.; Miagkov, I.; Obraztsov, S.; Perfilov, M.; Petrushanko, S.; Savrin, V.; Snigirev, A.; Blinov, V.; Skovpen, Y.; Azhgirey, I.; Bayshev, I.; Bitioukov, S.; Elumakhov, D.; Kachanov, V.; Kalinin, A.; Konstantinov, D.; Krychkine, V.; Petrov, V.; Ryutin, R.; Sobol, A.; Troshin, S.; Tyurin, N.; Uzunian, A.; Volkov, A.; Adzic, P.; Cirkovic, P.; Devetak, D.; Dordevic, M.; Milosevic, J.; Rekovic, V.; Alcaraz Maestre, J.; Barrio Luna, M.; Calvo, E.; Cerrada, M.; Chamizo Llatas, M.; Colino, N.; De La Cruz, B.; Delgado Peris, A.; Escalante Del Valle, A.; Fernandez Bedoya, C.; Fernández Ramos, J. P.; Flix, J.; Fouz, M. C.; Garcia-Abia, P.; Gonzalez Lopez, O.; Goy Lopez, S.; Hernandez, J. M.; Josa, M. I.; Navarro De Martino, E.; Pérez-Calero Yzquierdo, A.; Puerta Pelayo, J.; Quintario Olmeda, A.; Redondo, I.; Romero, L.; Soares, M. S.; de Trocóniz, J. F.; Missiroli, M.; Moran, D.; Cuevas, J.; Fernandez Menendez, J.; Gonzalez Caballero, I.; González Fernández, J. R.; Palencia Cortezon, E.; Sanchez Cruz, S.; Suárez Andrés, I.; Vizan Garcia, J. M.; Cabrillo, I. J.; Calderon, A.; Castiñeiras De Saa, J. R.; Curras, E.; Fernandez, M.; Garcia-Ferrero, J.; Gomez, G.; Lopez Virto, A.; Marco, J.; Martinez Rivero, C.; Matorras, F.; Piedra Gomez, J.; Rodrigo, T.; Ruiz-Jimeno, A.; Scodellaro, L.; Trevisani, N.; Vila, I.; Vilar Cortabitarte, R.; Abbaneo, D.; Auffray, E.; Auzinger, G.; Bachtis, M.; Baillon, P.; Ball, A. 
H.; Barney, D.; Bloch, P.; Bocci, A.; Bonato, A.; Botta, C.; Camporesi, T.; Castello, R.; Cepeda, M.; Cerminara, G.; D'Alfonso, M.; d'Enterria, D.; Dabrowski, A.; Daponte, V.; David, A.; De Gruttola, M.; De Roeck, A.; Di Marco, E.; Dobson, M.; Dorney, B.; du Pree, T.; Duggan, D.; Dünser, M.; Dupont, N.; Elliott-Peisert, A.; Fartoukh, S.; Franzoni, G.; Fulcher, J.; Funk, W.; Gigi, D.; Gill, K.; Girone, M.; Glege, F.; Gulhan, D.; Gundacker, S.; Guthoff, M.; Hammer, J.; Harris, P.; Hegeman, J.; Innocente, V.; Janot, P.; Kieseler, J.; Kirschenmann, H.; Knünz, V.; Kornmayer, A.; Kortelainen, M. J.; Kousouris, K.; Krammer, M.; Lange, C.; Lecoq, P.; Lourenço, C.; Lucchini, M. T.; Malgeri, L.; Mannelli, M.; Martelli, A.; Meijers, F.; Merlin, J. A.; Mersi, S.; Meschi, E.; Moortgat, F.; Morovic, S.; Mulders, M.; Neugebauer, H.; Orfanelli, S.; Orsini, L.; Pape, L.; Perez, E.; Peruzzi, M.; Petrilli, A.; Petrucciani, G.; Pfeiffer, A.; Pierini, M.; Racz, A.; Reis, T.; Rolandi, G.; Rovere, M.; Ruan, M.; Sakulin, H.; Sauvan, J. B.; Schäfer, C.; Schwick, C.; Seidel, M.; Sharma, A.; Silva, P.; Sphicas, P.; Steggemann, J.; Stoye, M.; Takahashi, Y.; Tosi, M.; Treille, D.; Triossi, A.; Tsirou, A.; Veckalns, V.; Veres, G. I.; Wardle, N.; Wöhri, H. K.; Zagozdzinska, A.; Zeuner, W. D.; Bertl, W.; Deiters, K.; Erdmann, W.; Horisberger, R.; Ingram, Q.; Kaestli, H. C.; Kotlinski, D.; Langenegger, U.; Rohe, T.; Bachmair, F.; Bäni, L.; Bianchini, L.; Casal, B.; Dissertori, G.; Dittmar, M.; Donegà, M.; Grab, C.; Heidegger, C.; Hits, D.; Hoss, J.; Kasieczka, G.; Lecomte, P.; Lustermann, W.; Mangano, B.; Marionneau, M.; Martinez Ruiz del Arbol, P.; Masciovecchio, M.; Meinhard, M. T.; Meister, D.; Micheli, F.; Musella, P.; Nessi-Tedaldi, F.; Pandolfi, F.; Pata, J.; Pauss, F.; Perrin, G.; Perrozzi, L.; Quittnat, M.; Rossini, M.; Schönenberger, M.; Starodumov, A.; Tavolaro, V. R.; Theofilatos, K.; Wallny, R.; Aarrestad, T. K.; Amsler, C.; Caminada, L.; Canelli, M. F.; De Cosa, A.; Galloni, C.; Hinzmann, A.; Hreus, T.; Kilminster, B.; Ngadiuba, J.; Pinna, D.; Rauco, G.; Robmann, P.; Salerno, D.; Yang, Y.; Candelise, V.; Doan, T. H.; Jain, Sh.; Khurana, R.; Konyushikhin, M.; Kuo, C. M.; Lin, W.; Lu, Y. J.; Pozdnyakov, A.; Yu, S. S.; Kumar, Arun; Chang, P.; Chang, Y. H.; Chang, Y. W.; Chao, Y.; Chen, K. F.; Chen, P. H.; Dietz, C.; Fiori, F.; Hou, W.-S.; Hsiung, Y.; Liu, Y. F.; Lu, R.-S.; Miñano Moya, M.; Paganis, E.; Psallidas, A.; Tsai, J. f.; Tzeng, Y. M.; Asavapibhop, B.; Singh, G.; Srimanobhas, N.; Suwonjandee, N.; Adiguzel, A.; Cerci, S.; Damarseckin, S.; Demiroglu, Z. S.; Dozen, C.; Dumanoglu, I.; Girgis, S.; Gokbulut, G.; Guler, Y.; Hos, I.; Kangal, E. E.; Kara, O.; Kayis Topaksu, A.; Kiminsu, U.; Oglakci, M.; Onengut, G.; Ozdemir, K.; Sunar Cerci, D.; Topakli, H.; Turkcapar, S.; Zorbakir, I. S.; Zorbilmez, C.; Bilin, B.; Bilmis, S.; Isildak, B.; Karapinar, G.; Yalvac, M.; Zeyrek, M.; Gülmez, E.; Kaya, M.; Kaya, O.; Yetkin, E. A.; Yetkin, T.; Cakir, A.; Cankocak, K.; Sen, S.; Grynyov, B.; Levchuk, L.; Sorokin, P.; Aggleton, R.; Ball, F.; Beck, L.; Brooke, J. J.; Burns, D.; Clement, E.; Cussans, D.; Flacher, H.; Goldstein, J.; Grimes, M.; Heath, G. P.; Heath, H. F.; Jacob, J.; Kreczko, L.; Lucas, C.; Newbold, D. M.; Paramesvaran, S.; Poll, A.; Sakuma, T.; Seif El Nasr-storey, S.; Smith, D.; Smith, V. J.; Bell, K. W.; Belyaev, A.; Brew, C.; Brown, R. M.; Calligaris, L.; Cieri, D.; Cockerill, D. J. A.; Coughlan, J. A.; Harder, K.; Harper, S.; Olaiya, E.; Petyt, D.; Shepherd-Themistocleous, C. H.; Thea, A.; Tomalin, I. 
R.; Williams, T.; Baber, M.; Bainbridge, R.; Buchmuller, O.; Bundock, A.; Burton, D.; Casasso, S.; Citron, M.; Colling, D.; Corpe, L.; Dauncey, P.; Davies, G.; De Wit, A.; Della Negra, M.; Di Maria, R.; Dunne, P.; Elwood, A.; Futyan, D.; Haddad, Y.; Hall, G.; Iles, G.; James, T.; Lane, R.; Laner, C.; Lucas, R.; Lyons, L.; Magnan, A.-M.; Malik, S.; Mastrolorenzo, L.; Nash, J.; Nikitenko, A.; Pela, J.; Penning, B.; Pesaresi, M.; Raymond, D. M.; Richards, A.; Rose, A.; Seez, C.; Summers, S.; Tapper, A.; Uchida, K.; Vazquez Acosta, M.; Virdee, T.; Wright, J.; Zenz, S. C.; Cole, J. E.; Hobson, P. R.; Khan, A.; Kyberd, P.; Leslie, D.; Reid, I. D.; Symonds, P.; Teodorescu, L.; Turner, M.; Borzou, A.; Call, K.; Dittmann, J.; Hatakeyama, K.; Liu, H.; Pastika, N.; Charaf, O.; Cooper, S. I.; Henderson, C.; Rumerio, P.; West, C.; Arcaro, D.; Avetisyan, A.; Bose, T.; Gastler, D.; Rankin, D.; Richardson, C.; Rohlf, J.; Sulak, L.; Zou, D.; Benelli, G.; Berry, E.; Cutts, D.; Garabedian, A.; Hakala, J.; Heintz, U.; Hogan, J. M.; Jesus, O.; Laird, E.; Landsberg, G.; Mao, Z.; Narain, M.; Piperov, S.; Sagir, S.; Spencer, E.; Syarif, R.; Breedon, R.; Breto, G.; Burns, D.; Calderon De La Barca Sanchez, M.; Chauhan, S.; Chertok, M.; Conway, J.; Conway, R.; Cox, P. T.; Erbacher, R.; Flores, C.; Funk, G.; Gardner, M.; Ko, W.; Lander, R.; Mclean, C.; Mulhearn, M.; Pellett, D.; Pilot, J.; Shalhout, S.; Smith, J.; Squires, M.; Stolp, D.; Tripathi, M.; Wilbur, S.; Yohay, R.; Cousins, R.; Everaerts, P.; Florent, A.; Hauser, J.; Ignatenko, M.; Saltzberg, D.; Takasugi, E.; Valuev, V.; Weber, M.; Burt, K.; Clare, R.; Ellison, J.; Gary, J. W.; Ghiasi Shirazi, S. M. A.; Hanson, G.; Heilman, J.; Jandir, P.; Kennedy, E.; Lacroix, F.; Long, O. R.; Olmedo Negrete, M.; Paneva, M. I.; Shrinivas, A.; Si, W.; Wei, H.; Wimpenny, S.; Yates, B. R.; Branson, J. G.; Cerati, G. B.; Cittolin, S.; Derdzinski, M.; Gerosa, R.; Holzner, A.; Klein, D.; Krutelyov, V.; Letts, J.; Olivito, D.; Padhi, S.; Pieri, M.; Sani, M.; Sharma, V.; Tadel, M.; Vartak, A.; Wasserbaech, S.; Welke, C.; Wood, J.; Würthwein, F.; Yagil, A.; Zevi Della Porta, G.; Bhandari, R.; Bradmiller-Feld, J.; Campagnari, C.; Dishaw, A.; Dutta, V.; Flowers, K.; Franco Sevilla, M.; Geffert, P.; George, C.; Golf, F.; Gouskos, L.; Gran, J.; Heller, R.; Incandela, J.; Mccoll, N.; Mullin, S. D.; Ovcharova, A.; Richman, J.; Stuart, D.; Suarez, I.; Yoo, J.; Anderson, D.; Apresyan, A.; Bendavid, J.; Bornheim, A.; Bunn, J.; Chen, Y.; Duarte, J.; Lawhorn, J. M.; Mott, A.; Newman, H. B.; Pena, C.; Spiropulu, M.; Vlimant, J. R.; Xie, S.; Zhu, R. Y.; Andrews, M. B.; Azzolini, V.; Ferguson, T.; Paulini, M.; Russ, J.; Sun, M.; Vogel, H.; Vorobiev, I.; Cumalat, J. P.; Ford, W. T.; Jensen, F.; Johnson, A.; Krohn, M.; Mulholland, T.; Stenson, K.; Wagner, S. R.; Alexander, J.; Chaves, J.; Chu, J.; Dittmer, S.; Mcdermott, K.; Mirman, N.; Nicolas Kaufman, G.; Patterson, J. R.; Rinkevicius, A.; Ryd, A.; Skinnari, L.; Soffi, L.; Tan, S. M.; Tao, Z.; Thom, J.; Tucker, J.; Wittich, P.; Zientek, M.; Winn, D.; Abdullin, S.; Albrow, M.; Apollinari, G.; Banerjee, S.; Bauerdick, L. A. T.; Beretvas, A.; Berryhill, J.; Bhat, P. C.; Bolla, G.; Burkett, K.; Butler, J. N.; Cheung, H. W. K.; Chlebana, F.; Cihangir, S.; Cremonesi, M.; Elvira, V. D.; Fisk, I.; Freeman, J.; Gottschalk, E.; Gray, L.; Green, D.; Grünendahl, S.; Gutsche, O.; Hare, D.; Harris, R. 
M.; Hasegawa, S.; Hirschauer, J.; Hu, Z.; Jayatilaka, B.; Jindariani, S.; Johnson, M.; Joshi, U.; Klima, B.; Kreis, B.; Lammel, S.; Linacre, J.; Lincoln, D.; Lipton, R.; Liu, M.; Liu, T.; Lopes De Sá, R.; Lykken, J.; Maeshima, K.; Magini, N.; Marraffino, J. M.; Maruyama, S.; Mason, D.; McBride, P.; Merkel, P.; Mrenna, S.; Nahn, S.; Newman-Holmes, C.; O'Dell, V.; Pedro, K.; Prokofyev, O.; Rakness, G.; Ristori, L.; Sexton-Kennedy, E.; Soha, A.; Spalding, W. J.; Spiegel, L.; Stoynev, S.; Strobbe, N.; Taylor, L.; Tkaczyk, S.; Tran, N. V.; Uplegger, L.; Vaandering, E. W.; Vernieri, C.; Verzocchi, M.; Vidal, R.; Wang, M.; Weber, H. A.; Whitbeck, A.; Acosta, D.; Avery, P.; Bortignon, P.; Bourilkov, D.; Brinkerhoff, A.; Carnes, A.; Carver, M.; Curry, D.; Das, S.; Field, R. D.; Furic, I. K.; Konigsberg, J.; Korytov, A.; Ma, P.; Matchev, K.; Mei, H.; Milenovic, P.; Mitselmakher, G.; Rank, D.; Shchutska, L.; Sperka, D.; Thomas, L.; Wang, J.; Wang, S.; Yelton, J.; Linn, S.; Markowitz, P.; Martinez, G.; Rodriguez, J. L.; Ackert, A.; Adams, J. R.; Adams, T.; Askew, A.; Bein, S.; Diamond, B.; Hagopian, S.; Hagopian, V.; Johnson, K. F.; Khatiwada, A.; Prosper, H.; Santra, A.; Weinberg, M.; Baarmand, M. M.; Bhopatkar, V.; Colafranceschi, S.; Hohlmann, M.; Noonan, D.; Roy, T.; Yumiceva, F.; Adams, M. R.; Apanasevich, L.; Berry, D.; Betts, R. R.; Bucinskaite, I.; Cavanaugh, R.; Evdokimov, O.; Gauthier, L.; Gerber, C. E.; Hofman, D. J.; Kurt, P.; O'Brien, C.; Sandoval Gonzalez, I. D.; Turner, P.; Varelas, N.; Wang, H.; Wu, Z.; Zakaria, M.; Zhang, J.; Bilki, B.; Clarida, W.; Dilsiz, K.; Durgut, S.; Gandrajula, R. P.; Haytmyradov, M.; Khristenko, V.; Merlo, J.-P.; Mermerkaya, H.; Mestvirishvili, A.; Moeller, A.; Nachtman, J.; Ogul, H.; Onel, Y.; Ozok, F.; Penzo, A.; Snyder, C.; Tiras, E.; Wetzel, J.; Yi, K.; Anderson, I.; Blumenfeld, B.; Cocoros, A.; Eminizer, N.; Fehling, D.; Feng, L.; Gritsan, A. V.; Maksimovic, P.; Martin, C.; Osherson, M.; Roskes, J.; Sarica, U.; Swartz, M.; Xiao, M.; Xin, Y.; You, C.; Al-bataineh, A.; Baringer, P.; Bean, A.; Boren, S.; Bowen, J.; Bruner, C.; Castle, J.; Forthomme, L.; Kenny, R. P., III; Kropivnitskaya, A.; Majumder, D.; Mcbrayer, W.; Murray, M.; Sanders, S.; Stringer, R.; Tapia Takaki, J. D.; Wang, Q.; Ivanov, A.; Kaadze, K.; Khalil, S.; Maravin, Y.; Mohammadi, A.; Saini, L. K.; Skhirtladze, N.; Toda, S.; Rebassoo, F.; Wright, D.; Anelli, C.; Baden, A.; Baron, O.; Belloni, A.; Calvert, B.; Eno, S. C.; Ferraioli, C.; Gomez, J. A.; Hadley, N. J.; Jabeen, S.; Kellogg, R. G.; Kolberg, T.; Kunkle, J.; Lu, Y.; Mignerey, A. C.; Ricci-Tam, F.; Shin, Y. H.; Skuja, A.; Tonjes, M. B.; Tonwar, S. C.; Abercrombie, D.; Allen, B.; Apyan, A.; Barbieri, R.; Baty, A.; Bi, R.; Bierwagen, K.; Brandt, S.; Busza, W.; Cali, I. A.; Demiragli, Z.; Di Matteo, L.; Gomez Ceballos, G.; Goncharov, M.; Hsu, D.; Iiyama, Y.; Innocenti, G. M.; Klute, M.; Kovalskyi, D.; Krajczar, K.; Lai, Y. S.; Lee, Y.-J.; Levin, A.; Luckey, P. D.; Marini, A. C.; Mcginn, C.; Mironov, C.; Narayanan, S.; Niu, X.; Paus, C.; Roland, C.; Roland, G.; Salfeld-Nebgen, J.; Stephans, G. S. F.; Sumorok, K.; Tatar, K.; Varma, M.; Velicanu, D.; Veverka, J.; Wang, J.; Wang, T. W.; Wyslouch, B.; Yang, M.; Zhukova, V.; Benvenuti, A. C.; Chatterjee, R. M.; Evans, A.; Finkel, A.; Gude, A.; Hansen, P.; Kalafut, S.; Kao, S. C.; Kubota, Y.; Lesko, Z.; Mans, J.; Nourbakhsh, S.; Ruckstuhl, N.; Rusack, R.; Tambe, N.; Turkewitz, J.; Acosta, J. G.; Oliveros, S.; Avdeeva, E.; Bartek, R.; Bloom, K.; Claes, D. 
R.; Dominguez, A.; Fangmeier, C.; Gonzalez Suarez, R.; Kamalieddin, R.; Kravchenko, I.; Malta Rodrigues, A.; Meier, F.; Monroy, J.; Siado, J. E.; Snow, G. R.; Stieger, B.; Alyari, M.; Dolen, J.; George, J.; Godshalk, A.; Harrington, C.; Iashvili, I.; Kaisen, J.; Kharchilava, A.; Kumar, A.; Parker, A.; Rappoccio, S.; Roozbahani, B.; Alverson, G.; Barberis, E.; Baumgartel, D.; Hortiangtham, A.; Massironi, A.; Morse, D. M.; Nash, D.; Orimoto, T.; Teixeira De Lima, R.; Trocino, D.; Wang, R.-J.; Wood, D.; Bhattacharya, S.; Hahn, K. A.; Kubik, A.; Kumar, A.; Low, J. F.; Mucia, N.; Odell, N.; Pollack, B.; Schmitt, M. H.; Sung, K.; Trovato, M.; Velasco, M.; Dev, N.; Hildreth, M.; Hurtado Anampa, K.; Jessop, C.; Karmgard, D. J.; Kellams, N.; Lannon, K.; Marinelli, N.; Meng, F.; Mueller, C.; Musienko, Y.; Planer, M.; Reinsvold, A.; Ruchti, R.; Smith, G.; Taroni, S.; Wayne, M.; Wolf, M.; Woodard, A.; Alimena, J.; Antonelli, L.; Brinson, J.; Bylsma, B.; Durkin, L. S.; Flowers, S.; Francis, B.; Hart, A.; Hill, C.; Hughes, R.; Ji, W.; Liu, B.; Luo, W.; Puigh, D.; Winer, B. L.; Wulsin, H. W.; Cooperstein, S.; Driga, O.; Elmer, P.; Hardenbrook, J.; Hebda, P.; Lange, D.; Luo, J.; Marlow, D.; Medvedeva, T.; Mei, K.; Mooney, M.; Olsen, J.; Palmer, C.; Piroué, P.; Stickland, D.; Tully, C.; Zuranski, A.; Malik, S.; Barker, A.; Barnes, V. E.; Folgueras, S.; Gutay, L.; Jha, M. K.; Jones, M.; Jung, A. W.; Jung, K.; Miller, D. H.; Neumeister, N.; Shi, X.; Sun, J.; Svyatkovskiy, A.; Wang, F.; Xie, W.; Xu, L.; Parashar, N.; Stupak, J.; Adair, A.; Akgun, B.; Chen, Z.; Ecklund, K. M.; Geurts, F. J. M.; Guilbaud, M.; Li, W.; Michlin, B.; Northup, M.; Padley, B. P.; Redjimi, R.; Roberts, J.; Rorie, J.; Tu, Z.; Zabel, J.; Betchart, B.; Bodek, A.; de Barbaro, P.; Demina, R.; Duh, Y. t.; Ferbel, T.; Galanti, M.; Garcia-Bellido, A.; Han, J.; Hindrichs, O.; Khukhunaishvili, A.; Lo, K. H.; Tan, P.; Verzetti, M.; Agapitos, A.; Chou, J. P.; Contreras-Campana, E.; Gershtein, Y.; Gómez Espinosa, T. A.; Halkiadakis, E.; Heindl, M.; Hidas, D.; Hughes, E.; Kaplan, S.; Kunnawalkam Elayavalli, R.; Kyriacou, S.; Lath, A.; Nash, K.; Saka, H.; Salur, S.; Schnetzer, S.; Sheffield, D.; Somalwar, S.; Stone, R.; Thomas, S.; Thomassen, P.; Walker, M.; Foerster, M.; Heideman, J.; Riley, G.; Rose, K.; Spanier, S.; Thapa, K.; Bouhali, O.; Celik, A.; Dalchenko, M.; De Mattia, M.; Delgado, A.; Dildick, S.; Eusebi, R.; Gilmore, J.; Huang, T.; Juska, E.; Kamon, T.; Mueller, R.; Pakhotin, Y.; Patel, R.; Perloff, A.; Perniè, L.; Rathjens, D.; Rose, A.; Safonov, A.; Tatarinov, A.; Ulmer, K. A.; Akchurin, N.; Cowden, C.; Damgov, J.; De Guio, F.; Dragoiu, C.; Dudero, P. R.; Faulkner, J.; Gurpinar, E.; Kunori, S.; Lamichhane, K.; Lee, S. W.; Libeiro, T.; Peltola, T.; Undleeb, S.; Volobouev, I.; Wang, Z.; Delannoy, A. G.; Greene, S.; Gurrola, A.; Janjam, R.; Johns, W.; Maguire, C.; Melo, A.; Ni, H.; Sheldon, P.; Tuo, S.; Velkovska, J.; Xu, Q.; Arenton, M. W.; Barria, P.; Cox, B.; Goodell, J.; Hirosky, R.; Ledovskoy, A.; Li, H.; Neu, C.; Sinthuprasith, T.; Sun, X.; Wang, Y.; Wolfe, E.; Xia, F.; Clarke, C.; Harr, R.; Karchin, P. E.; Lamichhane, P.; Sturdy, J.; Belknap, D. A.; Dasu, S.; Dodd, L.; Duric, S.; Gomber, B.; Grothe, M.; Herndon, M.; Hervé, A.; Klabbers, P.; Lanaro, A.; Levine, A.; Long, K.; Loveless, R.; Ojalvo, I.; Perry, T.; Pierro, G. A.; Polese, G.; Ruggles, T.; Savin, A.; Smith, N.; Smith, W. H.; Taylor, D.; Woods, N.; CMS Collaboration
2017-10-01
A search for heavy narrow resonances decaying into four-lepton final states has been performed using proton-proton collision data at √s = 8 TeV collected by the CMS experiment, corresponding to an integrated luminosity of 19.7 fb⁻¹. No excess of events over the standard model background expectation is observed. Upper limits for a benchmark model on the product of cross section and branching fraction for the production of these heavy narrow resonances are presented. The limit excludes leptophobic Z′ bosons with masses below 2.5 TeV within the benchmark model. This is the first result to constrain a leptophobic Z′ resonance in the four-lepton channel.
Patient safety culture: finding meaning in patient experiences.
Bishop, Andrea C; Cregan, Brianna R
2015-01-01
The purpose of this paper is to determine what patient and family stories can tell us about patient safety culture within health care organizations and how patients experience patient safety culture. A total of 11 patient and family stories of adverse event experiences were examined in September 2013 using publicly available videos on the Canadian Patient Safety Institute web site. Videos were transcribed verbatim and collated as one complete data set. Thematic analysis was used to perform qualitative inquiry. All qualitative analysis was done using NVivo 10 software. A total of three themes were identified: first, Being Passed Around; second, Not Having the Conversation; and third, the Person Behind the Patient. Results from this research also suggest that while health care organizations and providers might expect patients to play a larger role in managing their health, there may be underlying reasons as to why patients are not doing so. The findings indicate that patient experiences and narratives are useful sources of information to better understand organizational safety culture and patient experiences of safety while hospitalized. Greater inclusion and analysis of patient safety narratives is important in understanding the needs of patients and how patient safety culture interventions can be improved to ensure translation of patient safety strategies at the frontlines of care. Greater acknowledgement of the patient and family experience provides organizations with an integral perspective to assist in defining and addressing deficiencies within their patient safety culture and to identify opportunities for improvement.
Benchmarking specialty hospitals, a scoping review on theory and practice.
Wind, A; van Harten, W H
2017-04-04
Although benchmarking may improve hospital processes, research on this subject is limited. The aim of this study was to provide an overview of publications on benchmarking in specialty hospitals and a description of study characteristics. We searched PubMed and EMBASE for articles published in English in the last 10 years. Eligible articles described a project stating benchmarking as its objective and involving a specialty hospital or specific patient category, or dealt with the methodology or evaluation of benchmarking. Of 1,817 articles identified in total, 24 were included in the study. Articles were categorized into pathway benchmarking, institutional benchmarking, articles on benchmark methodology or evaluation, and benchmarking using a patient registry. There was a large degree of variability: (1) study designs were mostly descriptive and retrospective; (2) not all studies generated and showed data in sufficient detail; and (3) there was variety in whether a benchmarking model was merely described or whether quality improvement as a consequence of the benchmark was reported upon. Most of the studies that described a benchmark model used benchmarking partners from the same industry category, sometimes from all over the world. Benchmarking seems to be more developed in eye hospitals, emergency departments and oncology specialty hospitals. Some studies showed promising improvement effects. However, the majority of the articles lacked a structured design and did not report on benchmark outcomes. In order to evaluate the effectiveness of benchmarking in improving quality in specialty hospitals, robust and structured designs are needed, including a follow-up to check whether the benchmark study has led to improvements.
Can online benchmarking increase rates of thrombolysis? Data from the Austrian stroke unit registry.
Ferrari, Julia; Seyfang, Leonhard; Lang, Wilfried
2013-09-01
Despite its widespread availability and known safety and efficacy, intravenous thrombolysis is still underused. We aimed to identify whether nationwide quality projects, such as the stroke registry in Austria, together with online benchmarking and predefined target values, can increase rates of thrombolysis. We therefore assessed 6,394 out of 48,462 patients with ischemic stroke from the Austrian stroke registry (study period from March 2003 to December 2011) who had undergone thrombolysis treatment. We defined lower-level and target values as quality parameters and evaluated whether or not these parameters were achieved in the past years. We were able to show that rates of thrombolysis in Austria increased from 4.9% in 2003 to 18.3% in 2011. In a multivariate regression model, the main impact seen was the increase over the years (the OR ranges from 0.47 (95% CI 0.32-0.68) in 2003 to 2.51 (95% CI 2.20-2.87) in 2011). The predefined lower and target levels of thrombolysis were achieved at the majority of participating centers: in 2011 the lower value of 5% was achieved at all stroke units, and the target value of 15% was reached at 21 of 34 stroke units. We conclude that online benchmarking and the concept of defining target values as a tool for nationwide acute stroke care appear to have resulted in an increase in the rate of thrombolysis over the last few years, although the variability between stroke units has not yet been reduced.
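As a reader's note (the abstract does not state its model equation), year-specific odds ratios of this kind are typically obtained from a logistic regression of treatment status on calendar year and case-mix covariates; in the notation below, which is ours and not the paper's, the odds ratio for a given year is the exponentiated year coefficient:

\[
\log\frac{P(\mathrm{thrombolysis})}{1-P(\mathrm{thrombolysis})}
= \beta_0 + \beta_{\mathrm{year}} + \boldsymbol{\beta}^{\top}\mathbf{x},
\qquad
\mathrm{OR}_{\mathrm{year}} = e^{\beta_{\mathrm{year}}},
\]

so the reported rise from OR 0.47 in 2003 to OR 2.51 in 2011 corresponds to the year coefficient moving from below zero to well above zero relative to the reference year.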
DOE Office of Scientific and Technical Information (OSTI.GOV)
Pearce, R.J.H.; Bell, A.C.; Brennan, D.
'Trace Tritium Experiments' (TTE) were successfully performed on JET in 2003. The campaign marked the first use of tritium in JET plasmas since the Deuterium-Tritium Experiment (DTE1) campaign in 1997, and was the first use of tritium in experiments under the EFDA organisation with the UKAEA as JET Operator. The safety and regulatory preparations for the experiment were extensive. Since JET has been operated by the UKAEA, operations have followed the model of a licensed nuclear site. The safe operation of the JET torus is demonstrated in a safety case, and Key Safety Management Requirements (KSMR) and Key Safety Related Equipment (KSRE) are identified in the safety case for DT operation. The safe operation of the torus is within the bounds of, and under the control of, an Authority to Operate (ATO). New technical challenges were presented by the need to inject and account for small quantities of tritium in very short pulses (approximately 80 ms), with an accurate time stamp. The safety and operational management of the campaign are described. Valuable lessons were learned that will help in running future experiments. It is concluded that JET is in a strong position to run future trace tritium and full DT discharges.
Experiences using OpenMP based on Compiler Directed Software DSM on a PC Cluster
NASA Technical Reports Server (NTRS)
Hess, Matthias; Jost, Gabriele; Mueller, Matthias; Ruehle, Roland
2003-01-01
In this work we report on our experiences running OpenMP programs on a commodity cluster of PCs running a software distributed shared memory (DSM) system. We describe our test environment and report on the performance of a subset of the NAS Parallel Benchmarks that have been automatically parallelized for OpenMP. We compare the performance of the OpenMP implementations with that of their message passing counterparts and discuss performance differences.
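To make concrete what loop-level OpenMP parallelism looks like in this setting, here is a minimal, self-contained C sketch; it is our illustration, not code from the paper or the NAS suite. On a software DSM cluster, the runtime keeps the shared arrays coherent across nodes where hardware shared memory would otherwise do so:

/* Minimal OpenMP sketch: initialize two shared arrays in parallel,
 * then compute their dot product with a reduction. On a software
 * DSM system, 'a' and 'b' would live in DSM-managed shared pages. */
#include <stdio.h>
#include <omp.h>

#define N 1000000

static double a[N], b[N];

int main(void) {
    double sum = 0.0;

    #pragma omp parallel for          /* each thread fills a slice */
    for (int i = 0; i < N; i++) {
        a[i] = 0.5 * (double)i;
        b[i] = 2.0 * (double)i;
    }

    #pragma omp parallel for reduction(+:sum)  /* coherence traffic concentrates here */
    for (int i = 0; i < N; i++)
        sum += a[i] * b[i];

    printf("dot = %.3e, max threads = %d\n", sum, omp_get_max_threads());
    return 0;
}

Compiled with an OpenMP-capable compiler (e.g., cc -fopenmp), the same source runs unchanged on a hardware SMP or a DSM-backed cluster; that source-level portability is what makes performance comparisons against message passing interesting.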
Background evaluation for the neutron sources in the Daya Bay experiment
DOE Office of Scientific and Technical Information (OSTI.GOV)
Gu, W. Q.; Cao, G. F.; Chen, X. H.
2016-07-06
Here, we present an evaluation of the background induced by 241Am–13C neutron calibration sources in the Daya Bay reactor neutrino experiment. This source constitutes a significant background for electron-antineutrino detection, contributing 0.26 ± 0.12 events per detector per day on average, as estimated by a Monte Carlo simulation benchmarked against a special calibration data set. This dedicated data set also provides the energy spectrum of the background.
A CFD validation roadmap for hypersonic flows
NASA Technical Reports Server (NTRS)
Marvin, Joseph G.
1992-01-01
A roadmap for computational fluid dynamics (CFD) code validation is developed. The elements of the roadmap are consistent with air-breathing vehicle design requirements and related to the important flow path components: forebody, inlet, combustor, and nozzle. Building block and benchmark validation experiments are identified along with their test conditions and measurements. Based on evaluation criteria, recommendations for an initial CFD validation database are given, and gaps are identified where future experiments would provide the needed validation data.
Clinical Impact Research - how to choose experimental or observational intervention study?
Malmivaara, Antti
2016-11-01
Interventions directed to individuals by health and social care systems should increase the health and welfare of patients and customers. This paper aims to present and define a new concept, Clinical Impact Research (CIR), to suggest which study design, the randomized controlled trial (RCT) (experimental) or the benchmarking controlled trial (BCT) (observational), is recommendable, and to consider feasibility, validity, and generalizability issues in CIR. The new concept is based on a narrative review of the literature and on the author's idea that intervention studies need to cover comprehensively all the main impact categories and their respective outcomes. The considerations on how to choose the most appropriate study design (RCT or BCT) were based on previous methodological studies on RCTs and BCTs and on the author's previous work on the concepts of the benchmarking controlled trial and system impact research (SIR). CIR covers all studies aiming to assess the impact on health and welfare of any health (and integrated social) care or public health intervention directed to an individual. The impact categories are accessibility, quality, equality, effectiveness, safety, and efficiency. Impact is the main concept, and within each impact category, both generic and context-specific outcome measures are needed. CIR uses RCTs and BCTs. CIR should be given a high priority in medical, health care, and health economic research. Clinicians and leaders at all levels of health care can exploit the evidence from CIR. Key messages: The new concept of Clinical Impact Research (CIR) is defined as a research field aiming to assess the impacts of healthcare and public health interventions targeted to patients or individuals. The term impact refers to all effects caused by the interventions, with particular emphasis on accessibility, quality, equality, effectiveness, safety, and efficiency. CIR uses two study designs: randomized controlled trials (RCTs) (experimental) and benchmarking controlled trials (BCTs) (observational). Suggestions on how to choose between an RCT and a BCT as the most suitable study design are presented. A simple way of determining the study question in CIR based on the PICO (patient, intervention, control intervention, outcome) framework is presented. CIR creates the scientific basis for clinical decisions. Clinicians and leaders at all levels of health care and those working for public health can use the evidence from CIR for the benefit of patients and the population.
National Security Science and Technology Initiative: Air Cargo Screening
DOE Office of Scientific and Technical Information (OSTI.GOV)
Bingham, Philip R; White, Tim; Cespedes, Ernesto
The non-intrusive inspection (NII) of consolidated air cargo carried on commercial passenger aircraft continues to be a technically challenging, high-priority requirement of the Department of Homeland Security's Science and Technology Directorate (DHS S&T), the Transportation Security Agency and the Federal Aviation Administration. The goal of deploying a screening system that can reliably and cost-effectively detect explosive threats in consolidated cargo without adversely affecting the flow of commerce will require significant technical advances that will take years to develop. To address this critical National Security need, the Battelle Memorial Institute (Battelle), under a Cooperative Research and Development Agreement (CRADA) with four of its associated US Department of Energy (DOE) National Laboratories (Oak Ridge, Pacific Northwest, Idaho, and Brookhaven), conducted a research and development initiative focused on identifying, evaluating, and integrating technologies for screening consolidated air cargo for the presence of explosive threats. Battelle invested $8.5M of internal research and development funds during fiscal years 2007 through 2009. The primary results of this effort are described in this document and can be summarized as follows: (1) Completed a gap analysis that identified threat signatures and observables, candidate technologies for detection, their current state of development, and provided recommendations for improvements to meet air cargo screening requirements. (2) Defined a Commodity/Threat/Detection matrix that focuses modeling and experimental efforts, identifies technology gaps and game-changing opportunities, and provides a means of summarizing current and emerging capabilities. (3) Defined key properties (e.g., elemental composition, average density, effective atomic weight) for basic commodity and explosive benchmarks, developed virtual models of the physical distributions (pallets) of three commodity types and three explosive benchmark combinations, and conducted modeling and simulation studies to begin populating the matrix of commodities, threats, and detection technologies. (4) Designed and fabricated basic (homogeneous) commodity test pallets and fabricated inert simulants to support experiments and to validate modeling/simulation results. (5) Developed and expanded the team's capabilities to conduct full-scale imaging (neutron and x-ray) experiments of air cargo commodities and explosive benchmarks. (6) Conducted experiments to improve the collection of trace particles of explosives from a variety of surfaces representative of air cargo materials by means of mechanical (air/vibration/pressure), thermal, and electrostatic methods. Air cargo screening is a difficult challenge that will require significant investment in both research and development to find a suitable solution to ensure the safety of passengers without significantly hindering the flow of commodities. The initiative funded by Battelle has positioned this group to make major contributions in meeting the air cargo challenge by developing collaborations, developing laboratory test systems, improving knowledge of the challenges (both technical and business) for air cargo screening, and increasing the understanding of the capabilities for current inspection methods (x-ray radiography, x-ray backscatter, etc.) and potential future inspection methods (neutron radiography, fusion of detector modalities, advanced trace detection, etc.).
Lastly, air cargo screening is still an issue that will benefit from collaboration between the Department of Energy Laboratories and Battelle. On January 7, 2010, DHS Secretary Napolitano joined White House Press Secretary Robert Gibbs and Assistant to the President for Counterterrorism and Homeland Security John Brennan to announce several recommendations DHS has made to the President for improving the technology and procedures used to protect air travel from acts of terrorism. (This announcement followed the 25 December 2009 Delta/Northwest Airlines Flight 253 terror attack.) Secretary Napolitano outlined five recommendations DHS will pursue to enhance the safety of the traveling public. One of the five recommendations read as follows: 'Establish a partnership on aviation security between DHS and the Department of Energy and its National Laboratories in order to develop new and more effective technologies to deter and disrupt known threats and proactively anticipate and protect against new ways by which terrorists could seek to board an aircraft.' In conclusion, it appears very timely that Battelle and its DOE lab partners initiated a serious collaboration on the air cargo topic, and we should continue to work toward future collaboration in response to the government's needs.
NASA Technical Reports Server (NTRS)
Bell, Michael A.
1999-01-01
Informal benchmarking using personal or professional networks has taken place for many years at the Kennedy Space Center (KSC). The National Aeronautics and Space Administration (NASA) recognized early on the need to formalize the benchmarking process for better utilization of resources and improved benchmarking performance. The need to compete in a faster, better, cheaper environment has been the catalyst for formalizing these efforts. A pioneering benchmarking consortium was chartered at KSC in January 1994. The consortium, known as the Kennedy Benchmarking Clearinghouse (KBC), is a collaborative effort of NASA and all major KSC contractors. The charter of this consortium is to facilitate effective benchmarking and leverage the resulting quality improvements across KSC. The KBC acts as a resource with experienced facilitators and a proven process. One of the initial actions of the KBC was to develop a holistic methodology for Center-wide benchmarking. This approach to benchmarking integrates the best features of proven benchmarking models (i.e., Camp, Spendolini, Watson, and Balm). This cost-effective alternative to conventional benchmarking approaches has provided a foundation for consistent benchmarking at KSC through the development of common terminology, tools, and techniques. Through these efforts a foundation and infrastructure have been built which allow short-duration benchmarking studies yielding results gleaned from world-class partners that can be readily implemented. The KBC has been recognized with the Silver Medal Award (in the applied research category) from the International Benchmarking Clearinghouse.
Kang, Guangliang; Du, Li; Zhang, Hong
2016-06-22
The growing complexity of biological experiment design based on high-throughput RNA sequencing (RNA-seq) is calling for more accommodative statistical tools. We focus on differential expression (DE) analysis using RNA-seq data in the presence of multiple treatment conditions. We propose a novel method, multiDE, for facilitating DE analysis using RNA-seq read count data with multiple treatment conditions. The read count is assumed to follow a log-linear model incorporating two factors (i.e., condition and gene), where an interaction term is used to quantify the association between gene and condition. The number of degrees of freedom is reduced to one through the first-order decomposition of the interaction, leading to a dramatic improvement in power for testing DE genes when the number of conditions is greater than two. In our simulation settings, multiDE outperformed the benchmark methods (i.e., edgeR and DESeq2) even when the underlying model was severely misspecified, and the power gain increased with the number of conditions. In the application to two real datasets, multiDE identified more biologically meaningful DE genes than the benchmark methods. An R package implementing multiDE is available publicly at http://homepage.fudan.edu.cn/zhangh/softwares/multiDE . When the number of conditions is two, multiDE performs comparably with the benchmark methods. When the number of conditions is greater than two, multiDE outperforms the benchmark methods.
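A sketch of the modeling idea in our own notation (the paper's exact parameterization may differ): read counts are modeled on the log scale with gene and condition main effects plus an interaction, and the interaction is constrained to a rank-one, first-order form so that differential expression reduces to a one-degree-of-freedom test per gene:

\[
\log \mu_{gc} = \mu + \alpha_g + \beta_c + \gamma_{gc},
\qquad
\gamma_{gc} = u_g\, v_c \;\; \text{(first-order decomposition)},
\qquad
H_0\colon u_g = 0,
\]

where \(\mu_{gc}\) is the expected read count for gene \(g\) under condition \(c\). Under \(H_0\), gene \(g\) shows no condition-dependent expression, and the test retains a single degree of freedom no matter how many conditions are compared, which is the source of the power gain over per-pair contrasts.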
Estimating the Value of Life, Injury, and Travel Time Saved Using a Stated Preference Framework.
Niroomand, Naghmeh; Jenkins, Glenn P
2016-06-01
The incidence of fatalities from automobile accidents in North Cyprus over the period 2010-2014 was 2.75 times greater than the EU average. With the prospect of North Cyprus entering the EU, many investments will need to be undertaken to improve road safety in order to reach EU benchmarks. The objective of this study is to provide local estimates of the value of a statistical life and injury along with the value of time savings. These are among the parameter values needed for the evaluation of the change in the expected incidence of automotive accidents and time savings brought about by such projects. In this study we conducted a stated-choice experiment to identify the preferences and tradeoffs of automobile drivers in North Cyprus for improved travel times, travel costs, and safety. The choice of route was examined using mixed logit models to obtain the marginal utilities associated with each attribute of the routes that consumers choose. These estimates were used to assess individuals' willingness to pay (WTP) to avoid fatalities and injuries and to save travel time. We then used the results to obtain community-wide estimates of the value of a statistical life (VSL) saved, the value of injury (VI) prevented, and the value per hour of travel time saved. The estimates for the VSL range from €315,293 to €1,117,856 and the estimates of VI from €5,603 to €28,186. These values are consistent, after adjusting for differences in incomes, with the median results of similar studies done for EU countries.
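For orientation (the notation below is ours, not the paper's), stated-choice WTP estimates of this kind come from ratios of marginal utilities in a route-choice utility function. With a utility specification that is linear in the route attributes,

\[
V_{ij} = \beta_c\,\mathrm{cost}_j + \beta_t\,\mathrm{time}_j + \beta_f\,\mathrm{fatality\ risk}_j + \beta_i\,\mathrm{injury\ risk}_j + \varepsilon_{ij},
\]

the value of travel time saved and the safety values are coefficient ratios:

\[
\mathrm{VTTS} = \frac{\beta_t}{\beta_c},
\qquad
\mathrm{VSL} = \frac{\beta_f}{\beta_c},
\qquad
\mathrm{VI} = \frac{\beta_i}{\beta_c}.
\]

In a mixed logit, the \(\beta\)'s are random across drivers, so the reported ranges for VSL and VI plausibly reflect the distribution of these ratios across the sampled population.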
DOE Office of Scientific and Technical Information (OSTI.GOV)
Suter, G.W. II; Tsao, C.L.
1996-06-01
This report presents potential screening benchmarks for the protection of aquatic life from contaminants in water. Because no guidance exists for selecting screening benchmarks, a set of alternative benchmarks is presented herein. The report presents the alternative benchmarks for chemicals that have been detected on the Oak Ridge Reservation, along with the data used to calculate the benchmarks and the sources of those data. It compares the benchmarks and discusses their relative conservatism and utility. Benchmark values have been updated where appropriate, new benchmark values added, secondary sources replaced by primary sources, and more complete documentation of the sources and derivation of all values provided.
Do European hospitals have quality and safety governance systems and structures in place?
Shaw, C; Kutryba, B; Crisp, H; Vallejo, P; Suñol, R
2009-02-01
Internal systems for quality and safety were assessed in 89 hospitals in six European states, by external teams using standardised criteria and procedures, as part of the Methods of Assessing Response to Quality Improvement Strategies (MARQuIS) project. The assessments were made primarily to identify the current use of quality management systems in the sample hospitals, and also to demonstrate a potential tool for comparable assessment of hospitals in general. The large majority of the hospitals had a formal, documented infrastructure to manage quality and safety, but a significant minority had no designated mission, programme or coordination. In two-thirds of hospitals, the governing body was active in defining policy and programmes for improvement, and received reports on quality, safety and patient satisfaction at least once a year. The brief on-site assessments identified systematic variations, within and between countries, in structures and processes of governance, and documented the uptake of best practice. Unacceptable variations in practice could be reduced, to the benefit of consumers and providers, by developing and publishing basic organisational standards relevant to all European states. The simple assessment criteria designed for this project could be developed into a practical tool for self-assessment, peer review or benchmarking of hospitals across national borders. This assessment, combined with explicit, relevant and achievable standards, could provide a vehicle to promote the voluntary uptake of best practice and consistency in quality and safety among hospitals in Europe.
ERIC Educational Resources Information Center
Wolf, Katharina
2015-01-01
Industry placements are popular means to provide students with an opportunity to apply their skills, knowledge and experience in a "real world" setting. Within this context, supervisor feedback allows educators to measure students' performance beyond academic objectives, by benchmarking it against industry expectations. However, industry…
Examination of the Nexus between Academic Libraries and Accreditation: Lessons from Nigeria
ERIC Educational Resources Information Center
Nkiko, Christopher; Ilo, Promise; Idiegbeyan-Ose, Jerome; Segun-Adeniran, Chidi
2015-01-01
The article investigated the nexus between academic libraries and accreditation in the higher institutions with special focus on the Nigerian experience. It showed that all accreditation agencies place a high premium on library provisions as a major component of requisite benchmarks in determining the status of the program or institutions being…
ERIC Educational Resources Information Center
Shriberg, Michael
2002-01-01
This paper analyzes recent efforts to measure sustainability in higher education across institutions. The benefits of cross-institutional assessments include: identifying and benchmarking leaders and best practices; communicating common goals, experiences, and methods; and providing a directional tool to measure progress toward the concept of a…
Building Pressure: Modeling the Fiscal Future of California K-12 School Facilities
ERIC Educational Resources Information Center
Jain, Liz S.; Vincent, Jeffrey M.
2016-01-01
Public school districts across California, particularly those in low-wealth areas, experience significant funding shortfalls for their facilities. Industry benchmarks suggest the state's K-12 school districts should spend nearly $18 billion a year to maintain their inventory, ensure buildings are up-to-date, and to build new spaces to handle…
Evaluating real-time Java for mission-critical large-scale embedded systems
NASA Technical Reports Server (NTRS)
Sharp, D. C.; Pla, E.; Luecke, K. R.; Hassan, R. J.
2003-01-01
This paper describes benchmarking results for a real-time Java virtual machine (RT JVM). It extends previously published results by including additional tests, by running on a recently available pre-release version of the first commercially supported RTSJ implementation, and by assessing results in light of our experience with avionics systems in other languages.
NASA low speed centrifugal compressor
NASA Technical Reports Server (NTRS)
Hathaway, Michael D.
1990-01-01
The flow characteristics of a low speed centrifugal compressor were examined at NASA Lewis Research Center to improve understanding of the flow in centrifugal compressors, to provide models of various flow phenomena, and to acquire benchmark data for three-dimensional viscous flow code validation. The paper describes the objectives, the test facility and instrumentation, and preliminary experimental comparisons.
ERIC Educational Resources Information Center
Woolf, Sara B.
2015-01-01
Teacher performance evaluation represents a high-stakes issue, as evidenced by its pivotal emphasis in national and local education reform initiatives and federal policy levers. National, state, and local education leaders continue to experience unprecedented pressure to adopt standardized benchmarks to reflect and link student achievement data to…
Benchmarking the Intended Technology Curricula of Botswana and South Africa: What Can We Learn?
ERIC Educational Resources Information Center
Du Toit, Adri; Gaotlhobogwe, Michael
2017-01-01
Following a transformation of experience-based handicraft education, Technology education was introduced in Botswana and South Africa in 1990 and 1998, respectively, with the intention of developing technologically literate societies, as well as to develop learners' skills for the world of work. Despite these optimistic intentions, limited…
NASA Astrophysics Data System (ADS)
Cowdery, E.; Dietze, M.
2016-12-01
As atmospheric carbon dioxide levels continue to increase, it is critical that terrestrial ecosystem models can accurately predict ecological responses to the changing environment. Current predictions of net primary productivity (NPP) in response to elevated atmospheric CO2 concentration are highly variable and contain a considerable amount of uncertainty. The Predictive Ecosystem Analyzer (PEcAn) is an informatics toolbox that wraps around an ecosystem model and can be used to help identify which factors drive uncertainty. We tested a suite of models (LPJ-GUESS, MAESPA, GDAY, CLM5, DALEC, ED2), which represent a range from low to high structural complexity, across a range of Free-Air CO2 Enrichment (FACE) experiments: the Kennedy Space Center Open Top Chamber Experiment, the Rhinelander FACE experiment, the Duke Forest FACE experiment, and the Oak Ridge Experiment on CO2 Enrichment. These tests were implemented in a novel benchmarking workflow that is automated, repeatable, and generalized to incorporate different sites and ecological models. Observational data from the FACE experiments represent a first test of this flexible, extensible approach aimed at providing repeatable tests of model process representation. To identify and evaluate the assumptions causing inter-model differences, we used PEcAn to perform model sensitivity and uncertainty analysis, not only to assess the components of NPP, but also to examine system processes such as nutrient uptake and water use. Combining the observed patterns of uncertainty between multiple models with results of the recent FACE model-data synthesis project (FACE-MDS) can help identify which processes need further study and additional data constraints. These findings can be used to inform future experimental design and in turn can provide an informative starting point for data assimilation.