Science.gov

Sample records for acceptable benchmark experiments

  1. Benchmarking Asteroid-Deflection Experiment

    NASA Astrophysics Data System (ADS)

    Remington, Tane; Bruck Syal, Megan; Owen, John Michael; Miller, Paul L.

    2016-10-01

    An asteroid impacting Earth could have devastating consequences. In preparation to deflect or disrupt one before it reaches Earth, it is imperative to have modeling capabilities that adequately simulate the deflection actions. Code validation is key to ensuring full confidence in simulation results used in an asteroid-mitigation plan. We are benchmarking well-known impact experiments using Spheral, an adaptive smoothed-particle hydrodynamics code, to validate our modeling of asteroid deflection. We describe our simulation results, compare them with experimental data, and discuss what we have learned from our work. This work was performed under the auspices of the U.S. Department of Energy by Lawrence Livermore National Laboratory under Contract DE-AC52-07NA27344. LLNL-ABS-695540

  2. Lawrence Livermore plutonium button critical experiment benchmark

    SciTech Connect

    Trumble, E.F.; Justice, J.B.; Frost, R.L.

    1994-12-31

    The end of the Cold War and the subsequent weapons reductions have led to an increased need for the safe storage of large amounts of highly enriched plutonium. In support of code validation required to address this need, a set of critical experiments involving arrays of weapons-grade plutonium metal that were performed at the Lawrence Livermore National Laboratory (LLNL) in the late 1960s has been revisited. Although these experiments are well documented, discrepancies and omissions have been found in the earlier reports. Many of these have been resolved in the current work, and these data have been compiled into benchmark descriptions. In addition, a computational verification has been performed on the benchmarks using multiple computer codes. These benchmark descriptions are also being made available to the US Department of Energy (DOE)-sponsored Nuclear Criticality Safety Benchmark Evaluation Working Group for dissemination in the DOE Handbook on Evaluated Criticality Safety Benchmark Experiments.

  3. Companies' opinions and acceptance of global food safety initiative benchmarks after implementation.

    PubMed

    Crandall, Phil; Van Loo, Ellen J; O'Bryan, Corliss A; Mauromoustakos, Andy; Yiannas, Frank; Dyenson, Natalie; Berdnik, Irina

    2012-09-01

    International attention has been focused on minimizing costs that may unnecessarily raise food prices. One important aspect to consider is the redundant and overlapping costs of food safety audits. The Global Food Safety Initiative (GFSI) has devised benchmarked schemes based on existing international food safety standards for use as a unifying standard accepted by many retailers. The present study was conducted to evaluate the impact of the decision made by Walmart Stores (Bentonville, AR) to require their suppliers to become GFSI compliant. An online survey of 174 retail suppliers was conducted to assess food suppliers' opinions of this requirement and the benefits suppliers realized when they transitioned from their previous food safety systems. The most common reason for becoming GFSI compliant was to meet customers' requirements; thus, supplier implementation of the GFSI standards was not entirely voluntary. Other reasons given for compliance were enhancing food safety and remaining competitive. About 54% of food processing plants using GFSI benchmarked schemes followed the guidelines of Safe Quality Food 2000 and 37% followed those of the British Retail Consortium. At the supplier level, 58% followed Safe Quality Food 2000 and 31% followed the British Retail Consortium. Respondents reported that the certification process took about 10 months. The most common reason for selecting a certain GFSI benchmarked scheme was because it was widely accepted by customers (retailers). Four other common reasons were (i) the standard has a good reputation in the industry, (ii) the standard was recommended by others, (iii) the standard is most often used in the industry, and (iv) the standard was required by one of their customers. Most suppliers agreed that increased safety of their products was required to comply with GFSI benchmarked schemes. They also agreed that the GFSI required a more carefully documented food safety management system, which often required…

  4. Benchmarking NMR experiments: A relational database of protein pulse sequences

    NASA Astrophysics Data System (ADS)

    Senthamarai, Russell R. P.; Kuprov, Ilya; Pervushin, Konstantin

    2010-03-01

    Systematic benchmarking of multi-dimensional protein NMR experiments is a critical prerequisite for optimal allocation of NMR resources for structural analysis of challenging proteins, e.g. large proteins with limited solubility or proteins prone to aggregation. We propose a set of benchmarking parameters for essential protein NMR experiments organized into a lightweight (single XML file) relational database (RDB), which includes all the necessary auxiliaries (waveforms, decoupling sequences, calibration tables, setup algorithms and an RDB management system). The database is interfaced to the Spinach library (http://spindynamics.org), which enables accurate simulation and benchmarking of NMR experiments on large spin systems. A key feature is the ability to use a single user-specified spin system to simulate the majority of deposited solution-state NMR experiments, thus providing the (hitherto unavailable) unified framework for pulse sequence evaluation. This development enables predicting the relative sensitivity of deposited implementations of NMR experiments, thus providing a basis for comparison, optimization and, eventually, automation of NMR analysis. The benchmarking is demonstrated with two proteins: the 170-amino-acid I domain of αXβ2 integrin and the 440-amino-acid NS3 helicase.
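
    The record structure described above suggests a simple illustration. The sketch below parses a hypothetical single-XML-file database of pulse-sequence benchmark records; the element and attribute names are invented for illustration and are not the actual Spinach/RDB schema.

        # Hypothetical sketch of querying a single-XML-file benchmark database.
        import xml.etree.ElementTree as ET

        XML = """<benchmarks>
          <experiment name="HNCO" dimensions="3">
            <parameter key="relative_sensitivity" value="1.00"/>
          </experiment>
          <experiment name="TROSY-HSQC" dimensions="2">
            <parameter key="relative_sensitivity" value="3.85"/>
          </experiment>
        </benchmarks>"""

        root = ET.fromstring(XML)
        for exp in root.iter("experiment"):
            params = {p.get("key"): float(p.get("value")) for p in exp.iter("parameter")}
            print(exp.get("name"), exp.get("dimensions"), params)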

  5. Providing Nuclear Criticality Safety Analysis Education through Benchmark Experiment Evaluation

    SciTech Connect

    John D. Bess; J. Blair Briggs; David W. Nigg

    2009-11-01

    One of the challenges that today's new workforce of nuclear criticality safety engineers faces is having to assess nuclear systems and establish safety guidelines without significant experience or hands-on training prior to graduation. Participation in the International Criticality Safety Benchmark Evaluation Project (ICSBEP) and/or the International Reactor Physics Experiment Evaluation Project (IRPhEP) provides students and young professionals the opportunity to gain experience and enhance critical engineering skills.

  6. Benchmark Evaluation of the Medium-Power Reactor Experiment Program Critical Configurations

    SciTech Connect

    Margaret A. Marshall; John D. Bess

    2013-02-01

    A series of small, compact critical assembly (SCCA) experiments were performed in 1962-1965 at the Oak Ridge National Laboratory Critical Experiments Facility (ORCEF) for the Medium-Power Reactor Experiment (MPRE) program. The MPRE was a stainless-steel-clad, highly enriched uranium (HEU)-O2-fuelled, BeO-reflected reactor designed to provide electrical power to space vehicles. Cooling and heat transfer were to be achieved by boiling potassium in the reactor core and passing the vapor directly through a turbine. Graphite- and beryllium-reflected assemblies were constructed at ORCEF to verify the critical mass, power distribution, and other reactor physics measurements needed to validate reactor calculations and reactor physics methods. The experimental series was broken into three parts, with the third portion of the experiments representing the beryllium-reflected measurements. The latter experiments are of interest for validating current reactor design efforts for a fission surface power reactor. The entire series has been evaluated as acceptable benchmark experiments and submitted for publication in the International Handbook of Evaluated Criticality Safety Benchmark Experiments and in the International Handbook of Evaluated Reactor Physics Benchmark Experiments.

  7. TRIGA Mark II Criticality Benchmark Experiment with Burned Fuel

    SciTech Connect

    Persic, Andreja; Ravnik, Matjaz; Zagar, Tomaz

    2000-12-15

    The experimental results of criticality benchmark experiments performed at the Jozef Stefan Institute TRIGA Mark II reactor are presented. The experiments were performed with partly burned fuel in two compact and uniform core configurations in the same arrangements as were used in the fresh-fuel criticality benchmark experiment performed in 1991. In the experiments, both core configurations contained only 12 wt% U-ZrH fuel with 20% enriched uranium. The first experimental core contained 43 fuel elements with an average burnup of 1.22 MWd or 2.8% {sup 235}U burned. The last experimental core configuration was composed of 48 fuel elements with an average burnup of 1.15 MWd or 2.6% {sup 235}U burned. The experimental determination of k{sub eff} for both core configurations, one subcritical and one critical, is presented. Burnup for all fuel elements was calculated in a two-dimensional four-group diffusion approximation using the TRIGLAV code. The burnup of several fuel elements was also measured by the reactivity method.
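
    The quoted burnup pairs (MWd versus % {sup 235}U burned) can be sanity-checked with the common rule of thumb that roughly 1.24 g of 235U is destroyed (fissioned plus captured) per MWd; the initial loading below is an assumption chosen for illustration, not a value from the paper.

        # Back-of-envelope check of the MWd <-> % 235U pairs quoted above.
        G_PER_MWD = 1.24        # g of 235U destroyed per MWd (rule of thumb)
        initial_u235_g = 55.0   # assumed 235U per fresh element, g (illustrative)

        for burnup_mwd in (1.22, 1.15):
            burned_pct = 100.0 * G_PER_MWD * burnup_mwd / initial_u235_g
            print(f"{burnup_mwd} MWd -> about {burned_pct:.1f}% of the 235U burned")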

  8. Benchmark enclosure fire suppression experiments - phase 1 test report.

    SciTech Connect

    Figueroa, Victor G.; Nichols, Robert Thomas; Blanchat, Thomas K.

    2007-06-01

    A series of benchmark water-suppression fire tests were performed that may provide guidance for dispersal systems for the protection of high-value assets. The test results provide boundary and temporal data necessary for water spray suppression model development and validation. A review of fire suppression is presented for both gaseous suppression and water mist fire suppression. The experimental setup and procedure for gathering water suppression performance data are shown. Characteristics of the nozzles used in the testing are presented. Results of the experiments are discussed.

  9. Apollo experience report: Environmental acceptance testing

    NASA Technical Reports Server (NTRS)

    Laubach, C. H. M.

    1976-01-01

    Environmental acceptance testing was used extensively to screen selected spacecraft hardware for workmanship defects and manufacturing flaws. The minimum acceptance levels and durations and methods for their establishment are described. Component selection and test monitoring, as well as test implementation requirements, are included. Apollo spacecraft environmental acceptance test results are summarized, and recommendations for future programs are presented.

  10. SILENE Benchmark Critical Experiments for Criticality Accident Alarm Systems

    SciTech Connect

    Miller, Thomas Martin; Reynolds, Kevin H.

    2011-01-01

    In October 2010 a series of benchmark experiments was conducted at the Commissariat à l'Énergie Atomique et aux Énergies Alternatives (CEA) Valduc SILENE [1] facility. These experiments were a joint effort between the US Department of Energy (DOE) and the French CEA. The purpose of these experiments was to create three benchmarks for the verification and validation of radiation transport codes and evaluated nuclear data used in the analysis of criticality accident alarm systems (CAASs). This presentation will discuss the geometric configuration of these experiments and the quantities that were measured and will present some preliminary comparisons between the measured data and calculations. This series consisted of three single-pulsed experiments with the SILENE reactor. During the first experiment the reactor was bare (unshielded), but during the second and third experiments it was shielded by lead and polyethylene, respectively. During each experiment several neutron activation foils and thermoluminescent dosimeters (TLDs) were placed around the reactor, and some of these detectors were themselves shielded from the reactor by high-density magnetite and barite concrete, standard concrete, and/or BoroBond. All the concrete was provided by CEA Saclay, and the BoroBond was provided by the Y-12 National Security Complex. Figure 1 is a picture of the SILENE reactor cell configured for pulse 1. Also included in these experiments were measurements of the neutron and photon spectra with two BICRON BC-501A liquid scintillators. These two detectors were provided and operated by CEA Valduc. They were set up just outside the SILENE reactor cell with additional lead shielding to prevent the detectors from being saturated. The final detectors involved in the experiments were two different types of CAAS detectors. The Babcock International Group provided three CIDAS CAAS detectors, which measured photon dose and dose rate with a Geiger-Mueller tube. CIDAS detectors are currently in…

  11. MELCOR Verification, Benchmarking, and Applications experience at BNL

    SciTech Connect

    Madni, I.K.

    1992-12-31

    This paper presents a summary of MELCOR verification, benchmarking, and applications experience at Brookhaven National Laboratory (BNL), sponsored by the US Nuclear Regulatory Commission (NRC). Under MELCOR verification over the past several years, all released versions of the code were installed on BNL's computer system, verification exercises were performed, and defect investigation reports were sent to SNL. Benchmarking calculations of integral severe fuel damage tests performed at BNL have helped to identify areas of modeling strengths and weaknesses in MELCOR; the most appropriate choices for input parameters; selection of axial nodalization for core cells and heat structures; and workarounds that extend the capabilities of MELCOR. These insights are explored in greater detail in the paper, with the help of selected results and comparisons. Full plant applications calculations at BNL have helped to evaluate the ability of MELCOR to successfully simulate various accident sequences and calculate source terms to the environment for both BWRs and PWRs. A summary of results, including timing of key events, thermal-hydraulic response, and environmental releases of fission products, is presented for selected calculations, along with comparisons with Source Term Code Package (STCP) calculations of the same sequences. Differences in results are explained on the basis of modeling differences between the two codes. The results of a sensitivity calculation are also shown. The paper concludes by highlighting some insights on bottom-line issues and the contribution of the BNL program to MELCOR development, assessment, and the identification of user needs for optimum use of the code.

  12. Benchmark Data Through the International Reactor Physics Experiment Evaluation Project (IRPhEP)

    SciTech Connect

    J. Blair Briggs; Dr. Enrico Sartori

    2005-09-01

    The International Reactor Physics Experiment Evaluation Project (IRPhEP) was initiated by the Organization for Economic Cooperation and Development (OECD) Nuclear Energy Agency's (NEA) Nuclear Science Committee (NSC) in June of 2002. The IRPhEP focus is on the derivation of internationally peer-reviewed benchmark models for several types of integral measurements, in addition to the critical configuration. While the benchmarks produced by the IRPhEP are of primary interest to the reactor physics community, many of the benchmarks can be of significant value to the criticality safety and nuclear data communities. Benchmarks that support the Next Generation Nuclear Plant (NGNP), for example, also support fuel manufacture, handling, transportation, and storage activities and could challenge current analytical methods. The IRPhEP is patterned after the International Criticality Safety Benchmark Evaluation Project (ICSBEP) and is closely coordinated with the ICSBEP. This paper highlights the benchmarks that are currently being prepared by the IRPhEP that are also of interest to the criticality safety community. The different types of measurements and associated benchmarks that can be expected in the first publication and beyond are described. The protocol for inclusion of IRPhEP benchmarks as ICSBEP benchmarks and for inclusion of ICSBEP benchmarks as IRPhEP benchmarks is detailed. The format for IRPhEP benchmark evaluations is described as an extension of the ICSBEP format. Benchmarks produced by the IRPhEP add a new dimension to criticality safety benchmarking efforts and expand the collection of available integral benchmarks for nuclear data testing. The first publication of the "International Handbook of Evaluated Reactor Physics Benchmark Experiments" is scheduled for January of 2006.

  13. Analogue experiments as benchmarks for models of lava flow emplacement

    NASA Astrophysics Data System (ADS)

    Garel, F.; Kaminski, E. C.; Tait, S.; Limare, A.

    2013-12-01

    During an effusive volcanic eruption, crisis management is mainly based on the prediction of lava flow advance and its velocity. The spreading of a lava flow, seen as a gravity current, depends on its "effective rheology" and on the effusion rate. Fast-computing models have arisen in the past decade in order to predict in near real time the lava flow path and rate of advance. This type of model, crucial to mitigate volcanic hazards and organize potential evacuations, has been compared mainly a posteriori to real cases of emplaced lava flows. The input parameters of such simulations applied to natural eruptions, especially effusion rate and topography, are often not known precisely, and are difficult to evaluate after the eruption. It is therefore not straightforward to identify the causes of discrepancies between model outputs and observed lava emplacement, whereas the comparison of models with controlled laboratory experiments appears easier. The challenge for numerical simulations of lava flow emplacement is to model the simultaneous advance and thermal structure of viscous lava flows. To provide original constraints later to be used in benchmark numerical simulations, we have performed lab-scale experiments investigating the cooling of isoviscous gravity currents. The simplest experimental set-up is as follows: silicone oil, whose viscosity, around 5 Pa.s, varies by less than a factor of 2 in the temperature range studied, is injected from a point source onto a horizontal plate and spreads axisymmetrically. The oil is injected hot, and progressively cools down to ambient temperature away from the source. Once the flow is developed, it presents a stationary radial thermal structure whose characteristics depend on the input flow rate. In addition to the experimental observations, we have developed in Garel et al., JGR, 2012 a theoretical model confirming the relationship between supply rate, flow advance and stationary surface thermal structure. We also provide…
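
    For an isothermal (isoviscous) constant-flux current of this kind, the flow-front radius follows the classical similarity solution of Huppert (J. Fluid Mech., 1982), R(t) ≈ 0.715 (g Q³ / 3ν)^(1/8) t^(1/2). A minimal sketch, with an assumed injection rate and oil density (neither is given in the abstract):

        # Similarity scaling for an axisymmetric, constant-flux viscous gravity current.
        g = 9.81          # m/s^2
        rho = 970.0       # silicone oil density, kg/m^3 (assumed)
        mu = 5.0          # dynamic viscosity, Pa.s (order quoted in the abstract)
        nu = mu / rho     # kinematic viscosity, m^2/s
        Q = 1e-6          # injection rate, m^3/s (assumed)

        def radius(t):
            """Flow-front radius (m) at time t (s), Huppert (1982) constant-flux law."""
            return 0.715 * (g * Q**3 / (3.0 * nu)) ** 0.125 * t**0.5

        for t in (60.0, 600.0, 3600.0):
            print(f"t = {t:6.0f} s : R ~ {radius(t):.3f} m")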

  14. Integral Benchmarks Available Through the International Reactor Physics Experiment Evaluation Project and the International Criticality Safety Benchmark Evaluation Project

    SciTech Connect

    J. Blair Briggs; Lori Scott; Enrico Sartori; Yolanda Rugama

    2008-09-01

    Interest in high-quality integral benchmark data is increasing as efforts to quantify and reduce calculational uncertainties accelerate to meet the demands of next-generation reactor and advanced fuel cycle concepts. The International Reactor Physics Experiment Evaluation Project (IRPhEP) and the International Criticality Safety Benchmark Evaluation Project (ICSBEP) continue to expand their efforts and broaden their scope to identify, evaluate, and provide integral benchmark data for method and data validation. Benchmark model specifications provided by these two projects are used heavily by the international reactor physics, nuclear data, and criticality safety communities. Thus far, 14 countries have contributed to the IRPhEP, and 20 have contributed to the ICSBEP. The status of the IRPhEP and ICSBEP is discussed in this paper, and the future of the two projects is outlined and discussed. Selected benchmarks that have been added to the IRPhEP and ICSBEP handbooks since PHYSOR'06 are highlighted.

  15. Community-based benchmarking of the CMIP DECK experiments

    NASA Astrophysics Data System (ADS)

    Gleckler, P. J.

    2015-12-01

    A diversity of community-based efforts are independently developing "diagnostic packages" with little or no coordination between them. A short list of examples includes NCAR's Climate Variability Diagnostics Package (CVDP), ORNL's International Land Model Benchmarking (ILAMB), LBNL's Toolkit for Extreme Climate Analysis (TECA), PCMDI's Metrics Package (PMP), the EU EMBRACE ESMValTool, the WGNE MJO diagnostics package, and CFMIP diagnostics. The full value of these efforts cannot be realized without some coordination. As a first step, a WCRP effort has initiated a catalog to document candidate packages that could potentially be applied in a "repeat-use" fashion to all simulations contributed to the CMIP DECK (Diagnostic, Evaluation and Characterization of Klima) experiments. Some coordination of community-based diagnostics has the additional potential to improve how CMIP modeling groups analyze their simulations during model development. The fact that most modeling groups now maintain a "CMIP compliant" data stream means that in principle, without much effort, they could readily adopt a set of well-organized diagnostic capabilities specifically designed to operate on CMIP DECK experiments. Ultimately, a detailed listing of and access to analysis codes that are demonstrated to work "out of the box" with CMIP data could enable model developers (and others) to select the codes they wish to implement in-house, potentially enabling more systematic evaluation during the model development process.
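
    As a toy example of such a "repeat-use" diagnostic, the sketch below computes an area-weighted global mean of near-surface air temperature with xarray; a real package would open CMIP output with xarray.open_dataset, and the synthetic field here merely stands in for it.

        # Minimal repeat-use diagnostic: area-weighted global-mean temperature.
        import numpy as np
        import xarray as xr

        lat = np.arange(-88.75, 90, 2.5)
        lon = np.arange(0, 360, 2.5)
        tas = xr.DataArray(                      # synthetic stand-in for CMIP "tas"
            288.0 - 30.0 * np.sin(np.deg2rad(lat))[:, None] ** 2 * np.ones(lon.size),
            coords={"lat": lat, "lon": lon}, dims=("lat", "lon"), name="tas",
        )

        weights = np.cos(np.deg2rad(tas.lat))    # area weights on a regular grid
        global_mean = tas.weighted(weights).mean(("lat", "lon"))
        print(f"global-mean tas: {float(global_mean):.2f} K")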

  16. Prior Computer Experience and Technology Acceptance

    ERIC Educational Resources Information Center

    Varma, Sonali

    2010-01-01

    Prior computer experience with information technology has been identified as a key variable (Lee, Kozar, & Larsen, 2003) that can influence an individual's future use of newer computer technology. The lack of a theory driven approach to measuring prior experience has however led to conceptually different factors being used interchangeably in…

  17. Development of an ICSBEP Benchmark Evaluation, Nearly 20 Years of Experience

    SciTech Connect

    J. Blair Briggs; John D. Bess

    2011-06-01

    The basic structure of all ICSBEP benchmark evaluations is essentially the same and includes (1) a detailed description of the experiment; (2) an evaluation of the experiment, including an exhaustive effort to quantify the effects of uncertainties on measured quantities; (3) a concise presentation of benchmark-model specifications; (4) sample calculation results; and (5) a summary of experimental references. Computer code input listings and other relevant information are generally preserved in appendixes. Details of an ICSBEP evaluation are presented.
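
    That five-part structure maps naturally onto a record type. A minimal sketch in Python (field names are paraphrases of the handbook sections, not an official schema):

        # Illustrative data structure mirroring the five-part ICSBEP evaluation.
        from dataclasses import dataclass, field

        @dataclass
        class IcsbepEvaluation:
            experiment_description: str                                  # (1)
            uncertainty_evaluation: dict = field(default_factory=dict)   # (2)
            benchmark_model_specs: str = ""                              # (3)
            sample_results: dict = field(default_factory=dict)           # (4)
            references: list = field(default_factory=list)               # (5)
            appendixes: list = field(default_factory=list)  # code inputs, etc.

        ev = IcsbepEvaluation(
            experiment_description="Plutonium button array, LLNL, late 1960s",
            uncertainty_evaluation={"fuel mass": 0.0012, "reflector gap": 0.0005},
            sample_results={"MCNP keff": 0.9987},
        )
        print(ev.experiment_description, ev.sample_results)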

  18. Experiment vs simulation RT WFNDEC 2014 benchmark: CIVA results

    SciTech Connect

    Tisseur, D.; Costin, M.; Rattoni, B.; Vienne, C.; Vabre, A.; Cattiaux, G.; Sollier, T.

    2015-03-31

    The French Alternative Energies and Atomic Energy Commission (CEA) has for many years developed the CIVA software dedicated to the simulation of NDE techniques such as Radiographic Testing (RT). RT modelling is achieved in CIVA using a combination of a deterministic approach based on ray tracing for transmission beam simulation and a Monte Carlo model for the scattered beam computation. Furthermore, CIVA includes various detector models, in particular common X-ray films and photostimulable phosphor plates. This communication presents the results obtained with the configurations proposed in the World Federation of NDE Centers (WFNDEC) 2014 RT modelling benchmark with the RT models implemented in the CIVA software.

  1. Accuracy requirements and benchmark experiments for CFD validation

    NASA Technical Reports Server (NTRS)

    Marvin, Joseph G.

    1988-01-01

    The role of experiment in the development of Computational Fluid Dynamics (CFD) for aerodynamic flow prediction is discussed. CFD verification is a concept that depends on closely coordinated planning between the computational and experimental disciplines. Because code applications are becoming more complex and their potential for design more feasible, it no longer suffices to use experimental data from surface or integral measurements alone to provide the required verification. Flow physics and modeling, flow field, and boundary condition measurements are emerging as critical data. Four types of experiments are introduced and examples are given that meet the challenge of validation: flow physics experiments; flow modeling experiments; calibration experiments; and verification experiments. Measurement and accuracy requirements for each of these differ and are discussed. A comprehensive program of validation is described, some examples are given, and it is concluded that the future prospects are encouraging.

  2. Automatically generated acceptance test: A software reliability experiment

    NASA Technical Reports Server (NTRS)

    Protzel, Peter W.

    1988-01-01

    This study presents results of a software reliability experiment investigating the feasibility of a new error detection method. The method can be used as an acceptance test and is solely based on empirical data about the behavior of internal states of a program. The experimental design uses the existing environment of a multi-version experiment previously conducted at the NASA Langley Research Center, in which the launch interceptor problem is used as a model. This allows the controlled experimental investigation of versions with well-known single and multiple faults, and the availability of an oracle permits the determination of the error detection performance of the test. Fault interaction phenomena are observed that have an amplifying effect on the number of error occurrences. Preliminary results indicate that all faults examined so far are detected by the acceptance test. This shows promise for further investigations, and for the employment of this test method on other applications.
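
    A minimal sketch of the idea as described — learn empirical bounds on a program's internal state from reference executions, then flag excursions in later runs; the program and state variables are invented for illustration:

        # Phase 1: collect empirical min/max bounds on internal state variables.
        def program(x):
            state = {"acc": 0.0, "steps": 0}
            for v in x:
                state["acc"] += v * v
                state["steps"] += 1
            return state

        reference_inputs = [[1, 2, 3], [0, 0], [5, 5, 5, 5]]
        bounds = {}
        for inp in reference_inputs:
            for key, val in program(inp).items():
                lo, hi = bounds.get(key, (val, val))
                bounds[key] = (min(lo, val), max(hi, val))

        # Phase 2: the acceptance test flags states outside the empirical envelope.
        def acceptance_test(inp):
            return all(bounds[k][0] <= v <= bounds[k][1]
                       for k, v in program(inp).items())

        print(acceptance_test([2, 2]))        # True: within observed envelope
        print(acceptance_test([100] * 50))    # False: flagged as anomalous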

  3. Developing Evidence for Action on the Postgraduate Experience: An Effective Local Instrument to Move beyond Benchmarking

    ERIC Educational Resources Information Center

    Sampson, K. A.; Johnston, L.; Comer, K.; Brogt, E.

    2016-01-01

    Summative and benchmarking surveys to measure the postgraduate student research experience are well reported in the literature. While useful, we argue that local instruments that provide formative resources with an academic development focus are also required. If higher education institutions are to move beyond the identification of issues and…

  4. RANS Modeling of Benchmark Shockwave / Boundary Layer Interaction Experiments

    NASA Technical Reports Server (NTRS)

    Georgiadis, Nick; Vyas, Manan; Yoder, Dennis

    2010-01-01

    This presentation summarizes the computations of a set of shock wave / turbulent boundary layer interaction (SWTBLI) test cases using the Wind-US code, as part of the 2010 American Institute of Aeronautics and Astronautics (AIAA) shock / boundary layer interaction workshop. The experiments involve supersonic flows in wind tunnels with a shock generator that directs an oblique shock wave toward the boundary layer along one of the walls of the wind tunnel. The Wind-US calculations utilized structured-grid computations performed in Reynolds-averaged Navier-Stokes mode. Three turbulence models were investigated: the Spalart-Allmaras one-equation model, the Menter Shear Stress Transport (SST) k-ω two-equation model, and an explicit algebraic stress k-ω formulation. Effects of grid resolution and upwinding scheme were also considered. The results from the CFD calculations are compared to particle image velocimetry (PIV) data from the experiments. As expected, turbulence model effects dominated the accuracy of the solutions, with upwinding scheme selection indicating minimal effects.

  5. Benchmark experiments on neutron streaming through JET Torus Hall penetrations

    NASA Astrophysics Data System (ADS)

    Batistoni, P.; Conroy, S.; Lilley, S.; Naish, J.; Obryk, B.; Popovichev, S.; Stamatelatos, I.; Syme, B.; Vasilopoulou, T.; JET contributors

    2015-05-01

    Neutronics experiments are performed at JET for validating in a real fusion environment the neutronics codes and nuclear data applied in ITER nuclear analyses. In particular, the neutron fluence through the penetrations of the JET torus hall is measured and compared with calculations to assess the capability of state-of-the-art numerical tools to correctly predict the radiation streaming in the ITER biological shield penetrations up to large distances from the neutron source, in large and complex geometries. Neutron streaming experiments started in 2012, when several hundred very sensitive thermoluminescence detectors (TLDs), enriched to different levels in 6LiF/7LiF, were used to measure the neutron and gamma dose separately. Lessons learnt from this first experiment led to significant improvements in the experimental arrangements to reduce the effects due to the directional neutron source and self-shielding of the TLDs. Here we report the results of measurements performed during the 2013-2014 JET campaign. Data from new positions, at further locations in the South West labyrinth and down to the Torus Hall basement through the air duct chimney, were obtained up to about a 40 m distance from the plasma neutron source. In order to avoid interference between TLDs due to self-shielding effects, only TLDs containing natural lithium and 99.97% 7Li were used. All TLDs were located in the centre of large polyethylene (PE) moderators, with natLi and 7Li crystals evenly arranged within two PE containers, one in horizontal and the other in vertical orientation, to investigate the shadowing effect in the directional neutron field. All TLDs were calibrated in the quantities of air kerma and neutron fluence. This improved experimental arrangement led to reduced statistical spread in the experimental data. The Monte Carlo N-Particle (MCNP) code was used to calculate the air kerma due to neutrons and the neutron fluence at the detector positions, using a JET model validated up to the…

  6. Linac code benchmarking of HALODYN and PARMILA based on beam experiments

    NASA Astrophysics Data System (ADS)

    Yin, X.; Bayer, W.; Hofmann, I.

    2016-01-01

    As part of the 'High Intensity Pulsed Proton Injector' (HIPPI) project in the European Framework Programme, a program for the comparison and benchmarking of 3D Particle-In-Cell (PIC) linac codes against experiment has been implemented. HALODYN and PARMILA are two of the codes involved in this program. In this study, the initial Twiss parameters were obtained from the results of beam experiments conducted at low beam current using the GSI UNILAC. Furthermore, beam dynamics simulations of the Alvarez Drift Tube Linac (DTL) section were performed with the HALODYN and PARMILA codes and benchmarked against the same beam experiments. The simulation results exhibit some agreement with the experimental results for the low-beam-current case. The similarities and differences between the experimental and simulated results were analyzed quantitatively. In addition, various physical aspects of the simulation codes and the linac design strategy are also discussed.
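
    The initial Twiss parameters mentioned above are conventionally obtained from second moments of the measured beam distribution: ε_rms = sqrt(<x²><x'²> − <xx'>²), β = <x²>/ε, α = −<xx'>/ε. A short sketch with a randomly generated beam in place of UNILAC data:

        # rms emittance and Twiss parameters from particle coordinates.
        import numpy as np

        rng = np.random.default_rng(0)
        x = rng.normal(0.0, 1.0e-3, 10000)               # transverse position, m
        xp = rng.normal(0.0, 0.5e-3, 10000) - 0.2 * x    # divergence, rad (correlated)

        sig_xx, sig_xpxp = np.mean(x * x), np.mean(xp * xp)
        sig_xxp = np.mean(x * xp)

        emit = np.sqrt(sig_xx * sig_xpxp - sig_xxp**2)   # rms emittance, m.rad
        beta = sig_xx / emit                             # Twiss beta, m
        alpha = -sig_xxp / emit                          # Twiss alpha

        print(f"emittance = {emit:.3e} m.rad, beta = {beta:.2f} m, alpha = {alpha:.2f}")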

  7. Evaluation of HEU-Beryllium Benchmark Experiments to Improve Computational Analysis of Space Reactors

    SciTech Connect

    Bess, John; Bledsoe, Keith C; Rearden, Bradley T

    2011-01-01

    An assessment was previously performed to evaluate modeling capabilities and quantify preliminary biases and uncertainties associated with the modeling methods and data utilized in designing a nuclear reactor such as a beryllium-reflected, highly-enriched-uranium (HEU)-O2 fission surface power (FSP) system for space nuclear power. The conclusion of the previous study was that current capabilities could preclude the necessity of a cold critical test of the FSP; however, additional testing would reduce uncertainties in the beryllium and uranium cross-section data and the overall uncertainty in the computational models. A series of critical experiments using HEU metal were performed in the 1960s and 1970s in support of criticality safety operations at the Y-12 Plant. Of the hundreds of experiments, three were identified as fast-fission configurations reflected by beryllium metal. These experiments have been evaluated as benchmarks for inclusion in the International Handbook of Evaluated Criticality Safety Benchmark Experiments (IHECSBE). Further evaluation of the benchmark experiments was performed using the sensitivity and uncertainty analysis capabilities of SCALE 6. The data adjustment methods of SCALE 6 have been employed in the validation of an example FSP design model to reduce the uncertainty due to the beryllium cross section data.

  8. Creation of a simplified benchmark model for the neptunium sphere experiment

    SciTech Connect

    Mosteller, R. D.; Loaiza, D. J.; Sanchez, R. G.

    2004-01-01

    Although neptunium is produced in significant amounts by nuclear power reactors, its critical mass is not well known. In addition, sizeable uncertainties exist for its cross sections. As an important step toward resolution of these issues, a critical experiment was conducted in 2002 at the Los Alamos Critical Experiments Facility. In the experiment, a 6-kg sphere of {sup 237}Np was surrounded by nested hemispherical shells of highly enriched uranium. The shells were required in order to reach a critical condition. Subsequently, a detailed model of the experiment was developed. This model faithfully reproduces the components of the experiment, but it is geometrically complex. Furthermore, the isotopics analysis upon which that model is based omits nearly 1% of the mass of the sphere. A simplified benchmark model has been constructed that retains all of the neutronically important aspects of the detailed model and substantially reduces the computer resources required for the calculation. The reactivity impact of each of the simplifications is quantified, including the effect of the missing mass. A complete set of specifications for the benchmark is included in the full paper. Both the detailed and simplified benchmark models underpredict k{sub eff} by more than 1% {Delta}k. This discrepancy supports the suspicion that better cross sections are needed for {sup 237}Np.
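
    The "reactivity impact of each simplification" is conventionally expressed as the change in calculated k{sub eff}, in pcm (1 pcm = 10^-5 in k). A sketch of that arithmetic with made-up k{sub eff} values:

        # Tallying reactivity impacts of model simplifications in pcm (values invented).
        PCM = 1.0e5

        k_detailed = 0.98550
        simplifications = {
            "homogenized shells": 0.98575,
            "missing ~1% sphere mass restored": 0.98610,
            "simplified support structure": 0.98560,
        }

        for name, k in simplifications.items():
            print(f"{name:36s} dk = {(k - k_detailed) * PCM:+7.0f} pcm")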

  9. Material Activation Benchmark Experiments at the NuMI Hadron Absorber Hall in Fermilab

    SciTech Connect

    Matsumura, H.; Matsuda, N.; Kasugai, Y.; Toyoda, A.; Yashima, H.; Sekimoto, S.; Iwase, H.; Oishi, K.; Sakamoto, Y.; Nakashima, H.; Leveling, A.; Boehnlein, D.; Lauten, G.; Mokhov, N.; Vaziri, K.

    2014-06-15

    In our previous study, double and mirror-symmetric activation peaks found for Al and Au arranged spatially on the back of the hadron absorber of the NuMI beamline in Fermilab were considerably higher than those expected purely from muon-induced reactions. From material activation benchmark experiments, we conclude that this activation is due to hadrons with energy greater than 3 GeV that had passed downstream through small gaps in the hadron absorber.

  10. Concrete benchmark experiment as support to ex-vessel LWR surveillance dosimetry

    SciTech Connect

    Abderrahim, H.A.; D'hondt, P.J.; Oeyen, J.B.

    1994-12-31

    The analysis of DOEL-1 in-vessel and ex-vessel neutron dosimetry, using the DOT 3.5 Sn code coupled with the VITAMIN-C cross-section library, showed the same C/E values for different detectors at the surveillance capsule and the ex-vessel cavity positions. These results seem to be in contradiction with those obtained in several benchmark experiments (PCA, PSF, VENUS, ...) that used the same computational tools. Indeed, a strong decreasing radial trend of the C/E was observed, partly explained by the overestimation of the iron inelastic scattering. The flat trend seen in DOEL-1 could be explained by compensating errors in the calculation, such as the backscattering due to the concrete walls outside the cavity. The Concrete Benchmark experiment has been designed to judge the ability of this calculational method to treat the backscattering. This paper describes the Concrete Benchmark experiment, the measured and computed neutron dosimetry results, and their comparison. This preliminary analysis seems to indicate an overestimation of the backscattering effect in the calculations.

  11. Graphite and Beryllium Reflector Critical Assemblies of UO2 (Benchmark Experiments 2 and 3)

    SciTech Connect

    Margaret A. Marshall; John D. Bess

    2012-11-01

    A series of experiments was carried out in 1962-65 at the Oak Ridge National Laboratory Critical Experiments Facility (ORCEF) for use in space reactor research programs. A core containing 93.2 wt% enriched UO2 fuel rods was used in these experiments. The first part of the experimental series consisted of 252 tightly packed fuel rods (1.27-cm triangular pitch) with graphite reflectors [1], the second part used 252 graphite-reflected fuel rods organized in a 1.506-cm triangular-pitch array [2], and the final part of the experimental series consisted of 253 beryllium-reflected fuel rods in a 1.506-cm-triangular-pitch configuration and in a 7-tube-cluster configuration [3]. Fission rate distribution and cadmium ratio measurements were taken for all three parts of the experimental series. Reactivity coefficient measurements were taken for various materials placed in the beryllium-reflected core. All three experiments in the series have been evaluated for inclusion in the International Reactor Physics Experiment Evaluation Project (IRPhEP) [4] and the International Criticality Safety Benchmark Evaluation Project (ICSBEP) handbooks [5]. The evaluation of the first experiment in the series was discussed at the 2011 ANS Winter Meeting [6]. The evaluations of the second and third experiments are discussed below. These experiments are of interest as benchmarks because they support the validation of compact reactor designs with characteristics similar to the design parameters for space nuclear fission surface power systems [7].

  12. Benchmark experiments for validation of reaction rates determination in reactor dosimetry

    NASA Astrophysics Data System (ADS)

    Rataj, J.; Huml, O.; Heraltova, L.; Bily, T.

    2014-11-01

    The precision of Monte Carlo calculations of neutron dosimetry quantities depends strongly on the precision of the reaction rate prediction. A research reactor represents a very useful tool for validating the ability of a code to calculate such quantities, as it can provide environments with various types of neutron energy spectra. In particular, a zero-power research reactor with well-defined core geometry and neutronic properties enables precise comparison between experimental and calculated data. Thus, at the VR-1 zero-power research reactor, a set of benchmark experiments was proposed and carried out to verify the ability of the MCNP Monte Carlo code to predict reaction rates correctly. For that purpose, two frequently used reactions were chosen: He-3(n,p)H-3 and Au-197(n,γ)Au-198. The benchmark consists of response measurements of a small He-3 gas-filled detector in various positions of the reactor core and of activated gold wires placed inside the core or in its vicinity. The reaction rates were calculated with the MCNP5 code utilizing a detailed model of the VR-1 reactor, which has been validated for neutronic calculations at the reactor. The paper describes in detail the experimental set-up of the benchmark and the MCNP model of the VR-1 reactor, and provides a comparison between experimental and calculated data.
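
    The quantity being benchmarked is the reaction rate per atom, R = ∫ σ(E) φ(E) dE. The sketch below evaluates this integral numerically for a 1/v-shaped He-3(n,p) cross section folded with a thermal-Maxwellian-like flux; both shapes are stand-ins, not evaluated nuclear data:

        # Numerical reaction-rate integral R = ∫ sigma(E) phi(E) dE.
        import numpy as np

        def trapz(y, x):
            """Plain trapezoidal rule (avoids version-specific numpy aliases)."""
            return float(np.sum(0.5 * (y[1:] + y[:-1]) * np.diff(x)))

        E = np.logspace(-9, 1, 4000)           # energy grid, MeV
        E0 = 2.53e-8                           # thermal point, MeV (0.0253 eV)
        sigma = 5333e-24 * np.sqrt(E0 / E)     # 1/v cross section, cm^2 (5333 b thermal)
        phi = E * np.exp(-E / E0)              # Maxwellian-like flux shape (arbitrary)
        phi /= trapz(phi, E)                   # normalize to unit fluence

        rate = trapz(sigma * phi, E)           # reactions per atom per unit fluence
        print(f"spectrum-averaged cross section ~ {rate / 1e-24:.0f} b")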

  13. Quality in E-Learning--A Conceptual Framework Based on Experiences from Three International Benchmarking Projects

    ERIC Educational Resources Information Center

    Ossiannilsson, E.; Landgren, L.

    2012-01-01

    Between 2008 and 2010, Lund University took part in three international benchmarking projects, "E-xcellence+," the "eLearning Benchmarking Exercise 2009," and the "First Dual-Mode Distance Learning Benchmarking Club." A comparison of these models revealed a rather high level of correspondence. From this finding and from desktop studies of the…

  14. A parallel high-order accurate finite element nonlinear Stokes ice sheet model and benchmark experiments

    SciTech Connect

    Leng, Wei; Ju, Lili; Gunzburger, Max; Price, Stephen; Ringler, Todd

    2012-01-01

    The numerical modeling of glacier and ice sheet evolution is a subject of growing interest, in part because of the potential for models to inform estimates of global sea level change. This paper focuses on the development of a numerical model that determines the velocity and pressure fields within an ice sheet. Our numerical model features a high-fidelity mathematical model involving the nonlinear Stokes system and combinations of no-sliding and sliding basal boundary conditions, high-order accurate finite element discretizations based on variable resolution grids, and highly scalable parallel solution strategies, all of which contribute to a numerical model that can achieve accurate velocity and pressure approximations in a highly efficient manner. We demonstrate the accuracy and efficiency of our model by analytical solution tests, established ice sheet benchmark experiments, and comparisons with other well-established ice sheet models.
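
    At the core of such a model is the shear-thinning Glen flow law viscosity, η_eff = ½ A^(-1/n) ε̇_e^((1-n)/n) with n = 3, where ε̇_e is the effective strain rate. A small sketch (the rate factor A is a typical textbook value for temperate ice, used here only for illustration):

        # Glen flow law effective viscosity for ice (n = 3).
        n = 3.0
        A = 2.4e-24            # rate factor, Pa^-3 s^-1 (temperate ice, approx.)

        def eta_eff(eps_e):
            """Effective viscosity (Pa s) at effective strain rate eps_e (1/s)."""
            return 0.5 * A ** (-1.0 / n) * eps_e ** ((1.0 - n) / n)

        for eps in (1e-10, 1e-8, 1e-6):     # slow interior flow to fast ice streams
            print(f"eps_e = {eps:.0e} 1/s -> eta ~ {eta_eff(eps):.2e} Pa s")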

  15. Benchmark experiment on a copper slab assembly bombarded by D-T neutrons

    NASA Astrophysics Data System (ADS)

    Maekawa, Fujio; Oyama, Yukio; Konno, Chikara; Ikeda, Yujiro; Maekawa, Hiroshi; Kosako, Kazuaki

    1994-03-01

    Copper is a very important material for fusion reactors because it is used in superconducting magnets, first walls, and so on. To verify nuclear data of copper, a benchmark experiment was performed using the D-T neutron source of the FNS facility at the Japan Atomic Energy Research Institute. A cylindrical experimental assembly, 629 mm in diameter and 608 mm in thickness and made of pure copper, was located 200 mm from the D-T neutron source. In the assembly, the following quantities were measured: (1) neutron spectra in the MeV and keV energy regions, (2) neutron reaction rates, (3) prompt and decay gamma-ray spectra, and (4) gamma-ray heating rates. The experimental data obtained were compiled in this report.

  16. Maternal immunization. Clinical experiences, challenges, and opportunities in vaccine acceptance.

    PubMed

    Moniz, Michelle H; Beigi, Richard H

    2014-01-01

    Maternal immunization holds tremendous promise to improve maternal and neonatal health for a number of infectious conditions. The unique susceptibilities of pregnant women to infectious conditions, as well as the ability of maternally derived antibody to offer vital neonatal protection (via placental transfer), together have produced the recent increased attention on maternal immunization. The Advisory Committee on Immunization Practices (ACIP) currently recommends two immunizations for all pregnant women lacking contraindications: inactivated influenza vaccine and the tetanus toxoid, reduced diphtheria toxoid, and acellular pertussis (Tdap) vaccine. Given ongoing research, the number of vaccines recommended during pregnancy is likely to increase. Thus, achieving high vaccination coverage of pregnant women for all recommended immunizations is a key public health enterprise. This review will focus on the present state of vaccine acceptance in pregnancy, with attention to currently identified barriers and determinants of vaccine acceptance. Additionally, opportunities for improvement will be considered. PMID:25483490

  17. Apollo experience report: Acceptance checkout equipment for the Apollo spacecraft

    NASA Technical Reports Server (NTRS)

    Burtzlaff, I. J.

    1972-01-01

    The acceptance checkout equipment for the Apollo spacecraft is described, and the history of the major equipment modifications that were required to meet the Apollo Program checkout requirements is traced. Some major problem areas are outlined, and a discussion of future checkout methods is included. The concept of the future checkout methods presented provides for an increase in test equipment standardization among NASA programs and among all testing phases within a program. The capability for increased automation and reduction in the test equipment inventory is provided in the proposed concept.

  1. Use of Student Ratings to Benchmark Universities: Multilevel Modeling of Responses to the Australian Course Experience Questionnaire (CEQ)

    ERIC Educational Resources Information Center

    Marsh, Herbert W.; Ginns, Paul; Morin, Alexandre J. S.; Nagengast, Benjamin; Martin, Andrew J.

    2011-01-01

    Recently graduated university students from all Australian Universities rate their overall departmental and university experiences (DUEs), and their responses (N = 44,932, 41 institutions) are used by the government to benchmark departments and universities. We evaluate this DUE strategy of rating overall departments and universities rather than…
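
    A minimal sketch of the multilevel idea — ratings nested within institutions, with a random intercept per institution — using statsmodels on synthetic data (all column names and numbers are invented, not CEQ data):

        # Random-intercept multilevel model of ratings nested in universities.
        import numpy as np
        import pandas as pd
        import statsmodels.formula.api as smf

        rng = np.random.default_rng(1)
        n_uni, n_per = 41, 200
        uni = np.repeat(np.arange(n_uni), n_per)
        uni_effect = rng.normal(0, 0.3, n_uni)[uni]   # between-university variance
        rating = 3.8 + uni_effect + rng.normal(0, 0.8, n_uni * n_per)

        df = pd.DataFrame({"rating": rating, "university": uni})
        model = smf.mixedlm("rating ~ 1", df, groups=df["university"]).fit()
        print(model.summary())
        # The "university" variance component is what benchmarking exploits; if it
        # is small relative to residual variance, league tables are unreliable.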

  2. MELCOR-H2 Benchmarking of the SNL Transient Sulfuric Acid Decomposition Experiments

    SciTech Connect

    Rodriguez, Sal B.; Gauntt, Randall O.; Gelbard, Fred; Pickard, Paul; Cole, Randy; McFadden, Katherine; Drennen, Tom; Martin, Billy; Louie, David; Archuleta, Louis; Revankar, Shripad T.; Vierow, Karen; El-Genk, Mohamed; Tournier, Jean Michel

    2007-07-01

    MELCOR is a world-renowned nuclear reactor safety analysis code that is used to simulate both light water and gas-cooled reactors. MELCOR-H2 is an extension of MELCOR that can model detailed nuclear reactors that are fully coupled with modular secondary-system components and the sulfur-iodine (SI) thermochemical cycle for the generation of hydrogen and electricity. The models are applicable to both steady-state and transient calculations. Previous work has shown that the hydrogen generation rate calculated by MELCOR-H2 for the SI cycle was within the expected theoretical yield, thus providing a macroscopic confirmation that MELCOR-H2's computational approach is reasonable. However, in order to better quantify its adequacy, benchmarking of the code with experimental data is required. Sulfuric acid decomposition experiments were conducted during late 2006 at Sandia National Laboratories, and MELCOR-H2 was used to simulate them. We developed an input deck based on the experiment's geometry, as well as the initial and boundary conditions, and then proceeded to compare the experimental acid conversion efficiency and SO{sub 2} production data with the code output. The comparison showed that the simulation output was typically within less than 10% of the experimental data, and that key experimental data trends such as acid conversion efficiency, molar acid flow rate, and solution mole % were computed adequately by MELCOR-H2.
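
    The headline comparison reduces to simple arithmetic: a conversion efficiency inferred from molar flows, and the relative deviation of simulation from experiment. A sketch with placeholder numbers (not the Sandia data):

        # Conversion efficiency and simulation-vs-experiment deviation (values invented).
        def conversion_efficiency(so2_out_mol_s, h2so4_in_mol_s):
            """Fraction of H2SO4 decomposed, inferred from SO2 production."""
            return so2_out_mol_s / h2so4_in_mol_s

        exp_eff = conversion_efficiency(0.042, 0.065)
        sim_eff = conversion_efficiency(0.045, 0.065)

        rel_dev = abs(sim_eff - exp_eff) / exp_eff
        print(f"experiment {exp_eff:.1%}, simulation {sim_eff:.1%}, deviation {rel_dev:.1%}")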

  3. Criticality experiments and benchmarks for cross section evaluation: the neptunium case

    NASA Astrophysics Data System (ADS)

    Leong, L. S.; Tassan-Got, L.; Audouin, L.; Paradela, C.; Wilson, J. N.; Tarrio, D.; Berthier, B.; Duran, I.; Le Naour, C.; Stéphan, C.

    2013-03-01

    The 237Np neutron-induced fission cross section has recently been measured over a large energy range (from eV to GeV) at the n_TOF facility at CERN. Compared to previous measurements, the n_TOF fission cross section appears to be higher by 5-7% beyond the fission threshold. To check the relevance of the n_TOF data, we use a criticality experiment performed at Los Alamos with a 6-kg sphere of 237Np, surrounded by enriched uranium (235U) so as to approach criticality with fast neutrons. The multiplication factor keff of the calculation is in better agreement with the experiment (the deviation of 750 pcm is reduced to 250 pcm) when we replace the ENDF/B-VII.0 evaluation of the 237Np fission cross section with the n_TOF data. We also explore the hypothesis of deficiencies in the inelastic cross section of 235U, which has been invoked by some authors to explain the deviation of 750 pcm; a calculation with a strongly distorted inelastic cross section turns out to be incompatible with existing measurements. We also show that the average neutron multiplicity ν̄ of 237Np can hardly be incriminated, because of the high accuracy of the existing data. Fission rate ratios or averaged fission cross sections measured in several fast neutron fields seem to give contradictory results on the validation of the 237Np cross section, but at least one of the benchmark experiments, where the active deposits have been well calibrated for the number of atoms, favors the n_TOF data set. These outcomes support the hypothesis of a higher fission cross section of 237Np.
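
    The pcm bookkeeping quoted above is simply the relative deviation of the calculated keff from the measured critical state, in units of 10^-5. A sketch with illustrative keff values chosen to reproduce the 750 → 250 pcm improvement:

        # Deviation of calculated keff from the critical state, in pcm (values assumed).
        def deviation_pcm(k_calc, k_exp=1.0):
            return (k_calc - k_exp) / k_exp * 1.0e5

        k_endf = 0.99250    # with ENDF/B-VII.0 237Np fission cross section (assumed)
        k_ntof = 0.99750    # with the higher n_TOF cross section (assumed)

        print(f"ENDF/B-VII.0: {deviation_pcm(k_endf):+.0f} pcm")
        print(f"n_TOF:        {deviation_pcm(k_ntof):+.0f} pcm")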

  4. 2-D Circulation Control Airfoil Benchmark Experiments Intended for CFD Code Validation

    NASA Technical Reports Server (NTRS)

    Englar, Robert J.; Jones, Gregory S.; Allan, Brian G.; Lin, John C.

    2009-01-01

    A current NASA Research Announcement (NRA) project being conducted by Georgia Tech Research Institute (GTRI) personnel and NASA collaborators includes the development of Circulation Control (CC) blown airfoils to improve subsonic aircraft high-lift and cruise performance. The emphasis of this program is the development of CC active flow control concepts for high-lift augmentation, drag control, and cruise efficiency. The project includes work by NASA research engineers; the CFD validation and flow-physics experimental research are part of NASA's systematic approach to developing design and optimization tools for CC applications to fixed-wing aircraft. The design space for CESTOL-type aircraft is focusing on geometries that depend on advanced flow control technologies, including Circulation Control aerodynamics. The ability to consistently predict advanced aircraft performance requires improvements in design tools to include these advanced concepts. Validation of these tools will be based on experimental methods applied to complex flows that go beyond conventional aircraft modeling techniques. This paper focuses on recent and ongoing benchmark high-lift experiments and CFD efforts intended to provide 2-D CFD validation data sets related to NASA's Cruise Efficient Short Take Off and Landing (CESTOL) study. Both the experimental data and related CFD predictions are discussed.

  5. Acceptance of cravings: how smoking cessation experiences affect craving beliefs.

    PubMed

    Nosen, Elizabeth; Woody, Sheila R

    2014-08-01

    Metacognitive models theorize that more negative appraisals of craving-related thoughts and feelings, and greater efforts to avoid or control these experiences, exacerbate suffering and increase chances the person will use substances to obtain relief. Thus far, little research has examined how attempts to quit smoking influence the way people perceive and respond to cravings. As part of a larger study, 176 adult smokers interested in quitting participated in two lab sessions, four days apart. Half the sample began a quit attempt the day after the first session; craving-related beliefs, metacognitive strategies, and negative affect were assessed at the second session. Participants who failed to abstain from smoking more strongly endorsed appraisals of craving-related thoughts as negative and personally relevant. Negative appraisals correlated strongly with distress and withdrawal symptoms. Attempting to quit smoking increased use of distraction, thought suppression and re-appraisal techniques, with no difference between successful and unsuccessful quitters. Negative beliefs about cravings and rumination predicted less change in smoking one month later. Results suggest that smoking cessation outcomes and metacognitive beliefs likely have a bidirectional relationship that is strongly related to negative affect. Greater consideration of the impact of cessation experiences on mood and craving beliefs is warranted. PMID:25014920

  6. Identification of Integral Benchmarks for Nuclear Data Testing Using DICE (Database for the International Handbook of Evaluated Criticality Safety Benchmark Experiments)

    SciTech Connect

    J. Blair Briggs; A. Nichole Ellis; Yolanda Rugama; Nicolas Soppera; Manuel Bossant

    2011-08-01

    Typical users of the International Criticality Safety Benchmark Evaluation Project (ICSBEP) Handbook have specific criteria against which they desire to find matching experiments. Depending on the application, those criteria may consist of any combination of physical or chemical characteristics and/or various neutronic parameters. The ICSBEP Handbook is organized in a structured format that helps the user narrow the search for experiments of interest. However, with nearly 4300 different experimental configurations and the ever-increasing addition of experimental data, the necessity of performing multiple-criteria searches has rendered these features insufficient. As a result, a relational database was created with information extracted from the ICSBEP Handbook. A user interface was designed by the OECD and DOE to allow interrogation of this database. The database and the corresponding user interface are referred to as DICE. DICE currently offers the capability to perform multiple-criteria searches that go beyond simple fuel, physical form, and spectra, and include expanded general information, fuel form, moderator/coolant, neutron-absorbing material, cladding, reflector, separator, geometry, benchmark results, spectra, and neutron balance parameters. DICE also includes the capability to display graphical representations of neutron spectra, detailed neutron balance, sensitivity coefficients for capture, fission, elastic scattering, inelastic scattering, nu-bar and mu-bar, as well as several other features.
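
    The multiple-criteria searches described here map naturally onto relational queries. The following Python/sqlite3 sketch is purely illustrative: the schema, column names, and sample row are invented and do not reflect the actual DICE database layout.

      import sqlite3

      con = sqlite3.connect(":memory:")
      con.execute("""CREATE TABLE experiments (
          case_id TEXT, fuel_form TEXT, moderator TEXT,
          reflector TEXT, spectrum TEXT, keff REAL)""")
      con.execute("INSERT INTO experiments VALUES"
                  " ('HEU-SOL-THERM-001', 'solution', 'water', 'none', 'thermal', 1.0004)")

      # Combine physical characteristics and neutronic parameters in one query
      rows = con.execute(
          "SELECT case_id, keff FROM experiments"
          " WHERE fuel_form = ? AND moderator = ? AND spectrum = ?",
          ("solution", "water", "thermal")).fetchall()
      print(rows)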

  7. Let it be: Accepting negative emotional experiences predicts decreased negative affect and depressive symptoms

    PubMed Central

    Shallcross, Amanda J.; Troy, Allison S.; Boland, Matthew; Mauss, Iris B.

    2010-01-01

    The present studies examined whether a tendency to accept negative emotional experiences buffers individuals from experiencing elevated negative affect during negative emotional situations (Study 1) and from developing depressive symptoms in the face of life stress (Study 2). Both studies examined female samples. This research expands on existing acceptance research in four ways. First, it examined whether acceptance has beneficial correlates when it matters most: in emotionally taxing (versus more neutral) contexts. Second, in Study 2 a prospective design was used in which acceptance was measured before stress was encountered and before outcomes were measured. Third, depressive symptoms (rather than general functioning or trauma symptoms) were examined as a particularly relevant outcome in the context of stress. Fourth, to enhance generalizability, a community sample (versus undergraduates or a purely clinical sample) was recruited. Results indicated that acceptance was correlated with decreased negative affect during a negative emotion induction but not an affectively neutral condition (Study 1). In Study 2, acceptance interacted with life stress such that acceptance predicted lower levels of depressive symptoms after higher, but not lower, life stress. These results suggest that accepting negative experiences may protect individuals from experiencing negative affect and from developing depressive symptoms. PMID:20566191

  8. Labeling Sexual Victimization Experiences: The Role of Sexism, Rape Myth Acceptance, and Tolerance for Sexual Harassment.

    PubMed

    LeMaire, Kelly L; Oswald, Debra L; Russell, Brenda L

    2016-01-01

    This study investigated whether attitudinal variables, such as benevolent and hostile sexism toward men and women, female rape myth acceptance, and tolerance of sexual harassment are related to women labeling their sexual assault experiences as rape. In a sample of 276 female college students, 71 (25.7%) reported at least one experience that met the operational definition of rape, although only 46.5% of those women labeled the experience "rape." Benevolent sexism, tolerance of sexual harassment, and rape myth acceptance, but not hostile sexism, significantly predicted labeling of previous sexual assault experiences by the victims. Specifically, those with more benevolent sexist attitudes toward both men and women, greater rape myth acceptance, and more tolerant attitudes of sexual harassment were less likely to label their past sexual assault experience as rape. The results are discussed for their clinical and theoretical implications.

  9. Benchmark characterization

    NASA Technical Reports Server (NTRS)

    Conte, Thomas M.; Hwu, Wen-Mei W.

    1991-01-01

    An abstract system of benchmark characteristics that makes it possible, in the beginning of the design stage, to design with benchmark performance in mind is presented. The benchmark characteristics for a set of commonly used benchmarks are then shown. The benchmark set used includes some benchmarks from the Systems Performance Evaluation Cooperative (SPEC). The SPEC programs are industry-standard applications that use specific inputs. Processor, memory-system, and operating-system characteristics are addressed.

  10. Overview of the 2014 Edition of the International Handbook of Evaluated Reactor Physics Benchmark Experiments (IRPhEP Handbook)

    SciTech Connect

    John D. Bess; J. Blair Briggs; Jim Gulliford; Ian Hill

    2014-10-01

    The International Reactor Physics Experiment Evaluation Project (IRPhEP) is a widely recognized, world-class program. The work of the IRPhEP is documented in the International Handbook of Evaluated Reactor Physics Benchmark Experiments (IRPhEP Handbook). Integral data from the IRPhEP Handbook are used by reactor safety and design, nuclear data, criticality safety, and analytical methods development specialists worldwide to perform necessary validations of their calculational techniques. The IRPhEP Handbook is among the most frequently cited references in the nuclear industry and is expected to be a valuable resource for future decades.

  11. The Influence of Provider Communication Behaviors on Parental Vaccine Acceptance and Visit Experience

    PubMed Central

    Mangione-Smith, Rita; Robinson, Jeffrey D.; Heritage, John; DeVere, Victoria; Salas, Halle S.; Zhou, Chuan; Taylor, James A.

    2015-01-01

    Objectives. We investigated how provider vaccine communication behaviors influence parental vaccination acceptance and visit experience. Methods. In a cross-sectional observational study, we videotaped provider–parent vaccine discussions (n = 111). We coded visits for the format providers used for initiating the vaccine discussion (participatory vs presumptive), parental verbal resistance to vaccines after provider initiation (yes vs no), and provider pursuit of recommendations in the face of parental resistance (pursuit vs mitigated or no pursuit). Main outcomes were parental verbal acceptance of recommended vaccines at visit’s end (all vs ≥ 1 refusal) and parental visit experience (highly vs lower rated). Results. In multivariable models, participatory (vs presumptive) initiation formats were associated with decreased odds of accepting all vaccines at visit’s end (adjusted odds ratio [AOR] = 0.04; 95% confidence interval [CI] = 0.01, 0.15) and increased odds of a highly rated visit experience (AOR = 17.3; 95% CI = 1.5, 200.3). Conclusions. In the context of 2 general communication formats used by providers to initiate vaccine discussions, there appears to be an inverse relationship between parental acceptance of vaccines and visit experience. Further exploration of this inverse relationship in longitudinal studies is needed. PMID:25790386
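
    The adjusted odds ratios above come from multivariable logistic regression; an AOR is the exponential of a fitted coefficient. A sketch with synthetic data follows, using statsmodels; only the mechanics mirror the study's analysis, none of the numbers do.

      import numpy as np
      import statsmodels.api as sm

      rng = np.random.default_rng(0)
      n = 111                                 # matches the study's visit count
      participatory = rng.integers(0, 2, n)   # initiation format (1 = participatory)
      resistance = rng.integers(0, 2, n)      # parental verbal resistance
      logit = -0.5 - 1.2 * participatory + 0.8 * resistance
      accepted = (rng.random(n) < 1 / (1 + np.exp(-logit))).astype(float)

      X = sm.add_constant(np.column_stack([participatory, resistance]))
      fit = sm.Logit(accepted, X).fit(disp=0)
      aor = np.exp(fit.params[1])             # AOR for participatory format
      lo, hi = np.exp(fit.conf_int()[1])      # 95% confidence interval
      print(f"AOR = {aor:.2f}, 95% CI = ({lo:.2f}, {hi:.2f})")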

  12. Benchmark Evaluation of HTR-PROTEUS Pebble Bed Experimental Program

    DOE PAGES

    Bess, John D.; Montierth, Leland; Köberl, Oliver; Snoj, Luka

    2014-10-09

    Benchmark models were developed to evaluate 11 critical core configurations of the HTR-PROTEUS pebble bed experimental program. Various additional reactor physics measurements were performed as part of this program; currently only a total of 37 absorber rod worth measurements have been evaluated as acceptable benchmark experiments for Cores 4, 9, and 10. Dominant uncertainties in the experimental keff for all core configurations come from uncertainties in the ²³⁵U enrichment of the fuel, impurities in the moderator pebbles, and the density and impurity content of the radial reflector. Calculations of keff with MCNP5 and ENDF/B-VII.0 neutron nuclear data are greater than the benchmark values but within 1% and also within the 3σ uncertainty, except for Core 4, which is the only randomly packed pebble configuration. Repeated calculations of keff with MCNP6.1 and ENDF/B-VII.1 are lower than the benchmark values and within 1% (~3σ) except for Cores 5 and 9, which calculate lower than the benchmark eigenvalues within 4σ. The primary difference between the two nuclear data libraries is the adjustment of the absorption cross section of graphite. Simulations of the absorber rod worth measurements are within 3σ of the benchmark experiment values. The complete benchmark evaluation details are available in the 2014 edition of the International Handbook of Evaluated Reactor Physics Benchmark Experiments.

  13. Benchmark Evaluation of HTR-PROTEUS Pebble Bed Experimental Program

    SciTech Connect

    Bess, John D.; Montierth, Leland; Köberl, Oliver; Snoj, Luka

    2014-10-09

    Benchmark models were developed to evaluate 11 critical core configurations of the HTR-PROTEUS pebble bed experimental program. Various additional reactor physics measurements were performed as part of this program; currently only a total of 37 absorber rod worth measurements have been evaluated as acceptable benchmark experiments for Cores 4, 9, and 10. Dominant uncertainties in the experimental keff for all core configurations come from uncertainties in the ²³⁵U enrichment of the fuel, impurities in the moderator pebbles, and the density and impurity content of the radial reflector. Calculations of keff with MCNP5 and ENDF/B-VII.0 neutron nuclear data are greater than the benchmark values but within 1% and also within the 3σ uncertainty, except for Core 4, which is the only randomly packed pebble configuration. Repeated calculations of keff with MCNP6.1 and ENDF/B-VII.1 are lower than the benchmark values and within 1% (~3σ) except for Cores 5 and 9, which calculate lower than the benchmark eigenvalues within 4σ. The primary difference between the two nuclear data libraries is the adjustment of the absorption cross section of graphite. Simulations of the absorber rod worth measurements are within 3σ of the benchmark experiment values. The complete benchmark evaluation details are available in the 2014 edition of the International Handbook of Evaluated Reactor Physics Benchmark Experiments.

  14. Integrating CFD, CAA, and Experiments Towards Benchmark Datasets for Airframe Noise Problems

    NASA Technical Reports Server (NTRS)

    Choudhari, Meelan M.; Yamamoto, Kazuomi

    2012-01-01

    Airframe noise corresponds to the acoustic radiation due to turbulent flow in the vicinity of airframe components such as high-lift devices and landing gears. The combination of geometric complexity, high-Reynolds-number turbulence, multiple regions of separation, and a strong coupling with adjacent physical components makes the problem of airframe noise highly challenging. Since 2010, the American Institute of Aeronautics and Astronautics has organized an ongoing series of workshops devoted to Benchmark Problems for Airframe Noise Computations (BANC). The BANC workshops are aimed at enabling systematic progress in the understanding and high-fidelity prediction of airframe noise via collaborative investigations that integrate state-of-the-art computational fluid dynamics, computational aeroacoustics, and in-depth, holistic, multi-facility measurements targeting a selected set of canonical yet realistic configurations. This paper provides a brief summary of the BANC effort, including its technical objectives, strategy, and selected outcomes thus far.

  15. Benchmarking initiatives in the water industry.

    PubMed

    Parena, R; Smeets, E

    2001-01-01

    Customer satisfaction and service care are pushing professionals in the water industry every day to improve their performance, lowering costs and increasing the level of service provided. Process Benchmarking is generally recognised as a systematic mechanism of comparing one's own utility with other utilities or businesses with the intent of self-improvement by adopting structures or methods used elsewhere. The IWA Task Force on Benchmarking, operating inside the Statistics and Economics Committee, has been committed to developing a generally accepted concept of Process Benchmarking to support water decision-makers in addressing issues of efficiency. As a first step, the Task Force disseminated among the Committee members a questionnaire soliciting suggestions about the kind, the degree of evolution, and the main concepts of Benchmarking adopted in the countries represented. A comparison of the guidelines adopted in The Netherlands and Scandinavia has recently challenged the Task Force to draft a methodology for worldwide process benchmarking in the water industry. The paper provides a framework of the most interesting benchmarking experiences in the water sector and describes in detail both the final results of the survey and the methodology, which focuses on the identification of possible improvement areas. PMID:11547972

  16. Self-Compassion Promotes Personal Improvement From Regret Experiences via Acceptance.

    PubMed

    Zhang, Jia Wei; Chen, Serena

    2016-02-01

    Why do some people report more personal improvement from their regret experiences than others? Three studies examined whether self-compassion promotes personal improvement derived from recalled regret experiences. In Study 1, we coded anonymous regret descriptions posted on a blog website. People who spontaneously described their regret with greater self-compassion were also judged as having expressed more personal improvement. In Study 2, higher trait self-compassion predicted greater self-reported and observer-rated personal improvement derived from recalled regret experiences. In Study 3, people induced to take a self-compassionate perspective toward a recalled regret experience reported greater acceptance, forgiveness, and personal improvement. A multiple mediation analysis comparing acceptance and forgiveness showed self-compassion led to greater personal improvement, in part, through heightened acceptance. Furthermore, self-compassion's effects on personal improvement were distinct from self-esteem and were not explained by adaptive emotional responses. Overall, the results suggest that self-compassion spurs positive adjustment in the face of regrets. PMID:26791595

  17. Self-Compassion Promotes Personal Improvement From Regret Experiences via Acceptance.

    PubMed

    Zhang, Jia Wei; Chen, Serena

    2016-02-01

    Why do some people report more personal improvement from their regret experiences than others? Three studies examined whether self-compassion promotes personal improvement derived from recalled regret experiences. In Study 1, we coded anonymous regret descriptions posted on a blog website. People who spontaneously described their regret with greater self-compassion were also judged as having expressed more personal improvement. In Study 2, higher trait self-compassion predicted greater self-reported and observer-rated personal improvement derived from recalled regret experiences. In Study 3, people induced to take a self-compassionate perspective toward a recalled regret experience reported greater acceptance, forgiveness, and personal improvement. A multiple mediation analysis comparing acceptance and forgiveness showed self-compassion led to greater personal improvement, in part, through heightened acceptance. Furthermore, self-compassion's effects on personal improvement were distinct from self-esteem and were not explained by adaptive emotional responses. Overall, the results suggest that self-compassion spurs positive adjustment in the face of regrets.

  18. Benchmark experiments and numerical modelling of the columnar-equiaxed dendritic growth in the transparent alloy Neopentylglycol-(d)Camphor

    NASA Astrophysics Data System (ADS)

    Sturz, L.; Wu, M.; Zimmermann, G.; Ludwig, A.; Ahmadein, M.

    2015-06-01

    Solidification benchmark experiments on columnar and equiaxed dendritic growth, as well as on the columnar-to-equiaxed transition (CET), have been carried out under diffusion-dominated conditions for heat and mass transfer in a low-gravity environment. The system under investigation is the transparent organic alloy Neopentylglycol-37.5wt.-%(d)Camphor, processed aboard a TEXUS sounding rocket flight. Solidification was observed by standard optical methods, in addition to measurements of the thermal fields within the sheet-like experimental cells of 1 mm thickness. The dendrite tip kinetics, primary dendrite arm spacing, temporal and spatial temperature evolution, columnar tip velocity, and the critical parameters at the CET have been analysed. Here we focus on a detailed comparison of the experiment “TRACE” with a 5-phase volume-averaging model, to validate the numerical model and to give insight into the physical mechanisms and parameters leading to the CET. The results are discussed in terms of sensitivity to numerical parameters.

  19. ENDF/B-VII.1 Neutron Cross Section Data Testing with Critical Assembly Benchmarks and Reactor Experiments

    SciTech Connect

    Kahler, A.; Macfarlane, R E; Mosteller, R D; Kiedrowski, B C; Frankle, S C; Chadwick, M. B.; Mcknight, R D; Lell, R M; Palmiotti, G; Hiruta, h; Herman, Micheal W; Arcilla, r; Mughabghab, S F; Sublet, J C; Trkov, A.; Trumbull, T H; Dunn, Michael E

    2011-01-01

    The ENDF/B-VII.1 library is the latest revision to the United States' Evaluated Nuclear Data File (ENDF). The ENDF library is currently in its seventh generation, with ENDF/B-VII.0 being released in 2006. This revision expands upon that library, including the addition of new evaluated files (was 393 neutron files previously, now 423 including replacement of elemental vanadium and zinc evaluations with isotopic evaluations) and extension or updating of many existing neutron data files. Complete details are provided in the companion paper [1]. This paper focuses on how accurately application libraries may be expected to perform in criticality calculations with these data. Continuous energy cross section libraries, suitable for use with the MCNP Monte Carlo transport code, have been generated and applied to a suite of nearly one thousand critical benchmark assemblies defined in the International Criticality Safety Benchmark Evaluation Project's International Handbook of Evaluated Criticality Safety Benchmark Experiments. This suite covers uranium and plutonium fuel systems in a variety of forms such as metallic, oxide or solution, and under a variety of spectral conditions, including unmoderated (i.e., bare), metal reflected and water or other light element reflected. Assembly eigenvalues that were accurately predicted with ENDF/B-VII.0 cross sections such as unmoderated and uranium reflected ²³⁵U and ²³⁹Pu assemblies, HEU solution systems and LEU oxide lattice systems that mimic commercial PWR configurations continue to be accurately calculated with ENDF/B-VII.1 cross sections, and deficiencies in predicted eigenvalues for assemblies containing selected materials, including titanium, manganese, cadmium and tungsten are greatly reduced. Improvements are also confirmed for selected actinide reaction rates, such as capture in ²³⁶U, ²³⁸Pu, ²⁴²Pu, ²⁴¹Am, and ²⁴³Am in fast systems. Other deficiencies, such as the overprediction of Pu solution system critical eigenvalues

  20. ENDF/B-VII.1 Neutron Cross Section Data Testing with Critical Assembly Benchmarks and Reactor Experiments

    SciTech Connect

    G. Palmiotti

    2011-12-01

    The ENDF/B-VII.1 library is the latest revision to the United States' Evaluated Nuclear Data File (ENDF). The ENDF library is currently in its seventh generation, with ENDF/B-VII.0 being released in 2006. This revision expands upon that library, including the addition of new evaluated files (was 393 neutron files previously, now 418 including replacement of elemental vanadium and zinc evaluations with isotopic evaluations) and extension or updating of many existing neutron data files. Complete details are provided in the companion paper [1]. This paper focuses on how accurately application libraries may be expected to perform in criticality calculations with these data. Continuous energy cross section libraries, suitable for use with the MCNP Monte Carlo transport code, have been generated and applied to a suite of nearly one thousand critical benchmark assemblies defined in the International Criticality Safety Benchmark Evaluation Project's International Handbook of Evaluated Criticality Safety Benchmark Experiments. This suite covers uranium and plutonium fuel systems in a variety of forms such as metallic, oxide or solution, and under a variety of spectral conditions, including unmoderated (i.e., bare), metal reflected and water or other light element reflected. Assembly eigenvalues that were accurately predicted with ENDF/B-VII.0 cross sections such as unmoderated and uranium reflected 235U and 239Pu assemblies, HEU solution systems and LEU oxide lattice systems that mimic commercial PWR configurations continue to be accurately calculated with ENDF/B-VII.1 cross sections, and deficiencies in predicted eigenvalues for assemblies containing selected materials, including titanium, manganese, cadmium and tungsten are greatly reduced. Improvements are also confirmed for selected actinide reaction rates such as 236U capture. Other deficiencies, such as the overprediction of Pu solution system critical eigenvalues and a decreasing trend in calculated eigenvalue for

  1. ENDF/B-VII.1 Neutron Cross Section Data Testing with Critical Assembly Benchmarks and Reactor Experiments

    SciTech Connect

    Kahler, A.C.; MacFarlane, R.E.; Mosteller, R.D.; Kiedrowski, B.C.; Frankle, S.C.; Chadwick, M.B.; McKnight, R.D.; Lell, R.M.; Palmiotti, G.; Hiruta, H.; Herman, M.; Arcilla, R.; Mughabghab, S.F.; Sublet, J.C.; Trkov, A.; Trumbull, T.H.; Dunn, M.

    2011-12-01

    The ENDF/B-VII.1 library is the latest revision to the United States' Evaluated Nuclear Data File (ENDF). The ENDF library is currently in its seventh generation, with ENDF/B-VII.0 being released in 2006. This revision expands upon that library, including the addition of new evaluated files (was 393 neutron files previously, now 423 including replacement of elemental vanadium and zinc evaluations with isotopic evaluations) and extension or updating of many existing neutron data files. Complete details are provided in the companion paper [M. B. Chadwick et al., 'ENDF/B-VII.1 Nuclear Data for Science and Technology: Cross Sections, Covariances, Fission Product Yields and Decay Data,' Nuclear Data Sheets, 112, 2887 (2011)]. This paper focuses on how accurately application libraries may be expected to perform in criticality calculations with these data. Continuous energy cross section libraries, suitable for use with the MCNP Monte Carlo transport code, have been generated and applied to a suite of nearly one thousand critical benchmark assemblies defined in the International Criticality Safety Benchmark Evaluation Project's International Handbook of Evaluated Criticality Safety Benchmark Experiments. This suite covers uranium and plutonium fuel systems in a variety of forms such as metallic, oxide or solution, and under a variety of spectral conditions, including unmoderated (i.e., bare), metal reflected and water or other light element reflected. Assembly eigenvalues that were accurately predicted with ENDF/B-VII.0 cross sections such as unmoderated and uranium reflected ²³⁵U and ²³⁹Pu assemblies, HEU solution systems and LEU oxide lattice systems that mimic commercial PWR configurations continue to be accurately calculated with ENDF/B-VII.1 cross sections, and deficiencies in predicted eigenvalues for assemblies containing selected materials, including titanium, manganese, cadmium and tungsten are greatly reduced. Improvements are also confirmed for

  2. ENDF/B-VII.1 Neutron Cross Section Data Testing with Critical Assembly Benchmarks and Reactor Experiments

    NASA Astrophysics Data System (ADS)

    Kahler, A. C.; MacFarlane, R. E.; Mosteller, R. D.; Kiedrowski, B. C.; Frankle, S. C.; Chadwick, M. B.; McKnight, R. D.; Lell, R. M.; Palmiotti, G.; Hiruta, H.; Herman, M.; Arcilla, R.; Mughabghab, S. F.; Sublet, J. C.; Trkov, A.; Trumbull, T. H.; Dunn, M.

    2011-12-01

    The ENDF/B-VII.1 library is the latest revision to the United States' Evaluated Nuclear Data File (ENDF). The ENDF library is currently in its seventh generation, with ENDF/B-VII.0 being released in 2006. This revision expands upon that library, including the addition of new evaluated files (was 393 neutron files previously, now 423 including replacement of elemental vanadium and zinc evaluations with isotopic evaluations) and extension or updating of many existing neutron data files. Complete details are provided in the companion paper [M. B. Chadwick et al., "ENDF/B-VII.1 Nuclear Data for Science and Technology: Cross Sections, Covariances, Fission Product Yields and Decay Data," Nuclear Data Sheets, 112, 2887 (2011)]. This paper focuses on how accurately application libraries may be expected to perform in criticality calculations with these data. Continuous energy cross section libraries, suitable for use with the MCNP Monte Carlo transport code, have been generated and applied to a suite of nearly one thousand critical benchmark assemblies defined in the International Criticality Safety Benchmark Evaluation Project's International Handbook of Evaluated Criticality Safety Benchmark Experiments. This suite covers uranium and plutonium fuel systems in a variety of forms such as metallic, oxide or solution, and under a variety of spectral conditions, including unmoderated (i.e., bare), metal reflected and water or other light element reflected. Assembly eigenvalues that were accurately predicted with ENDF/B-VII.0 cross sections such as unmoderated and uranium reflected 235U and 239Pu assemblies, HEU solution systems and LEU oxide lattice systems that mimic commercial PWR configurations continue to be accurately calculated with ENDF/B-VII.1 cross sections, and deficiencies in predicted eigenvalues for assemblies containing selected materials, including titanium, manganese, cadmium and tungsten are greatly reduced. Improvements are also confirmed for selected

  3. Turbulent opposed-jet flames: A critical benchmark experiment for combustion LES

    SciTech Connect

    Geyer, D.; Dreizler, A.; Janicka, J.; Kempf, A.

    2005-12-01

    Turbulent opposed-jet configurations have gained attention as a challenging test case for validating the mixing and combustion models used in the simulation of turbulent combustion. In general, validation requires comprehensive experimental information on flow and scalar fields, and the emergence of combustion large-eddy simulation (CLES) has necessitated more advanced diagnostics. These laser-optical techniques allow measurements not only of single-point statistics but also of structural information of the flame, such as correlations, gradients, and structure functions. This paper presents thorough experimental and numerical investigations of one isothermal and two reacting turbulent opposed jets with fuel jets consisting of partially premixed methane. It focuses on one configuration at, and one close to, the highest Reynolds number at which flames could be stabilized. The experimental data presented comprise information on axial velocity, main species concentrations, temperature, mixture fraction, scalar dissipation rate, joint probability density functions, and structure functions. These quantities are compared to results of highly resolved CLES to show the configuration's suitability as a critical benchmark for state-of-the-art combustion LES.
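
    Among the structural quantities listed, a second-order structure function is simply the mean squared velocity increment over a separation r, S2(r) = <(u(x+r) - u(x))²>. A minimal Python sketch on a synthetic signal, standing in for the measured velocity record:

      import numpy as np

      rng = np.random.default_rng(1)
      u = np.cumsum(rng.standard_normal(4096))  # toy 1-D velocity signal

      def s2(u, r):
          """Second-order structure function at integer separation r."""
          du = u[r:] - u[:-r]
          return np.mean(du ** 2)

      for r in (1, 2, 4, 8, 16):
          print(r, s2(u, r))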

  4. Benchmark of the FLUKA model of crystal channeling against the UA9-H8 experiment

    NASA Astrophysics Data System (ADS)

    Schoofs, P.; Cerutti, F.; Ferrari, A.; Smirnov, G.

    2015-07-01

    Channeling in bent crystals is increasingly considered as an option for the collimation of high-energy particle beams. The installation of crystals in the LHC took place during the past year and aims at demonstrating the feasibility of crystal collimation and a possible improvement in cleaning efficiency. The performance of CERN collimation insertions is evaluated with the Monte Carlo code FLUKA, which is capable of simulating energy deposition in collimators as well as beam loss monitor signals. A new model of crystal channeling was developed specifically so that similar simulations can be conducted for crystal-assisted collimation. In this paper, the most recent results of this model are brought forward in the framework of a joint activity inside the UA9 collaboration to benchmark the different simulation tools available. The performance of crystal STF 45, produced at INFN Ferrara, was measured at the H8 beamline at CERN in 2010 and serves as the basis for the comparison. Distributions of deflected particles are shown to be in very good agreement with experimental data. Calculated dechanneling lengths and crystal performance in the transition region between the amorphous regime and volume reflection are also close to the measured ones.

  5. Three dimensional modeling of Laser-Plasma interaction: benchmarking our predictive modeling tools vs. experiments

    SciTech Connect

    Divol, L; Berger, R; Meezan, N; Froula, D H; Dixit, S; Suter, L; Glenzer, S H

    2007-11-08

    We have developed a new target platform to study laser-plasma interaction (LPI) in ignition-relevant conditions at the Omega laser facility (LLE/Rochester) [1]. By shooting an interaction beam along the axis of a gas-filled hohlraum heated by up to 17 kJ of heater beam energy, we were able to create a millimeter-scale underdense uniform plasma at electron temperatures above 3 keV. Extensive Thomson scattering measurements allowed us to benchmark our hydrodynamic simulations performed with HYDRA [2]. As a result of this effort, we can use these simulations with confidence as input parameters for our LPI simulation code pF3d [3]. In this paper, we show that by using accurate hydrodynamic profiles and full three-dimensional simulations, including realistic modeling of the laser intensity pattern generated by various smoothing options, whole-beam three-dimensional linear kinetic modeling of stimulated Brillouin scattering reproduces quantitatively the experimental measurements (SBS thresholds, reflectivity values, and the absence of measurable SRS). This good agreement was made possible by the recent increase in computing power routinely available for such simulations. These simulations accurately predicted the strong reduction of SBS measured when polarization smoothing is used.

  6. Three-dimensional modeling of laser-plasma interaction: Benchmarking our predictive modeling tools versus experiments

    SciTech Connect

    Divol, L.; Berger, R. L.; Meezan, N. B.; Froula, D. H.; Dixit, S.; Michel, P.; London, R.; Strozzi, D.; Ross, J.; Williams, E. A.; Still, B.; Suter, L. J.; Glenzer, S. H.

    2008-05-15

    New experimental capabilities [Froula et al., Phys. Rev. Lett. 98, 085001 (2007)] have been developed to study laser-plasma interaction (LPI) in ignition-relevant conditions at the Omega laser facility (LLE/Rochester). By shooting an interaction beam along the axis of a gas-filled hohlraum heated by up to 17 kJ of heater beam energy, a millimeter-scale underdense uniform plasma at electron temperatures above 3 keV was created. Extensive Thomson scattering measurements allowed the benchmarking of hydrodynamic simulations performed with HYDRA [Meezan et al., Phys. Plasmas 14, 056304 (2007)]. As a result of this effort, these simulations can be used with confidence to provide input parameters for the LPI simulation code PF3D [Berger et al., Phys. Plasmas 5, 4337 (1998)]. In this paper, it is shown that by using accurate hydrodynamic profiles and full three-dimensional simulations, including realistic modeling of the laser intensity pattern generated by various smoothing options, whole-beam three-dimensional linear kinetic modeling of stimulated Brillouin scattering (SBS) reproduces quantitatively the experimental measurements (SBS thresholds, reflectivity values, and the absence of measurable stimulated Raman scattering). This good agreement was made possible by the recent increase in computing power routinely available for such simulations. These simulations accurately predicted the strong reduction of SBS measured when polarization smoothing is used.

  7. Physics of Colloids in Space--Plus (PCS+) Experiment Completed Flight Acceptance Testing

    NASA Technical Reports Server (NTRS)

    Doherty, Michael P.

    2004-01-01

    The Physics of Colloids in Space--Plus (PCS+) experiment successfully completed system-level flight acceptance testing in the fall of 2003. This testing included electromagnetic interference (EMI) testing, vibration testing, and thermal testing. PCS+, an Expedite the Process of Experiments to Space Station (EXPRESS) Rack payload will deploy a second set of colloid samples within the PCS flight hardware system that flew on the International Space Station (ISS) from April 2001 to June 2002. PCS+ is slated to return to the ISS in late 2004 or early 2005.

  8. The effects of video compression on acceptability of images for monitoring life sciences experiments

    NASA Technical Reports Server (NTRS)

    Haines, Richard F.; Chuang, Sherry L.

    1992-01-01

    Future manned space operations for Space Station Freedom will call for a variety of carefully planned multimedia digital communications, including full-frame-rate color video, to support remote operations of scientific experiments. This paper presents the results of an investigation to determine if video compression is a viable solution to transmission bandwidth constraints. It reports on the impact of different levels of compression and associated calculational parameters on image acceptability to investigators in life-sciences research at ARC. Three nonhuman life-sciences disciplines (plant, rodent, and primate biology) were selected for this study. A total of 33 subjects viewed experimental scenes in their own scientific disciplines. Ten plant scientists viewed still images of wheat stalks at various stages of growth. Each image was compressed to four different compression levels using the Joint Photographic Experts Group (JPEG) standard algorithm, and the images were presented in random order. Twelve and eleven staff members viewed 30-sec videotaped segments showing small rodents and a small primate, respectively. Each segment was repeated at four different compression levels in random order using an inverse cosine transform (ICT) algorithm. Each viewer made a series of subjective image-quality ratings. There was a significant difference in image ratings according to the type of scene viewed within disciplines; thus, ratings were scene dependent. Image (still and motion) acceptability does, in fact, vary according to compression level. The JPEG still-image-compression levels, even with the large range of 5:1 to 120:1 in this study, yielded equally high levels of acceptability. In contrast, the ICT algorithm for motion compression yielded a sharp decline in acceptability below 768 kb/sec. Therefore, if video compression is to be used as a solution for overcoming transmission bandwidth constraints, the effective management of the ratio and compression parameters
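
    For readers who want to reproduce the flavor of the still-image manipulation, the sketch below compresses one synthetic image at several JPEG quality settings and reports the resulting compression ratios. It uses Pillow's JPEG encoder; quality settings stand in for the fixed 5:1 to 120:1 ratios used in the study, and the gradient image stands in for the wheat-stalk photographs.

      import io

      import numpy as np
      from PIL import Image

      arr = np.outer(np.linspace(0, 255, 256), np.ones(256)).astype(np.uint8)
      img = Image.fromarray(arr).convert("RGB")
      raw_bytes = 256 * 256 * 3  # uncompressed RGB size

      for quality in (95, 75, 50, 10):
          buf = io.BytesIO()
          img.save(buf, format="JPEG", quality=quality)
          print(f"quality {quality}: {raw_bytes / buf.tell():.0f}:1 compression")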

  9. The effects of video compression on acceptability of images for monitoring life sciences experiments

    NASA Astrophysics Data System (ADS)

    Haines, Richard F.; Chuang, Sherry L.

    1992-07-01

    Future manned space operations for Space Station Freedom will call for a variety of carefully planned multimedia digital communications, including full-frame-rate color video, to support remote operations of scientific experiments. This paper presents the results of an investigation to determine if video compression is a viable solution to transmission bandwidth constraints. It reports on the impact of different levels of compression and associated calculational parameters on image acceptability to investigators in life-sciences research at ARC. Three nonhuman life-sciences disciplines (plant, rodent, and primate biology) were selected for this study. A total of 33 subjects viewed experimental scenes in their own scientific disciplines. Ten plant scientists viewed still images of wheat stalks at various stages of growth. Each image was compressed to four different compression levels using the Joint Photographic Experts Group (JPEG) standard algorithm, and the images were presented in random order. Twelve and eleven staff members viewed 30-sec videotaped segments showing small rodents and a small primate, respectively. Each segment was repeated at four different compression levels in random order using an inverse cosine transform (ICT) algorithm. Each viewer made a series of subjective image-quality ratings. There was a significant difference in image ratings according to the type of scene viewed within disciplines; thus, ratings were scene dependent. Image (still and motion) acceptability does, in fact, vary according to compression level. The JPEG still-image-compression levels, even with the large range of 5:1 to 120:1 in this study, yielded equally high levels of acceptability. In contrast, the ICT algorithm for motion compression yielded a sharp decline in acceptability below 768 kb/sec. Therefore, if video compression is to be used as a solution for overcoming transmission bandwidth constraints, the effective management of the ratio and compression parameters

  10. TWODANT benchmark. Progress report

    SciTech Connect

    Lee, Sung

    1994-01-11

    The TWODANT (Two-Dimensional, Diffusion-Accelerated, Neutral-Particle Transport) code has been benchmarked against six critical experiments (the Jezebel plutonium critical assembly), and the calculated k-effective values have been compared with those from the KENO and MCNP codes.

  11. Benchmark measurements and calculations of a 3-dimensional neutron streaming experiment

    NASA Astrophysics Data System (ADS)

    Barnett, D. A., Jr.

    1991-02-01

    An experimental assembly known as the Dog-Legged Void assembly was constructed to measure the effect of neutron streaming in iron and void regions. The primary purpose of the measurements was to provide benchmark data against which various neutron transport calculation tools could be compared. The measurements included neutron flux spectra at four places and integral measurements at two places in the iron streaming path, as well as integral measurements along several axial traverses. These data have been used in the verification of Oak Ridge National Laboratory's three-dimensional discrete ordinates code, TORT. For a base-case calculation using one-half-inch mesh spacing, finite-difference spatial differencing, an S16 quadrature, and P1 cross sections in the MUFT multigroup structure, the calculated solution agreed with the spectral measurements to within 18 percent and with the integral measurements to within 24 percent. Variations on the base case using a few-group energy structure and P1 and P3 cross sections showed similar agreement. Calculations using a linear-nodal spatial differencing scheme and few-group cross sections also showed similar agreement. For the same mesh size, the nodal method was seen to require 2.2 times as much CPU time as the finite-difference method. A nodal calculation using a typical mesh spacing of 2 inches, which had approximately 32 times fewer mesh cells than the base case, agreed with the measurements to within 34 percent and yet required only 8 percent of the CPU time.

  12. Trust, confidence, procedural fairness, outcome fairness, moral conviction, and the acceptance of GM field experiments.

    PubMed

    Siegrist, Michael; Connor, Melanie; Keller, Carmen

    2012-08-01

    In 2005, Swiss citizens endorsed a moratorium on gene technology, resulting in the prohibition of the commercial cultivation of genetically modified crops and the growth of genetically modified animals until 2013. However, scientific research was not affected by this moratorium, and in 2008, GMO field experiments were conducted that allowed us to examine the factors that influence their acceptance by the public. In this study, trust and confidence items were analyzed using principal component analysis. The analysis revealed the following three factors: "economy/health and environment" (value similarity based trust), "trust and honesty of industry and scientists" (value similarity based trust), and "competence" (confidence). The results of a regression analysis showed that all the three factors significantly influenced the acceptance of GM field experiments. Furthermore, risk communication scholars have suggested that fairness also plays an important role in the acceptance of environmental hazards. We, therefore, included measures for outcome fairness and procedural fairness in our model. However, the impact of fairness may be moderated by moral conviction. That is, fairness may be significant for people for whom GMO is not an important issue, but not for people for whom GMO is an important issue. The regression analysis showed that, in addition to the trust and confidence factors, moral conviction, outcome fairness, and procedural fairness were significant predictors. The results suggest that the influence of procedural fairness is even stronger for persons having high moral convictions compared with persons having low moral convictions. PMID:22150405
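
    The two-step analysis described here, factor extraction followed by regression on factor scores, can be sketched as follows. The data are simulated and the PCA is unrotated, a simplification of the published analysis.

      import numpy as np
      from sklearn.decomposition import PCA
      from sklearn.linear_model import LinearRegression

      rng = np.random.default_rng(2)
      items = rng.normal(size=(300, 9))      # responses to 9 trust/confidence items
      acceptance = rng.normal(size=300)      # stated acceptance of GM field trials

      factors = PCA(n_components=3).fit_transform(items)   # three factors, as above
      model = LinearRegression().fit(factors, acceptance)
      print("R^2 =", model.score(factors, acceptance))
      print("factor coefficients:", model.coef_)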

  13. Trust, confidence, procedural fairness, outcome fairness, moral conviction, and the acceptance of GM field experiments.

    PubMed

    Siegrist, Michael; Connor, Melanie; Keller, Carmen

    2012-08-01

    In 2005, Swiss citizens endorsed a moratorium on gene technology, resulting in the prohibition of the commercial cultivation of genetically modified crops and the growth of genetically modified animals until 2013. However, scientific research was not affected by this moratorium, and in 2008, GMO field experiments were conducted that allowed us to examine the factors that influence their acceptance by the public. In this study, trust and confidence items were analyzed using principal component analysis. The analysis revealed the following three factors: "economy/health and environment" (value similarity based trust), "trust and honesty of industry and scientists" (value similarity based trust), and "competence" (confidence). The results of a regression analysis showed that all the three factors significantly influenced the acceptance of GM field experiments. Furthermore, risk communication scholars have suggested that fairness also plays an important role in the acceptance of environmental hazards. We, therefore, included measures for outcome fairness and procedural fairness in our model. However, the impact of fairness may be moderated by moral conviction. That is, fairness may be significant for people for whom GMO is not an important issue, but not for people for whom GMO is an important issue. The regression analysis showed that, in addition to the trust and confidence factors, moral conviction, outcome fairness, and procedural fairness were significant predictors. The results suggest that the influence of procedural fairness is even stronger for persons having high moral convictions compared with persons having low moral convictions.

  14. TESTING AND ACCEPTANCE OF FUEL PLATES FOR RERTR FUEL DEVELOPMENT EXPERIMENTS

    SciTech Connect

    J.M. Wight; G.A. Moore; S.C. Taylor

    2008-10-01

    This paper discusses how candidate fuel plates for RERTR Fuel Development experiments are examined and tested for acceptance prior to reactor insertion. These tests include destructive and nondestructive examinations (DE and NDE). The DE includes blister annealing for dispersion fuel plates, bend testing of adjacent cladding, and microscopic examination of archive fuel plates. The NDE includes ultrasonic (UT) scanning and radiography. UT tests include an ultrasonic scan for areas of “debonds” and a high-frequency ultrasonic scan to determine the "minimum cladding" over the fuel. Radiography inspections include identifying fuel outside of the maximum fuel zone, as well as measurements and calculations of fuel density. Details of each test are provided and acceptance criteria are defined. These tests help to provide a high level of confidence that the fuel plate will perform in the reactor without a breach in the cladding.

  15. Advances in code validation for mixed-oxide fuel use in light-water reactors through benchmark experiments in the VENUS critical facility

    SciTech Connect

    D'hondt, Pierre; Baeten, Peter; Lance, Bernard; Marloye, Daniel; Basselier, Jacques

    2004-07-01

    Based on the experience accumulated during 25 years of collaboration, SCK.CEN and Belgonucleaire decided to implement a series of benchmark experiments in the VENUS critical facility in Mol, Belgium, to give organizations concerned with MOX fuel the possibility to calibrate and improve their neutronic calculation tools. In this paper these benchmark programmes and their outcomes are highlighted; they have demonstrated that VENUS is a very flexible and easy-to-use tool for the investigation of neutronic data as well as for the study of licensing, safety, and operational aspects of MOX use in LWRs. (authors)

  16. The journey to accepting support: how parents of profoundly disabled children experience support in their lives.

    PubMed

    Brett, Jane

    2004-10-01

    Advances in medical knowledge and care have extended the lives of children with profound and multiple disabilities. In most cases it is the parents who meet the often complex and continual needs of their child with disabilities in their own home. This study explored the experience of support in the lives of such parents. The interpretive, hermeneutic phenomenology of Heidegger was employed to create a detailed and authentic account of the parents' experiences of support. Five interrelated themes emerged from data from in-depth interviews with six parents randomly selected from a purposive sample in a special school setting. The themes were: parents' feelings about support, the journey to accepting support, support as a loss, disability and the parent and the supportive relationship. Understanding the experience of support from the parent's perspective may lead to a consideration of flexible systems that challenge practice to ensure that supporters listen, learn, develop and deliver support in ways that are helpful. PMID:15537108

  17. Verification and validation benchmarks.

    SciTech Connect

    Oberkampf, William Louis; Trucano, Timothy Guy

    2007-02-01

    Verification and validation (V&V) are the primary means to assess the accuracy and reliability of computational simulations. V&V methods and procedures have fundamentally improved the credibility of simulations in several high-consequence fields, such as nuclear reactor safety, underground nuclear waste storage, and nuclear weapon safety. Although the terminology is not uniform across engineering disciplines, code verification deals with assessing the reliability of the software coding, and solution verification deals with assessing the numerical accuracy of the solution to a computational model. Validation addresses the physics modeling accuracy of a computational simulation by comparing the computational results with experimental data. Code verification benchmarks and validation benchmarks have been constructed for a number of years in every field of computational simulation. However, no comprehensive guidelines have been proposed for the construction and use of V&V benchmarks. For example, the field of nuclear reactor safety has not focused on code verification benchmarks, but it has placed great emphasis on developing validation benchmarks. Many of these validation benchmarks are closely related to the operations of actual reactors at near-safety-critical conditions, as opposed to being more fundamental-physics benchmarks. This paper presents recommendations for the effective design and use of code verification benchmarks based on manufactured solutions, classical analytical solutions, and highly accurate numerical solutions. In addition, this paper presents recommendations for the design and use of validation benchmarks, highlighting the careful design of building-block experiments, the estimation of experimental measurement uncertainty for both inputs and outputs to the code, validation metrics, and the role of model calibration in validation. It is argued that the understanding of predictive capability of a computational model is built on the level of
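
    To make the code-verification idea concrete: with the method of manufactured solutions one picks an exact solution, derives the matching source term, and confirms that the solver reproduces its formal order of accuracy. The Poisson solver below is a generic Python illustration of that procedure, not a code from the paper.

      import numpy as np

      def solve_poisson(n):
          """Solve -u'' = f on (0,1), u(0)=u(1)=0, 2nd-order finite differences."""
          h = 1.0 / (n + 1)
          x = np.linspace(h, 1 - h, n)
          f = np.pi**2 * np.sin(np.pi * x)   # manufactured from u = sin(pi x)
          A = (np.diag(2.0 * np.ones(n)) - np.diag(np.ones(n - 1), 1)
               - np.diag(np.ones(n - 1), -1)) / h**2
          return x, np.linalg.solve(A, f)

      errors = []
      for n in (16, 32, 64, 128):
          x, u = solve_poisson(n)
          errors.append(np.max(np.abs(u - np.sin(np.pi * x))))

      # Observed order should approach 2 for a correctly coded scheme
      for e0, e1 in zip(errors, errors[1:]):
          print(f"observed order: {np.log2(e0 / e1):.2f}")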

  18. Validation of the Serpent 2 code on TRIGA Mark II benchmark experiments.

    PubMed

    Ćalić, Dušan; Žerovnik, Gašper; Trkov, Andrej; Snoj, Luka

    2016-01-01

    The main aim of this paper is the development and validation of a 3D computational model of the TRIGA research reactor using the Serpent 2 code. The calculated parameters were compared to the experimental results and to calculations performed with the MCNP code. The results show that the calculated normalized reaction rates and flux distribution within the core are in good agreement with MCNP and experiment, while in the reflector the flux distribution differs by up to 3% from the measurements. PMID:26516989

  19. Large acceptance forward Cherenkov detector for the BRAHMS experiment at RHIC

    NASA Astrophysics Data System (ADS)

    Budick, B.; Beavis, D.; Chasman, C.

    2010-09-01

    A multi-element detector based on Cherenkov radiation in plastic and on photomultiplier tubes has been constructed that is particularly useful in collider experiments. The detector covers the pseudorapidity interval 3.23 < η < 5.25 with large acceptance for the products of proton-proton and heavy-ion collisions. The detector's primary purposes are determining the vertex of the interaction, providing a minimum-bias trigger, finding the start time for time-of-flight (and other timing applications), and monitoring the luminosity. Monte Carlo simulations describe the pulse-height response of the detector well, as does an analytic expression that has been developed. The detector performed well in the RHIC experiment BRAHMS.
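
    The quoted interval is in pseudorapidity, η = -ln tan(θ/2), so the coverage can be translated back to polar angles from the beam axis. A small Python sketch:

      import math

      def theta_deg(eta):
          """Polar angle (degrees) corresponding to pseudorapidity eta."""
          return math.degrees(2.0 * math.atan(math.exp(-eta)))

      print(theta_deg(3.23))  # ~4.5 degrees from the beam axis
      print(theta_deg(5.25))  # ~0.6 degrees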

  20. NASA Controller Acceptability Study 1(CAS-1) Experiment Description and Initial Observations

    NASA Technical Reports Server (NTRS)

    Chamberlain, James P.; Consiglio, Maria C.; Comstock, James R., Jr.; Ghatas, Rania W.; Munoz, Cesar

    2015-01-01

    This paper describes the Controller Acceptability Study 1 (CAS-1) experiment that was conducted by NASA Langley Research Center personnel from January through March 2014 and presents partial CAS-1 results. CAS-1 employed 14 air traffic controller volunteers as research subjects to assess the viability of simulated future unmanned aircraft systems (UAS) operating alongside manned aircraft in moderate-density, moderate-complexity Class E airspace. These simulated UAS were equipped with a prototype pilot-in-the-loop (PITL) Detect and Avoid (DAA) system, specifically the Self-Separation (SS) function of such a system based on Stratway+ software to replace the see-and-avoid capabilities of manned aircraft pilots. A quantitative CAS-1 objective was to determine horizontal miss distance (HMD) values for SS encounters that were most acceptable to air traffic controllers, specifically HMD values that were assessed as neither unsafely small nor disruptively large. HMD values between 0.5 and 3.0 nautical miles (nmi) were assessed for a wide array of encounter geometries between UAS and manned aircraft. The paper includes brief introductory material about DAA systems and their SS functions, followed by descriptions of the CAS-1 simulation environment, prototype PITL SS capability, and experiment design, and concludes with presentation and discussion of partial CAS-1 data and results.

  1. Effects of an Educational Experience Incorporating an Inventory of Factors Potentially Influencing Student Acceptance of Biological Evolution

    ERIC Educational Resources Information Center

    Wiles, Jason R.; Alters, Brian

    2011-01-01

    This investigation provides an extensive review of scientific, religious, and otherwise non-scientific factors that may influence student acceptance of biological evolution. We also measure the extent to which students' levels of acceptance changed following an educational experience designed to address an inclusive inventory of factors identified…

  2. Electron-impact ionization of helium: A comprehensive experiment benchmarks theory

    SciTech Connect

    Ren, X.; Pflueger, T.; Senftleben, A.; Xu, S.; Dorn, A.; Ullrich, J.; Bray, I.; Fursa, D.V.; Colgan, J.; Pindzola, M.S.

    2011-05-15

    Single ionization of helium by 70.6-eV electron impact is studied in a comprehensive experiment covering a major part of the entire collision kinematics and the full 4{pi} solid angle for the emitted electron. The absolutely normalized triple-differential experimental cross sections are compared with results from the convergent close-coupling (CCC) and the time-dependent close-coupling (TDCC) theories. Whereas agreement with the TDCC prediction is found only for equal energy sharing, the CCC calculations are in excellent agreement with essentially all experimentally observed dynamical features, including the absolute magnitude of the cross sections.

  3. Limitations of Community College Benchmarking and Benchmarks

    ERIC Educational Resources Information Center

    Bers, Trudy H.

    2006-01-01

    This chapter distinguishes between benchmarks and benchmarking, describes a number of data and cultural limitations to benchmarking projects, and suggests that external demands for accountability are the dominant reason for growing interest in benchmarking among community colleges.

  4. Benchmark Experiments of Thermal Neutron and Capture Gamma-Ray Distributions in Concrete Using {sup 252}Cf

    SciTech Connect

    Asano, Yoshihiro; Sugita, Takeshi; Hirose, Hideyuki; Suzaki, Takenori

    2005-10-15

    The distributions of thermal neutrons and capture gamma rays in ordinary concrete were investigated by using {sup 252}Cf. Two subjects are considered. The first is benchmark experiments on the thermal neutron and capture gamma-ray distributions in ordinary concrete. The thermal neutron and capture gamma-ray distributions were measured by using gold-foil activation detectors and thermoluminescence detectors and were compared with simulations performed with the discrete ordinates code ANISN, using two different group structures of a cross-section library based on the new Japanese evaluation, JENDL-3.3; reasonable agreement was found for both the fine and the coarse thermal-neutron group structures. The second is a comparison of simulations with two different cross-section libraries, JENDL-3.3 and ENDF/B-VI, for deep penetration of neutrons in the concrete, which agree closely over 0- to 100-cm-thick concrete. However, the differences in flux grow with increasing concrete thickness, reaching approximately a factor of eight near 4-m thickness.
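
    The depth dependence reported above is the expected signature of small cross-section differences compounding exponentially. The toy calculation below shows how a roughly 5% difference in an effective removal coefficient grows to a factor of about 8 by 4 m; the coefficients are invented for illustration and are not JENDL-3.3 or ENDF/B-VI values.

      import numpy as np

      # Two hypothetical effective removal coefficients (1/cm); the flux ratio
      # between the two "libraries" diverges exponentially with depth.
      mu_a, mu_b = 0.1000, 0.0948
      for depth in (100, 200, 300, 400):            # cm
          ratio = np.exp((mu_a - mu_b) * depth)
          print(f"{depth:3d} cm: flux ratio ~ {ratio:4.1f}")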

  5. Willingness-To-Accept Pharmaceutical Retail Inconvenience: Evidence from a Contingent Choice Experiment

    PubMed Central

    Finlay, Keith; Stoecker, Charles; Cunningham, Scott

    2015-01-01

    Objectives Restrictions on retail purchases of pseudoephedrine are one regulatory approach to reduce the social costs of methamphetamine production and use, but may impose costs on legitimate users of nasal decongestants. This is the first study to evaluate the costs of restricting access to medications on consumer welfare. Our objective was to measure the inconvenience cost consumers place on restrictions for cold medication purchases including identification requirements, purchase limits, over-the-counter availability, prescription requirements, and the active ingredient. Methods We conducted a contingent choice experiment with Amazon Mechanical Turk workers that presented participants with randomized, hypothetical product prices and combinations of restrictions that reflect the range of public policies. We used a conditional logit model to calculate willingness-to-accept each restriction. Results Respondents’ willingness-to-accept prescription requirements was $14.17 ($9.76–$18.58) and behind-the-counter restrictions was $9.68 ($7.03–$12.33) per box of pseudoephedrine product. Participants were willing to pay $4.09 ($1.66–$6.52) per box to purchase pseudoephedrine-based products over phenylephrine-based products. Conclusions Restricting access to medicines as a means of reducing the social costs of non-medical use can imply large inconvenience costs for legitimate consumers. These results are relevant to discussions of retail access restrictions on other medications. PMID:26024444
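
    In a conditional logit with a price coefficient, each restriction's money value falls out as a ratio of coefficients; for a disamenity, willingness-to-accept is the negative of willingness-to-pay. The coefficients below are invented (chosen only so the implied WTAs land near the point estimates quoted above) and are not the paper's fitted values.

      # Utility: U = b_price*price + b_rx*rx + b_btc*btc + ...
      b_price = -0.40      # marginal utility of price (hypothetical)
      b_rx    = -5.67      # disutility of a prescription requirement
      b_btc   = -3.87      # disutility of behind-the-counter access

      wtp = lambda b_attr: -b_attr / b_price   # money value of the attribute
      print(f"WTA prescription requirement: ${-wtp(b_rx):.2f} per box")
      print(f"WTA behind-the-counter:       ${-wtp(b_btc):.2f} per box")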

  6. 'Feel the Feeling': Psychological practitioners' experience of acceptance and commitment therapy well-being training in the workplace.

    PubMed

    Wardley, Matt Nj; Flaxman, Paul E; Willig, Carla; Gillanders, David

    2016-08-01

    This empirical study investigates psychological practitioners' experience of worksite training in acceptance and commitment therapy using an interpretative phenomenological analysis methodology. Semi-structured interviews were conducted with eight participants, and three themes emerged from the analysis: influence of previous experiences, self and others, and impact and application. The significance of the experiential nature of the acceptance and commitment therapy training is explored, as well as the dual aspects of developing participants' self-care while also considering their own clinical practice. Consistencies and inconsistencies across acceptance and commitment therapy processes are considered, as well as clinical implications, study limitations and future research suggestions.

  7. Pain Management Experiences and the Acceptability of Cognitive Behavioral Strategies Among American Indians and Alaska Natives

    PubMed Central

    Haozous, Emily A.; Doorenbos, Ardith Z.; Stoner, Susan

    2014-01-01

    Purpose The purpose of this project was to explore the chronic pain experience and establish cultural appropriateness of cognitive behavioral pain management (CBPM) techniques in American Indians and Alaska Natives (AI/ANs). Design A semistructured interview guide was used with three focus groups of AI/AN patients in the U.S. Southwest and Pacific Northwest regions to explore pain and CBPM in AI/ANs. Findings The participants provided rich qualitative data regarding chronic pain and willingness to use CBPM. Themes included empty promises and health care insufficiencies, individuality, pain management strategies, and suggestions for health care providers. Conclusion Results suggest that there is room for improvement in chronic pain care among AI/ANs and that CBPM would likely be a viable and culturally appropriate approach for chronic pain management. Implications This research provides evidence that CBPM is culturally acceptable and in alignment with existing traditional AI/AN strategies for coping and healing. PMID:25403169

  8. A high resolution, broad energy acceptance spectrometer for laser wakefield acceleration experiments.

    PubMed

    Sears, Christopher M S; Cuevas, Sofia Benavides; Schramm, Ulrich; Schmid, Karl; Buck, Alexander; Habs, Dieter; Krausz, Ferenc; Veisz, Laszlo

    2010-07-01

    Laser wakefield experiments present a unique challenge in measuring the resulting electron energy properties due to the large energy range of interest, typically several hundred MeV, and the large electron beam divergence and pointing jitter (>1 mrad). In many experiments the energy resolution and accuracy are limited by the convolved transverse spot size and pointing jitter of the beam. In this paper we present an electron energy spectrometer consisting of two magnets designed specifically for laser wakefield experiments. In the primary magnet the field is produced by permanent magnets. A second optional electromagnet can be used to obtain better resolution for electron energies above 75 MeV. The spectrometer has an acceptance of 2.5-400 MeV (E(max)/E(min)>100) with a resolution of better than 1% rms for electron energies above 25 MeV. This high resolution is achieved by refocusing electrons in the energy plane and without any postprocessing image deconvolution. Finally, the spectrometer employs two complementary detection mechanisms: (1) absolutely calibrated scintillation screens imaged by cameras outside the vacuum chamber and (2) an array of scintillating fibers coupled to a low-noise charge-coupled device.
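
    A back-of-the-envelope version of the resolution argument: a dipole disperses energies across the detector plane, and the achievable dE/E is roughly the effective source-spot size divided by the local dispersion, which is why refocusing in the energy plane pays off. The thin-dipole dispersion model and all numbers below are illustrative assumptions, not this spectrometer's optics.

      import numpy as np

      # Thin-dipole toy: screen position x ~ k/E, so |dx/dE| = k/E**2 and
      # dE/E ~ spot * E / k. Resolution degrades with energy and improves
      # with a smaller (refocused) effective spot.
      k = 2000.0                                # mm*MeV, assumed
      E = np.array([25.0, 100.0, 400.0])        # MeV
      for spot_mm in (2.0, 0.2):                # unfocused vs refocused spot
          dE_over_E = spot_mm * E / k * 100.0   # percent
          print(f"spot {spot_mm} mm -> dE/E = {np.round(dE_over_E, 2)} % at {E} MeV")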

  9. A high resolution, broad energy acceptance spectrometer for laser wakefield acceleration experiments

    SciTech Connect

    Sears, Christopher M. S.; Cuevas, Sofia Benavides; Veisz, Laszlo; Schramm, Ulrich; Schmid, Karl; Buck, Alexander; Habs, Dieter; Krausz, Ferenc

    2010-07-15

    Laser wakefield experiments present a unique challenge in measuring the resulting electron energy properties due to the large energy range of interest, typically several hundred MeV, and the large electron beam divergence and pointing jitter (>1 mrad). In many experiments the energy resolution and accuracy are limited by the convolved transverse spot size and pointing jitter of the beam. In this paper we present an electron energy spectrometer consisting of two magnets designed specifically for laser wakefield experiments. In the primary magnet the field is produced by permanent magnets. A second optional electromagnet can be used to obtain better resolution for electron energies above 75 MeV. The spectrometer has an acceptance of 2.5-400 MeV (E{sub max}/E{sub min}>100) with a resolution of better than 1% rms for electron energies above 25 MeV. This high resolution is achieved by refocusing electrons in the energy plane and without any postprocessing image deconvolution. Finally, the spectrometer employs two complementary detection mechanisms: (1) absolutely calibrated scintillation screens imaged by cameras outside the vacuum chamber and (2) an array of scintillating fibers coupled to a low-noise charge-coupled device.

  10. Acceptability of Financial Incentives for Health Behaviours: A Discrete Choice Experiment

    PubMed Central

    Giles, Emma L.; Becker, Frauke; Ternent, Laura; Sniehotta, Falko F.; McColl, Elaine

    2016-01-01

    Background Healthy behaviours are important determinants of health and disease, but many people find it difficult to perform these behaviours. Systematic reviews support the use of personal financial incentives to encourage healthy behaviours. There is concern that financial incentives may be unacceptable to the public, those delivering services and policymakers, but this has been poorly studied. Without widespread acceptability, financial incentives are unlikely to be widely implemented. We sought to answer two questions: what are the relative preferences of UK adults for attributes of financial incentives for healthy behaviours? Do preferences vary according to the respondents’ socio-demographic characteristics? Methods We conducted an online discrete choice experiment. Participants were adult members of a market research panel living in the UK selected using quota sampling. Preferences were examined for financial incentives for: smoking cessation, regular physical activity, attendance for vaccination, and attendance for screening. Attributes of interest (and their levels) were: type of incentive (none, cash, shopping vouchers or lottery tickets); value of incentive (a continuous variable); schedule of incentive (same value each week, or value increases as behaviour change is sustained); other information provided (none, written information, face-to-face discussion, or both); and recipients (all eligible individuals, people living in low-income households, or pregnant women). Results Cash or shopping voucher incentives were preferred as much as, or more than, no incentive in all cases. Lower value incentives and those offered to all eligible individuals were preferred. Preferences for additional information provided alongside incentives varied between behaviours. Younger participants and men were more likely to prefer incentives. There were no clear differences in preference according to educational attainment. Conclusions Cash or shopping voucher

  11. Benchmark Evaluation of Start-Up and Zero-Power Measurements at the High-Temperature Engineering Test Reactor

    SciTech Connect

    Bess, John D.; Fujimoto, Nozomu

    2014-10-09

    Benchmark models were developed to evaluate six cold-critical and two warm-critical, zero-power measurements of the HTTR. Additional measurements of a fully-loaded subcritical configuration, core excess reactivity, shutdown margins, six isothermal temperature coefficients, and axial reaction-rate distributions were also evaluated as acceptable benchmark experiments. Insufficient information is publicly available to develop finely-detailed models of the HTTR as much of the design information is still proprietary. However, the uncertainties in the benchmark models are judged to be of sufficient magnitude to encompass any biases and bias uncertainties incurred through the simplification process used to develop the benchmark models. Dominant uncertainties in the experimental keff for all core configurations come from uncertainties in the impurity content of the various graphite blocks that comprise the HTTR. Monte Carlo calculations of keff are between approximately 0.9% and 2.7% greater than the benchmark values. Reevaluation of the HTTR models as additional information becomes available could improve the quality of this benchmark and possibly reduce the computational biases. High-quality characterization of graphite impurities would significantly improve the quality of the HTTR benchmark assessment. Simulations of the other reactor physics measurements are in good agreement with the benchmark experiment values. The complete benchmark evaluation details are available in the 2014 edition of the International Handbook of Evaluated Reactor Physics Benchmark Experiments.
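
    Eigenvalue biases like the 0.9-2.7% spread quoted above are conventionally expressed as (C-E)/E, often quoted in pcm (10^-5). A minimal sketch, with keff values invented only to reproduce a similar spread:

      # Bias of calculated eigenvalues against a benchmark keff, in % and pcm.
      k_benchmark = 1.0025                     # hypothetical benchmark value
      for k_calc in (1.0118, 1.0203, 1.0295):  # hypothetical calculations
          bias = (k_calc - k_benchmark) / k_benchmark
          print(f"k = {k_calc:.4f}: bias = {100*bias:.2f} %  ({1e5*bias:+.0f} pcm)")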

  12. Benchmark Evaluation of Start-Up and Zero-Power Measurements at the High-Temperature Engineering Test Reactor

    DOE PAGES

    Bess, John D.; Fujimoto, Nozomu

    2014-10-09

    Benchmark models were developed to evaluate six cold-critical and two warm-critical, zero-power measurements of the HTTR. Additional measurements of a fully-loaded subcritical configuration, core excess reactivity, shutdown margins, six isothermal temperature coefficients, and axial reaction-rate distributions were also evaluated as acceptable benchmark experiments. Insufficient information is publicly available to develop finely-detailed models of the HTTR as much of the design information is still proprietary. However, the uncertainties in the benchmark models are judged to be of sufficient magnitude to encompass any biases and bias uncertainties incurred through the simplification process used to develop the benchmark models. Dominant uncertainties in the experimental keff for all core configurations come from uncertainties in the impurity content of the various graphite blocks that comprise the HTTR. Monte Carlo calculations of keff are between approximately 0.9% and 2.7% greater than the benchmark values. Reevaluation of the HTTR models as additional information becomes available could improve the quality of this benchmark and possibly reduce the computational biases. High-quality characterization of graphite impurities would significantly improve the quality of the HTTR benchmark assessment. Simulations of the other reactor physics measurements are in good agreement with the benchmark experiment values. The complete benchmark evaluation details are available in the 2014 edition of the International Handbook of Evaluated Reactor Physics Benchmark Experiments.

  13. Benchmark Evaluation of the NRAD Reactor LEU Core Startup Measurements

    SciTech Connect

    J. D. Bess; T. L. Maddock; M. A. Marshall

    2011-09-01

    The Neutron Radiography (NRAD) reactor is a 250-kW TRIGA-(Training, Research, Isotope Production, General Atomics)-conversion-type reactor at the Idaho National Laboratory; it is primarily used for neutron radiography analysis of irradiated and unirradiated fuels and materials. The NRAD reactor was converted from HEU to LEU fuel with 60 fuel elements and brought critical on March 31, 2010. This configuration of the NRAD reactor has been evaluated as an acceptable benchmark experiment and is available in the 2011 editions of the International Handbook of Evaluated Criticality Safety Benchmark Experiments (ICSBEP Handbook) and the International Handbook of Evaluated Reactor Physics Benchmark Experiments (IRPhEP Handbook). Significant effort went into precisely characterizing all aspects of the reactor core dimensions and material properties; detailed analyses of reactor parameters minimized experimental uncertainties. The largest contributors to the total benchmark uncertainty were the 234U, 236U, Er, and Hf content in the fuel; the manganese content in the stainless steel cladding; and the unknown level of water saturation in the graphite reflector blocks. A simplified benchmark model of the NRAD reactor was prepared with a keff of 1.0012 {+-} 0.0029 (1σ). Monte Carlo calculations with MCNP5 and KENO-VI and various neutron cross section libraries were performed and compared with the benchmark eigenvalue for the 60-fuel-element core configuration; all calculated eigenvalues are between 0.3 and 0.8% greater than the benchmark value. Benchmark evaluations of the NRAD reactor are beneficial in understanding biases and uncertainties affecting criticality safety analyses of storage, handling, or transportation applications with LEU-Er-Zr-H fuel.

  14. Experience with a flavor in mother's milk modifies the infant's acceptance of flavored cereal.

    PubMed

    Mennella, J A; Beauchamp, G K

    1999-11-01

    The present series of studies aimed to investigate whether experience with a flavor in mothers' milk modifies the infants' acceptance of similarly flavored foods at weaning. First, we established, using methods developed in our laboratory, that the ingestion of carrot juice by lactating women produced a sensory change in their milk approximately 2 to 3 hr after the ingestion of the beverage. Second, we randomly formed two groups of breast-fed infants who had been fed cereal for a few weeks but had only experienced cereal prepared with water. Their mothers were asked to consume one of two types of beverages (i.e., carrot juice, water) during the exposure period. Each mother was observed feeding her infant cereal during four test sessions. The first two sessions occurred during the 2 days before the exposure period; in counterbalanced order, infants were fed cereal prepared with water on 1 testing day and cereal prepared with carrot juice on the other. These two test sessions were then repeated following the exposure period. The results demonstrated that the infants who had exposure to the flavor of carrots in their mothers' milk during the exposure period consumed less of the carrot-flavored cereal and spent less time feeding when compared to the control infants whose mothers consumed the water. This may be a form of sensory-specific satiety such that the infants become less responsive to a flavor that they have been extensively exposed to in the very recent past.

  15. A Quantitative Examination of User Experience as an Antecedent to Student Perception in Technology Acceptance Modeling

    ERIC Educational Resources Information Center

    Butler, Rory

    2013-01-01

    Internet-enabled mobile devices have increased the accessibility of learning content for students. Given the ubiquitous nature of mobile computing technology, a thorough understanding of the acceptance factors that impact a learner's intention to use mobile technology as an augment to their studies is warranted. Student acceptance of mobile…

  16. High School Students' Perceptions of Evolution Instruction: Acceptance and Evolution Learning Experiences

    ERIC Educational Resources Information Center

    Donnelly, Lisa A.; Kazempour, Mahsa; Amirshokoohi, Aidin

    2009-01-01

    Evolution is an important and sometimes controversial component of high school biology. In this study, we used a mixed methods approach to explore students' evolution acceptance and views of evolution teaching and learning. Students explained their acceptance and rejection of evolution in terms of evidence and conflicts with religion and…

  17. Effects of an Educational Experience Incorporating an Inventory of Factors Potentially Influencing Student Acceptance of Biological Evolution

    NASA Astrophysics Data System (ADS)

    Wiles, Jason R.; Alters, Brian

    2011-12-01

    This investigation provides an extensive review of scientific, religious, and otherwise non-scientific factors that may influence student acceptance of biological evolution. We also measure the extent to which students' levels of acceptance changed following an educational experience designed to address an inclusive inventory of factors identified as potentially affecting student acceptance of evolution (n = 81, pre-test/post-test; n = 37, one-year longitudinal). Acceptance of evolution was measured using the Measure of Acceptance of the Theory of Evolution (MATE) instrument among participants enrolled in a secondary-level academic programme during the summer prior to their final year of high school and as they transitioned to the post-secondary level. Student acceptance of evolution was measured to be significantly higher than initial levels both immediately following and over one year after the educational experience. Results reported herein carry implications for future quantitative and qualitative research as well as for cross-disciplinary instruction plans related to evolutionary science and non-scientific factors which may influence student understanding of evolution.

  18. Growth and Expansion of the International Criticality Safety Benchmark Evaluation Project and the Newly Organized International Reactor Physics Experiment Evaluation Project

    SciTech Connect

    J. Blair Briggs; Lori Scott; Yolanda Rugama; Enrico Satori

    2007-05-01

    Since ICNC 2003, the International Criticality Safety Benchmark Evaluation Project (ICSBEP) has continued to expand its efforts and broaden its scope. Criticality-alarm / shielding type benchmarks and fundamental physics measurements that are relevant to criticality safety applications are not only included in the scope of the project, but benchmark data are also included in the latest version of the handbook. A considerable number of improvements have been made to the searchable database, DICE, and the criticality-alarm / shielding benchmarks and fundamental physics measurements have been included in the database. There were 12 countries participating in the ICSBEP in 2003. That number has increased to 18 with recent contributions of data and/or resources from Brazil, Czech Republic, Poland, India, Canada, and China. South Africa, Germany, Argentina, and Australia have been invited to participate. Since ICNC 2003, the contents of the “International Handbook of Evaluated Criticality Safety Benchmark Experiments” have increased from 350 evaluations (28,000 pages) containing benchmark specifications for 3070 critical or subcritical configurations to 442 evaluations (over 38,000 pages) containing benchmark specifications for 3957 critical or subcritical configurations, 23 criticality-alarm-placement / shielding configurations with multiple dose points for each, and 20 configurations that have been categorized as fundamental physics measurements that are relevant to criticality safety applications in the 2006 Edition of the ICSBEP Handbook. Approximately 30 new evaluations and 250 additional configurations are expected to be added to the 2007 Edition of the Handbook. Since ICNC 2003, a reactor physics counterpart to the ICSBEP, the International Reactor Physics Experiment Evaluation Project (IRPhEP), was initiated. Beginning in 1999, the IRPhEP was conducted as a pilot activity by the Organization of Economic Cooperation and Development (OECD) Nuclear Energy

  19. Comparison of five benchmarks

    SciTech Connect

    Huss, J. E.; Pennline, J. A.

    1987-02-01

    Five benchmark programs were obtained and run on the NASA Lewis CRAY X-MP/24. A comparison was made between the programs' codes and between their methods for calculating performance figures. Several multitasking jobs were run to gain experience in how parallel performance is measured.

  20. Orientation of Oblique Airborne Image Sets - Experiences from the Isprs/eurosdr Benchmark on Multi-Platform Photogrammetry

    NASA Astrophysics Data System (ADS)

    Gerke, M.; Nex, F.; Remondino, F.; Jacobsen, K.; Kremer, J.; Karel, W.; Hu, H.; Ostrowski, W.

    2016-06-01

    During the last decade the use of airborne multi-camera systems increased significantly. Developments in digital camera technology allow several mid- or small-format cameras to be mounted efficiently on one platform, enabling image capture under different angles. Those oblique images are interesting for a number of applications, since lateral parts of elevated objects, like buildings or trees, are visible. However, occlusion or illumination differences can challenge image processing. From an image orientation point of view, such multi-camera systems bring the advantage of better ray intersection geometry compared to nadir-only image blocks. On the other hand, varying scale, occlusion, and atmospheric influences which are difficult to model impose problems on the image matching and bundle adjustment tasks. In order to understand current limitations of image orientation approaches and the influence of different parameters such as image overlap or GCP distribution, a commonly available dataset was released. The originally captured data comprise a state-of-the-art image block with very high overlap, but in the first stage of the so-called ISPRS/EUROSDR benchmark on multi-platform photogrammetry only a reduced set of images was released. In this paper some first results obtained with this dataset are presented. They refer to different aspects like tie point matching across the viewing directions, the influence of the oblique images on the bundle adjustment, and the role of image overlap and GCP distribution. As far as tie point matching is concerned, we observed that matching between overlapping images pointing in the same cardinal direction, or between nadir and oblique views in general, is quite successful. Due to the quite different perspectives between images of different viewing directions, standard tie point matching, for instance based on interest points, does not work well. How to address occlusion and ambiguities due to different views onto
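
    For the tie-point-matching step discussed above, a minimal interest-point pipeline looks like the sketch below (SIFT plus a ratio test, using OpenCV as a generic stand-in; the benchmark participants used their own matchers, and the file names are placeholders). As the abstract notes, this kind of matcher succeeds between images of the same, or of nadir and oblique, viewing directions, but tends to fail between strongly different oblique views.

      import cv2

      # Tentative tie points between two overlapping aerial images via
      # interest points and Lowe's ratio test. File names are placeholders.
      img1 = cv2.imread("nadir.tif", cv2.IMREAD_GRAYSCALE)
      img2 = cv2.imread("oblique_east.tif", cv2.IMREAD_GRAYSCALE)

      sift = cv2.SIFT_create()
      kp1, des1 = sift.detectAndCompute(img1, None)
      kp2, des2 = sift.detectAndCompute(img2, None)

      matches = cv2.BFMatcher(cv2.NORM_L2).knnMatch(des1, des2, k=2)
      good = [m for m, n in matches if m.distance < 0.75 * n.distance]
      print(f"{len(good)} tentative tie points")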

  1. Benchmarking: a method for continuous quality improvement in health.

    PubMed

    Ettorchi-Tardy, Amina; Levif, Marie; Michel, Philippe

    2012-05-01

    Benchmarking, a management approach for implementing best practices at best cost, is a recent concept in the healthcare system. The objectives of this paper are to better understand the concept and its evolution in the healthcare sector, to propose an operational definition, and to describe some French and international experiences of benchmarking in the healthcare sector. To this end, we reviewed the literature on this approach's emergence in the industrial sector, its evolution, its fields of application and examples of how it has been used in the healthcare sector. Benchmarking is often thought to consist simply of comparing indicators and is not perceived in its entirety, that is, as a tool based on voluntary and active collaboration among several organizations to create a spirit of competition and to apply best practices. The key feature of benchmarking is its integration within a comprehensive and participatory policy of continuous quality improvement (CQI). Conditions for successful benchmarking focus essentially on careful preparation of the process, monitoring of the relevant indicators, staff involvement and inter-organizational visits. Compared to methods previously implemented in France (CQI and collaborative projects), benchmarking has specific features that set it apart as a healthcare innovation. This is especially true for healthcare or medical-social organizations, as the principle of inter-organizational visiting is not part of their culture. Thus, this approach will need to be assessed for feasibility and acceptability before it is more widely promoted.

  2. Evaluation of the concrete shield compositions from the 2010 criticality accident alarm system benchmark experiments at the CEA Valduc SILENE facility

    SciTech Connect

    Miller, Thomas Martin; Celik, Cihangir; Dunn, Michael E; Wagner, John C; McMahan, Kimberly L; Authier, Nicolas; Jacquet, Xavier; Rousseau, Guillaume; Wolff, Herve; Savanier, Laurence; Baclet, Nathalie; Lee, Yi-kang; Trama, Jean-Christophe; Masse, Veronique; Gagnier, Emmanuel; Naury, Sylvie; Blanc-Tranchant, Patrick; Hunter, Richard; Kim, Soon; Dulik, George Michael; Reynolds, Kevin H.

    2015-01-01

    In October 2010, a series of benchmark experiments was conducted at the French Commissariat a l'Energie Atomique et aux Energies Alternatives (CEA) Valduc SILENE facility. These experiments were a joint effort between the United States Department of Energy Nuclear Criticality Safety Program and the CEA. The purpose of these experiments was to create three benchmarks for the verification and validation of radiation transport codes and evaluated nuclear data used in the analysis of criticality accident alarm systems. This series of experiments consisted of three single-pulsed experiments with the SILENE reactor. For the first experiment, the reactor was bare (unshielded), whereas in the second and third experiments, it was shielded by lead and polyethylene, respectively. The polyethylene shield of the third experiment had a cadmium liner on its internal and external surfaces, which was located vertically near the fuel region of SILENE. During each experiment, several neutron activation foils and thermoluminescent dosimeters (TLDs) were placed around the reactor. Nearly half of the foils and TLDs had additional high-density magnetite concrete, high-density barite concrete, standard concrete, and/or BoroBond shields. CEA Saclay provided all the concrete, and the US Y-12 National Security Complex provided the BoroBond. Measurement data from the experiments were published at the 2011 International Conference on Nuclear Criticality (ICNC 2011) and the 2013 Nuclear Criticality Safety Division (NCSD 2013) topical meeting. Preliminary computational results for the first experiment were presented in the ICNC 2011 paper, which showed poor agreement between the computational results and the measured values of the foils shielded by concrete. Recently the hydrogen content, boron content, and density of these concrete shields were further investigated within the constraints of the previously available data. New computational results for the first experiment are now available that

  3. FLOWTRAN benchmarking with onset of flow instability data from 1988 Columbia University single-tube OFI experiment

    SciTech Connect

    Chen, K.; Paul, P.K.; Barbour, K.L.

    1990-06-01

    Benchmarking FLOWTRAN, Version 16.2, with an Onset of Significant Voiding (OSV) criterion against measured Onset of Flow Instability (OFI) data from the 1988-89 Columbia University downflow tests has shown that FLOWTRAN with OSV is a conservative OFI predictor. Calculated limiting flow rates based on the Savannah River Site (SRS) OSV criterion were always higher than the measured flow rates at OFI. This work supplements recent FLOWTRAN benchmarking against 1963 downflow tests at Columbia University and 1988 downflow tests at the Heat Transfer Laboratory. These studies provide confidence that using FLOWTRAN with an OSV-based criterion for SRS reactor limits analyses will generate operating limits that are conservative with respect to OFI, the criterion selected to prevent fuel damage.

  4. Applications of Integral Benchmark Data

    SciTech Connect

    Giuseppe Palmiotti; Teruhiko Kugo; Fitz Trumble; Albert C. Kahler; Dale Lancaster

    2014-10-09

    The International Reactor Physics Experiment Evaluation Project (IRPhEP) and the International Criticality Safety Benchmark Evaluation Project (ICSBEP) provide evaluated integral benchmark data that may be used for validation of reactor physics / nuclear criticality safety analytical methods and data, nuclear data testing, advanced modeling and simulation, and safety analysis licensing activities. The handbooks produced by these programs are used in over 30 countries. Five example applications are presented in this paper: (1) Use of IRPhEP Data in Uncertainty Analyses and Cross Section Adjustment, (2) Uncertainty Evaluation Methods for Reactor Core Design at JAEA Using Reactor Physics Experimental Data, (3) Application of Benchmarking Data to a Broad Range of Criticality Safety Problems, (4) Cross Section Data Testing with ICSBEP Benchmarks, and (5) Use of the International Handbook of Evaluated Reactor Physics Benchmark Experiments to Support the Power Industry.

  5. DOES CRITICAL MASS DECREASE AS TEMPERATURE INCREASES: A REVIEW OF FIVE BENCHMARK EXPERIMENTS THAT SPAN A RANGE OF ELEVATED TEMPERATURES AND CRITICAL CONFIGURATIONS

    SciTech Connect

    Yates, K.

    2009-06-10

    Five sets of benchmark experiments are reviewed herein that cover a diverse set of fissile system configurations. The review specifically focused on the change in critical mass of these systems at elevated temperatures and the temperature reactivity coefficient ({alpha}{sub T}) on the system. Because plutonium-based critical benchmark experiments at varying temperatures were not found at the time this review was prepared, only uranium-based systems are included, as follows: (1) HEU-SOL-THERM-010 - UO{sub 2}F{sub 2} solutions with high U{sup 235} enrichment; (2) HEU-COMP-THERM-016 - uranium-graphite blocks with low U concentration; (3) LEU-COMP-THERM-032 - water moderated lattices of UO{sub 2} with stainless steel cladding, and intermediate U{sup 235} enrichment; (4) IEU-COMP-THERM-002 - water moderated lattices of annular UO{sub 2} with/without absorbers, and intermediate U{sup 235} enrichment; and (5) LEU-COMP-THERM-026 - water moderated lattices of UO{sub 2} at different pitches, and low U{sup 235} enrichment. In three of the five benchmarks (1, 3 and 5), modeling of the critical system at room temperature is conservative compared to modeling the system at elevated temperatures, i.e., a greater fissile mass is required at elevated temperature. In one benchmark (4), there was no difference in the fissile mass between the room temperature system and the system at the examined elevated temperature. In benchmark (2), the system clearly had a negative temperature reactivity coefficient. Some of the high temperature benchmark experiments were treated with appropriate (and comprehensive) adjustments to the cross section sets and thermal expansion coefficients, while other experiments were treated with partial adjustments. Regardless of the temperature treatment, modeling the systems at room temperature was found to be conservative for the examined systems, i.e., a smaller critical mass was obtained. While the five benchmarks presented herein demonstrate that, for the

  6. Transition From Child to Adult Care--'It's Not a One-Off Event': Development of Benchmarks to Improve the Experience.

    PubMed

    Aldiss, Susie; Ellis, Judith; Cass, Hilary; Pettigrew, Tanya; Rose, Laura; Gibson, Faith

    2015-01-01

    The transition from child to adult services is a crucial time in the health of young people who may potentially fall into a poorly managed 'care gap'. A multi-site, multi-staged study was undertaken to identify the key aspects of a transitional programme of care for young people. Through a process of mapping, which involved drawing on primary and secondary data, a clinical practice-benchmark tool was developed. Benchmarks are a health care quality performance measurement 'tool'. They provide clinical teams with standards that services can measure themselves against to see how they are doing. They are used in a comparing and sharing activity, using a structured and systematic approach, to share best practice. They offer a mechanism to look at processes, and provide an opportunity to analyse skills and attitudes, which may be the hidden narrative in benchmarking. This paper describes steps in the development of benchmarks for transition to adult care, often associated with low patient and family satisfaction. Qualitative data were collected through focus groups, workshops and interviews from 13 young people with long-term health conditions, 11 parents, 36 professionals and 21 experts leading on transition within the United Kingdom. Transcripts were analysed using qualitative content analysis. For young people and their parents/carers to experience timely and effective transition, eight factors and their associated indicators of best practice were developed from the primary and secondary data and refined through an iterative process. We recommend their use to clinical teams to inform system level strategies as well as evaluation programmes. PMID:26209172

  7. Testing (Validating?) Cross Sections with ICSBEP Benchmarks

    SciTech Connect

    Kahler, Albert C. III

    2012-06-28

    We discuss how to use critical benchmarks from the International Handbook of Evaluated Criticality Safety Benchmark Experiments to determine the applicability of specific cross sections to the end-user's problem of interest. Particular attention is paid to making sure the selected suite of benchmarks includes the user's range of applicability (ROA).

  8. Phase-covariant quantum benchmarks

    NASA Astrophysics Data System (ADS)

    Calsamiglia, J.; Aspachs, M.; Muñoz-Tapia, R.; Bagan, E.

    2009-05-01

    We give a quantum benchmark for teleportation and quantum storage experiments suited for pure and mixed test states. The benchmark is based on the average fidelity over a family of phase-covariant states and certifies that an experiment cannot be emulated by a classical setup, i.e., by a measure-and-prepare scheme. We give an analytical solution for qubits, which shows important differences with the standard state-estimation approach, and compute the value of the benchmark for coherent and squeezed states, both pure and mixed.
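
    To make the measure-and-prepare idea concrete, the toy below numerically scores one simple (deliberately non-optimal) classical strategy on phase-covariant qubit states: measure in the X basis and re-prepare the corresponding eigenstate. Its average fidelity works out to 3/4; the paper's benchmark is the best value attainable by any such scheme, which this sketch does not compute.

      import numpy as np

      # |psi(phi)> = (|0> + e^{i phi}|1>)/sqrt(2); measure X, re-prepare.
      rng = np.random.default_rng(0)
      ket = lambda phi: np.array([1.0, np.exp(1j * phi)]) / np.sqrt(2.0)
      plus, minus = ket(0.0), ket(np.pi)

      fids = []
      for phi in rng.uniform(0.0, 2.0 * np.pi, 50_000):
          psi = ket(phi)
          p_plus = abs(plus.conj() @ psi) ** 2           # Born rule
          out = plus if rng.random() < p_plus else minus
          fids.append(abs(out.conj() @ psi) ** 2)        # fidelity of the copy
      print(f"average fidelity ~ {np.mean(fids):.3f} (analytic: 0.75)")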

  9. Phase-covariant quantum benchmarks

    SciTech Connect

    Calsamiglia, J.; Aspachs, M.; Munoz-Tapia, R.; Bagan, E.

    2009-05-15

    We give a quantum benchmark for teleportation and quantum storage experiments suited for pure and mixed test states. The benchmark is based on the average fidelity over a family of phase-covariant states and certifies that an experiment cannot be emulated by a classical setup, i.e., by a measure-and-prepare scheme. We give an analytical solution for qubits, which shows important differences with the standard state-estimation approach, and compute the value of the benchmark for coherent and squeezed states, both pure and mixed.

  10. The ORSphere Benchmark Evaluation and Its Potential Impact on Nuclear Criticality Safety

    SciTech Connect

    John D. Bess; Margaret A. Marshall; J. Blair Briggs

    2013-10-01

    In the early 1970s, critical experiments using an unreflected metal sphere of highly enriched uranium (HEU) were performed with the focus to provide a “very accurate description…as an ideal benchmark for calculational methods and cross-section data files.” Two near-critical configurations of the Oak Ridge Sphere (ORSphere) were evaluated as acceptable benchmark experiments for inclusion in the International Handbook of Evaluated Criticality Safety Benchmark Experiments (ICSBEP Handbook). The results from those benchmark experiments were then compared with additional unmoderated and unreflected HEU metal benchmark experiment configurations currently found in the ICSBEP Handbook. For basic geometries (spheres, cylinders, and slabs) the eigenvalues calculated using MCNP5 and ENDF/B-VII.0 were within 3σ of their respective benchmark values. There appears to be generally good agreement between calculated and benchmark values for spherical and slab geometry systems. Cylindrical geometry configurations tended to calculate low, including more complex bare HEU metal systems containing cylinders. The ORSphere experiments do not calculate within their 1σ uncertainty, and there is a possibility that the effect of the measured uncertainties for the GODIVA I benchmark may need to be reevaluated. There is significant scatter in the calculations for the highly correlated ORCEF cylinder experiments, which are constructed from close-fitting HEU discs and annuli. Selection of a nuclear data library can have a larger impact on calculated eigenvalue results than the variation found within calculations of a given experimental series, such as the ORCEF cylinders, using a single nuclear data set.

  11. Vegetable and Fruit Acceptance during Infancy: Impact of Ontogeny, Genetics, and Early Experiences.

    PubMed

    Mennella, Julie A; Reiter, Ashley R; Daniels, Loran M

    2016-01-01

    Many of the chronic illnesses that plague modern society derive in large part from poor food choices. Thus, it is not surprising that the Dietary Guidelines for Americans, aimed at the population ≥2 y of age, recommends limiting consumption of salt, fat, and simple sugars, all of which have sensory properties that we humans find particularly palatable, and increasing the variety and contribution of fruits and vegetables in the diet, to promote health and prevent disease. Similar recommendations may soon be targeted at even younger Americans: the B-24 Project, led by the US Department of Health and Human Services and the USDA, is currently evaluating evidence to include infants and children from birth to 2 y of age in the dietary guidelines. This article reviews the underinvestigated behavioral phenomena surrounding how to introduce vegetables and fruits into infants' diets, for which there is much medical lore but, to our knowledge, little evidence-based research. Because the chemical senses are the major determinants of whether young children will accept a food (e.g., they eat only what they like), these senses take on even greater importance in understanding the bases for food choices in children. We focus on early life, in contrast with many other studies that attempt to modify food habits in older children and thus may miss sensitive periods that modulate long-term acceptance. Our review also takes into consideration ontogeny and sources of individual differences in taste perception, in particular, the role of genetic variation in bitter taste perception.

  12. Effects of azimuth-symmetric acceptance cutoffs on the measured asymmetry in unpolarized Drell-Yan fixed-target experiments

    NASA Astrophysics Data System (ADS)

    Bianconi, A.; Bussa, M. P.; Destefanis, M.; Ferrero, L.; Greco, M.; Maggiora, M.; Spataro, S.

    2013-04-01

    Fixed-target unpolarized Drell-Yan experiments often feature an acceptance depending on the polar angle of the lepton tracks in the laboratory frame. Typically leptons are detected in a defined angular range, with a dead zone in the forward region. If the cutoffs imposed by the angular acceptance are independent of the azimuth, at first sight they do not appear dangerous for a measurement of the cos(2φ) asymmetry, which is relevant because of its association with the violation of the Lam-Tung rule and with the Boer-Mulders function. On the contrary, direct simulations show that asymmetries of up to 10 percent are produced by these cutoffs. These artificial asymmetries present qualitative features that allow them to mimic the physical ones. They introduce some model dependence into measurements of the cos(2φ) asymmetry, since a precise reconstruction of the acceptance in the Collins-Soper frame requires a Monte Carlo simulation, which in turn requires some detailed physical input to generate event distributions. Although experiments in the eighties seem to have been aware of this problem, the possibility of using the Boer-Mulders function as an input parameter in the extraction of transversity has much increased the requirements of precision on this measurement. Our simulations show that the safest approach to these measurements is a strong cutoff on the Collins-Soper polar angle. This reduces statistics, but does not necessarily decrease the precision of a measurement of the Boer-Mulders function.
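
    The mechanism is easy to reproduce in a toy Monte Carlo: start from a sample with no physical cos(2φ) modulation, apply an acceptance whose polar cut varies weakly with φ (a deliberately crude stand-in for how a lab-frame dead zone looks from the Collins-Soper frame), and a fake asymmetry appears in the moment A = 2<cos 2φ>. The acceptance model below is invented for illustration and is not the paper's simulation.

      import numpy as np

      rng = np.random.default_rng(1)
      n = 1_000_000
      phi = rng.uniform(0.0, 2.0 * np.pi, n)      # flat: no physical asymmetry
      cos_th = rng.uniform(-1.0, 1.0, n)

      # phi-dependent polar cut mimicking an acceptance dead zone
      keep = np.abs(cos_th) < 0.80 + 0.05 * np.cos(2.0 * phi)
      print(f"A before cuts = {2.0 * np.mean(np.cos(2.0 * phi)):+.4f}")
      print(f"A after cuts  = {2.0 * np.mean(np.cos(2.0 * phi[keep])):+.4f}")  # ~ +0.06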

  13. Benchmarking in Student Affairs.

    ERIC Educational Resources Information Center

    Mosier, Robert E.; Schwarzmueller, Gary J.

    2002-01-01

    Discusses the use of benchmarking in student affairs, focusing on issues related to student housing. Provides examples of how benchmarking has influenced administrative practice at many institutions. (EV)

  14. Vegetable and Fruit Acceptance during Infancy: Impact of Ontogeny, Genetics, and Early Experiences.

    PubMed

    Mennella, Julie A; Reiter, Ashley R; Daniels, Loran M

    2016-01-01

    Many of the chronic illnesses that plague modern society derive in large part from poor food choices. Thus, it is not surprising that the Dietary Guidelines for Americans, aimed at the population ≥2 y of age, recommends limiting consumption of salt, fat, and simple sugars, all of which have sensory properties that we humans find particularly palatable, and increasing the variety and contribution of fruits and vegetables in the diet, to promote health and prevent disease. Similar recommendations may soon be targeted at even younger Americans: the B-24 Project, led by the US Department of Health and Human Services and the USDA, is currently evaluating evidence to include infants and children from birth to 2 y of age in the dietary guidelines. This article reviews the underinvestigated behavioral phenomena surrounding how to introduce vegetables and fruits into infants' diets, for which there is much medical lore but, to our knowledge, little evidence-based research. Because the chemical senses are the major determinants of whether young children will accept a food (e.g., they eat only what they like), these senses take on even greater importance in understanding the bases for food choices in children. We focus on early life, in contrast with many other studies that attempt to modify food habits in older children and thus may miss sensitive periods that modulate long-term acceptance. Our review also takes into consideration ontogeny and sources of individual differences in taste perception, in particular, the role of genetic variation in bitter taste perception. PMID:26773029

  15. Laser-plasma interaction in ignition relevant plasmas: benchmarking our 3D modelling capabilities versus recent experiments

    SciTech Connect

    Divol, L; Froula, D H; Meezan, N; Berger, R; London, R A; Michel, P; Glenzer, S H

    2007-09-27

    We have developed a new target platform to study Laser Plasma Interaction in ignition-relevant conditions at the Omega laser facility (LLE/Rochester) [1]. By shooting an interaction beam along the axis of a gas-filled hohlraum heated by up to 17 kJ of heater beam energy, we were able to create a millimeter-scale underdense uniform plasma at electron temperatures above 3 keV. Extensive Thomson scattering measurements allowed us to benchmark our hydrodynamic simulations performed with HYDRA [1]. As a result of this effort, we can use these simulations with much confidence as input parameters for our LPI simulation code pF3d [2]. In this paper, we show that by using accurate hydrodynamic profiles and full three-dimensional simulations including a realistic modeling of the laser intensity pattern generated by various smoothing options, fluid LPI theory reproduces the SBS thresholds and absolute reflectivity values and the absence of measurable SRS. This good agreement was made possible by the recent increase in computing power routinely available for such simulations.

  16. Assessment of the available {sup 233}U cross-section evaluations in the calculation of critical benchmark experiments

    SciTech Connect

    Leal, L.C.; Wright, R.Q.

    1996-10-01

    In this report we investigate the adequacy of the available {sup 233}U cross-section data for calculation of experimental critical systems. The {sup 233}U evaluations provided in two evaluated nuclear data libraries, the U.S. Data Bank [ENDF/B (Evaluated Nuclear Data Files)] and the Japanese Data Bank [JENDL (Japanese Evaluated Nuclear Data Library)] are examined. Calculations were performed for six thermal and ten fast experimental critical systems using the S{sub n} transport XSDRNPM code. To verify the performance of the {sup 233}U cross-section data for nuclear criticality safety application in which the neutron energy spectrum is predominantly in the epithermal energy range, calculations of four numerical benchmark systems with energy spectra in the intermediate energy range were done. These calculations serve only as an indication of the difference in calculated results that may be expected when the two {sup 233}U cross-section evaluations are used for problems with neutron spectra in the intermediate energy range. Additionally, comparisons of experimental and calculated central fission rate ratios were also made. The study has suggested that an ad hoc {sup 233}U evaluation based on the JENDL library provides better overall results for both fast and thermal experimental critical systems.

  17. Assessment of the Available (Sup 233)U Cross Sections Evaluations in the Calculation of Critical Benchmark Experiments

    SciTech Connect

    Leal, L.C.

    1993-01-01

    In this report we investigate the adequacy of the available {sup 233}U cross-section data for calculation of experimental critical systems. The {sup 233}U evaluations provided in two evaluated nuclear data libraries, the U. S. Data Bank [ENDF/B (Evaluated Nuclear Data Files)] and the Japanese Data Bank [JENDL (Japanese Evaluated Nuclear Data Library)] are examined. Calculations were performed for six thermal and ten fast experimental critical systems using the Sn transport XSDRNPM code. To verify the performance of the {sup 233}U cross-section data for nuclear criticality safety application in which the neutron energy spectrum is predominantly in the epithermal energy range, calculations of four numerical benchmark systems with energy spectra in the intermediate energy range were done. These calculations serve only as an indication of the difference in calculated results that may be expected when the two {sup 233}U cross-section evaluations are used for problems with neutron spectra in the intermediate energy range. Additionally, comparisons of experimental and calculated central fission rate ratios were also made. The study has suggested that an ad hoc {sup 233}U evaluation based on the JENDL library provides better overall results for both fast and thermal experimental critical systems.

  18. Neutronics Benchmarks for the Utilization of Mixed-Oxide Fuel: Joint US/Russian Progress Report for Fiscal Year 1997, Volume 4, part 4-ESADA Plutonium Program Critical Experiments: Single-Region Core Configurations

    SciTech Connect

    Akkurt, H.; Abdurrahman, N.M.

    1999-05-01

    The purpose of this study is to simulate and assess the findings from selected ESADA experiments. It is presented in the format prescribed by the Nuclear Energy Agency Nuclear Science Committee for material to be included in the International Handbook of Evaluated Criticality Safety Benchmark Experiments.

  19. Evaluating acceptance and user experience of a guideline-based clinical decision support system execution platform.

    PubMed

    Buenestado, David; Elorz, Javier; Pérez-Yarza, Eduardo G; Iruetaguena, Ander; Segundo, Unai; Barrena, Raúl; Pikatza, Juan M

    2013-04-01

    This study aims to determine the initial disposition of physicians towards the use of Clinical Decision Support Systems (CDSS) based on Computerised Clinical Guidelines and Protocols (CCGP), and whether their prolonged utilisation has a positive effect on physicians' intention to adopt them in the future. For a period of 3 months, 8 volunteer paediatricians each monitored up to 10 asthmatic patients using two CCGPs deployed in the e-GuidesMed CDSS. A Technology Acceptance Model (TAM) questionnaire was supplied to them before and after using the system. Results from both questionnaires were analysed in search of significant improvements in opinion between them. An additional survey was performed to analyse the usability of the system. It was found that the initial disposition of physicians towards e-GuidesMed is good. Improvement between the pre and post iterations of the TAM questionnaire was found to be statistically significant. Nonetheless, slightly lower values in the Compatibility and Habit variables show that participants perceive possible difficulties in integrating e-GuidesMed into their daily routine. The variable Facilitators shows the highest correlation with Intention to Use. Usability of the system was also rated very high and, in this regard, no fundamental flaw was detected. Initial views towards e-GuidesMed are positive, and become reinforced after continued utilisation of the system. In order to achieve an effective implementation, it becomes essential to facilitate conditions to integrate the system into the physician's daily routine.
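
    A hedged sketch of the pre/post comparison described above: with eight physicians scored before and after, paired ordinal TAM data suggest a Wilcoxon signed-rank test (the abstract does not say which test the authors used). The scores below are invented.

      from scipy.stats import wilcoxon

      pre  = [3.2, 4.0, 3.5, 3.8, 2.9, 4.1, 3.6, 3.3]   # mean TAM score, before
      post = [4.1, 4.3, 4.0, 4.2, 3.8, 4.4, 3.9, 4.0]   # after 3 months of use

      stat, p = wilcoxon(pre, post)
      print(f"W = {stat}, p = {p:.4f}")   # small p => significant improvement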

  20. Acceptance testing of the prototype electrometer for the SAMPIE flight experiment

    NASA Technical Reports Server (NTRS)

    Hillard, G. Barry

    1992-01-01

    The Solar Array Module Plasma Interaction Experiment (SAMPIE) has two key instruments at the heart of its data acquisition capability. One of these, the electrometer, is designed to measure both ion and electron current from most of the samples included in the experiment. The accuracy requirement, specified by the project's Principal Investigator, is for agreement within 10 percent with a calibrated laboratory instrument. Plasma chamber testing was performed to assess the capabilities of the prototype design. Agreement was determined to be within 2 percent for electron collection and within 3 percent for ion collection.

  1. In situ and real time characterization of interface microstructure in 3D alloy solidification: benchmark microgravity experiments in the DECLIC-Directional Solidification Insert on ISS

    NASA Astrophysics Data System (ADS)

    Ramirez, A.; Chen, L.; Bergeon, N.; Billia, B.; Gu, Jiho; Trivedi, R.

    2012-01-01

    Dynamical microstructure formation and selection during solidification processing, which have a major influence on the in-service properties of the elaborated materials, occur during the growth process. In situ observation of the solid-liquid interface morphology evolution is thus necessary. On earth, convection effects dominate in bulk samples and may strongly interact with microstructure dynamics and alter pattern characterization. A series of solidification experiments with 3D cylindrical sample geometry was conducted in succinonitrile (SCN)-0.24 wt% camphor (a model transparent system), in a microgravity environment, in the Directional Solidification Insert of the DECLIC facility of CNES (French space agency) on the International Space Station (ISS). Microgravity enabled homogeneous values of the control parameters over the whole interface, allowing homogeneous patterns suitable for obtaining quantitative benchmark data. First analyses of the characteristics of the pattern (spacing, order, etc.) and of its dynamics in microgravity will be presented.

  2. The Role of Age and Motivation for the Experience of Social Acceptance and Rejection

    ERIC Educational Resources Information Center

    Nikitin, Jana; Schoch, Simone; Freund, Alexandra M.

    2014-01-01

    A study with n = 55 younger (18-33 years, M = 23.67) and n = 58 older (61-85 years, M = 71.44) adults investigated age-related differences in social approach and avoidance motivation and their consequences for the experience of social interactions. Results confirmed the hypothesis that a predominant habitual approach motivation in younger adults…

  3. Benchmarking and Performance Measurement.

    ERIC Educational Resources Information Center

    Town, J. Stephen

    This paper defines benchmarking and its relationship to quality management, describes a project which applied the technique in a library context, and explores the relationship between performance measurement and benchmarking. Numerous benchmarking methods contain similar elements: deciding what to benchmark; identifying partners; gathering…

  4. The effects of video compression on acceptability of images for monitoring life sciences' experiments

    NASA Technical Reports Server (NTRS)

    Haines, Richard F.; Chuang, Sherry L.

    1993-01-01

    Current plans indicate that there will be a large number of life science experiments carried out during the thirty-year-long mission of the Biological Flight Research Laboratory (BFRL) on board Space Station Freedom (SSF). Non-human life science experiments will be performed in the BFRL. Two distinct types of activities have already been identified for this facility: (1) collect, store, distribute, analyze, and manage engineering and science data from the Habitats, Glovebox, and Centrifuge; and (2) perform a broad range of remote science activities in the Glovebox and Habitat chambers in conjunction with the remotely located principal investigator (PI). These activities require extensive video coverage, viewing and/or recording and distribution to video displays on board SSF and to the ground. This paper concentrates mainly on the second type of activity. Each of the two BFRL habitat racks is designed to be configurable for either six rodent habitats per rack, four plant habitats per rack, or a combination of the above. Two video cameras will be installed in each habitat with a spare attachment for a third camera when needed. Therefore, a video system that can accommodate up to 12-18 camera inputs per habitat rack must be considered.

  5. Use of Sensitivity and Uncertainty Analysis in the Design of Reactor Physics and Criticality Benchmark Experiments for Advanced Nuclear Fuel

    SciTech Connect

    Rearden, B.T.; Anderson, W.J.; Harms, G.A.

    2005-08-15

    Framatome ANP, Sandia National Laboratories (SNL), Oak Ridge National Laboratory (ORNL), and the University of Florida are cooperating on the U.S. Department of Energy Nuclear Energy Research Initiative (NERI) project 2001-0124 to design, assemble, execute, analyze, and document a series of critical experiments to validate reactor physics and criticality safety codes for the analysis of commercial power reactor fuels consisting of UO2 with 235U enrichments ≥5 wt%. The experiments will be conducted at the SNL Pulsed Reactor Facility. Framatome ANP and SNL produced two series of conceptual experiment designs based on typical parameters, such as fuel-to-moderator ratios, that meet the programmatic requirements of this project within the given restraints on available materials and facilities. ORNL used the Tools for Sensitivity and Uncertainty Analysis Methodology Implementation (TSUNAMI) to assess, from a detailed physics-based perspective, the similarity of the experiment designs to the commercial systems they are intended to validate. Based on the results of the TSUNAMI analysis, one series of experiments was found to be preferable to the other and will provide significant new data for the validation of reactor physics and criticality safety codes.

  6. Implementation of Benchmarking Transportation Logistics Practices and Future Benchmarking Organizations

    SciTech Connect

    Thrower, A.W.; Patric, J.; Keister, M.

    2008-07-01

    The purpose of the Office of Civilian Radioactive Waste Management's (OCRWM) Logistics Benchmarking Project is to identify established government and industry practices for the safe transportation of hazardous materials which can serve as a yardstick for design and operation of OCRWM's national transportation system for shipping spent nuclear fuel and high-level radioactive waste to the proposed repository at Yucca Mountain, Nevada. The project will present logistics and transportation practices and develop implementation recommendations for adaptation by the national transportation system. This paper will describe the process used to perform the initial benchmarking study, highlight interim findings, and explain how these findings are being implemented. It will also provide an overview of the next phase of benchmarking studies. The benchmarking effort will remain a high-priority activity throughout the planning and operational phases of the transportation system. The initial phase of the project focused on government transportation programs to identify those practices which are most clearly applicable to OCRWM. These Federal programs have decades of safe transportation experience, strive for excellence in operations, and implement effective stakeholder involvement, all of which parallel OCRWM's transportation mission and vision. The initial benchmarking project focused on four business processes that are critical to OCRWM's mission success, and can be incorporated into OCRWM planning and preparation in the near term. The processes examined were: transportation business model, contract management/out-sourcing, stakeholder relations, and contingency planning. More recently, OCRWM examined logistics operations of AREVA NC's Business Unit Logistics in France. The next phase of benchmarking will focus on integrated domestic and international commercial radioactive logistic operations. The prospective companies represent large scale shippers and have vast experience in

  7. SAS Code for Calculating Intraclass Correlation Coefficients and Effect Size Benchmarks for Site-Randomized Education Experiments

    ERIC Educational Resources Information Center

    Brandon, Paul R.; Harrison, George M.; Lawton, Brian E.

    2013-01-01

    When evaluators plan site-randomized experiments, they must conduct the appropriate statistical power analyses. These analyses are most likely to be valid when they are based on data from the jurisdictions in which the studies are to be conducted. In this method note, we provide software code, in the form of a SAS macro, for producing statistical…
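
    The abstract is truncated, but the central quantity such a macro computes, the intraclass correlation coefficient, follows directly from a one-way random-effects ANOVA. A Python sketch (not the authors' SAS macro; the site scores are invented and a balanced design is assumed):

        import numpy as np

        # Outcome scores for students nested within three sites (equal group size n).
        sites = [np.array([72.0, 75, 70, 74]),
                 np.array([68.0, 66, 71, 69]),
                 np.array([80.0, 78, 83, 81])]
        n, k = len(sites[0]), len(sites)
        grand = np.mean(np.concatenate(sites))
        msb = n * sum((s.mean() - grand) ** 2 for s in sites) / (k - 1)        # between-site mean square
        msw = sum(((s - s.mean()) ** 2).sum() for s in sites) / (k * (n - 1))  # within-site mean square
        icc = (msb - msw) / (msb + (n - 1) * msw)                              # ICC(1)
        print(f"ICC(1) = {icc:.3f}")

    In power analyses for site-randomized designs, this ICC determines how strongly clustering inflates the required sample size.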

  8. Impact of Dialectical Behavior Therapy versus Community Treatment by Experts on Emotional Experience, Expression, and Acceptance in Borderline Personality Disorder

    PubMed Central

    Neacsiu, Andrada D.; Lungu, Anita; Harned, Melanie S.; Rizvi, Shireen L.; Linehan, Marsha M.

    2014-01-01

    Evidence suggests that heightened negative affectivity is a prominent feature of Borderline Personality Disorder (BPD) that often leads to maladaptive behaviors. Nevertheless, there is little research examining treatment effects on the experience and expression of specific negative emotions. Dialectical Behavior Therapy (DBT) is an effective treatment for BPD, hypothesized to reduce negative affectivity (Linehan, 1993a). The present study analyzes secondary data from a randomized controlled trial with the aim of assessing the unique effectiveness of DBT when compared to Community Treatment by Experts (CTBE) in changing the experience, expression, and acceptance of negative emotions. Suicidal and/or self-injuring women with BPD (n = 101) were randomly assigned to DBT or CTBE for one year of treatment and one year of follow-up. Several indices of emotional experience and expression were assessed. Results indicate that DBT decreased experiential avoidance and expressed anger significantly more than CTBE. No differences between DBT and CTBE were found in improving guilt, shame, anxiety, anger suppression, trait anger, or anger control. These results suggest that DBT has unique effects on improving the expression of anger and experiential avoidance, whereas changes in the experience of specific negative emotions may be accounted for by general factors associated with expert therapy. Implications of the findings are discussed. PMID:24418652

  9. Benchmark experiment for the cross section of the 100Mo(p,2n)99mTc and 100Mo(p,pn)99Mo reactions

    NASA Astrophysics Data System (ADS)

    Takács, S.; Ditrói, F.; Aikawa, M.; Haba, H.; Otuka, N.

    2016-05-01

    As the nuclear medicine community has shown increasing interest in the accelerator-produced 99mTc radionuclide, possible alternative direct production routes for 99mTc have been investigated intensively. One of these accelerator production routes is based on the 100Mo(p,2n)99mTc reaction. The cross section of this nuclear reaction was studied earlier by several laboratories, but the available data sets are not in good agreement. For large-scale accelerator production of 99mTc based on the 100Mo(p,2n)99mTc reaction, a well-defined excitation function is required to optimise the production process effectively. One of our recent publications pointed out that most of the available experimental excitation functions for the 100Mo(p,2n)99mTc reaction have the same general shape while their amplitudes are different. To confirm the proper amplitude of the excitation function, results of three independent experiments were presented (Takács et al., 2015). In this work we present results of a thick target count rate measurement of the Eγ = 140.5 keV gamma-line from molybdenum irradiated by an Ep = 17.9 MeV proton beam, as an integral benchmark experiment, to confirm the cross section data reported for the 100Mo(p,2n)99mTc and 100Mo(p,pn)99Mo reactions in Takács et al. (2015).
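
    For context, a thick-target measurement integrates the excitation function over the proton's entire slowing-down path, which is what makes it an integral benchmark for the whole cross-section curve. In a common formulation (our notation, not taken from the paper), the yield per incident proton is

        Y(E_p) \propto \int_0^{E_p} \sigma(E) \, |dE/dx|^{-1} \, dE

    where \sigma(E) is the excitation function and dE/dx is the stopping power of protons in molybdenum; the measured 140.5 keV count rate then follows from Y after decay, gamma-emission-probability, and detection-efficiency factors. Agreement between the measured rate and the rate computed from the recommended \sigma(E) therefore supports the amplitude of the excitation function.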

  10. Benchmark experiment for electron-impact ionization of argon: Absolute triple-differential cross sections via three-dimensional electron emission images

    SciTech Connect

    Ren Xueguang; Senftleben, Arne; Pflueger, Thomas; Dorn, Alexander; Ullrich, Joachim; Bartschat, Klaus

    2011-05-15

    Single ionization of argon by 195-eV electron impact is studied in an experiment, where the absolute triple-differential cross sections are presented as three-dimensional electron emission images for a series of kinematic conditions. Thereby a comprehensive set of experimental data for electron-impact ionization of a many-electron system is produced to provide a benchmark for comparison with theoretical predictions. Theoretical models using a hybrid first-order and second-order distorted-wave Born plus R-matrix approach are employed to compare their predictions with the experimental data. While the relative shape of the calculated cross section is generally in reasonable agreement with experiment, the magnitude appears to be the most significant problem with the theoretical treatment for the conditions studied in the present work. This suggests that the most significant challenge in the further development of theory for this process may lie in the reproduction of the absolute scale rather than the angular dependence of the cross section.

  11. The NAS parallel benchmarks

    NASA Technical Reports Server (NTRS)

    Bailey, David (Editor); Barton, John (Editor); Lasinski, Thomas (Editor); Simon, Horst (Editor)

    1993-01-01

    A new set of benchmarks was developed for the performance evaluation of highly parallel supercomputers. These benchmarks consist of a set of kernels, the 'Parallel Kernels,' and a simulated application benchmark. Together they mimic the computation and data movement characteristics of large scale computational fluid dynamics (CFD) applications. The principal distinguishing feature of these benchmarks is their 'pencil and paper' specification - all details of these benchmarks are specified only algorithmically. In this way many of the difficulties associated with conventional benchmarking approaches on highly parallel systems are avoided.

  12. A Uranium Bioremediation Reactive Transport Benchmark

    SciTech Connect

    Yabusaki, Steven B.; Sengor, Sevinc; Fang, Yilin

    2015-06-01

    A reactive transport benchmark problem set has been developed based on in situ uranium bio-immobilization experiments that have been performed at a former uranium mill tailings site in Rifle, Colorado, USA. Acetate-amended groundwater stimulates indigenous microorganisms to catalyze the reduction of U(VI) to a sparingly soluble U(IV) mineral. The interplay between the flow, acetate loading periods and rates, microbially-mediated and geochemical reactions leads to dynamic behavior in metal- and sulfate-reducing bacteria, pH, alkalinity, and reactive mineral surfaces. The benchmark is based on an 8.5 m long one-dimensional model domain with constant saturated flow and uniform porosity. The 159-day simulation introduces acetate and bromide through the upgradient boundary in 14-day and 85-day pulses separated by a 10-day interruption. Acetate loading is tripled during the second pulse, which is followed by a 50-day recovery period. Terminal electron accepting processes for goethite, phyllosilicate Fe(III), U(VI), and sulfate are modeled using Monod-type rate laws. Major ion geochemistry modeled includes mineral reactions, as well as aqueous and surface complexation reactions for UO2++, Fe++, and H+. In addition to the dynamics imparted by the transport of the acetate pulses, U(VI) behavior involves the interplay between bioreduction, which is dependent on acetate availability, and speciation-controlled surface complexation, which is dependent on pH, alkalinity and available surface complexation sites. The general difficulty of this benchmark is the large number of reactions (74), multiple rate law formulations, a multisite uranium surface complexation model, and the strong interdependency and sensitivity of the reaction processes. Results are presented for three simulators: HYDROGEOCHEM, PHT3D, and PHREEQC.
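
    As an illustration of the Monod-type rate laws mentioned above (a generic dual-Monod form in our notation; the benchmark's exact parameterization may differ), the acetate-driven U(VI) reduction rate can be written as

        r_{U(VI)} = \mu_{max} \, X \, \frac{[Ac]}{K_{Ac} + [Ac]} \cdot \frac{[U(VI)]}{K_U + [U(VI)]}

    where X is the biomass concentration of the metal-reducing bacteria, [Ac] is the acetate concentration, and K_{Ac} and K_U are half-saturation constants. Coupling many such rate laws to the surface complexation and mineral reactions is what produces the strong interdependency the abstract highlights.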

  13. A Blind Test Experiment in Volcano Geodesy: a Benchmark for Inverse Methods of Ground Deformation and Gravity Data

    NASA Astrophysics Data System (ADS)

    D'Auria, Luca; Fernandez, Jose; Puglisi, Giuseppe; Rivalta, Eleonora; Camacho, Antonio; Nikkhoo, Mehdi; Walter, Thomas

    2016-04-01

    The inversion of ground deformation and gravity data is affected by an intrinsic ambiguity because of the mathematical formulation of the inverse problem. Current methods for the inversion of geodetic data rely on both parametric (i.e. assuming a source geometry) and non-parametric approaches. The former are able to capture the fundamental features of the ground deformation source but, if the assumptions are wrong or oversimplified, they could provide misleading results. On the other hand, the latter class of methods, even if not relying on stringent assumptions, could suffer from artifacts, especially when dealing with poor datasets. In the framework of the EC-FP7 MED-SUV project we aim at comparing different inverse approaches to verify how they cope with basic goals of Volcano Geodesy: determining the source depth, the source shape (size and geometry), the nature of the source (magmatic/hydrothermal), and hinting at the complexity of the source. Other aspects that are important in volcano monitoring are: volume/mass transfer toward shallow depths, propagation of dikes/sills, and forecasting the opening of eruptive vents. On the basis of similar experiments already done in the fields of seismic tomography and geophysical imaging, we have devised a blind test experiment. Our group was divided into one model design team and several inversion teams. The model design team devised two physical models representing volcanic events at two distinct volcanoes (one stratovolcano and one caldera). They provided the inversion teams with: the topographic reliefs, the calculated deformation field (on a set of simulated GPS stations and as InSAR interferograms) and the gravity change (on a set of simulated campaign stations). The nature of the volcanic events remained unknown to the inversion teams until after the submission of the inversion results. Here we present the preliminary results of this comparison in order to determine which features of the ground deformation and gravity source

  14. Benchmark Data Set for Wheat Growth Models: Field Experiments and AgMIP Multi-Model Simulations.

    NASA Technical Reports Server (NTRS)

    Asseng, S.; Ewert, F.; Martre, P.; Rosenzweig, C.; Jones, J. W.; Hatfield, J. L.; Ruane, A. C.; Boote, K. J.; Thorburn, P.J.; Rotter, R. P.

    2015-01-01

    The data set includes a current representative management treatment from detailed, quality-tested sentinel field experiments with wheat from four contrasting environments including Australia, The Netherlands, India and Argentina. Measurements include local daily climate data (solar radiation, maximum and minimum temperature, precipitation, surface wind, dew point temperature, relative humidity, and vapor pressure), soil characteristics, frequent growth, nitrogen in crop and soil, crop and soil water and yield components. Simulations include results from 27 wheat models and a sensitivity analysis with 26 models and 30 years (1981-2010) for each location, for elevated atmospheric CO2 and temperature changes, a heat stress sensitivity analysis at anthesis, and a sensitivity analysis with soil and crop management variations and a Global Climate Model end-century scenario.

  15. Methodology of full-core Monte Carlo calculations with leakage parameter evaluations for benchmark critical experiment analysis

    NASA Astrophysics Data System (ADS)

    Sboev, A. G.; Ilyashenko, A. S.; Vetrova, O. A.

    1997-02-01

    The method of buckling evaluation implemented in the Monte Carlo code MCS is described. This method was applied to the calculational analysis of the well-known light-water experiments TRX-1 and TRX-2. The analysis shows that there is no agreement between Monte Carlo results obtained in different ways: the MCS calculations with given experimental bucklings; the MCS calculations with bucklings evaluated from full-core MCS direct simulations; the full-core MCNP and MCS direct simulations; and the MCNP and MCS calculations in which the results of cell calculations are corrected by coefficients taking into account the leakage from the core. The buckling values evaluated by full-core MCS calculations also differed from the experimental ones, especially in the case of TRX-1, where the difference corresponds to a 0.5 percent increase in the keff value.
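
    For readers unfamiliar with buckling corrections, the standard one-group relation that motivates them (a textbook expression, not taken from the paper) is

        k_{eff} = \frac{k_\infty}{1 + M^2 B^2}

    where B^2 is the buckling characterizing neutron leakage and M^2 is the migration area; an error in the evaluated B^2 therefore propagates directly into k_{eff}, consistent with the 0.5 percent effect reported for TRX-1.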

  16. NASA Software Engineering Benchmarking Study

    NASA Technical Reports Server (NTRS)

    Rarick, Heather L.; Godfrey, Sara H.; Kelly, John C.; Crumbley, Robert T.; Wifl, Joel M.

    2013-01-01

    Consolidate, collect, and, if needed, develop common processes, principles, and other assets across the Agency in order to provide more consistency in software development and acquisition practices and to reduce the overall cost of maintaining or increasing current NASA CMMI maturity levels. 6. Provide additional support for small projects that includes: (a) guidance for appropriate tailoring of requirements for small projects, (b) availability of suitable tools, including support tool set-up and training, and (c) training for small project personnel, assurance personnel and technical authorities on the acceptable options for tailoring requirements and performing assurance on small projects. 7. Develop software training classes for the more experienced software engineers using on-line training, videos, or small separate modules of training that can be accommodated as needed throughout a project. 8. Create guidelines to structure non-classroom training opportunities such as mentoring, peer reviews, lessons learned sessions, and on-the-job training. 9. Develop a set of predictive software defect data and a process for assessing software testing metric data against it. 10. Assess Agency-wide licenses for commonly used software tools. 11. Fill the knowledge gap in common software engineering practices for new hires and co-ops. 12. Work through the Science, Technology, Engineering and Mathematics (STEM) program with universities in strengthening education in the use of common software engineering practices and standards. 13. Follow up this benchmark study with a deeper look into what both internal and external organizations perceive as the scope of software assurance, the value they expect to obtain from it, and the shortcomings they experience in the current practice. 14. Continue interactions with the external software engineering environment through collaborations, knowledge sharing, and benchmarking.

  17. A comparison of five benchmarks

    NASA Technical Reports Server (NTRS)

    Huss, Janice E.; Pennline, James A.

    1987-01-01

    Five benchmark programs were obtained and run on the NASA Lewis CRAY X-MP/24. Comparisons were made between the programs' codes and between the methods used to calculate performance figures. Several multitasking jobs were run to gain experience in how parallel performance is measured.

  18. A framework for benchmarking land models

    SciTech Connect

    Luo, Yiqi; Randerson, James T.; Hoffman, Forrest; Norby, Richard J

    2012-01-01

    Land models, which have been developed by the modeling community in the past few decades to predict future states of ecosystems and climate, must be critically evaluated for how well they simulate ecosystem responses and feedbacks to climate change. Benchmarking is an emerging procedure to measure the performance of models against a set of defined standards. This paper proposes a benchmarking framework for evaluation of land model performance and highlights major challenges at this early stage of benchmark analysis. The framework includes (1) targeted aspects of model performance to be evaluated, (2) a set of benchmarks as defined references to test model performance, (3) metrics to measure and compare performance skills among models so as to identify model strengths and deficiencies, and (4) model improvement. Land models are required to simulate exchange of water, energy, carbon and sometimes other trace gases between the atmosphere and land surface, and should be evaluated for their simulations of biophysical processes, biogeochemical cycles, and vegetation dynamics in response to climate change across broad temporal and spatial scales. Thus, one major challenge is to select and define a limited number of benchmarks to effectively evaluate land model performance. The second challenge is to develop metrics for measuring mismatches between models and benchmarks. The metrics may include (1) a priori thresholds of acceptable model performance and (2) a scoring system to combine data-model mismatches for various processes at different temporal and spatial scales. The benchmark analyses should identify clues of weak model performance to guide future development, thus enabling improved predictions of future states of ecosystems and climate. The near-future research effort should be on development of a set of widely acceptable benchmarks that can be used to objectively, effectively, and reliably evaluate fundamental properties of land models.

  19. Nomenclatural Benchmarking: The roles of digital typification and telemicroscopy

    Technology Transfer Automated Retrieval System (TEKTRAN)

    The process of nomenclatural benchmarking is the examination of type specimens of all available names to ascertain which currently accepted species the specimen bearing the name falls within. We propose a strategy for addressing four challenges for nomenclatural benchmarking. First, there is the mat...

  20. Benchmark test of 14-MeV neutron-induced gamma-ray production data in JENDL-3.2 and FENDL/E-1.0 through analysis of the OKTAVIAN experiments

    SciTech Connect

    Maekawa, F.; Oyama, F.

    1996-06-01

    Secondary gamma rays play an important role along with neutrons in influencing nuclear design parameters, such as nuclear heating, radiation dose, and material damage on the plasma-facing components, vacuum vessel, and superconducting magnets, of fusion devices. Because evaluated nuclear data libraries are used in the designs, one must examine the accuracy of secondary gamma-ray data in these libraries through benchmark tests of existing experiments. The validity of the data should be confirmed, or problems with the data should be pointed out through these benchmark tests to ensure the quality of the design. Here, gamma-ray production data of carbon, fluorine, aluminum, silicon, titanium, chromium, manganese, cobalt, copper, niobium, molybdenum, tungsten, and lead in JENDL-3.2 and FENDL/E-1.0 induced by 14-MeV neutrons are tested through benchmark analyses of leakage gamma-ray spectrum measurements conducted at the OKTAVIAN deuterium-tritium neutron source facility. The MCNP transport code is used along with the flagging method for detailed analyses of the spectra. As a result, several moderate problems are pointed out for secondary gamma-ray data of titanium, chromium, manganese, and lead in FENDL/E-1.0. Because no fatal errors are found, however, secondary gamma-ray data for the 13 elements in both libraries are reasonably well validated through these benchmark tests as far as 14-MeV neutron incidence is concerned.

  1. Acceptability of Interventions Delivered Online and Through Mobile Phones for People Who Experience Severe Mental Health Problems: A Systematic Review

    PubMed Central

    Lobban, Fiona; Emsley, Richard; Bucci, Sandra

    2016-01-01

    Background Psychological interventions are recommended for people with severe mental health problems (SMI). However, barriers exist in the provision of these services and access is limited. Therefore, researchers are beginning to develop and deliver interventions online and via mobile phones. Previous research has indicated that interventions delivered in this format are acceptable for people with SMI. However, a comprehensive systematic review is needed to investigate the acceptability of online and mobile phone-delivered interventions for SMI in depth. Objective This systematic review aimed to 1) identify the hypothetical acceptability (acceptability prior to or without the delivery of an intervention) and actual acceptability (acceptability where an intervention was delivered) of online and mobile phone-delivered interventions for SMI, 2) investigate the impact of factors such as demographic and clinical characteristics on acceptability, and 3) identify common participant views in qualitative studies that pinpoint factors influencing acceptability. Methods We conducted a systematic search of the databases PubMed, Embase, PsycINFO, CINAHL, and Web of Science in April 2015, which yielded a total of 8017 search results, with 49 studies meeting the full inclusion criteria. Studies were included if they measured acceptability through participant views, module completion rates, or intervention use. Studies delivering interventions were included if the delivery method was online or via mobile phones. Results The hypothetical acceptability of online and mobile phone-delivered interventions for SMI was relatively low, while actual acceptability tended to be high. Hypothetical acceptability was higher for interventions delivered via text messages than by emails. The majority of studies that assessed the impact of demographic characteristics on acceptability reported no significant relationships between the two. Additionally, actual acceptability was higher when

  2. RISKIND verification and benchmark comparisons

    SciTech Connect

    Biwer, B.M.; Arnish, J.J.; Chen, S.Y.; Kamboj, S.

    1997-08-01

    This report presents verification calculations and benchmark comparisons for RISKIND, a computer code designed to estimate potential radiological consequences and health risks to individuals and the population from exposures associated with the transportation of spent nuclear fuel and other radioactive materials. Spreadsheet calculations were performed to verify the proper operation of the major options and calculational steps in RISKIND. The program is unique in that it combines a variety of well-established models into a comprehensive treatment for assessing risks from the transportation of radioactive materials. Benchmark comparisons with other validated codes that incorporate similar models were also performed. For instance, the external gamma and neutron dose rate curves for a shipping package estimated by RISKIND were compared with those estimated by using the RADTRAN 4 code and NUREG-0170 methodology. Atmospheric dispersion of released material and the resulting dose estimates were also compared with those from the GENII and CAP88-PC codes. Verification results have shown the program to be performing its intended function correctly. The benchmark results indicate that the predictions made by RISKIND are within acceptable limits when compared with predictions from similar existing models.

  3. ATLAS ACCEPTANCE TEST

    SciTech Connect

    Cochrane, J. C. , Jr.; Parker, J. V.; Hinckley, W. B.; Hosack, K. W.; Mills, D.; Parsons, W. M.; Scudder, D. W.; Stokes, J. L.; Tabaka, L. J.; Thompson, M. C.; Wysocki, Frederick Joseph; Campbell, T. N.; Lancaster, D. L.; Tom, C. Y.

    2001-01-01

    The acceptance test program for Atlas, a 23 MJ pulsed power facility for use in the Los Alamos High Energy Density Hydrodynamics program, has been completed. Completion of this program officially releases Atlas from the construction phase and readies it for experiments. Details of the acceptance test program results and of machine capabilities for experiments will be presented.

  4. Enhancing user acceptance of mandated mobile health information systems: the ePOC (electronic point-of-care project) experience.

    PubMed

    Burgess, Lois; Sargent, Jason

    2007-01-01

    From a clinical perspective, the use of mobile technologies, such as Personal Digital Assistants (PDAs) within hospital environments is not new. A paradigm shift however is underway towards the acceptance and utility of these systems within mobile-based healthcare environments. Introducing new technologies and associated work practices has intrinsic risks which must be addressed. This paper contends that intervening to address user concerns as they arise throughout the system development lifecycle will lead to greater levels of user acceptance, while ultimately enhancing the deliverability of a system that provides a best fit with end user needs. It is envisaged this research will lead to the development of a formalised user acceptance framework based on an agile approach to user acceptance measurement. The results of an ongoing study of user perceptions towards a mandated electronic point-of-care information system in the Northern Illawarra Ambulatory Care Team (TACT) are presented. PMID:17911883

  5. Benchmarks for industrial energy efficiency

    SciTech Connect

    Amarnath, K.R.; Kumana, J.D.; Shah, J.V.

    1996-12-31

    What are the standards for improving energy efficiency for industries such as petroleum refining, chemicals, and glass manufacture? How can different industries in emerging markets and developing countries accelerate the pace of improvements? This paper discusses several case studies and experiences relating to this subject, emphasizing the use of energy efficiency benchmarks. Two important benchmarks are discussed. The first is based on the track record of outstanding performers in the related industry segment; the second is based on site-specific factors. Using energy use reduction targets or benchmarks, projects have been implemented in Mexico, Poland, India, Venezuela, Brazil, China, Thailand, Malaysia, the Republic of South Africa, and Russia. Improvements identified through these projects include the use of oxy-fuel and electric furnaces in the glass industry in Poland; reconfiguration of process heat recovery systems for refineries in China, Malaysia, and Russia; recycling and reuse of process wastewater in the Republic of South Africa; and a cogeneration plant in Venezuela. The paper will discuss three case studies of efforts undertaken in emerging market countries to improve energy efficiency.

  6. Emotion regulation in unipolar depression: the effects of acceptance and suppression of subjective emotional experience on the intensity and duration of sadness and negative affect.

    PubMed

    Liverant, Gabrielle I; Brown, Timothy A; Barlow, David H; Roemer, Lizabeth

    2008-11-01

    This study examined the effects of emotional suppression and acceptance in a depressed sample. Sixty participants with diagnoses of unipolar depression completed a questionnaire packet and participated in an experiment. The experiment utilized two conditions to explore correlates of the spontaneous use of emotion regulation strategies and the effects of an experimental manipulation of acceptance and suppression. Results demonstrated that suppression produced short-term reductions in sadness. Notably, anxiety about the experience of depressed mood influenced the efficacy of emotional suppression with findings showing that suppression was no longer effective at moderate and higher levels of anxiety about the experience of depressed mood. Implications of study findings for understanding emotion dysregulation in depressive disorders and the treatment of depression are discussed.

  7. Benchmarking Tool Kit.

    ERIC Educational Resources Information Center

    Canadian Health Libraries Association.

    Nine Canadian health libraries participated in a pilot test of the Benchmarking Tool Kit between January and April, 1998. Although the Tool Kit was designed specifically for health libraries, the content and approach are useful to other types of libraries as well. Used to its full potential, benchmarking can provide a common measuring stick to…

  8. Thermal Performance Benchmarking (Presentation)

    SciTech Connect

    Moreno, G.

    2014-11-01

    This project will benchmark the thermal characteristics of automotive power electronics and electric motor thermal management systems. Recent vehicle systems will be benchmarked to establish baseline metrics, evaluate advantages and disadvantages of different thermal management systems, and identify areas of improvement to advance the state-of-the-art.

  9. A Seafloor Benchmark for 3-dimensional Geodesy

    NASA Astrophysics Data System (ADS)

    Chadwell, C. D.; Webb, S. C.; Nooner, S. L.

    2014-12-01

    We have developed an inexpensive, permanent seafloor benchmark to increase the longevity of seafloor geodetic measurements. The benchmark provides a physical tie to the sea floor lasting for decades (perhaps longer) on which geodetic sensors can be repeatedly placed and removed with millimeter resolution. Global coordinates estimated with seafloor geodetic techniques will remain attached to the benchmark allowing for the interchange of sensors as they fail or become obsolete, or for the sensors to be removed and used elsewhere, all the while maintaining a coherent series of positions referenced to the benchmark. The benchmark has been designed to free fall from the sea surface with transponders attached. The transponder can be recalled via an acoustic command sent from the surface to release from the benchmark and freely float to the sea surface for recovery. The duration of the sensor attachment to the benchmark will last from a few days to a few years depending on the specific needs of the experiment. The recovered sensors are then available to be reused at other locations, or again at the same site in the future. Three pins on the sensor frame mate precisely and unambiguously with three grooves on the benchmark. To reoccupy a benchmark a Remotely Operated Vehicle (ROV) uses its manipulator arm to place the sensor pins into the benchmark grooves. In June 2014 we deployed four benchmarks offshore central Oregon. We used the ROV Jason to successfully demonstrate the removal and replacement of packages onto the benchmark. We will show the benchmark design and its operational capabilities. Presently models of megathrust slip within the Cascadia Subduction Zone (CSZ) are mostly constrained by the sub-aerial GPS vectors from the Plate Boundary Observatory, a part of Earthscope. More long-lived seafloor geodetic measures are needed to better understand the earthquake and tsunami risk associated with a large rupture of the thrust fault within the Cascadia subduction zone

  10. Performance Evaluation and Benchmarking of Intelligent Systems

    SciTech Connect

    Madhavan, Raj; Messina, Elena; Tunstel, Edward

    2009-09-01

    To design and develop capable, dependable, and affordable intelligent systems, their performance must be measurable. Scientific methodologies for standardization and benchmarking are crucial for quantitatively evaluating the performance of emerging robotic and intelligent systems technologies. There is currently no accepted standard for quantitatively measuring the performance of these systems against user-defined requirements; and furthermore, there is no consensus on what objective evaluation procedures need to be followed to understand the performance of these systems. The lack of reproducible and repeatable test methods has precluded researchers working towards a common goal from exchanging and communicating results, inter-comparing system performance, and leveraging previous work that could otherwise avoid duplication and expedite technology transfer. Currently, this lack of cohesion in the community hinders progress in many domains, such as manufacturing, service, healthcare, and security. By providing the research community with access to standardized tools, reference data sets, and open source libraries of solutions, researchers and consumers will be able to evaluate the cost and benefits associated with intelligent systems and associated technologies. In this vein, the edited book volume addresses performance evaluation and metrics for intelligent systems, in general, while emphasizing the need and solutions for standardized methods. To the knowledge of the editors, there is not a single book on the market that is solely dedicated to the subject of performance evaluation and benchmarking of intelligent systems. Even books that address this topic do so only marginally or are out of date. The research work presented in this volume fills this void by drawing from the experiences and insights of experts gained both through theoretical development and practical implementation of intelligent systems in a variety of diverse application domains. The book presents

  11. Exploiting Cloud Radar Doppler Spectra of Mixed-Phase Clouds during ACCEPT Field Experiment to Identify Microphysical Processes

    NASA Astrophysics Data System (ADS)

    Kalesse, H.; Myagkov, A.; Seifert, P.; Buehl, J.

    2015-12-01

    Measurements were performed during the Analysis of the Composition of Clouds with Extended Polarization Techniques (ACCEPT) field experiment in Cabauw, Netherlands, in Fall 2014. There, a MIRA-35 cloud radar was operated in simultaneous transmission and simultaneous reception (STSR) mode to obtain measurements of differential reflectivity (ZDR) and the correlation coefficient ρhv.

  12. Numerical methods: Analytical benchmarking in transport theory

    SciTech Connect

    Ganapol, B.D.

    1988-01-01

    Numerical methods applied to reactor technology have reached a high degree of maturity. Certainly one- and two-dimensional neutron transport calculations have become routine, with several programs available on personal computer and the most widely used programs adapted to workstation and minicomputer computational environments. With the introduction of massive parallelism and as experience with multitasking increases, even more improvement in the development of transport algorithms can be expected. Benchmarking an algorithm is usually not a very pleasant experience for the code developer. Proper algorithmic verification by benchmarking involves the following considerations: (1) conservation of particles, (2) confirmation of intuitive physical behavior, and (3) reproduction of analytical benchmark results. By using today's computational advantages, new basic numerical methods have been developed that allow a wider class of benchmark problems to be considered.

  13. Benchmark Evaluation of Plutonium Nitrate Solution Arrays

    SciTech Connect

    M. A. Marshall; J. D. Bess

    2011-09-01

    In October and November of 1981, thirteen approach-to-critical experiments were performed on a remote split table machine (RSTM) in the Critical Mass Laboratory of Pacific Northwest Laboratory (PNL) in Richland, Washington, using planar arrays of polyethylene bottles filled with plutonium (Pu) nitrate solution. Arrays of up to sixteen bottles were used to measure the critical number of bottles and critical array spacing with a tight-fitting Plexiglas reflector on all sides of the arrays except the top. Some experiments used Plexiglas shells fitted around each bottle to determine the effect of moderation on criticality. Each bottle contained approximately 2.4 L of Pu(NO3)4 solution with a Pu content of 105 g Pu/L and a free acid molarity H+ of 5.1. The plutonium was of low 240Pu content (2.9 wt.%). These experiments were performed to fill a gap in experimental data regarding criticality limits for storing and handling arrays of Pu solution in reprocessing facilities. Of the thirteen approach-to-critical experiments, eleven resulted in extrapolations to critical configurations. Four of the approaches were extrapolated to the critical number of bottles; these were not evaluated further due to the large uncertainty associated with the modeling of a fraction of a bottle. The remaining seven approaches were extrapolated to critical array spacings of 3-4 and 4-4 arrays; these seven critical configurations were evaluated for inclusion as acceptable benchmark experiments in the International Criticality Safety Benchmark Evaluation Project (ICSBEP) Handbook. Detailed and simple models of these configurations were created, and the associated bias of these simplifications was determined to range from 0.00116 to 0.00162 ± 0.00006 Δkeff. Monte Carlo analysis of all models was completed using MCNP5 with ENDF/B-VII.0 neutron cross section libraries. A thorough uncertainty analysis of all critical, geometric, and material parameters was performed using parameter

  14. Custodial Homes, Therapeutic Homes, and Parental Acceptance: Parental Experiences of Autism in Kerala, India and Atlanta, GA USA.

    PubMed

    Sarrett, Jennifer C

    2015-06-01

    The home is a critical place to learn about cultural values of childhood disability, including autism and intellectual disabilities. The current article describes how the introduction of autism into a home and the availability of intervention options change the structure and meaning of a home and reflect parental acceptance of a child's autistic traits. Using ethnographic data from Kerala, India and Atlanta, GA USA, a description of two types of homes is developed: the custodial home, which is primarily focused on caring for basic needs, and the therapeutic home, which is focused on changing a child's autistic traits. The type of home environment responds to cultural practices of child rearing and influences daily activities, management, and care in the home. Further, these homes differ in parental acceptance of their autistic children's disabilities, which is critical to understand when engaging in international work related to autism and intellectual disability. It is proposed that parental acceptance can be fostered through the use of neurodiverse notions that encourage autism acceptance.

  15. 2016 Senior Researcher Award Acceptance Address: Developing Productive Researchers Through Mentoring, Rethinking Doctoral Dissertations, and Facilitating Positive Publishing Experiences

    ERIC Educational Resources Information Center

    Sims, Wendy L.

    2016-01-01

    In her acceptance address, Wendy Sims provides a unique perspective based on thoughts and reflections resulting from her 8 years of service as the ninth Editor of the "Journal of Research in Music Education" ("JRME"). Specifically, she addresses how college-level music education researchers can promote positive attitudes toward…

  16. Geothermal Heat Pump Benchmarking Report

    SciTech Connect

    1997-01-17

    A benchmarking study was conducted on behalf of the Department of Energy to determine the critical factors in successful utility geothermal heat pump programs. A successful program is one that has achieved significant market penetration. Successfully marketing geothermal heat pumps has presented some major challenges to the utility industry. However, select utilities have developed programs that generate significant GHP sales. This benchmarking study concludes that there are three factors critical to the success of utility GHP marketing programs: (1) top management marketing commitment; (2) an understanding of the fundamentals of marketing and business development; and (3) an aggressive competitive posture. To generate significant GHP sales, competitive market forces must be used. However, because utilities have functioned only in a regulated arena, these companies and their leaders are unschooled in competitive business practices. Therefore, a lack of experience coupled with an intrinsically non-competitive culture yields an industry environment that impedes the generation of significant GHP sales in many, but not all, utilities.

  17. Toxicological Benchmarks for Wildlife

    SciTech Connect

    Sample, B.E. Opresko, D.M. Suter, G.W.

    1993-01-01

    Ecological risks of environmental contaminants are evaluated by using a two-tiered process. In the first tier, a screening assessment is performed where concentrations of contaminants in the environment are compared to no observed adverse effects level (NOAEL)-based toxicological benchmarks. These benchmarks represent concentrations of chemicals (i.e., concentrations presumed to be nonhazardous to the biota) in environmental media (water, sediment, soil, food, etc.). While exceedance of these benchmarks does not indicate any particular level or type of risk, concentrations below the benchmarks should not result in significant effects. In practice, when contaminant concentrations in food or water resources are less than these toxicological benchmarks, the contaminants may be excluded from further consideration. However, if the concentration of a contaminant exceeds a benchmark, that contaminant should be retained as a contaminant of potential concern (COPC) and investigated further. The second tier in ecological risk assessment, the baseline ecological risk assessment, may use toxicological benchmarks as part of a weight-of-evidence approach (Suter 1993). Under this approach, NOAEL-based toxicological benchmarks are one of several lines of evidence used to support or refute the presence of ecological effects. Other sources of evidence include media toxicity tests, surveys of biota (abundance and diversity), measures of contaminant body burdens, and biomarkers. This report presents NOAEL- and lowest observed adverse effects level (LOAEL)-based toxicological benchmarks for assessment of effects of 85 chemicals on 9 representative mammalian wildlife species (short-tailed shrew, little brown bat, meadow vole, white-footed mouse, cottontail rabbit, mink, red fox, and whitetail deer) or 11 avian wildlife species (American robin, rough-winged swallow, American woodcock, wild turkey, belted kingfisher, great blue heron, barred owl, barn owl, Cooper's hawk, and red-tailed hawk
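
    The first-tier screening described above reduces to a hazard-quotient comparison. A minimal sketch (Python; the chemicals, concentrations, and benchmark values are invented for illustration):

        # A chemical is retained as a contaminant of potential concern (COPC)
        # when its concentration in a medium exceeds the NOAEL-based benchmark.
        measured = {"cadmium": 1.8, "zinc": 45.0, "lead": 12.0}     # mg/kg in food or soil
        benchmarks = {"cadmium": 0.5, "zinc": 60.0, "lead": 8.0}    # NOAEL-based, mg/kg

        for chem, conc in measured.items():
            hq = conc / benchmarks[chem]  # hazard quotient
            print(f"{chem}: HQ = {hq:.2f} ->", "retain as COPC" if hq > 1 else "screen out")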

  18. Pregnant and Postpartum Women's Experiences and Perspectives on the Acceptability and Feasibility of Copackaged Medicine for Antenatal Care and PMTCT in Lesotho

    PubMed Central

    Gill, Michelle M.; Hoffman, Heather J.; Tiam, Appolinaire; Mohai, Florence M.; Mokone, Majoalane; Isavwa, Anthony; Mohale, Sesomo; Makhohlisa, Matela; Ankrah, Victor; Luo, Chewe; Guay, Laura

    2015-01-01

    Objective. To improve PMTCT and antenatal care-related service delivery, a pack with centrally prepackaged medicine was rolled out to all pregnant women in Lesotho in 2011. This study assessed acceptability and feasibility of this copackaging mechanism for drug delivery among pregnant and postpartum women. Methods. Acceptability and feasibility were assessed in a mixed method, cross-sectional study through structured interviews (SI) and semistructured interviews (SSI) conducted in 2012 and 2013. Results. 290 HIV-negative women and 437 HIV-positive women (n = 727) participated. Nearly all SI participants found prepackaged medicines acceptable, though modifications such as size reduction of the pack were suggested. Positive experiences included that the pack helped women take pills as instructed and contents promoted healthy pregnancies. Negative experiences included inadvertent pregnancy disclosure and discomfort carrying the pack in communities. Implementation was also feasible; 85.2% of SI participants reported adequate counseling time, though 37.8% felt pack use caused clinic delays. SSI participants reported improvement in service quality following pack introduction, due to more comprehensive counseling. Conclusions. A prepackaged drug delivery mechanism for ANC/PMTCT medicines was acceptable and feasible. Findings support continued use of this approach in Lesotho with improved design modifications to reflect the current PMTCT program of lifelong treatment for all HIV-positive pregnant women. PMID:26649193

  19. Relationships between early experience to dietary diversity, acceptance of novel flavors, and open field behavior in sheep.

    PubMed

    Villalba, Juan J; Catanese, Francisco; Provenza, Frederick D; Distel, Roberto A

    2012-01-18

    This study determined whether early experiences by sheep to monotonous or diverse diets influence: (1) plasmatic profiles of cortisol, a hormone involved in stress responses by mammals, before and after an ACTH challenge, (2) the readiness to eat new foods in a new environment, (3) general fearfulness and response to separation--as measured by the open field test (OFT) and stress induced hyperthermia (SIH)--and (4) the link between (2) and (3). Thirty, 2-mo-old lambs were randomly assigned to 3 treatments (10 lambs/treatment). Lambs in one treatment (Diversity--DV) received in successive periods of exposure all possible 4-way choice combinations of 2 foods high in energy and 2 foods high in protein from an array of 6 foods: 3 high in energy (beet pulp, oat grain, and a mix of grape pomace:milo [40:60]) and 3 high in protein (soybean meal, alfalfa, corn gluten meal). Lambs in another treatment (DV+T) received the same exposure described for DV but two phytochemicals, oxalic acid (1.5%) and quebracho tannins (10%) were randomly added within any period of exposure to foods high in energy or to foods high in protein. Lambs in the third treatment (Monotony--MO) received a monotonous balanced ration containing all 6 foods fed to the other groups. After exposure, lambs were offered a choice of the aforementioned 6 foods (DV; DV+T) or the monotonous diet (MO). Lambs were intravenously injected with ACTH 1 h after food presentation, and sampled at 1, 2, and 3 h post feeding for determinations of plasma cortisol concentrations. Reluctance to eat novel flavored foods (onion-, coconut- and cinnamon-flavored wheat bran), open field behavior, and SIH was assessed in all treatments. Lambs in MO showed greater concentrations of plasma cortisol 1 h after food presentation than lambs in the DV or DV+T treatments (P=0.04). However, the difference was small and no differences among treatments were detected after an ACTH challenge (P>0.1). Lambs in DV consumed more onion-flavored wheat

  20. HLW Return from France to Germany - 15 Years of Experience in Public Acceptance and Technical Aspects - 12149

    SciTech Connect

    Graf, Wilhelm

    2012-07-01

    Germany over the whole 15-year project running time could be faced efficiently. It has to be concluded that, despite all the problems that anti-nuclear activities have caused so far, all transports of vitrified HLW have been completed successfully by adapting the commonly established safety, security, and public acceptance measures to the special conditions and needs in Germany and by coordinating the activities of all parties involved, though at the expense of high costs for industry and government and a challenging operational complexity. Apart from anticipatory project planning, good communication between all involved industrial parties and the French and German governments was the key to the effective management of such shipments and to minimizing the radiological, economic, environmental, public, and political impact. The future will show how efficiently the gained experience can be used for further return projects, which are still to be realized, since no reprocessed waste has yet been returned from the UK and neither the medium-level nor the low-level radioactive waste has been transferred from France to Germany. (author)

  1. Benchmarking for strategic action.

    PubMed

    Jennings, K; Westfall, F

    1992-01-01

    By focusing on three key elements--customer expectations, competitor strengths and vulnerabilities, and organizational competencies--a company's benchmarking effort can be designed to drive the strategic planning process.

  2. Increasing willingness to experience obsessions: acceptance and commitment therapy as a treatment for obsessive-compulsive disorder.

    PubMed

    Twohig, Michael P; Hayes, Steven C; Masuda, Akihiko

    2006-03-01

    This study evaluated the effectiveness of an 8-session Acceptance and Commitment Therapy for OCD intervention in a nonconcurrent multiple-baseline, across-participants design. Results on self-reported compulsions showed that the intervention produced clinically significant reductions in compulsions by the end of treatment for all participants, with results maintained at 3-month follow-up. Self-monitoring was supported with similar decreases in scores on standardized measures of OCD. Positive changes in anxiety and depression were found for all participants as well as expected process changes in the form of decreased experiential avoidance, believability of obsessions, and need to respond to obsessions. All participants found the treatment to be highly acceptable. Implications and future directions are discussed.

  3. Benchmark problems and solutions

    NASA Technical Reports Server (NTRS)

    Tam, Christopher K. W.

    1995-01-01

    The scientific committee, after careful consideration, adopted six categories of benchmark problems for the workshop. These problems do not cover all the important computational issues relevant to Computational Aeroacoustics (CAA). The deciding factor to limit the number of categories to six was the amount of effort needed to solve these problems. For reference purpose, the benchmark problems are provided here. They are followed by the exact or approximate analytical solutions. At present, an exact solution for the Category 6 problem is not available.

  4. Application of surface-harmonics code SUHAM-U and Monte-Carlo code UNK-MC for calculations of 2D light water benchmark-experiment VENUS-2 with UO{sub 2} and MOX fuel

    SciTech Connect

    Boyarinov, V. F.; Davidenko, V. D.; Nevinitsa, V. A.; Tsibulsky, V. F.

    2006-07-01

    Verification of the SUHAM-U code has been carried out by calculation of the two-dimensional benchmark experiment on the critical light-water facility VENUS-2. Comparisons have been made with experimental data and with calculations by the Monte Carlo code UNK using the same nuclear data library B645 for the basic isotopes. Calculations of the two-dimensional facility were carried out using experimentally measured buckling values. The applicability of the SUHAM code to computations of PWR reactors with uranium and MOX fuel has been demonstrated. (authors)

  5. The KMAT: Benchmarking Knowledge Management.

    ERIC Educational Resources Information Center

    de Jager, Martha

    Provides an overview of knowledge management and benchmarking, including the benefits and methods of benchmarking (e.g., competitive, cooperative, collaborative, and internal benchmarking). Arthur Andersen's KMAT (Knowledge Management Assessment Tool) is described. The KMAT is a collaborative benchmarking tool, designed to help organizations make…

  6. Benchmarking ENDF/B-VII.0

    NASA Astrophysics Data System (ADS)

    van der Marck, Steven C.

    2006-12-01

    The new major release VII.0 of the ENDF/B nuclear data library has been tested extensively using benchmark calculations. These were based upon MCNP-4C3 continuous-energy Monte Carlo neutronics simulations, together with nuclear data processed using the code NJOY. Three types of benchmarks were used, viz., criticality safety benchmarks, (fusion) shielding benchmarks, and reference systems for which the effective delayed neutron fraction is reported. For criticality safety, more than 700 benchmarks from the International Handbook of Criticality Safety Benchmark Experiments were used. Benchmarks from all categories were used, ranging from low-enriched uranium, compound fuel, thermal spectrum ones (LEU-COMP-THERM), to mixed uranium-plutonium, metallic fuel, fast spectrum ones (MIX-MET-FAST). For fusion shielding many benchmarks were based on IAEA specifications for the Oktavian experiments (for Al, Co, Cr, Cu, LiF, Mn, Mo, Si, Ti, W, Zr), Fusion Neutronics Source in Japan (for Be, C, N, O, Fe, Pb), and Pulsed Sphere experiments at Lawrence Livermore National Laboratory (for 6Li, 7Li, Be, C, N, O, Mg, Al, Ti, Fe, Pb, D2O, H2O, concrete, polyethylene and teflon). For testing delayed neutron data more than thirty measurements in widely varying systems were used. Among these were measurements in the Tank Critical Assembly (TCA in Japan) and IPEN/MB-01 (Brazil), both with a thermal spectrum, and two cores in Masurca (France) and three cores in the Fast Critical Assembly (FCA, Japan), all with fast spectra. In criticality safety, many benchmarks were chosen from the category with a thermal spectrum, low-enriched uranium, compound fuel (LEU-COMP-THERM), because this is typical of most current-day reactors, and because these benchmarks were previously underpredicted by as much as 0.5% by most nuclear data libraries (such as ENDF/B-VI.8, JEFF-3.0). The calculated results presented here show that this underprediction is no longer there for ENDF/B-VII.0. The average over 257
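
    The summary statistic behind statements such as "underpredicted by as much as 0.5%" is the calculated-over-experiment (C/E) ratio of keff, averaged over a benchmark suite. A minimal sketch (Python; the four values are invented, not results from the paper):

        import numpy as np

        k_calc = np.array([0.9962, 1.0011, 0.9987, 1.0034])  # simulated k-eff results
        k_exp = np.array([1.0000, 1.0000, 1.0000, 1.0005])   # benchmark-model k-eff values
        ce = k_calc / k_exp                                  # C/E per benchmark
        print(f"mean C/E = {ce.mean():.4f} +/- {ce.std(ddof=1):.4f}")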

  7. [Acceptance- and mindfulness-based group intervention in advanced type 2 diabetes patients: therapeutic concept and practical experiences].

    PubMed

    Faude-Lang, Verena; Hartmann, Mechthild; Schmidt, Eva-Maria; Humpert, Per; Nawroth, Peter; Herzog, Wolfgang

    2010-05-01

    Patients with type 2 diabetes mellitus and early diabetic nephropathy have a poor disease-related prognosis; furthermore these patients are often also mentally stressed. We investigated an acceptance- and mindfulness-based group intervention for these patients in addition to regular medical therapy. Both intervention program and descriptive outcomes of patients' evaluation are presented. A total of 51 patients attended the groups. Patients reported developing a mindfulness attitude towards life during the group process as well as an improvement in pain, sleep and worrying.

  8. The role of integral experiments and nuclear cross section evaluations in space nuclear reactor design

    NASA Astrophysics Data System (ADS)

    Moses, David L.; McKnight, Richard D.

    The importance of the nuclear and neutronic properties of candidate space reactor materials to the design process has long been acknowledged, as has the use of benchmark reactor physics experiments to verify and qualify the analytical tools used in design, safety, and performance evaluation. Since June 1966, the Cross Section Evaluation Working Group (CSEWG) has acted as an interagency forum for the assessment and evaluation of nuclear reaction data used in the nuclear design process. CSEWG data testing has involved the specification and calculation of benchmark experiments which are used widely for commercial reactor design and safety analysis. These benchmark experiments preceded the issuance of the industry standards for acceptance, but the benchmarks exceed the minimum acceptance criteria for such data. Thus, a starting place has been provided for assuring the accuracy and uncertainty of nuclear data important to space reactor applications.

  9. From traditional cognitive-behavioural therapy to acceptance and commitment therapy for chronic pain: a mixed-methods study of staff experiences of change.

    PubMed

    Barker, Estelle; McCracken, Lance M

    2014-08-01

    Health care organizations, both large and small, frequently undergo processes of change. In fact, if health care organizations are to improve over time, they must change; this includes pain services. The purpose of the present study was to examine a process of change in treatment model within a specialty interdisciplinary pain service in the UK. This change entailed a switch from traditional cognitive-behavioural therapy to a form of cognitive-behavioural therapy called acceptance and commitment therapy. An anonymous online survey, including qualitative and quantitative components, was carried out approximately 15 months after the initial introduction of the new treatment model and methods. Fourteen out of 16 current clinical staff responded to the survey. Three themes emerged in qualitative analyses: positive engagement in change; uncertainty and discomfort; and group cohesion versus discord. Quantitative results from closed questions showed a pattern of uncertainty about the superiority of one model over the other, combined with more positive views on progress reflected, and the experience of personal benefits, from adopting the new model. The psychological flexibility model, the model behind acceptance and commitment therapy, may clarify both processes in patient behaviour and processes of staff experience and skilful treatment delivery. This integration of processes on both sides of treatment delivery may be a strength of acceptance and commitment therapy. PMID:26516541

  11. The Lasting Influences of Early Food-Related Variety Experience: A Longitudinal Study of Vegetable Acceptance from 5 Months to 6 Years in Two Populations.

    PubMed

    Maier-Nöth, Andrea; Schaal, Benoist; Leathwood, Peter; Issanchou, Sylvie

    2016-01-01

    Children's vegetable consumption falls below current recommendations, highlighting the need to identify strategies that can successfully promote better acceptance of vegetables. Recently, experimental studies have reported promising interventions that increase acceptance of vegetables. The first, offering infants a high variety of vegetables at weaning, increased acceptance of new foods, including vegetables. The second, offering an initially disliked vegetable at 8 subsequent meals markedly increased acceptance for that vegetable. So far, these effects have been shown to persist for at least several weeks. We now present follow-up data at 15 months, 3 and 6 years obtained through questionnaire (15 mo, 3y) and experimental (6y) approaches. At 15 months, participants who had been breast-fed were reported as eating and liking more vegetables than those who had been formula-fed. The initially disliked vegetable that became accepted after repeated exposure was still liked and eaten by 79% of the children. At 3 years, the initially disliked vegetable was still liked and eaten by 73% of the children. At 6 years, observations in an experimental setting showed that children who had been breast-fed and children who had experienced high vegetable variety at the start of weaning ate more of new vegetables and liked them more. They were also more willing to taste vegetables than formula-fed children or the no or low variety groups. The initially disliked vegetable was still liked by 57% of children. This follow-up study suggests that experience with chemosensory variety in the context of breastfeeding or at the onset of complementary feeding can influence chemosensory preferences for vegetables into childhood. PMID:26968029

  12. The Lasting Influences of Early Food-Related Variety Experience: A Longitudinal Study of Vegetable Acceptance from 5 Months to 6 Years in Two Populations

    PubMed Central

    Maier-Nöth, Andrea; Schaal, Benoist; Leathwood, Peter; Issanchou, Sylvie

    2016-01-01

    Children’s vegetable consumption falls below current recommendations, highlighting the need to identify strategies that can successfully promote better acceptance of vegetables. Recently, experimental studies have reported promising interventions that increase acceptance of vegetables. The first, offering infants a high variety of vegetables at weaning, increased acceptance of new foods, including vegetables. The second, offering an initially disliked vegetable at 8 subsequent meals markedly increased acceptance for that vegetable. So far, these effects have been shown to persist for at least several weeks. We now present follow-up data at 15 months, 3 and 6 years obtained through questionnaire (15 mo, 3y) and experimental (6y) approaches. At 15 months, participants who had been breast-fed were reported as eating and liking more vegetables than those who had been formula-fed. The initially disliked vegetable that became accepted after repeated exposure was still liked and eaten by 79% of the children. At 3 years, the initially disliked vegetable was still liked and eaten by 73% of the children. At 6 years, observations in an experimental setting showed that children who had been breast-fed and children who had experienced high vegetable variety at the start of weaning ate more of new vegetables and liked them more. They were also more willing to taste vegetables than formula-fed children or the no or low variety groups. The initially disliked vegetable was still liked by 57% of children. This follow-up study suggests that experience with chemosensory variety in the context of breastfeeding or at the onset of complementary feeding can influence chemosensory preferences for vegetables into childhood. PMID:26968029

  14. Workshops and problems for benchmarking eddy current codes

    SciTech Connect

    Turner, L.R.; Davey, K.; Ida, N.; Rodger, D.; Kameari, A.; Bossavit, A.; Emson, C.R.I.

    1988-08-01

    A series of six workshops was held in 1986 and 1987 to compare eddy current codes, using six benchmark problems. The problems included transient and steady-state ac magnetic fields, close and far boundary conditions, magnetic and non-magnetic materials. All the problems were based either on experiments or on geometries that can be solved analytically. The workshops and solutions to the problems are described. Results show that many different methods and formulations give satisfactory solutions, and that in many cases reduced dimensionality or coarse discretization can give acceptable results while reducing the computer time required. A second two-year series of TEAM (Testing Electromagnetic Analysis Methods) workshops, using six more problems, is underway. 12 refs., 15 figs., 4 tabs.

  15. Development of a HEX-Z Partially Homogenized Benchmark Model for the FFTF Isothermal Physics Measurements

    SciTech Connect

    John D. Bess

    2012-05-01

    A series of isothermal physics measurements was performed as part of an acceptance testing program for the Fast Flux Test Facility (FFTF). A HEX-Z partially homogenized benchmark model of the FFTF fully loaded core configuration was developed for evaluation of these measurements. Evaluated measurements include the critical eigenvalue of the fully loaded core, two neutron spectra, 32 reactivity effects measurements, an isothermal temperature coefficient, and low-energy gamma and electron spectra. Dominant uncertainties in the critical configuration include the placement of radial shielding around the core, reactor core assembly pitch, composition of the stainless steel components, plutonium content in the fuel pellets, and boron content in the absorber pellets. Calculations of criticality, reactivity effects measurements, and the isothermal temperature coefficient using MCNP5 and ENDF/B-VII.0 cross sections with the benchmark model are in good agreement with the benchmark experiment measurements. The calculated spectra show only limited correlation with the measured spectra; homogenization of many of the core components may have impacted the computational assessment of these measurements. This benchmark evaluation has been added to the IRPhEP Handbook.

  16. COG validation: SINBAD Benchmark Problems

    SciTech Connect

    Lent, E M; Sale, K E; Buck, R M; Descalle, M

    2004-02-23

    We validated COG, a 3D Monte Carlo radiation transport code, against experimental data and MCNP4C simulations from the Shielding Integral Benchmark Archive Database (SINBAD) compiled by RSICC. We modeled three experiments: the Osaka nickel and aluminum sphere experiments conducted at the OKTAVIAN facility, and the liquid oxygen experiment conducted at the FNS facility. COG results are in good agreement with experimental data and generally within a few percent of MCNP results. There are several possible sources of discrepancy between MCNP and COG results: (1) the cross-section database versions are different, MCNP uses ENDF/B-VI 1.1 while COG uses ENDF/B-VI R7, (2) the code implementations are different, and (3) the models may differ slightly. We also limited the use of variance reduction methods when running the COG version of the problems.

  17. KRITZ-2 Experimental Benchmark Analysis

    SciTech Connect

    Pavlovichev, A.M.

    2001-09-28

    The KRITZ-2 experiment has been adopted by the OECD/NEA Task Force on Reactor-Based Plutonium Disposition for use as a benchmark exercise. The KRITZ-2 experiment consists of three different core configurations (one with near-weapons-grade MOX) with critical conditions at 20 C and 245 C. The KRITZ-2 experiment has been calculated with the MCU-REA code, a continuous-energy Monte Carlo code system developed at the Russian Research Center--Kurchatov Institute that is used extensively in the Fissile Materials Disposition Program. The calculated results for k{sub eff} and fission rate distributions are compared with the experimental data and with the results of other codes. The results are in good agreement with the experimental values.

  18. Insecticide-impregnated bed nets for malaria control: varying experiences from Ecuador, Colombia, and Peru concerning acceptability and effectiveness.

    PubMed

    Kroeger, A; Mancheno, M; Alarcon, J; Pesse, K

    1995-10-01

    Between 1991 and 1994, an intervention program with permethrin- and lambdacyhalothrin-impregnated bed nets was carried out over a period of nine months in each of five endemic, malarious areas of Ecuador, Peru, and Colombia. This program was evaluated through household surveys, blood sampling, in-depth longitudinal studies, and entomologic analysis. Eighty-four communities (including approximately 35,000 individuals) were paired according to malaria incidence, size, and coverage with bed nets and then randomly allocated to intervention and control groups. The results showed that people's acceptance of the measure was related to their perception of an immediate protective effect against insects. The effectiveness of the bed nets, measured as a reduction of malaria incidence in intervention communities relative to control communities, showed large variations between and within the study areas. The protective efficacy varied between 0% and 70% when looking only at the postintervention differences between intervention and control groups. The average protection was 40.8% when considering a four-month incidence of clinical malaria attacks and 28.3% when considering a two-week malaria incidence. Important factors for the success of the bed net program were insect susceptibility to pyrethroids, high coverage with impregnated bed nets, high malaria incidence, good community participation, high mosquito densities when people go to bed, and a high proportion of Plasmodium falciparum. In one area, where DDT spraying was conducted in the control communities, the effectiveness of bed net impregnation was slightly better than that of spraying.

  19. Surveys and Benchmarks

    ERIC Educational Resources Information Center

    Bers, Trudy

    2012-01-01

    Surveys and benchmarks continue to grow in importance for community colleges in response to several factors. One is the press for accountability, that is, for colleges to report the outcomes of their programs and services to demonstrate their quality and prudent use of resources, primarily to external constituents and governing boards at the state…

  20. Python/Lua Benchmarks

    SciTech Connect

    Busby, L.

    2014-08-01

    This is an adaptation of the pre-existing Scimark benchmark code to a variety of Python and Lua implementations. It also measures performance of the Fparser expression parser and C and C++ code on a variety of simple scientific expressions.

  1. Benchmarking the World's Best

    ERIC Educational Resources Information Center

    Tucker, Marc S.

    2012-01-01

    A century ago, the United States was a world leader in industrial benchmarking. However, after World War II, once no one could compete with the U.S., it became complacent. Many industrialized countries now have higher student achievement and more equitable and efficient education systems. A higher proportion of young people in their workforces…

  2. Monte Carlo Benchmark

    2010-10-20

    The "Monte Carlo Benchmark" (MCB) is intended to model the computatiional performance of Monte Carlo algorithms on parallel architectures. It models the solution of a simple heuristic transport equation using a Monte Carlo technique. The MCB employs typical features of Monte Carlo algorithms such as particle creation, particle tracking, tallying particle information, and particle destruction. Particles are also traded among processors using MPI calls.

  3. Benchmarks: WICHE Region 2012

    ERIC Educational Resources Information Center

    Western Interstate Commission for Higher Education, 2013

    2013-01-01

    Benchmarks: WICHE Region 2012 presents information on the West's progress in improving access to, success in, and financing of higher education. The information is updated annually to monitor change over time and encourage its use as a tool for informed discussion in policy and education communities. To establish a general context for the…

  4. From Being Bullied to Being Accepted: The Lived Experiences of a Student with Asperger's Enrolled in a Christian University

    ERIC Educational Resources Information Center

    Reid, Denise P.

    2015-01-01

    Thirteen participants from two private universities located in the western region of the United States shared their lived experiences of being a college student who does not request accommodations. In one's educational pursuit, bullying is often experienced. While the rates of bullying have increased, students with disabilities are more likely to…

  5. Simple benchmark for complex dose finding studies.

    PubMed

    Cheung, Ying Kuen

    2014-06-01

    While a general goal of early phase clinical studies is to identify an acceptable dose for further investigation, modern dose finding studies and designs are highly specific to individual clinical settings. In addition, as outcome-adaptive dose finding methods often involve complex algorithms, it is crucial to have diagnostic tools to evaluate the plausibility of a method's simulated performance and the adequacy of the algorithm. In this article, we propose a simple technique that provides an upper limit, or a benchmark, of accuracy for dose finding methods for a given design objective. The proposed benchmark is nonparametric optimal in the sense of O'Quigley et al. (2002, Biostatistics 3, 51-56), and is demonstrated by examples to be a practical accuracy upper bound for model-based dose finding methods. We illustrate the implementation of the technique in the context of phase I trials that consider multiple toxicities and phase I/II trials where dosing decisions are based on both toxicity and efficacy, and apply the benchmark to several clinical examples considered in the literature. By comparing the operating characteristics of a dose finding method to that of the benchmark, we can form quick initial assessments of whether the method is adequately calibrated and evaluate its sensitivity to the dose-outcome relationships.
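
    The benchmark's central idea, complete information about each patient's latent tolerance, is easy to simulate. Below is a hedged sketch of that idea as we read it from the abstract; the dose-toxicity curve, target rate, and sample size are invented for illustration, and the original paper should be consulted for the exact construction.

    ```python
    import random

    # Hedged sketch of the "complete information" idea behind the
    # nonparametric benchmark of O'Quigley et al. (2002). Scenario values
    # below are made up, not taken from the paper.
    TRUE_TOX = [0.05, 0.12, 0.25, 0.40, 0.55]  # assumed dose-toxicity curve
    TARGET = 0.25                              # assumed target toxicity rate

    def benchmark_selection(n_patients, n_sims=10_000, seed=7):
        rng = random.Random(seed)
        picks = [0] * len(TRUE_TOX)
        for _ in range(n_sims):
            # One latent tolerance per patient: patient i has a toxicity at
            # dose k exactly when u_i < TRUE_TOX[k] (monotone in dose).
            u = [rng.random() for _ in range(n_patients)]
            est = [sum(ui < p for ui in u) / n_patients for p in TRUE_TOX]
            best = min(range(len(TRUE_TOX)), key=lambda k: abs(est[k] - TARGET))
            picks[best] += 1
        return [c / n_sims for c in picks]  # selection rate per dose

    print(benchmark_selection(n_patients=30))
    ```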

  6. Patients' experience of a telephone booster intervention to support weight management in Type 2 diabetes and its acceptability.

    PubMed

    Wu, Lihua; Forbes, Angus; While, Alison

    2010-01-01

    We studied the patient experience of a telephone booster intervention, i.e. weekly reinforcement of the clinic's lifestyle modification advice to support weight loss. Forty-six adults with Type 2 diabetes and a body mass index >28 kg/m2 were randomised into either intervention (n = 25) or control (n = 21) groups. Semi-structured interviews were conducted with the intervention group participants to explore their views and experiences. The patients were satisfied or very satisfied with the telephone calls and most would recommend the intervention to others in a similar situation. The content of the telephone follow-up met their need for on-going support. The benefits arising from the telephone calls included: being reminded to comply with their regimen; prompting and motivating adherence to diabetes self-care behaviours; improved self-esteem; and feeling 'worthy of interest'. The convenience and low cost of telephone support has much potential in chronic disease management.

  7. Inventory of Safety-related Codes and Standards for Energy Storage Systems with some Experiences related to Approval and Acceptance

    SciTech Connect

    Conover, David R.

    2014-09-11

    The purpose of this document is to identify laws, rules, model codes, codes, standards, regulations, and specifications (CSR) related to safety that could apply to stationary energy storage systems (ESS), along with experiences to date in securing approval of ESS in relation to CSR. This information is intended to assist in securing approval of ESS under current CSR and in identifying new CSR, or revisions to existing CSR, and the necessary supporting research and documentation that can foster the deployment of safe ESS.

  8. Storage-Intensive Supercomputing Benchmark Study

    SciTech Connect

    Cohen, J; Dossa, D; Gokhale, M; Hysom, D; May, J; Pearce, R; Yoo, A

    2007-10-30

    DBE Xeon Dual Socket Blackford Server Motherboard; 2 Intel Xeon Dual-Core 2.66 GHz processors; 1 GB DDR2 PC2-5300 RAM (2 x 512); 80GB Hard Drive (Seagate SATA II Barracuda). The Fusion board is presently capable of 4X in a PCIe slot. The image resampling benchmark was run on a dual Xeon workstation with NVIDIA graphics card (see Chapter 5 for full specification). An XtremeData Opteron+FPGA was used for the language classification application. We observed that these benchmarks are not uniformly I/O intensive. The only benchmark that spent greater than 50% of its time in I/O was the graph algorithm when it accessed data files over NFS. When local disk was used, the graph benchmark spent at most 40% of its time in I/O. The other benchmarks were CPU dominated. The image resampling benchmark and language classification showed order-of-magnitude speedups over software by using co-processor technology to offload the CPU-intensive kernels. Our experiments to date suggest that emerging hardware technologies offer significant benefit to boosting the performance of data-intensive algorithms. Using GPU and FPGA co-processors, we were able to improve performance by more than an order of magnitude on the benchmark algorithms, eliminating the processor bottleneck of CPU-bound tasks. Experiments with a prototype solid-state nonvolatile memory available today show 10X better throughput on random reads than disk, with a 2X speedup on a graph processing benchmark when compared to the use of local SATA disk.

  9. Principles for an ETL Benchmark

    NASA Astrophysics Data System (ADS)

    Wyatt, Len; Caufield, Brian; Pol, Daniel

    Conditions in the marketplace for ETL tools suggest that an industry standard benchmark is needed. The benchmark should provide useful data for comparing the performance of ETL systems, be based on a meaningful scenario, and be scalable over a wide range of data set sizes. This paper gives a general scoping of the proposed benchmark and outlines some key decision points. The Transaction Processing Performance Council (TPC) has formed a development subcommittee to define and produce such a benchmark.

  10. Quality Benchmarks in Undergraduate Psychology Programs

    ERIC Educational Resources Information Center

    Dunn, Dana S.; McCarthy, Maureen A.; Baker, Suzanne; Halonen, Jane S.; Hill, G. William, IV

    2007-01-01

    Performance benchmarks are proposed to assist undergraduate psychology programs in defining their missions and goals as well as documenting their effectiveness. Experienced academic program reviewers compared their experiences to formulate a developmental framework of attributes of undergraduate programs focusing on activity in 8 domains:…

  11. Effectiveness and acceptability of parental financial incentives and quasi-mandatory schemes for increasing uptake of vaccinations in preschool children: systematic review, qualitative study and discrete choice experiment.

    PubMed Central

    Adams, Jean; Bateman, Belinda; Becker, Frauke; Cresswell, Tricia; Flynn, Darren; McNaughton, Rebekah; Oluboyede, Yemi; Robalino, Shannon; Ternent, Laura; Sood, Benjamin Gardner; Michie, Susan; Shucksmith, Janet; Sniehotta, Falko F; Wigham, Sarah

    2015-01-01

    BACKGROUND: Uptake of preschool vaccinations is less than optimal. Financial incentives and quasi-mandatory policies (restricting access to child care or educational settings to fully vaccinated children) have been used to increase uptake internationally, but not in the UK. OBJECTIVE: To provide evidence on the effectiveness, acceptability and economic costs and consequences of parental financial incentives and quasi-mandatory schemes for increasing the uptake of preschool vaccinations. DESIGN: Systematic review, qualitative study and discrete choice experiment (DCE) with questionnaire. SETTING: Community, health and education settings in England. PARTICIPANTS: Qualitative study - parents and carers of preschool children, health and educational professionals. DCE - parents and carers of preschool children identified as 'at high risk' and 'not at high risk' of incompletely vaccinating their children. DATA SOURCES: Qualitative study - focus groups and individual interviews. DCE - online questionnaire. REVIEW METHODS: The review included studies exploring the effectiveness, acceptability or economic costs and consequences of interventions that offered contingent rewards or penalties with real material value for preschool vaccinations, or quasi-mandatory schemes that restricted access to 'universal' services, compared with usual care or no intervention. Electronic database, reference and citation searches were conducted. RESULTS: Systematic review - there was insufficient evidence to conclude that the interventions considered are effective. There was some evidence that the quasi-mandatory interventions were acceptable. There was insufficient evidence to draw conclusions on economic costs and consequences. Qualitative study - there was little appetite for parental financial incentives. Quasi-mandatory schemes were more acceptable. Optimising current services was consistently preferred to the interventions proposed. DCE and questionnaire - universal parental financial incentives

  12. Benchmark testing of {sup 233}U evaluations

    SciTech Connect

    Wright, R.Q.; Leal, L.C.

    1997-07-01

    In this paper we investigate the adequacy of available {sup 233}U cross-section data (ENDF/B-VI and JENDL-3) for calculation of critical experiments. An ad hoc revised {sup 233}U evaluation is also tested and appears to give results which are improved relative to those obtained with either ENDF/B-VI or JENDL-3 cross sections. Calculations of k{sub eff} were performed for ten fast benchmarks and six thermal benchmarks using the three cross-section sets. Central reaction-rate-ratio calculations were also performed.

  13. Radiography benchmark 2014

    SciTech Connect

    Jaenisch, G.-R.; Deresch, A.; Bellon, C.; Schumm, A.; Lucet-Sanchez, F.; Guerin, P.

    2015-03-31

    The purpose of the 2014 WFNDEC RT benchmark study was to compare the predictions of various models of radiographic techniques, in particular those that predict the contribution of scattered radiation. All calculations were carried out for homogeneous materials and a mono-energetic X-ray point source in the energy range between 100 keV and 10 MeV. The calculations were to include the best physics approach available considering electron binding effects. Secondary effects like X-ray fluorescence and bremsstrahlung production were to be taken into account if possible. The problem to be considered had two parts. Part I examined the spectrum and the spatial distribution of radiation behind a single iron plate. Part II considered two equally sized plates, made of iron and aluminum respectively, evaluating only the spatial distribution. Here we present the results of the above benchmark study, comparing them to MCNP as the assumed reference model. The possible origins of the observed deviations are discussed.

  14. Benchmark Airport Charges

    NASA Technical Reports Server (NTRS)

    de Wit, A.; Cohn, N.

    1999-01-01

    The Netherlands Directorate General of Civil Aviation (DGCA) commissioned Hague Consulting Group (HCG) to complete a benchmark study of airport charges at twenty-eight airports in Europe and around the world, based on 1996 charges. This study followed previous DGCA research on the topic but included more airports in much more detail. The main purpose of this new benchmark study was to provide insight into the levels and types of airport charges worldwide and into recent changes in airport charge policy and structure. This paper describes the 1996 analysis. It is intended that this work be repeated every year in order to follow developing trends and provide the most up-to-date information possible.

  15. Sequoia Messaging Rate Benchmark

    2008-01-22

    The purpose of this benchmark is to measure the maximal message rate of a single compute node. The first num_cores ranks are expected to reside on the 'core' compute node for which message rate is being tested. After that, the next num_nbors ranks are neighbors for the first core rank, the next set of num_nbors ranks are neighbors for the second core rank, and so on. For example, testing an 8-core node (num_cores = 8) with 4 neighbors (num_nbors = 4) requires 8 + 8 * 4 = 40 ranks. The first 8 of those 40 ranks are expected to be on the 'core' node being benchmarked, while the rest of the ranks are on separate nodes.
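
    The rank arithmetic described above is easy to get wrong when launching the benchmark, so here is a small sketch that reproduces the layout; the function name is ours, not part of the benchmark.

    ```python
    # Hedged sketch of the rank layout described in the abstract: the first
    # num_cores ranks sit on the node under test, followed by num_nbors
    # neighbor ranks for each core rank in turn.
    def layout(num_cores, num_nbors):
        total = num_cores + num_cores * num_nbors   # e.g. 8 + 8 * 4 = 40
        neighbors = {
            core: list(range(num_cores + core * num_nbors,
                             num_cores + (core + 1) * num_nbors))
            for core in range(num_cores)
        }
        return total, neighbors

    total, neighbors = layout(num_cores=8, num_nbors=4)
    print(total)          # 40 ranks, matching the example in the abstract
    print(neighbors[0])   # ranks [8, 9, 10, 11] pair with core rank 0
    ```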

  16. Radiography benchmark 2014

    NASA Astrophysics Data System (ADS)

    Jaenisch, G.-R.; Deresch, A.; Bellon, C.; Schumm, A.; Lucet-Sanchez, F.; Guerin, P.

    2015-03-01

    The purpose of the 2014 WFNDEC RT benchmark study was to compare the predictions of various models of radiographic techniques, in particular those that predict the contribution of scattered radiation. All calculations were carried out for homogeneous materials and a mono-energetic X-ray point source in the energy range between 100 keV and 10 MeV. The calculations were to include the best physics approach available considering electron binding effects. Secondary effects like X-ray fluorescence and bremsstrahlung production were to be taken into account if possible. The problem to be considered had two parts. Part I examined the spectrum and the spatial distribution of radiation behind a single iron plate. Part II considered two equally sized plates, made of iron and aluminum respectively, evaluating only the spatial distribution. Here we present the results of the above benchmark study, comparing them to MCNP as the assumed reference model. The possible origins of the observed deviations are discussed.

  17. MPI Multicore Linktest Benchmark

    2008-01-25

    The MPI Multicore Linktest (LinkTest) measures the aggregate bandwidth from/to a multicore node in a parallel system. It allows the user to specify a variety of different node layout and communication routine variations and reports the maximal observed bandwidth across all specified options. In particular, this benchmark is able to vary the number of tasks on the root node and thereby allows users to study the impact of multicore architectures on MPI communication performance.
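
    As an illustration of the kind of measurement the LinkTest performs, the following hedged mpi4py sketch times repeated transfers between a root rank and its peers and reports an aggregate bandwidth. It is a simplification, not the LinkTest implementation, and the message size and repetition count are arbitrary choices.

    ```python
    # Hedged sketch, not the LinkTest source: aggregate bandwidth from rank 0
    # to all other ranks. Run under an MPI launcher, e.g. mpirun -n 4.
    from mpi4py import MPI
    import numpy as np

    comm = MPI.COMM_WORLD
    rank = comm.Get_rank()
    size = comm.Get_size()
    MSG_BYTES = 1 << 20          # assumed 1 MiB message size
    REPS = 50                    # assumed repetition count
    buf = np.zeros(MSG_BYTES, dtype=np.uint8)

    comm.Barrier()
    t0 = MPI.Wtime()
    for _ in range(REPS):
        if rank == 0:
            for peer in range(1, size):
                comm.Send([buf, MPI.BYTE], dest=peer, tag=0)
        else:
            comm.Recv([buf, MPI.BYTE], source=0, tag=0)
    comm.Barrier()
    elapsed = MPI.Wtime() - t0

    if rank == 0:
        total = MSG_BYTES * REPS * (size - 1)
        print(f"aggregate bandwidth ~ {total / elapsed / 1e9:.2f} GB/s")
    ```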

  18. Algebraic Multigrid Benchmark

    SciTech Connect

    2013-05-06

    AMG2013 is a parallel algebraic multigrid solver for linear systems arising from problems on unstructured grids. It has been derived directly from the Boomer AMG solver in the hypre library, a large linear solvers library that is being developed in the Center for Applied Scientific Computing (CASC) at LLNL. The driver provided in the benchmark can build various test problems. The default problem is a Laplace type problem on an unstructured domain with various jumps and an anisotropy in one part.

  19. Benchmarking HIPAA compliance.

    PubMed

    Wagner, James R; Thoman, Deborah J; Anumalasetty, Karthikeyan; Hardre, Pat; Ross-Lazarov, Tsvetomir

    2002-01-01

    One of the nation's largest academic medical centers is benchmarking its operations using internally developed software to improve privacy/confidentiality of protected health information (PHI) and to enhance data security to comply with HIPAA regulations. It is also coordinating the development of a web-based interactive product that can help hospitals, physician practices, and managed care organizations measure their compliance with HIPAA regulations.

  20. The NAS Parallel Benchmarks

    SciTech Connect

    Bailey, David H.

    2009-11-15

    The NAS Parallel Benchmarks (NPB) are a suite of parallel computer performance benchmarks. They were originally developed at the NASA Ames Research Center in 1991 to assess high-end parallel supercomputers. Although they are no longer used as widely as they once were for comparing high-end system performance, they continue to be studied and analyzed a great deal in the high-performance computing community. The acronym 'NAS' originally stood for the Numerical Aerodynamic Simulation Program at NASA Ames. The name of this organization was subsequently changed to the Numerical Aerospace Simulation Program, and more recently to the NASA Advanced Supercomputing Center, although the acronym remains 'NAS.' The developers of the original NPB suite were David H. Bailey, Eric Barszcz, John Barton, David Browning, Russell Carter, Leo Dagum, Rod Fatoohi, Samuel Fineberg, Paul Frederickson, Thomas Lasinski, Rob Schreiber, Horst Simon, V. Venkatakrishnan and Sisira Weeratunga. The original NAS Parallel Benchmarks consisted of eight individual benchmark problems, each of which focused on some aspect of scientific computing. The principal focus was in computational aerophysics, although most of these benchmarks have much broader relevance, since in a much larger sense they are typical of many real-world scientific computing applications. The NPB suite grew out of the need for a more rational procedure to select new supercomputers for acquisition by NASA. The emergence of commercially available highly parallel computer systems in the late 1980s offered an attractive alternative to the parallel vector supercomputers that had been the mainstay of high-end scientific computing. However, the introduction of highly parallel systems was accompanied by a regrettable level of hype, not only on the part of the commercial vendors but even, in some cases, by scientists using the systems. As a result, it was difficult to discern whether the new systems offered any fundamental performance advantage

  1. Air Traffic Management Technology Demonstration Phase 1 (ATD-1) Interval Management for Near-Term Operations Validation of Acceptability (IM-NOVA) Experiment

    NASA Technical Reports Server (NTRS)

    Kibler, Jennifer L.; Wilson, Sara R.; Hubbs, Clay E.; Smail, James W.

    2015-01-01

    The Interval Management for Near-term Operations Validation of Acceptability (IM-NOVA) experiment was conducted at the National Aeronautics and Space Administration (NASA) Langley Research Center (LaRC) in support of the NASA Airspace Systems Program's Air Traffic Management Technology Demonstration-1 (ATD-1). ATD-1 is intended to showcase an integrated set of technologies that provide an efficient arrival solution for managing aircraft using Next Generation Air Transportation System (NextGen) surveillance, navigation, procedures, and automation for both airborne and ground-based systems. The goal of the IM-NOVA experiment was to assess if procedures outlined by the ATD-1 Concept of Operations were acceptable to and feasible for use by flight crews in a voice communications environment when used with a minimum set of Flight Deck-based Interval Management (FIM) equipment and a prototype crew interface. To investigate an integrated arrival solution using ground-based air traffic control tools and aircraft Automatic Dependent Surveillance-Broadcast (ADS-B) tools, the LaRC FIM system and the Traffic Management Advisor with Terminal Metering and Controller Managed Spacing tools developed at the NASA Ames Research Center (ARC) were integrated into LaRC's Air Traffic Operations Laboratory (ATOL). Data were collected from 10 crews of current 757/767 pilots asked to fly a high-fidelity, fixed-based simulator during scenarios conducted within an airspace environment modeled on the Dallas-Fort Worth (DFW) Terminal Radar Approach Control area. The aircraft simulator was equipped with the Airborne Spacing for Terminal Area Routes (ASTAR) algorithm and a FIM crew interface consisting of electronic flight bags and ADS-B guidance displays. Researchers used "pseudo-pilot" stations to control 24 simulated aircraft that provided multiple air traffic flows into the DFW International Airport, and recently retired DFW air traffic controllers served as confederate Center, Feeder, Final

  2. Benchmarking: applications to transfusion medicine.

    PubMed

    Apelseth, Torunn Oveland; Molnar, Laura; Arnold, Emmy; Heddle, Nancy M

    2012-10-01

    Benchmarking is a structured, continuous, collaborative process in which comparisons for selected indicators are used to identify factors that, when implemented, will improve transfusion practices. This study aimed to identify transfusion medicine studies reporting on benchmarking, summarize the benchmarking approaches used, and identify important considerations to move the concept of benchmarking forward in the field of transfusion medicine. A systematic review of published literature was performed to identify transfusion medicine-related studies that compared at least 2 separate institutions or regions with the intention of benchmarking focusing on 4 areas: blood utilization, safety, operational aspects, and blood donation. Forty-five studies were included: blood utilization (n = 35), safety (n = 5), operational aspects of transfusion medicine (n = 5), and blood donation (n = 0). Based on predefined criteria, 7 publications were classified as benchmarking, 2 as trending, and 36 as single-event studies. Three models of benchmarking are described: (1) a regional benchmarking program that collects and links relevant data from existing electronic sources, (2) a sentinel site model where data from a limited number of sites are collected, and (3) an institutional-initiated model where a site identifies indicators of interest and approaches other institutions. Benchmarking approaches are needed in the field of transfusion medicine. Major challenges include defining best practices and developing cost-effective methods of data collection. For those interested in initiating a benchmarking program, the sentinel site model may be most effective and sustainable as a starting point, although the regional model would be the ideal goal.

  3. RECENT ADDITIONS OF CRITICALITY SAFETY RELATED INTEGRAL BENCHMARK DATA TO THE ICSBEP AND IRPHEP HANDBOOKS

    SciTech Connect

    J. Blair Briggs; Lori Scott; Yolanda Rugama; Enrico Sartori

    2009-09-01

    High-quality integral benchmark experiments have always been a priority for criticality safety. However, interest in integral benchmark data is increasing as efforts to quantify and reduce calculational uncertainties accelerate to meet the demands of future criticality safety needs to support next generation reactor and advanced fuel cycle concepts. The importance of drawing upon existing benchmark data is becoming more apparent because of dwindling availability of critical facilities worldwide and the high cost of performing new experiments. Integral benchmark data from the International Handbook of Evaluated Criticality Safety Benchmark Experiments and the International Handbook of Reactor Physics Benchmark Experiments are widely used. Benchmark data have been added to these two handbooks since the last Nuclear Criticality Safety Division Topical Meeting in Knoxville, Tennessee (September 2005). This paper highlights these additions.

  4. Benchmark on outdoor scenes

    NASA Astrophysics Data System (ADS)

    Zhang, Hairong; Wang, Cheng; Chen, Yiping; Jia, Fukai; Li, Jonathan

    2016-03-01

    Depth super-resolution is becoming popular in computer vision, and most test data are based on indoor data sets with ground-truth measurements, such as Middlebury. However, indoor data sets are mainly acquired from structured-light techniques under ideal conditions, which cannot represent the real world under natural light. Unlike indoor scenes, the uncontrolled outdoor environment is much more complicated and is rich in both visual and depth texture. For that reason, we develop a more challenging and meaningful outdoor benchmark for depth super-resolution using a state-of-the-art active laser scanning system.

  5. Benchmarking emerging logic devices

    NASA Astrophysics Data System (ADS)

    Nikonov, Dmitri

    2014-03-01

    As complementary metal-oxide-semiconductor field-effect transistors (CMOS FETs) are scaled to ever smaller sizes by the semiconductor industry, the demand is growing for emerging logic devices to supplement CMOS in various special functions. Research directions and concepts for such devices are reviewed; they include tunneling, graphene-based, and spintronic devices, among others. The methodology for estimating the future performance of emerging (beyond-CMOS) devices and of simple logic circuits based on them is explained. Results of benchmarking are used to identify the more promising concepts and to map pathways for the improvement of beyond-CMOS computing.

  6. Algebraic Multigrid Benchmark

    2013-05-06

    AMG2013 is a parallel algebraic multigrid solver for linear systems arising from problems on unstructured grids. It has been derived directly from the Boomer AMG solver in the hypre library, a large linear solvers library that is being developed in the Center for Applied Scientific Computing (CASC) at LLNL. The driver provided in the benchmark can build various test problems. The default problem is a Laplace type problem on an unstructured domain with various jumps and an anisotropy in one part.

  7. 2001 benchmarking guide.

    PubMed

    Hoppszallern, S

    2001-01-01

    Our fifth annual guide to benchmarking under managed care presents data that is a study in market dynamics and adaptation. New this year are financial indicators on HMOs exiting the market and those remaining. Hospital financial ratios and details on department performance are included. The physician group practice numbers show why physicians are scrutinizing capitated payments. Overall, hospitals in markets with high managed care penetration are more successful in managing labor costs and show productivity gains in imaging services, physical therapy and materials management.

  8. Self-benchmarking Guide for Data Centers: Metrics, Benchmarks, Actions

    SciTech Connect

    Mathew, Paul; Ganguly, Srirupa; Greenberg, Steve; Sartor, Dale

    2009-07-13

    This guide describes energy efficiency metrics and benchmarks that can be used to track the performance of and identify potential opportunities to reduce energy use in data centers. This guide is primarily intended for personnel who have responsibility for managing energy use in existing data centers - including facilities managers, energy managers, and their engineering consultants. Additionally, data center designers may also use the metrics and benchmarks described in this guide for goal-setting in new construction or major renovation. This guide provides the following information: (1) A step-by-step outline of the benchmarking process. (2) A set of performance metrics for the whole building as well as individual systems. For each metric, the guide provides a definition, performance benchmarks, and potential actions that can be inferred from evaluating this metric. (3) A list and descriptions of the data required for computing the metrics. This guide is complemented by spreadsheet templates for data collection and for computing the benchmarking metrics. This guide builds on prior data center benchmarking studies supported by the California Energy Commission. Much of the benchmarking data are drawn from the LBNL data center benchmarking database that was developed from these studies. Additional benchmark data were obtained from engineering experts including facility designers and energy managers. This guide also builds on recent research supported by the U.S. Department of Energy's Save Energy Now program.
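
    The abstract does not list the guide's specific metrics, so purely as an illustration of a whole-building data-center metric, the sketch below computes power usage effectiveness (PUE), a standard ratio of total facility power to IT equipment power; the input values are made up.

    ```python
    # Hedged example: PUE is assumed here as a representative whole-building
    # data-center metric; it is not confirmed to be one of the guide's metrics.
    def pue(total_facility_kw, it_equipment_kw):
        """PUE = total facility power / IT equipment power (lower is better)."""
        return total_facility_kw / it_equipment_kw

    # Invented example readings, e.g. from utility and PDU meters.
    print(f"PUE = {pue(total_facility_kw=1500.0, it_equipment_kw=900.0):.2f}")
    ```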

  9. Benchmarking concentrating photovoltaic systems

    NASA Astrophysics Data System (ADS)

    Duerr, Fabian; Muthirayan, Buvaneshwari; Meuret, Youri; Thienpont, Hugo

    2010-08-01

    Integral to photovoltaics is the need to provide improved economic viability. To achieve this goal, photovoltaic technology has to be able to harness more light at less cost. A large variety of concentrating photovoltaic concepts has been proposed and pursued. To obtain a detailed profitability analysis, a flexible evaluation is crucial for benchmarking the cost-performance of this variety of concentrating photovoltaic concepts. A time- and capital-saving way to estimate the cost-performance of a complete solar energy system is computer-aided modeling. In this work a benchmark tool is introduced based on a modular programming concept. The overall implementation is done in MATLAB, whereas the Advanced Systems Analysis Program (ASAP) is used for ray tracing calculations. This allows for a flexible and extendable structuring of all important modules, namely advanced source modeling including time and location dependence, and advanced optical analysis of various optical designs, to obtain an evaluation of the figure of merit. An important figure of merit, the energy yield for a given photovoltaic system at a geographical position over a specific period, can thus be calculated.
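
    As a minimal illustration of the energy-yield figure of merit named above, the sketch below integrates a system's output power over time; the hourly power trace is invented, and a real evaluation would derive it from the source model and ray tracing described in the abstract.

    ```python
    # Hedged sketch of an energy-yield figure of merit: integrate output
    # power over a period. The hourly trace below is invented example data.
    hourly_power_kw = [0.0, 0.4, 1.2, 2.1, 2.6, 2.4, 1.5, 0.6, 0.0]

    def energy_yield_kwh(power_kw, dt_hours=1.0):
        # Rectangle-rule integration of power (kW) over time (h) -> energy (kWh).
        return sum(p * dt_hours for p in power_kw)

    print(f"daily energy yield = {energy_yield_kwh(hourly_power_kw):.1f} kWh")
    ```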

  10. A new numerical benchmark of a freshwater lens

    NASA Astrophysics Data System (ADS)

    Stoeckl, L.; Walther, M.; Graf, T.

    2016-04-01

    A numerical benchmark for 2-D variable-density flow and solute transport in a freshwater lens is presented. The benchmark is based on results of laboratory experiments conducted by Stoeckl and Houben (2012) using a sand tank on the meter scale. The benchmark describes the formation and degradation over time of a freshwater lens of the kind found under real-world islands. An error analysis gave an appropriate spatial and temporal discretization of 1 mm and 8.64 s, respectively. The calibrated parameter set was obtained using the parameter estimation tool PEST. Comparing density-coupled and density-uncoupled results showed that the freshwater-saltwater interface position is strongly dependent on density differences. A benchmark that adequately represents saltwater intrusion and includes realistic features of coastal aquifers or freshwater lenses was previously lacking; this new benchmark fills that gap and is demonstrated to be suitable for testing variable-density groundwater models applied to saltwater intrusion investigations.

  11. Quantum benchmarks for pure single-mode Gaussian states.

    PubMed

    Chiribella, Giulio; Adesso, Gerardo

    2014-01-10

    Teleportation and storage of continuous variable states of light and atoms are essential building blocks for the realization of large-scale quantum networks. Rigorous validation of these implementations requires identifying, and surpassing, benchmarks set by the most effective strategies attainable without the use of quantum resources. Such benchmarks have been established for special families of input states, like coherent states and particular subclasses of squeezed states. Here we solve the longstanding problem of defining quantum benchmarks for general pure Gaussian single-mode states with arbitrary phase, displacement, and squeezing, randomly sampled according to a realistic prior distribution. As a special case, we show that the fidelity benchmark for teleporting squeezed states with totally random phase and squeezing degree is 1/2, equal to the corresponding one for coherent states. We discuss the use of entangled resources to beat the benchmarks in experiments. PMID:24483875

  12. A Benchmarking Model. Benchmarking Quality Performance in Vocational Technical Education.

    ERIC Educational Resources Information Center

    Losh, Charles

    The Skills Standards Projects have provided further emphasis on the need for benchmarking U.S. vocational-technical education (VTE) against international competition. Benchmarking is an ongoing systematic process designed to identify, as quantitatively as possible, those practices that produce world class performance. Metrics are those things that…

  13. Self-benchmarking Guide for Cleanrooms: Metrics, Benchmarks, Actions

    SciTech Connect

    Mathew, Paul; Sartor, Dale; Tschudi, William

    2009-07-13

    This guide describes energy efficiency metrics and benchmarks that can be used to track the performance of and identify potential opportunities to reduce energy use in cleanrooms. This guide is primarily intended for personnel who have responsibility for managing energy use in existing cleanroom facilities - including facilities managers, energy managers, and their engineering consultants. Additionally, cleanroom planners and designers may also use the metrics and benchmarks described in this guide for goal-setting in new construction or major renovation. This guide provides the following information: (1) A step-by-step outline of the benchmarking process. (2) A set of performance metrics for the whole building as well as individual systems. For each metric, the guide provides a definition, performance benchmarks, and potential actions that can be inferred from evaluating this metric. (3) A list and descriptions of the data required for computing the metrics. This guide is complemented by spreadsheet templates for data collection and for computing the benchmarking metrics. This guide builds on prior research supported by the national Laboratories for the 21st Century (Labs21) program, supported by the U.S. Department of Energy and the U.S. Environmental Protection Agency. Much of the benchmarking data are drawn from the Labs21 benchmarking database and technical guides. Additional benchmark data were obtained from engineering experts including cleanroom designers and energy managers.

  14. Self-benchmarking Guide for Laboratory Buildings: Metrics, Benchmarks, Actions

    SciTech Connect

    Mathew, Paul; Greenberg, Steve; Sartor, Dale

    2009-07-13

    This guide describes energy efficiency metrics and benchmarks that can be used to track the performance of and identify potential opportunities to reduce energy use in laboratory buildings. This guide is primarily intended for personnel who have responsibility for managing energy use in existing laboratory facilities - including facilities managers, energy managers, and their engineering consultants. Additionally, laboratory planners and designers may also use the metrics and benchmarks described in this guide for goal-setting in new construction or major renovation. This guide provides the following information: (1) A step-by-step outline of the benchmarking process. (2) A set of performance metrics for the whole building as well as individual systems. For each metric, the guide provides a definition, performance benchmarks, and potential actions that can be inferred from evaluating this metric. (3) A list and descriptions of the data required for computing the metrics. This guide is complemented by spreadsheet templates for data collection and for computing the benchmarking metrics. This guide builds on prior research supported by the national Laboratories for the 21st Century (Labs21) program, supported by the U.S. Department of Energy and the U.S. Environmental Protection Agency. Much of the benchmarking data are drawn from the Labs21 benchmarking database and technical guides. Additional benchmark data were obtained from engineering experts including laboratory designers and energy managers.

  15. Aeroelasticity Benchmark Assessment: Subsonic Fixed Wing Program

    NASA Technical Reports Server (NTRS)

    Florance, Jennifer P.; Chwalowski, Pawel; Wieseman, Carol D.

    2010-01-01

    The fundamental technical challenge in computational aeroelasticity is the accurate prediction of unsteady aerodynamic phenomena and the effect on the aeroelastic response of a vehicle. Currently, a benchmarking standard for use in validating the accuracy of computational aeroelasticity codes does not exist. Many aeroelastic data sets have been obtained in wind-tunnel and flight testing throughout the world; however, none have been globally presented or accepted as an ideal data set. There are numerous reasons for this. One reason is that often, such aeroelastic data sets focus on the aeroelastic phenomena alone (flutter, for example) and do not contain associated information such as unsteady pressures and time-correlated structural dynamic deflections. Other available data sets focus solely on the unsteady pressures and do not address the aeroelastic phenomena. Other discrepancies can include omission of relevant data, such as flutter frequency and/or the acquisition of only qualitative deflection data. In addition to these content deficiencies, all of the available data sets present both experimental and computational technical challenges. Experimental issues include facility influences, nonlinearities beyond those being modeled, and data processing. From the computational perspective, technical challenges include modeling geometric complexities, coupling between the flow and the structure, grid issues, and boundary conditions. The Aeroelasticity Benchmark Assessment task seeks to examine the existing potential experimental data sets and ultimately choose the one that is viewed as the most suitable for computational benchmarking. An initial computational evaluation of that configuration will then be performed using the Langley-developed computational fluid dynamics (CFD) software FUN3D as part of its code validation process. In addition to the benchmarking activity, this task also includes an examination of future research directions. Researchers within the

  16. Benchmark Quantum Mechanical Calculations of Vibrationally Resolved Cross Sections and Rate Constants on ab Initio Potential Energy Surfaces for the F + HD Reaction: Comparisons with Experiments.

    PubMed

    De Fazio, Dario; Cavalli, Simonetta; Aquilanti, Vincenzo

    2016-07-14

    Quantum scattering calculations within the time-independent approach in an extended interval of energies were performed for the title reaction on four ab initio potential energy surfaces. The calculated integral cross sections, vibrational branching ratios, and rate constants are compared with scattering experiments as well as with chemical kinetics rate data available for this system for both the HF and DF channels. The calculations on the CSZ (J. Chem. Phys. 2015, 142, 024303) and LWAL (J. Chem. Phys. 2007, 127, 174302) surfaces are in close agreement with each other and satisfactorily reproduce the experimental measurements. The agreement with the experiments is improved with respect to calculations on the earlier SW (J. Chem. Phys. 1996, 104, 6515) and FXZ (J. Chem. Phys. 2008, 129, 011103) surfaces. The results presented here attest to the remarkable progress made by quantum chemistry calculations in describing the interatomic interactions governing the dynamics and kinetics of this reaction. They also suggest that comparison with translationally and rotationally averaged experimental observables is not sufficient to assess the relative accuracy of highly accurate potential energy surfaces. The dynamics and kinetics calculations show that temperatures lower than 50 K or a molecular beam energy spread below 1 meV must be reached to discriminate between the accuracy of the LWAL and CSZ surfaces.

  17. SPEEDES benchmarking analysis

    NASA Astrophysics Data System (ADS)

    Capella, Sebastian J.; Steinman, Jeffrey S.; McGraw, Robert M.

    2002-07-01

    SPEEDES, the Synchronous Parallel Environment for Emulation and Discrete Event Simulation, is a software framework that supports simulation applications across parallel and distributed architectures. SPEEDES is used as a simulation engine in support of numerous defense projects including the Joint Simulation System (JSIMS), the Joint Modeling And Simulation System (JMASS), the High Performance Computing and Modernization Program's (HPCMP) development of a High Performance Computing (HPC) Run-time Infrastructure, and the Defense Modeling and Simulation Office's (DMSO) development of a Human Behavioral Representation (HBR) Testbed. This work documents some of the performance metrics obtained from benchmarking the SPEEDES Simulation Framework with respect to the functionality available in the summer of 2001. Specifically, this paper examines the scalability of SPEEDES with respect to its time management algorithms and simulation-object event queues as the number of simulated objects and processed events grows.
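
    The event-queue scaling question can be illustrated with a toy discrete-event loop built on a priority queue; this is a generic sketch of the measurement, not SPEEDES code.

    # Toy discrete-event loop for measuring event-queue cost as the
    # number of simulated objects grows. A generic sketch, not SPEEDES.
    import heapq, random, time

    def run(num_objects: int, num_events: int) -> float:
        """Time the processing of num_events through a (time, object) heap."""
        queue = [(random.random(), obj) for obj in range(num_objects)]
        heapq.heapify(queue)
        start = time.perf_counter()
        for _ in range(num_events):
            t, obj = heapq.heappop(queue)                      # next event in time order
            heapq.heappush(queue, (t + random.random(), obj))  # schedule a follow-up
        return time.perf_counter() - start

    for n in (1_000, 10_000, 100_000):
        print(f"{n:>7} objects: {run(n, 100_000):.3f} s for 100k events")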

  18. Benchmarking foreign electronics technologies

    SciTech Connect

    Bostian, C.W.; Hodges, D.A.; Leachman, R.C.; Sheridan, T.B.; Tsang, W.T.; White, R.M.

    1994-12-01

    This report has been drafted in response to a request from the Japanese Technology Evaluation Center's (JTEC) Panel on Benchmarking Select Technologies. Since April 1991, the Competitive Semiconductor Manufacturing (CSM) Program at the University of California at Berkeley has been engaged in a detailed study of quality, productivity, and competitiveness in semiconductor manufacturing worldwide. The program is a joint activity of the College of Engineering, the Haas School of Business, and the Berkeley Roundtable on the International Economy, under sponsorship of the Alfred P. Sloan Foundation, and with the cooperation of semiconductor producers from Asia, Europe and the United States. Professors David A. Hodges and Robert C. Leachman are the project's Co-Directors. The present report for JTEC is primarily based on data and analysis drawn from that continuing program. The CSM program is being conducted by faculty, graduate students and research staff from UC Berkeley's Schools of Engineering and Business, and Department of Economics. Many of the participating firms are represented on the program's Industry Advisory Board. The Board played an important role in defining the research agenda. A pilot study was conducted in 1991 with the cooperation of three semiconductor plants. The research plan and survey documents were thereby refined. The main phase of the CSM benchmarking study began in mid-1992 and will continue at least through 1997. Reports are presented on the manufacture of integrated circuits; data storage; wireless technology; human-machine interfaces; and optoelectronics. Selected papers are indexed separately for inclusion in the Energy Science and Technology Database.

  19. Benchmarking monthly homogenization algorithms

    NASA Astrophysics Data System (ADS)

    Venema, V. K. C.; Mestre, O.; Aguilar, E.; Auer, I.; Guijarro, J. A.; Domonkos, P.; Vertacnik, G.; Szentimrey, T.; Stepanek, P.; Zahradnicek, P.; Viarre, J.; Müller-Westermeier, G.; Lakatos, M.; Williams, C. N.; Menne, M.; Lindau, R.; Rasol, D.; Rustemeier, E.; Kolokythas, K.; Marinova, T.; Andresen, L.; Acquaotta, F.; Fratianni, S.; Cheval, S.; Klancar, M.; Brunetti, M.; Gruber, C.; Prohom Duran, M.; Likso, T.; Esteban, P.; Brandsma, T.

    2011-08-01

    The COST (European Cooperation in Science and Technology) Action ES0601: Advances in homogenization methods of climate series: an integrated approach (HOME) has executed a blind intercomparison and validation study for monthly homogenization algorithms. Time series of monthly temperature and precipitation were evaluated because of their importance for climate studies and because they represent two important types of statistics (additive and multiplicative). The algorithms were validated against a realistic benchmark dataset. The benchmark contains real inhomogeneous data as well as simulated data with inserted inhomogeneities. Random break-type inhomogeneities were added to the simulated datasets modeled as a Poisson process with normally distributed breakpoint sizes. To approximate real world conditions, breaks were introduced that occur simultaneously in multiple station series within a simulated network of station data. The simulated time series also contained outliers, missing data periods and local station trends. Further, a stochastic nonlinear global (network-wide) trend was added. Participants provided 25 separate homogenized contributions as part of the blind study as well as 22 additional solutions submitted after the details of the imposed inhomogeneities were revealed. These homogenized datasets were assessed by a number of performance metrics including (i) the centered root mean square error relative to the true homogeneous value at various averaging scales, (ii) the error in linear trend estimates and (iii) traditional contingency skill scores. The metrics were computed using both the individual station series and the network average regional series. The performance of the contributions depends significantly on the error metric considered. Contingency scores by themselves are not very informative. Although relative homogenization algorithms typically improve the homogeneity of temperature data, only the best ones improve precipitation data.
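
    The first two performance metrics are straightforward to state in code: the centered root mean square error removes each series' mean before comparison, and the trend error compares fitted linear slopes. The sketch below uses synthetic NumPy arrays rather than the HOME benchmark's data format.

    # Centered RMSE and linear-trend error of a homogenized series
    # against the true homogeneous series. Synthetic data; a sketch of
    # the metrics, not the HOME benchmark code.
    import numpy as np

    def centered_rmse(homogenized: np.ndarray, truth: np.ndarray) -> float:
        """RMSE after removing each series' mean (bias-insensitive)."""
        anomalies = (homogenized - homogenized.mean()) - (truth - truth.mean())
        return float(np.sqrt(np.mean(anomalies ** 2)))

    def trend_error(homogenized, truth, years) -> float:
        """Difference in fitted linear trends, in units per year."""
        return float(np.polyfit(years, homogenized, 1)[0]
                     - np.polyfit(years, truth, 1)[0])

    years = np.arange(1950, 2010)
    truth = 0.01 * (years - 1950) + np.random.default_rng(0).normal(0, 0.3, years.size)
    homog = truth + 0.1   # a constant offset: centered RMSE stays ~0
    print(centered_rmse(homog, truth), trend_error(homog, truth, years))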

  20. Internal Benchmarking for Institutional Effectiveness

    ERIC Educational Resources Information Center

    Ronco, Sharron L.

    2012-01-01

    Internal benchmarking is an established practice in business and industry for identifying best in-house practices and disseminating the knowledge about those practices to other groups in the organization. Internal benchmarking can be done with structures, processes, outcomes, or even individuals. In colleges or universities with multicampuses or a…

  1. Benchmark for Strategic Performance Improvement.

    ERIC Educational Resources Information Center

    Gohlke, Annette

    1997-01-01

    Explains benchmarking, a total quality management tool used to measure and compare the work processes in a library with those in other libraries to increase library performance. Topics include the main groups of upper management, clients, and staff; critical success factors for each group; and benefits of benchmarking. (Author/LRW)

  2. The NAS kernel benchmark program

    NASA Technical Reports Server (NTRS)

    Bailey, D. H.; Barton, J. T.

    1985-01-01

    A collection of benchmark test kernels that measure supercomputer performance has been developed for the use of the NAS (Numerical Aerodynamic Simulation) program at the NASA Ames Research Center. This benchmark program is described in detail and the specific ground rules are given for running the program as a performance test.

  3. FireHose Streaming Benchmarks

    2015-01-27

    The FireHose Streaming Benchmarks are a suite of stream-processing benchmarks defined to enable comparison of streaming software and hardware, both quantitatively vis-a-vis the rate at which they can process data, and qualitatively by judging the effort involved to implement and run the benchmarks. Each benchmark has two parts. The first is a generator which produces and outputs datums at a high rate in a specific format. The second is an analytic which reads the stream of datums and is required to perform a well-defined calculation on the collection of datums, typically to find anomalous datums that have been created in the stream by the generator. The FireHose suite provides code for the generators, sample code for the analytics (which users are free to re-implement in their own custom frameworks), and a precise definition of each benchmark calculation.

  4. FireHose Streaming Benchmarks

    SciTech Connect

    Karl Anderson, Steve Plimpton

    2015-01-27

    The FireHose Streaming Benchmarks are a suite of stream-processing benchmarks defined to enable comparison of streaming software and hardware, both quantitatively vis-a-vis the rate at which they can process data, and qualitatively by judging the effort involved to implement and run the benchmarks. Each benchmark has two parts. The first is a generator which produces and outputs datums at a high rate in a specific format. The second is an analytic which reads the stream of datums and is required to perform a well-defined calculation on the collection of datums, typically to find anomalous datums that have been created in the stream by the generator. The FireHose suite provides code for the generators, sample code for the analytics (which users are free to re-implement in their own custom frameworks), and a precise definition of each benchmark calculation.
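
    The generator/analytic split described above is easy to sketch generically: a biased source of keyed datums and a consumer that flags keys occurring far more often than average. The datum format and anomaly rule below are illustrative assumptions, not the FireHose specification.

    # Generic generator/analytic pair in the FireHose style. The datum
    # format and anomaly rule are illustrative, not the FireHose spec.
    import random
    from collections import Counter

    def generator(num_datums: int, num_keys: int = 1000, hot_key: int = 7):
        """Emit (key, value) datums, with one key biased to be anomalous."""
        for _ in range(num_datums):
            key = hot_key if random.random() < 0.01 else random.randrange(num_keys)
            yield key, random.random()

    def analytic(stream, factor: float = 5.0):
        """Flag keys whose count exceeds factor times the mean count."""
        counts = Counter(key for key, _value in stream)
        mean = sum(counts.values()) / len(counts)
        return [k for k, c in counts.items() if c >= factor * mean]

    print("anomalous keys:", analytic(generator(100_000)))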

  5. International land Model Benchmarking (ILAMB) Package v001.00

    DOE Data Explorer

    Mu, Mingquan [University of California, Irvine; Randerson, James T. [University of California, Irvine; Riley, William J. [Lawrence Berkeley National Laboratory; Hoffman, Forrest M. [Oak Ridge National Laboratory

    2016-05-02

    As a contribution to the International Land Model Benchmarking (ILAMB) Project, we are providing new analysis approaches, benchmarking tools, and science leadership. The goal of ILAMB is to assess and improve the performance of land models through international cooperation and to inform the design of new measurement campaigns and field studies to reduce uncertainties associated with key biogeochemical processes and feedbacks. ILAMB is expected to be a primary analysis tool for CMIP6 and future model-data intercomparison experiments. This team has developed initial prototype benchmarking systems for ILAMB, which will be improved and extended to include ocean model metrics and diagnostics.

  6. International land Model Benchmarking (ILAMB) Package v002.00

    DOE Data Explorer

    Collier, Nathaniel [Oak Ridge National Laboratory; Hoffman, Forrest M. [Oak Ridge National Laboratory; Mu, Mingquan [University of California, Irvine; Randerson, James T. [University of California, Irvine; Riley, William J. [Lawrence Berkeley National Laboratory

    2016-05-09

    As a contribution to the International Land Model Benchmarking (ILAMB) Project, we are providing new analysis approaches, benchmarking tools, and science leadership. The goal of ILAMB is to assess and improve the performance of land models through international cooperation and to inform the design of new measurement campaigns and field studies to reduce uncertainties associated with key biogeochemical processes and feedbacks. ILAMB is expected to be a primary analysis tool for CMIP6 and future model-data intercomparison experiments. This team has developed initial prototype benchmarking systems for ILAMB, which will be improved and extended to include ocean model metrics and diagnostics.

  7. Three-Dimensional (X,Y,Z) Deterministic Analysis of the PCA-Replica Neutron Shielding Benchmark Experiment using the TORT-3.2 Code and Group Cross Section Libraries for LWR Shielding and Pressure Vessel Dosimetry

    NASA Astrophysics Data System (ADS)

    Pescarini, Massimo; Orsi, Roberto; Frisoni, Manuela

    2016-02-01

    The PCA-Replica 12/13 (H2O/Fe) neutron shielding benchmark experiment was analysed using the ORNL TORT-3.2 3D SN code. PCA-Replica, specifically conceived to test the accuracy of nuclear data and transport codes employed in LWR shielding and radiation damage calculations, reproduces a PWR ex-core radial geometry with alternate layers of water and steel including a PWR pressure vessel simulator. Three broad-group coupled neutron/photon working cross section libraries in FIDO-ANISN format with the same energy group structure (47 n + 20 γ) and based on different nuclear data were alternatively used: the ENEA BUGJEFF311.BOLIB (JEFF-3.1.1) and BUGENDF70.BOLIB (ENDF/B-VII.0) libraries and the ORNL BUGLE-96 (ENDF/B-VI.3) library. Dosimeter cross sections derived from the IAEA IRDF-2002 dosimetry file were employed. The calculated reaction rates for the Rh-103(n,n')Rh-103m, In-115(n,n')In-115m and S-32(n,p)P-32 threshold activation dosimeters and the calculated neutron spectra are compared with the corresponding experimental results.

  8. Validation of the BUGJEFF311.BOLIB, BUGENDF70.BOLIB and BUGLE-B7 broad-group libraries on the PCA-Replica (H2O/Fe) neutron shielding benchmark experiment

    NASA Astrophysics Data System (ADS)

    Pescarini, Massimo; Orsi, Roberto; Frisoni, Manuela

    2016-03-01

    The PCA-Replica 12/13 (H2O/Fe) neutron shielding benchmark experiment was analysed using the TORT-3.2 3D SN code. PCA-Replica reproduces a PWR ex-core radial geometry with alternate layers of water and steel including a pressure vessel simulator. Three broad-group coupled neutron/photon working cross section libraries in FIDO-ANISN format with the same energy group structure (47 n + 20 γ) and based on different nuclear data were alternatively used: the ENEA BUGJEFF311.BOLIB (JEFF-3.1.1) and BUGENDF70.BOLIB (ENDF/B-VII.0) libraries and the ORNL BUGLE-B7 (ENDF/B-VII.0) library. Dosimeter cross sections derived from the IAEA IRDF-2002 dosimetry file were employed. The calculated reaction rates for the Rh-103(n,n')Rh-103m, In-115(n,n')In-115m and S-32(n,p)P-32 threshold activation dosimeters and the calculated neutron spectra are compared with the corresponding experimental results.

  9. Benchmarking in water project analysis

    NASA Astrophysics Data System (ADS)

    Griffin, Ronald C.

    2008-11-01

    The with/without principle of cost-benefit analysis is examined for the possible bias that it brings to water resource planning. Theory and examples for this question are established. Because benchmarking against the demonstrably low without-project hurdle can detract from economic welfare and can fail to promote efficient policy, improvement opportunities are investigated. In lieu of the traditional, without-project benchmark, a second-best-based "difference-making benchmark" is proposed. The project authorizations and modified review processes instituted by the U.S. Water Resources Development Act of 2007 may provide for renewed interest in these findings.

  10. Issues in Benchmark Metric Selection

    NASA Astrophysics Data System (ADS)

    Crolotte, Alain

    It is true that a metric can influence a benchmark but will esoteric metrics create more problems than they will solve? We answer this question affirmatively by examining the case of the TPC-D metric which used the much debated geometric mean for the single-stream test. We will show how a simple choice influenced the benchmark and its conduct and, to some extent, DBMS development. After examining other alternatives our conclusion is that the “real” measure for a decision-support benchmark is the arithmetic mean.
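
    The debate is easy to make concrete: the geometric mean rewards uniformly balanced performance, while the arithmetic mean weights absolute times, so the two can rank systems differently. A two-system numeric illustration with made-up per-query times follows.

    # Why the metric choice matters: geometric vs. arithmetic mean of
    # per-query times. The numbers are made up for illustration.
    from math import prod

    times_a = [1.0, 1.0, 1.0, 1.0]   # uniform on every query
    times_b = [0.1, 0.1, 0.1, 3.7]   # fast on most queries, slow on one

    for name, t in (("A", times_a), ("B", times_b)):
        arith = sum(t) / len(t)
        geo = prod(t) ** (1.0 / len(t))
        print(f"system {name}: arithmetic mean = {arith:.2f}, geometric mean = {geo:.2f}")
    # Both systems tie under the arithmetic mean (1.00), but B wins
    # under the geometric mean (~0.25): the distortion debated for TPC-D.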

  11. A Heterogeneous Medium Analytical Benchmark

    SciTech Connect

    Ganapol, B.D.

    1999-09-27

    A benchmark, called benchmark BLUE, has been developed for one-group neutral particle (neutron or photon) transport in a one-dimensional sub-critical heterogeneous plane parallel medium with surface illumination. General anisotropic scattering is accommodated through the Green's Function Method (GFM). Numerical Fourier transform inversion is used to generate the required Green's functions which are kernels to coupled integral equations that give the exiting angular fluxes. The interior scalar flux is then obtained through quadrature. A compound iterative procedure for quadrature order and slab surface source convergence provides highly accurate benchmark-quality results (4 to 5 places of accuracy).

  12. California commercial building energy benchmarking

    SciTech Connect

    Kinney, Satkartar; Piette, Mary Ann

    2003-07-01

    Building energy benchmarking is the comparison of whole-building energy use relative to a set of similar buildings. It provides a useful starting point for individual energy audits and for targeting buildings for energy-saving measures in multiple-site audits. Benchmarking is of interest and practical use to a number of groups. Energy service companies and performance contractors communicate energy savings potential with ''typical'' and ''best-practice'' benchmarks while control companies and utilities can provide direct tracking of energy use and combine data from multiple buildings. Benchmarking is also useful in the design stage of a new building or retrofit to determine if a design is relatively efficient. Energy managers and building owners have an ongoing interest in comparing energy performance to others. Large corporations, schools, and government agencies with numerous facilities also use benchmarking methods to compare their buildings to each other. The primary goal of Task 2.1.1 Web-based Benchmarking was the development of a web-based benchmarking tool, dubbed Cal-Arch, for benchmarking energy use in California commercial buildings. While there were several other benchmarking tools available to California consumers prior to the development of Cal-Arch, there were none that were based solely on California data. Most available benchmarking information, including the Energy Star performance rating, were developed using DOE's Commercial Building Energy Consumption Survey (CBECS), which does not provide state-level data. Each database and tool has advantages as well as limitations, such as the number of buildings and the coverage by type, climate regions and end uses. There is considerable commercial interest in benchmarking because it provides an inexpensive method of screening buildings for tune-ups and retrofits. However, private companies who collect and manage consumption data are concerned that the identities of building owners might be revealed and

  13. The impact and applicability of critical experiment evaluations

    SciTech Connect

    Brewer, R.

    1997-06-01

    This paper very briefly describes a project to evaluate previously performed critical experiments. The evaluation is intended for use by criticality safety engineers to verify calculations, and may also be used to identify data which need further investigation. The evaluation process is briefly outlined; the accepted benchmark critical experiments will be used as a standard for verification and validation. The end result of the project will be a comprehensive reference document.

  14. Model Uncertainty and Bayesian Model Averaged Benchmark Dose Estimation for Continuous Data

    EPA Science Inventory

    The benchmark dose (BMD) approach has gained acceptance as a valuable risk assessment tool, but risk assessors still face significant challenges associated with selecting an appropriate BMD/BMDL estimate from the results of a set of acceptable dose-response models. Current approa...

  15. Benchmarking without ground truth

    NASA Astrophysics Data System (ADS)

    Santini, Simone

    2006-01-01

    Many evaluation techniques for content based image retrieval are based on the availability of a ground truth, that is on a "correct" categorization of images so that, say, if the query image is of category A, only the returned images in category A will be considered as "hits." Based on such a ground truth, standard information retrieval measures such as precision and recall are given and used to evaluate and compare retrieval algorithms. Coherently, the assemblers of benchmarking data bases go to a certain length to have their images categorized. The assumption of the existence of a ground truth is, in many respects, naive. It is well known that the categorization of the images depends on the a priori (from the point of view of such categorization) subdivision of the semantic field in which the images are placed (a trivial observation: a plant subdivision for a botanist is very different from that for a layperson). Even within a given semantic field, however, categorization by human subjects is subject to uncertainty, and it makes little statistical sense to consider the categorization given by one person as the unassailable ground truth. In this paper I propose two evaluation techniques that apply to the case in which the ground truth is subject to uncertainty. In this case, obviously, measures such as precision and recall will themselves be subject to uncertainty. The paper will explore the relation between the uncertainty in the ground truth and that in the most commonly used evaluation measures, so that the measurements done on a given system can preserve statistical significance.
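
    One simple way to propagate label uncertainty into an evaluation measure is to resample the categorization and report the spread of the resulting scores. The sketch below assumes each image carries a probability distribution over categories; it is a generic illustration, not the two techniques proposed in the paper.

    # Precision under uncertain ground truth: resample category labels
    # from per-image distributions and report the spread. A generic
    # illustration, not the paper's proposed techniques.
    import random
    import statistics

    def precision_samples(returned, label_probs, query_cat, n_samples=1000):
        """label_probs[i] maps category -> probability for image i."""
        samples = []
        for _ in range(n_samples):
            hits = sum(
                1 for img in returned
                if random.choices(list(label_probs[img]),
                                  weights=list(label_probs[img].values()))[0] == query_cat
            )
            samples.append(hits / len(returned))
        return samples

    label_probs = {0: {"A": 0.9, "B": 0.1}, 1: {"A": 0.5, "B": 0.5}, 2: {"B": 1.0}}
    s = precision_samples(returned=[0, 1, 2], label_probs=label_probs, query_cat="A")
    print(f"precision = {statistics.mean(s):.2f} (sd {statistics.stdev(s):.2f})")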

  16. Experience of maltreatment as a child and acceptance of violence in adult intimate relationships: mediating effects of distortions in cognitive schemas.

    PubMed

    Ponce, Allison N; Williams, Michelle K; Allen, George J

    2004-02-01

    Links exist between being subjected to maltreatment as a child and tendencies to accept violence as normative in adult relationships. Constructivist Self Development Theory suggests that such relationships may be affected by "cognitive disruptions" in "self" and "other" schemas. Mediating effects of distorted cognitive schemas on the association between history of child maltreatment and the acceptance of violence in intimate interpersonal relationships were investigated among 433 men and women. Outcomes indicated that individuals who reported childhood maltreatment were more likely to display distortions in their cognitive schemas and those individuals with disrupted schemas were more likely to accept relationship violence. Least-square multiple regression analyses revealed that distorted beliefs fully mediated the relationship between reporting childhood maltreatment and acceptance of violence, for both men and women. Subsidiary analyses suggested that this full mediation was replicated for schemas involving the self but not for schemas about others.

  17. Benchmarking hypercube hardware and software

    NASA Technical Reports Server (NTRS)

    Grunwald, Dirk C.; Reed, Daniel A.

    1986-01-01

    It was long a truism in computer systems design that balanced systems achieve the best performance. Message passing parallel processors are no different. To quantify the balance of a hypercube design, an experimental methodology was developed and the associated suite of benchmarks was applied to several existing hypercubes. The benchmark suite includes tests of both processor speed in the absence of internode communication and message transmission speed as a function of communication patterns.

  18. Data-Intensive Benchmarking Suite

    2008-11-26

    The Data-Intensive Benchmark Suite is a set of programs written for the study of data- or storage-intensive science and engineering problems. The benchmark sets cover: general graph searching (basic and Hadoop Map/Reduce breadth-first search), genome sequence searching, HTTP request classification (basic and Hadoop Map/Reduce), low-level data communication, and storage device micro-benchmarking.
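
    The graph-searching component reduces to a breadth-first traversal kernel; a minimal sketch of the basic (non-Hadoop) variant over a toy adjacency list follows.

    # Minimal breadth-first search kernel of the kind used in graph-search
    # benchmarks. The adjacency list is a toy stand-in for generated graphs.
    from collections import deque

    def bfs_levels(adj, source):
        """Return the BFS level of every vertex reachable from source."""
        level = {source: 0}
        frontier = deque([source])
        while frontier:
            v = frontier.popleft()
            for w in adj[v]:
                if w not in level:          # first visit fixes the level
                    level[w] = level[v] + 1
                    frontier.append(w)
        return level

    adj = {0: [1, 2], 1: [3], 2: [3], 3: [4], 4: []}
    print(bfs_levels(adj, 0))  # {0: 0, 1: 1, 2: 1, 3: 2, 4: 3}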

  19. Research on computer systems benchmarking

    NASA Technical Reports Server (NTRS)

    Smith, Alan Jay (Principal Investigator)

    1996-01-01

    This grant addresses the topic of research on computer systems benchmarking and is more generally concerned with performance issues in computer systems. This report reviews work in those areas during the period of NASA support under this grant. The bulk of the work performed concerned benchmarking and analysis of CPUs, compilers, caches, and benchmark programs. The first part of this work concerned the issue of benchmark performance prediction. A new approach to benchmarking and machine characterization was reported, using a machine characterizer that measures the performance of a given system in terms of a Fortran abstract machine. Another report focused on analyzing compiler performance. The performance impact of optimization in the context of our methodology for CPU performance characterization was based on the abstract machine model. Benchmark programs are analyzed in another paper. A machine-independent model of program execution was developed to characterize both machine performance and program execution. By merging these machine and program characterizations, execution time can be estimated for arbitrary machine/program combinations. The work was continued into the domain of parallel and vector machines, including the issue of caches in vector processors and multiprocessors. All of the afore-mentioned accomplishments are more specifically summarized in this report, as well as those smaller in magnitude supported by this grant.
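
    The merging step described (a machine characterization combined with a program characterization to estimate execution time) is essentially a dot product of per-operation times and dynamic operation counts. The operation names and numbers below are hypothetical, in the spirit of the Fortran abstract-machine approach rather than its actual parameter set.

    # Predicted runtime as the dot product of a machine characterization
    # (seconds per abstract operation) and a program characterization
    # (dynamic operation counts). Names and values are hypothetical.
    machine = {  # seconds per operation, measured once per system
        "fp_add": 2.0e-9, "fp_mul": 2.5e-9, "mem_load": 4.0e-9, "branch": 1.0e-9,
    }
    program = {  # dynamic operation counts, measured once per program
        "fp_add": 5.0e9, "fp_mul": 4.0e9, "mem_load": 9.0e9, "branch": 1.0e9,
    }

    predicted = sum(machine[op] * program[op] for op in machine)
    print(f"predicted runtime: {predicted:.1f} s")  # 57.0 s for these numbers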

  20. Issues in Benchmarking Human Reliability Analysis Methods: A Literature Review

    SciTech Connect

    Ronald L. Boring; Stacey M. L. Hendrickson; John A. Forester; Tuan Q. Tran; Erasmia Lois

    2010-06-01

    There is a diversity of human reliability analysis (HRA) methods available for use in assessing human performance within probabilistic risk assessments (PRA). Due to the significant differences in the methods, including the scope, approach, and underlying models, there is a need for an empirical comparison investigating the validity and reliability of the methods. To accomplish this empirical comparison, a benchmarking study comparing and evaluating HRA methods in assessing operator performance in simulator experiments is currently underway. In order to account for as many effects as possible in the construction of this benchmarking study, a literature review was conducted, reviewing past benchmarking studies in the areas of psychology and risk assessment. A number of lessons learned through these studies are presented in order to aid in the design of future HRA benchmarking endeavors.

  1. Combining Phase Identification and Statistic Modeling for Automated Parallel Benchmark Generation

    SciTech Connect

    Jin, Ye; Ma, Xiaosong; Liu, Qing Gary; Liu, Mingliang; Logan, Jeremy S; Podhorszki, Norbert; Choi, Jong Youl; Klasky, Scott A

    2015-01-01

    Parallel application benchmarks are indispensable for evaluating/optimizing HPC software and hardware. However, it is very challenging and costly to obtain high-fidelity benchmarks reflecting the scale and complexity of state-of-the-art parallel applications. Hand-extracted synthetic benchmarks are time- and labor-intensive to create. Real applications themselves, while offering the most accurate performance evaluation, are expensive to compile, port, reconfigure, and often plainly inaccessible due to security or ownership concerns. This work contributes APPRIME, a novel tool for trace-based automatic parallel benchmark generation. Taking as input standard communication-I/O traces of an application's execution, it couples accurate automatic phase identification with statistical regeneration of event parameters to create compact, portable, and to some degree reconfigurable parallel application benchmarks. Experiments with four NAS Parallel Benchmarks (NPB) and three real scientific simulation codes confirm the fidelity of APPRIME benchmarks. They retain the original applications' performance characteristics, in particular the relative performance across platforms.

  2. Benchmarking on Tsunami Currents with ComMIT

    NASA Astrophysics Data System (ADS)

    Sharghi vand, N.; Kanoglu, U.

    2015-12-01

    Before the 2004 Indian Ocean tsunami there were no standards for the validation and verification of tsunami numerical models. Even so, a number of numerical models had been used for inundation mapping efforts, evaluation of critical structures, etc., without validation and verification. After 2004, the NOAA Center for Tsunami Research (NCTR) established standards for the validation and verification of tsunami numerical models (Synolakis et al. 2008 Pure Appl. Geophys. 165, 2197-2228), which are used in the evaluation of critical structures such as nuclear power plants against tsunami attack. NCTR presented analytical, experimental and field benchmark problems aimed at estimating maximum runup, which were widely accepted by the community. Recently, benchmark problems were suggested by the US National Tsunami Hazard Mitigation Program Mapping & Modeling Benchmarking Workshop: Tsunami Currents on February 9-10, 2015 at Portland, Oregon, USA (http://nws.weather.gov/nthmp/index.html). These benchmark problems concentrate on the validation and verification of tsunami numerical models for tsunami currents. Three of the benchmark problems were: current measurement of the Japan 2011 tsunami in Hilo Harbor, Hawaii, USA and in Tauranga Harbor, New Zealand, and a single long-period wave propagating onto a small-scale experimental model of the town of Seaside, Oregon, USA. These benchmark problems were implemented in the Community Modeling Interface for Tsunamis (ComMIT) (Titov et al. 2011 Pure Appl. Geophys. 168, 2121-2131), which is a user-friendly interface to the validated and verified Method of Splitting Tsunami (MOST) (Titov and Synolakis 1995 J. Waterw. Port Coastal Ocean Eng. 121, 308-316) model and is developed by NCTR. The modeling results are compared with the required benchmark data, showing good agreement, and the results are discussed. Acknowledgment: The research leading to these results has received funding from the European Union's Seventh Framework Programme (FP7/2007-2013) under grant

  3. Improving Mass Balance Modeling of Benchmark Glaciers

    NASA Astrophysics Data System (ADS)

    van Beusekom, A. E.; March, R. S.; O'Neel, S.

    2009-12-01

    The USGS monitors long-term glacier mass balance at three benchmark glaciers in different climate regimes. The coastal and continental glaciers are represented by Wolverine and Gulkana Glaciers in Alaska, respectively. Field measurements began in 1966 and continue. We have reanalyzed the published balance time series with more modern methods and recomputed reference surface and conventional balances. Addition of the most recent data shows a continuing trend of mass loss. We compare the updated balances to the previously accepted balances and discuss differences. Not all balance quantities can be determined from the field measurements. For surface processes, we model missing information with an improved degree-day model. Degree-day models predict ablation from the sum of daily mean temperatures and an empirical degree-day factor. We modernize the traditional degree-day model as well as derive new degree-day factors in an effort to more closely match the balance time series and thus better predict the future state of the benchmark glaciers. For subsurface processes, we model the refreezing of meltwater for internal accumulation. We examine the sensitivity of the balance time series to the subsurface process of internal accumulation, with the goal of determining the best way to include internal accumulation into balance estimates.
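
    In its basic form, a positive degree-day model multiplies the sum of above-freezing daily mean temperatures by an empirical degree-day factor. The sketch below uses illustrative factors for snow and ice, not the USGS-calibrated values.

    # Basic positive degree-day ablation model: melt is proportional to
    # the sum of daily mean temperatures above 0 C. The degree-day
    # factors are illustrative, not USGS-calibrated values.
    def ablation_mm_we(daily_mean_temps_c, ddf_mm_per_degday):
        """Ablation in mm water equivalent over the record."""
        positive_degree_days = sum(t for t in daily_mean_temps_c if t > 0.0)
        return ddf_mm_per_degday * positive_degree_days

    summer = [1.5, 3.0, 4.2, -0.5, 2.8, 5.1, 0.0, 6.3]  # toy record, deg C
    print("snow:", ablation_mm_we(summer, ddf_mm_per_degday=4.0), "mm w.e.")
    print("ice :", ablation_mm_we(summer, ddf_mm_per_degday=8.0), "mm w.e.")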

  4. NHT-1 I/O Benchmarks

    NASA Technical Reports Server (NTRS)

    Carter, Russell; Ciotti, Bob; Fineberg, Sam; Nitzbert, Bill

    1992-01-01

    The NHT-1 benchmarks are a set of three scalable I/O benchmarks suitable for evaluating the I/O subsystems of high performance distributed memory computer systems. The benchmarks test application I/O, maximum sustained disk I/O, and maximum sustained network I/O. Sample codes are available which implement the benchmarks.

  5. How to avoid 'death by benchmarking'.

    PubMed

    Wofford, Dave; Libby, Darin

    2015-08-01

    Hospitals and health systems should adopt four key principles and practices when applying benchmarks to determine physician compensation: Acknowledge that a lower percentile may be appropriate. Use the median as the all-in benchmark. Use peer benchmarks when available. Use alternative benchmarks.

  6. A Simplified HTTR Diffusion Theory Benchmark

    SciTech Connect

    Rodolfo M. Ferrer; Abderrafi M. Ougouag; Farzad Rahnema

    2010-10-01

    The Georgia Institute of Technology (GA-Tech) recently developed a transport theory benchmark based closely on the geometry and the features of the HTTR reactor that is operational in Japan. Though simplified, the benchmark retains all the principal physical features of the reactor and thus provides a realistic and challenging test for the codes. The purpose of this paper is twofold. The first goal is an extension of the benchmark to diffusion theory applications by generating the additional data not provided in the GA-Tech prior work. The second goal is to use the benchmark on the HEXPEDITE code available to the INL. The HEXPEDITE code is a Green's function-based neutron diffusion code in 3D hexagonal-z geometry. The results showed that the HEXPEDITE code accurately reproduces the effective multiplication factor of the reference HELIOS solution. A secondary, but no less important, conclusion is that when a full sequence of codes that includes HEXPEDITE is tested against actual HTTR data, the portion of the inevitable discrepancies between experiment and models attributable to HEXPEDITE would be expected to be modest. If large discrepancies are observed, they would have to be explained by errors in the data fed into HEXPEDITE. Results based on a fully realistic model of the HTTR reactor are presented in a companion paper. The suite of codes used in that paper also includes HEXPEDITE. The results shown here should help that effort in the decision-making process for refining the modeling steps in the full sequence of codes.

  7. POTENTIAL BENCHMARKS FOR ACTINIDE PRODUCTION IN HANFORD REACTORS

    SciTech Connect

    PUIGH RJ; TOFFER H

    2011-10-19

    A significant experimental program was conducted in the early Hanford reactors to understand the reactor production of actinides. These experiments were conducted with sufficient rigor, in some cases, to provide useful information that can be utilized today in development of benchmark experiments that may be used for the validation of present computer codes for the production of these actinides in low enriched uranium fuel.

  8. High acceptance recoil polarimeter

    SciTech Connect

    The HARP Collaboration

    1992-12-05

    In order to detect neutrons and protons in the 50 to 600 MeV energy range and measure their polarization, an efficient, low-noise, self-calibrating device is being designed. This detector, known as the High Acceptance Recoil Polarimeter (HARP), is based on the recoil principle of proton detection from np → n′p′ or pp → p′p′ scattering (detected particles are underlined), which intrinsically yields polarization information on the incoming particle. HARP will be commissioned to carry out experiments in 1994.

  9. Closed-Loop Neuromorphic Benchmarks.

    PubMed

    Stewart, Terrence C; DeWolf, Travis; Kleinhans, Ashley; Eliasmith, Chris

    2015-01-01

    Evaluating the effectiveness and performance of neuromorphic hardware is difficult. It is even more difficult when the task of interest is a closed-loop task; that is, a task where the output from the neuromorphic hardware affects some environment, which then in turn affects the hardware's future input. However, closed-loop situations are one of the primary potential uses of neuromorphic hardware. To address this, we present a methodology for generating closed-loop benchmarks that makes use of a hybrid of real physical embodiment and a type of "minimal" simulation. Minimal simulation has been shown to lead to robust real-world performance, while still maintaining the practical advantages of simulation, such as making it easy for the same benchmark to be used by many researchers. This method is flexible enough to allow researchers to explicitly modify the benchmarks to identify specific task domains where particular hardware excels. To demonstrate the method, we present a set of novel benchmarks that focus on motor control for an arbitrary system with unknown external forces. Using these benchmarks, we show that an error-driven learning rule can consistently improve motor control performance across a randomly generated family of closed-loop simulations, even when there are up to 15 interacting joints to be controlled. PMID:26696820
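
    The error-driven learning rule evaluated in the benchmarks can be sketched in its simplest form as a controller gain adapted in proportion to the tracking error of a toy closed-loop plant. This is a generic adaptive-control illustration, not the paper's neuromorphic implementation or its minimal-simulation environments.

    # Toy closed-loop benchmark: a proportional controller whose gain is
    # adapted by an error-driven rule. A generic illustration, not the
    # paper's neuromorphic implementation.
    def run_closed_loop(steps=200, target=1.0, learn_rate=0.05):
        state, gain = 0.0, 0.1
        recent_errors = []
        for _ in range(steps):
            error = target - state
            control = gain * error                  # controller output
            state += 0.1 * control                  # simple first-order plant
            gain += learn_rate * error * error      # error-driven adaptation
            recent_errors.append(abs(error))
        return sum(recent_errors[-20:]) / 20        # mean recent tracking error

    print(f"mean tracking error over final 20 steps: {run_closed_loop():.4f}")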

  10. Benchmark simulation models, quo vadis?

    PubMed

    Jeppsson, U; Alex, J; Batstone, D J; Benedetti, L; Comas, J; Copp, J B; Corominas, L; Flores-Alsina, X; Gernaey, K V; Nopens, I; Pons, M-N; Rodríguez-Roda, I; Rosen, C; Steyer, J-P; Vanrolleghem, P A; Volcke, E I P; Vrecko, D

    2013-01-01

    As the work of the IWA Task Group on Benchmarking of Control Strategies for wastewater treatment plants (WWTPs) is coming to an end, it is essential to disseminate the knowledge gained. For this reason, all authors of the IWA Scientific and Technical Report on benchmarking have come together to provide their insights, highlighting areas where knowledge may still be deficient and where new opportunities are emerging, and to propose potential avenues for future development and application of the general benchmarking framework and its associated tools. The paper focuses on the topics of temporal and spatial extension, process modifications within the WWTP, the realism of models, control strategy extensions and the potential for new evaluation tools within the existing benchmark system. We find that there are major opportunities for application within all of these areas, either from existing work already being done within the context of the benchmarking simulation models (BSMs) or applicable work in the wider literature. Of key importance is increasing capability, usability and transparency of the BSM package while avoiding unnecessary complexity.

  11. Closed-Loop Neuromorphic Benchmarks

    PubMed Central

    Stewart, Terrence C.; DeWolf, Travis; Kleinhans, Ashley; Eliasmith, Chris

    2015-01-01

    Evaluating the effectiveness and performance of neuromorphic hardware is difficult. It is even more difficult when the task of interest is a closed-loop task; that is, a task where the output from the neuromorphic hardware affects some environment, which then in turn affects the hardware's future input. However, closed-loop situations are one of the primary potential uses of neuromorphic hardware. To address this, we present a methodology for generating closed-loop benchmarks that makes use of a hybrid of real physical embodiment and a type of “minimal” simulation. Minimal simulation has been shown to lead to robust real-world performance, while still maintaining the practical advantages of simulation, such as making it easy for the same benchmark to be used by many researchers. This method is flexible enough to allow researchers to explicitly modify the benchmarks to identify specific task domains where particular hardware excels. To demonstrate the method, we present a set of novel benchmarks that focus on motor control for an arbitrary system with unknown external forces. Using these benchmarks, we show that an error-driven learning rule can consistently improve motor control performance across a randomly generated family of closed-loop simulations, even when there are up to 15 interacting joints to be controlled. PMID:26696820

  12. Benchmarking boiler tube failures - Part 1

    SciTech Connect

    Patrick, J.; Oldani, R.; von Behren, D.

    2005-10-01

    Boiler tube failures continue to be the leading cause of downtime for steam power plants. That should not be a surprise; a typical steam generator has miles of tubes that operate at high temperatures and pressures. Are your experiences comparable to those of your peers? Could you learn something from tube-leak benchmarking data that could improve the operation of your plant? The Electric Utility Cost Group (EUCG) recently completed a boiler-tube failure study that is available only to its members. But Power magazine has been given exclusive access to some of the results, published in this article. 4 figs.

  13. 2008 ULTRASONIC BENCHMARK STUDIES OF INTERFACE CURVATURE--A SUMMARY

    SciTech Connect

    Schmerr, L. W.; Huang, R.; Raillon, R.; Mahaut, S.; Leymarie, N.; Lonne, S.; Spies, M.; Lupien, V.

    2009-03-03

    In the 2008 QNDE ultrasonic benchmark session researchers from five different institutions around the world examined the influence that the curvature of a cylindrical fluid-solid interface has on the measured NDE immersion pulse-echo response of a flat-bottom hole (FBH) reflector. This was a repeat of a study conducted in the 2007 benchmark to try to determine the sources of differences seen in 2007 between model-based predictions and experiments. Here, we will summarize the results obtained in 2008 and analyze the model-based results and the experiments.

  14. Integral Benchmark Data for Nuclear Data Testing Through the ICSBEP & IRPhEP

    SciTech Connect

    J. Blair Briggs; John D. Bess; Jim Gulliford; Ian Hill

    2013-10-01

    The status of the International Criticality Safety Benchmark Evaluation Project (ICSBEP) and International Reactor Physics Experiment Evaluation Project (IRPhEP) was last discussed directly with the nuclear data community at ND2007. Since ND2007, integral benchmark data that are available for nuclear data testing have increased significantly. The status of the ICSBEP and the IRPhEP is discussed and selected benchmark configurations that have been added to the ICSBEP and IRPhEP Handbooks since ND2007 are highlighted.

  15. Integral Benchmark Data for Nuclear Data Testing Through the ICSBEP & IRPhEP

    NASA Astrophysics Data System (ADS)

    Briggs, J. B.; Bess, J. D.; Gulliford, J.

    2014-04-01

    The status of the International Criticality Safety Benchmark Evaluation Project (ICSBEP) and International Reactor Physics Experiment Evaluation Project (IRPhEP) was last discussed directly with the nuclear data community at ND2007. Since ND2007, integral benchmark data that are available for nuclear data testing have increased significantly. The status of the ICSBEP and the IRPhEP is discussed and selected benchmark configurations that have been added to the ICSBEP and IRPhEP Handbooks since ND2007 are highlighted.

  16. SPICE benchmark for global tomographic methods

    NASA Astrophysics Data System (ADS)

    Qin, Yilong; Capdeville, Yann; Maupin, Valerie; Montagner, Jean-Paul; Lebedev, Sergei; Beucler, Eric

    2008-11-01

    The existing global tomographic methods result in different models due to different parametrization, scale resolution and theoretical approach. To test how current imaging techniques are limited by approximations in theory and by the inadequacy of data quality and coverage, it is necessary to perform a global-scale benchmark to understand the resolving properties of each specific imaging algorithm. In the framework of the Seismic wave Propagation and Imaging in Complex media: a European network (SPICE) project, it was decided to perform a benchmark experiment of global inversion algorithms. First, a preliminary benchmark with a simple isotropic model was carried out to check the feasibility in terms of acquisition geometry and numerical accuracy. Then, to fully validate tomographic schemes with a challenging synthetic data set, we constructed a complex anisotropic global model, which is characterized by 21 elastic constants and includes 3-D heterogeneities in velocity, anisotropy (radial and azimuthal anisotropy), attenuation, density, as well as surface topography and bathymetry. The intermediate-period (>32 s), high-fidelity anisotropic modelling was performed using a state-of-the-art anisotropic anelastic modelling code, the coupled spectral element method (CSEM), on modern massively parallel computing resources. The benchmark data set consists of 29 events and three-component seismograms are recorded by 256 stations. Because of the limitation of the available computing power, synthetic seismograms have a minimum period of 32 s and a length of 10 500 s. The inversion of the benchmark data set demonstrates several well-known problems of classical surface wave tomography, such as the importance of crustal correction to recover the shallow structures, the loss of resolution with depth, the smearing effect, both horizontal and vertical, the inaccuracy of the amplitude of the isotropic S-wave velocity variation, the difficulty of retrieving the magnitude of azimuthal

  17. Radiation Detection Computational Benchmark Scenarios

    SciTech Connect

    Shaver, Mark W.; Casella, Andrew M.; Wittman, Richard S.; McDonald, Ben S.

    2013-09-24

    Modeling forms an important component of radiation detection development, allowing for testing of new detector designs, evaluation of existing equipment against a wide variety of potential threat sources, and assessing operation performance of radiation detection systems. This can, however, result in large and complex scenarios which are time consuming to model. A variety of approaches to radiation transport modeling exist with complementary strengths and weaknesses for different problems. This variety of approaches, and the development of promising new tools (such as ORNL’s ADVANTG) which combine benefits of multiple approaches, illustrates the need for a means of evaluating or comparing different techniques for radiation detection problems. This report presents a set of 9 benchmark problems for comparing different types of radiation transport calculations, identifying appropriate tools for classes of problems, and testing and guiding the development of new methods. The benchmarks were drawn primarily from existing or previous calculations with a preference for scenarios which include experimental data, or otherwise have results with a high level of confidence, are non-sensitive, and represent problem sets of interest to NA-22. From a technical perspective, the benchmarks were chosen to span a range of difficulty and to include gamma transport, neutron transport, or both and represent different important physical processes and a range of sensitivity to angular or energy fidelity. Following benchmark identification, existing information about geometry, measurements, and previous calculations were assembled. Monte Carlo results (MCNP decks) were reviewed or created and re-run in order to attain accurate computational times and to verify agreement with experimental data, when present. Benchmark information was then conveyed to ORNL in order to guide testing and development of hybrid calculations. The results of those ADVANTG calculations were then sent to PNNL for

  18. Current Reactor Physics Benchmark Activities at the Idaho National Laboratory

    SciTech Connect

    John D. Bess; Margaret A. Marshall; Mackenzie L. Gorham; Joseph Christensen; James C. Turnbull; Kim Clark

    2011-11-01

    The International Reactor Physics Experiment Evaluation Project (IRPhEP) [1] and the International Criticality Safety Benchmark Evaluation Project (ICSBEP) [2] were established to preserve integral reactor physics and criticality experiment data for present and future research. These valuable assets provide the basis for recording, developing, and validating our integral nuclear data, and experimental and computational methods. These projects are managed through the Idaho National Laboratory (INL) and the Organisation for Economic Co-operation and Development Nuclear Energy Agency (OECD-NEA). Staff and students at the Department of Energy - Idaho (DOE-ID) and INL are engaged in the development of benchmarks to support ongoing research activities. These benchmarks include reactors or assemblies that support Next Generation Nuclear Plant (NGNP) research, space nuclear Fission Surface Power System (FSPS) design validation, and currently operational facilities in Southeastern Idaho.

  19. Simplified two and three dimensional HTTR benchmark problems

    SciTech Connect

    Zhan Zhang; Dingkang Zhang; Justin M. Pounders; Abderrafi M. Ougouag

    2011-05-01

    To assess the accuracy of diffusion or transport methods for reactor calculations, it is desirable to create heterogeneous benchmark problems that are typical of whole core configurations. In this paper we have created two and three dimensional numerical benchmark problems typical of high temperature gas cooled prismatic cores. Additionally, single-cell and single-block benchmark problems are also included. These problems were derived from the HTTR start-up experiment. Since the primary utility of the benchmark problems is in code-to-code verification, minor details regarding geometry and material specification of the original experiment have been simplified while retaining the heterogeneity and the major physics properties of the core from a neutronics viewpoint. A six-group material (macroscopic) cross section library has been generated for the benchmark problems using the lattice depletion code HELIOS. Using this library, Monte Carlo solutions are presented for three configurations (all-rods-in, partially-controlled and all-rods-out) for both the 2D and 3D problems. These solutions include the core eigenvalues, the block (assembly) averaged fission densities, local peaking factors, the absorption densities in the burnable poison and control rods, and pin fission density distribution for selected blocks. Also included are the solutions for the single-cell and single-block problems.

  20. Symbolic manipulation and transport benchmarks

    SciTech Connect

    Ganapol, B.D.

    1986-01-01

    The establishment of reliable benchmark solutions is an integral part of the development of computational algorithms to solve the Boltzmann equation of particle motion. These solutions provide standards by which code developers can assess new numerical algorithms as well as ensure proper programming. A transport benchmark solution, as defined here, is the accurate numerical evaluation (3 to 5 digits) of an analytical solution to the transport equation. The basic elements of such a solution are an analytical representation free from discretization and a numerical evaluation for which an error estimate can be obtained. Symbolic manipulation software such as REDUCE, MACSYMA, and SMP can greatly aid in the generation of benchmark solutions. The benefit of these manipulators lies both in their ability to perform lengthy algebraic calculations and in their ability to generate code that can be incorporated directly into existing programs. Using two fundamental problems from particle transport theory, the author explores the advantages and limitations of the application of the REDUCE software package in generating time dependent benchmark solutions.
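
    The same workflow (derive an analytical solution symbolically, then evaluate it numerically to benchmark accuracy) survives in modern tools. Below is a small sketch using SymPy as a stand-in for REDUCE/MACSYMA/SMP, deriving the uncollided scalar flux from an isotropic plane source in a purely absorbing medium; this is an illustrative toy, not either of the paper's two transport problems.

    # Symbolic derivation, then high-precision numerical evaluation, of a
    # toy transport quantity: the uncollided scalar flux at optical depth
    # tau from an isotropic plane source in a purely absorbing medium.
    # SymPy stands in for REDUCE/MACSYMA/SMP; an illustrative toy only.
    import sympy as sp

    mu, tau = sp.symbols("mu tau", positive=True)

    # Integrate the angular flux over the forward hemisphere symbolically;
    # the result is the exponential integral E1(tau) (printed below).
    phi = sp.Rational(1, 2) * sp.integrate(sp.exp(-tau / mu) / mu, (mu, 0, 1))
    print(phi)

    # Evaluate to benchmark accuracy (12 digits) at a few optical depths.
    for t in (0.1, 1.0, 5.0):
        print(f"tau = {t}: phi = {sp.N(phi.subs(tau, t), 12)}")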

  1. Processor Emulator with Benchmark Applications

    SciTech Connect

    Lloyd, G. Scott; Pearce, Roger; Gokhale, Maya

    2015-11-13

    A processor emulator and a suite of benchmark applications have been developed to assist in characterizing the performance of data-centric workloads on current and future computer architectures. Some of the applications have been collected from other open source projects. For more details on the emulator and an example of its usage, see reference [1].

  2. PRISMATIC CORE COUPLED TRANSIENT BENCHMARK

    SciTech Connect

    J. Ortensi; M.A. Pope; G. Strydom; R.S. Sen; M.D. DeHart; H.D. Gougar; C. Ellis; A. Baxter; V. Seker; T.J. Downar; K. Vierow; K. Ivanov

    2011-06-01

    The Prismatic Modular Reactor (PMR) is one of the High Temperature Reactor (HTR) design concepts that have existed for some time. Several prismatic units have operated in the world (DRAGON, Fort St. Vrain, Peach Bottom) and one unit is still in operation (HTTR). The deterministic neutronics and thermal-fluids transient analysis tools and methods currently available for the design and analysis of PMRs have lagged behind the state of the art compared to LWR reactor technologies. This has motivated the development of more accurate and efficient tools for the design and safety evaluations of the PMR. In addition to the work invested in new methods, it is essential to develop appropriate benchmarks to verify and validate the new methods in computer codes. The purpose of this benchmark is to establish a well-defined problem, based on a common given set of data, to compare methods and tools in core simulation and thermal hydraulics analysis with a specific focus on transient events. The benchmark-working group is currently seeking OECD/NEA sponsorship. This benchmark is being pursued and is heavily based on the success of the PBMR-400 exercise.

  3. PyMPI Dynamic Benchmark

    2007-02-16

    Pynamic is a benchmark designed to test a system's ability to handle the Dynamic Linking and Loading (DLL) requirements of Python-based scientific applications. This benchmark was developed to add a workload to our testing environment, a workload that represents a newly emerging class of DLL behaviors. Pynamic builds on pyMPI, an MPI extension to Python, adding C-extension dummy codes and a glue layer that facilitates linking and loading of the generated dynamic modules into the resulting pyMPI. Pynamic is configurable, enabling modeling of the static properties of a specific code as described in section 5. It does not, however, model any significant computations of the target and hence is not subjected to the same level of control as the target code. In fact, HPC computer vendors and tool developers will be encouraged to add it to their testing suites once the code release is completed. An ability to produce and run this benchmark is an effective test for validating the capability of a compiler and a linker/loader as well as an OS kernel and other runtime systems of HPC computer vendors. In addition, the benchmark is designed as a test case for stressing code development tools. Though Python has recently gained popularity in the HPC community, its heavy DLL operations have hindered certain HPC code development tools, notably parallel debuggers, from performing optimally.

  4. Real-Time Benchmark Suite

    1992-01-17

    This software provides a portable benchmark suite for real-time kernels. It tests the performance of many of the system calls, as well as the interrupt response time and task response time to interrupts. These numbers provide a baseline for comparing various real-time kernels and hardware platforms.

  5. Benchmark Lisp And Ada Programs

    NASA Technical Reports Server (NTRS)

    Davis, Gloria; Galant, David; Lim, Raymond; Stutz, John; Gibson, J.; Raghavan, B.; Cheesema, P.; Taylor, W.

    1992-01-01

    Suite of nonparallel benchmark programs, ELAPSE, designed for three tests: comparing efficiency of computer processing via Lisp vs. Ada; comparing efficiencies of several computers processing via Lisp; or comparing several computers processing via Ada. Tests efficiency with which computer executes routines in each language. Available for computer equipped with validated Ada compiler and/or Common Lisp system.

  6. Austin Community College Benchmarking Update.

    ERIC Educational Resources Information Center

    Austin Community Coll., TX. Office of Institutional Effectiveness.

    Austin Community College contracted with MGT of America, Inc. in spring 1999 to develop a peer and benchmark (best) practices analysis on key indicators. These indicators were updated in spring 2002 using data from eight Texas community colleges and four non-Texas institutions that represent large, comprehensive, urban community colleges, similar…

  7. Benchmarking ETL Workflows

    NASA Astrophysics Data System (ADS)

    Simitsis, Alkis; Vassiliadis, Panos; Dayal, Umeshwar; Karagiannis, Anastasios; Tziovara, Vasiliki

    Extraction-Transform-Load (ETL) processes comprise complex data workflows, which are responsible for the maintenance of a Data Warehouse. A plethora of ETL tools is currently available, constituting a multi-million dollar market. Each ETL tool uses its own technique for the design and implementation of an ETL workflow, making the task of assessing ETL tools extremely difficult. In this paper, we identify common characteristics of ETL workflows in an effort to propose a unified evaluation method for ETL. We also identify the main points of interest in designing, implementing, and maintaining ETL workflows. Finally, we propose a principled organization of test suites based on the TPC-H schema for the problem of experimenting with ETL workflows.
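
    For readers unfamiliar with the pattern being benchmarked, the following is a generic extract-transform-load sketch (not taken from the paper): rows are pulled from a source, normalized, and loaded into a warehouse table, with sqlite standing in for the warehouse. Table and column names are invented.

      import sqlite3

      source_rows = [("2015-03-01", "100.50"), ("2015-03-02", "98.20")]  # extract step

      warehouse = sqlite3.connect(":memory:")
      warehouse.execute("CREATE TABLE daily_sales (day TEXT, amount_cents INTEGER)")

      for day, amount in source_rows:
          cents = int(round(float(amount) * 100))  # transform step: normalize units
          warehouse.execute("INSERT INTO daily_sales VALUES (?, ?)", (day, cents))  # load step

      warehouse.commit()
      print(warehouse.execute("SELECT SUM(amount_cents) FROM daily_sales").fetchone())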

  8. Benchmarking of the FENDL-3 Neutron Cross-section Data Starter Library for Fusion Applications

    SciTech Connect

    Fischer, U.; Angelone, M.; Bohm, T.; Kondo, K.; Konno, C.; Sawan, M.; Villari, R.; Walker, B.

    2014-06-15

    This paper summarizes the benchmark analyses performed in a joint effort of ENEA (Italy), JAEA (Japan), KIT (Germany), and the University of Wisconsin (USA) on a computational ITER benchmark and a series of 14 MeV neutron benchmark experiments. The computational benchmark revealed a modest increase of the neutron flux levels in the deep penetration regions and a substantial increase of the gas production in steel components. The comparison to experimental results showed good agreement with no substantial differences between FENDL-3.0 and FENDL-2.1 for most of the responses. In general, FENDL-3 shows an improved performance for fusion neutronics applications.

  9. Benchmarking short sequence mapping tools

    PubMed Central

    2013-01-01

    Background The development of next-generation sequencing instruments has led to the generation of millions of short sequences in a single run. The process of aligning these reads to a reference genome is time consuming and demands the development of fast and accurate alignment tools. However, the currently proposed tools make different compromises between the accuracy and the speed of mapping. Moreover, many important aspects are overlooked while comparing the performance of a newly developed tool to the state of the art. Therefore, there is a need for an objective evaluation method that covers all the aspects. In this work, we introduce a benchmarking suite to extensively analyze sequencing tools with respect to various aspects and provide an objective comparison. Results We applied our benchmarking tests on 9 well known mapping tools, namely, Bowtie, Bowtie2, BWA, SOAP2, MAQ, RMAP, GSNAP, Novoalign, and mrsFAST (mrFAST) using synthetic data and real RNA-Seq data. MAQ and RMAP are based on building hash tables for the reads, whereas the remaining tools are based on indexing the reference genome. The benchmarking tests reveal the strengths and weaknesses of each tool. The results show that no single tool outperforms all others in all metrics. However, Bowtie maintained the best throughput for most of the tests while BWA performed better for longer read lengths. The benchmarking tests are not restricted to the mentioned tools and can be further applied to others. Conclusion The mapping process is still a hard problem that is affected by many factors. In this work, we provided a benchmarking suite that reveals and evaluates the different factors affecting the mapping process. Still, there is no tool that outperforms all of the others in all the tests. Therefore, the end user should clearly specify his needs in order to choose the tool that provides the best results. PMID:23758764
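
    Because the reads are synthetic, their true origins are known and reported alignments can be scored directly. The helper below is a hypothetical illustration of that scoring step; the function name and the tolerance parameter are not from the paper.

      def mapping_accuracy(true_pos, mapped_pos, tolerance=5):
          """Fraction of reads mapped within `tolerance` bases of their true origin.

          true_pos:   dict read_id -> (chrom, position) used to simulate the read
          mapped_pos: dict read_id -> (chrom, position) reported by the mapper
          """
          correct = 0
          for read_id, (chrom, pos) in true_pos.items():
              hit = mapped_pos.get(read_id)
              if hit is not None and hit[0] == chrom and abs(hit[1] - pos) <= tolerance:
                  correct += 1
          return correct / len(true_pos)

      truth = {"r1": ("chr1", 1000), "r2": ("chr1", 5000), "r3": ("chr2", 42)}
      calls = {"r1": ("chr1", 1002), "r2": ("chr2", 5000)}  # r2 wrong chromosome, r3 unmapped
      print(mapping_accuracy(truth, calls))  # 0.333...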

  10. 42 CFR 440.385 - Delivery of benchmark and benchmark-equivalent coverage through managed care entities.

    Code of Federal Regulations, 2010 CFR

    2010-10-01

    ... 42 Public Health 4 2010-10-01 2010-10-01 false Delivery of benchmark and benchmark-equivalent...: GENERAL PROVISIONS Benchmark Benefit and Benchmark-Equivalent Coverage § 440.385 Delivery of benchmark and benchmark-equivalent coverage through managed care entities. In implementing benchmark or...

  11. 42 CFR 440.385 - Delivery of benchmark and benchmark-equivalent coverage through managed care entities.

    Code of Federal Regulations, 2013 CFR

    2013-10-01

    ... 42 Public Health 4 2013-10-01 2013-10-01 false Delivery of benchmark and benchmark-equivalent...: GENERAL PROVISIONS Benchmark Benefit and Benchmark-Equivalent Coverage § 440.385 Delivery of benchmark and benchmark-equivalent coverage through managed care entities. In implementing benchmark or...

  12. 42 CFR 440.385 - Delivery of benchmark and benchmark-equivalent coverage through managed care entities.

    Code of Federal Regulations, 2012 CFR

    2012-10-01

    ... 42 Public Health 4 2012-10-01 2012-10-01 false Delivery of benchmark and benchmark-equivalent...: GENERAL PROVISIONS Benchmark Benefit and Benchmark-Equivalent Coverage § 440.385 Delivery of benchmark and benchmark-equivalent coverage through managed care entities. In implementing benchmark or...

  13. 42 CFR 440.385 - Delivery of benchmark and benchmark-equivalent coverage through managed care entities.

    Code of Federal Regulations, 2011 CFR

    2011-10-01

    ... 42 Public Health 4 2011-10-01 2011-10-01 false Delivery of benchmark and benchmark-equivalent...: GENERAL PROVISIONS Benchmark Benefit and Benchmark-Equivalent Coverage § 440.385 Delivery of benchmark and benchmark-equivalent coverage through managed care entities. In implementing benchmark or...

  14. 42 CFR 440.385 - Delivery of benchmark and benchmark-equivalent coverage through managed care entities.

    Code of Federal Regulations, 2014 CFR

    2014-10-01

    ... 42 Public Health 4 2014-10-01 2014-10-01 false Delivery of benchmark and benchmark-equivalent...: GENERAL PROVISIONS Benchmark Benefit and Benchmark-Equivalent Coverage § 440.385 Delivery of benchmark and benchmark-equivalent coverage through managed care entities. In implementing benchmark or...

  15. Criticality safety benchmark evaluation project: Recovering the past

    SciTech Connect

    Trumble, E.F.

    1997-06-01

    A very brief summary of the Criticality Safety Benchmark Evaluation Project of the Westinghouse Savannah River Company is provided in this paper. The purpose of the project is to provide a source of evaluated criticality safety experiments in an easily usable format. Another project goal is to search for any experiments that may have been lost or contain discrepancies, and to determine if they can be used. Results of evaluated experiments are being published as US DOE handbooks.

  16. Benchmarking neuromorphic vision: lessons learnt from computer vision

    PubMed Central

    Tan, Cheston; Lallee, Stephane; Orchard, Garrick

    2015-01-01

    Neuromorphic Vision sensors have improved greatly since the first silicon retina was presented almost three decades ago. They have recently matured to the point where they are commercially available and can be operated by laymen. However, despite improved availability of sensors, there remains a lack of good datasets, while algorithms for processing spike-based visual data are still in their infancy. On the other hand, frame-based computer vision algorithms are far more mature, thanks in part to widely accepted datasets which allow direct comparison between algorithms and encourage competition. We are presented with a unique opportunity to shape the development of Neuromorphic Vision benchmarks and challenges by leveraging what has been learnt from the use of datasets in frame-based computer vision. Taking advantage of this opportunity, in this paper we review the role that benchmarks and challenges have played in the advancement of frame-based computer vision, and suggest guidelines for the creation of Neuromorphic Vision benchmarks and challenges. We also discuss the unique challenges faced when benchmarking Neuromorphic Vision algorithms, particularly when attempting to provide direct comparison with frame-based computer vision. PMID:26528120

  17. Benchmarking neuromorphic vision: lessons learnt from computer vision.

    PubMed

    Tan, Cheston; Lallee, Stephane; Orchard, Garrick

    2015-01-01

    Neuromorphic Vision sensors have improved greatly since the first silicon retina was presented almost three decades ago. They have recently matured to the point where they are commercially available and can be operated by laymen. However, despite improved availability of sensors, there remains a lack of good datasets, while algorithms for processing spike-based visual data are still in their infancy. On the other hand, frame-based computer vision algorithms are far more mature, thanks in part to widely accepted datasets which allow direct comparison between algorithms and encourage competition. We are presented with a unique opportunity to shape the development of Neuromorphic Vision benchmarks and challenges by leveraging what has been learnt from the use of datasets in frame-based computer vision. Taking advantage of this opportunity, in this paper we review the role that benchmarks and challenges have played in the advancement of frame-based computer vision, and suggest guidelines for the creation of Neuromorphic Vision benchmarks and challenges. We also discuss the unique challenges faced when benchmarking Neuromorphic Vision algorithms, particularly when attempting to provide direct comparison with frame-based computer vision.

  18. Benchmarking Evaluation Results for Prototype Extravehicular Activity Gloves

    NASA Technical Reports Server (NTRS)

    Aitchison, Lindsay; McFarland, Shane

    2012-01-01

    The Space Suit Assembly (SSA) Development Team at NASA Johnson Space Center has invested heavily in the advancement of rear-entry planetary exploration suit design but largely deferred development of extravehicular activity (EVA) glove designs, accepting the risk of using the current flight gloves, Phase VI, for unique mission scenarios outside the Space Shuttle and International Space Station (ISS) Program realm of experience. However, as design reference missions mature, the risks of using heritage hardware have highlighted the need for developing robust new glove technologies. To address the technology gap, the NASA Game-Changing Technology group provided start-up funding for the High Performance EVA Glove (HPEG) Project in the spring of 2012. The overarching goal of the HPEG Project is to develop a robust glove design that increases human performance during EVA and creates a pathway for future implementation of emergent technologies, with specific aims of increasing pressurized mobility to 60% of barehanded capability, increasing durability by 100%, and decreasing the potential of gloves to cause injury during use. The HPEG Project focused initial efforts on identifying potential new technologies and benchmarking the performance of current state-of-the-art gloves to identify trends in design and fit, leading to the establishment of standards and metrics against which emerging technologies can be assessed at both the component and assembly levels. The first of the benchmarking tests evaluated the quantitative mobility performance and subjective fit of four prototype gloves developed by Flagsuit LLC, Final Frontier Designs, LLC Dover, and David Clark Company as compared to the Phase VI. All of the companies were asked to design and fabricate gloves to the same set of NASA-provided hand measurements (which corresponded to a single size of Phase VI glove) and to focus their efforts on improving mobility in the metacarpal phalangeal and carpometacarpal joints. Four test

  19. The Impact Hydrocode Benchmark and Validation Project

    NASA Astrophysics Data System (ADS)

    Pierazzo, E.; Artemieva, N.; Asphaug, E.; Baldwin, E. C.; Cazamias, J.; Coker, R.; Collins, G. S.; Crawford, D. A.; Davison, T.; Elbeshausen, D.; Holsapple, K. A.; Housen, K. R.; Korycansky, D. G.; Wünnemann, K.

    When properly benchmarked and validated against observations computer models offer a powerful tool for understanding the mechanics of impact crater formation. We present results from a project to benchmark and validate shock physics codes.

  20. How do I know if my forecasts are better? Using benchmarks in hydrological ensemble prediction

    NASA Astrophysics Data System (ADS)

    Pappenberger, F.; Ramos, M. H.; Cloke, H. L.; Wetterhall, F.; Alfieri, L.; Bogner, K.; Mueller, A.; Salamon, P.

    2015-03-01

    The skill of a forecast can be assessed by comparing the relative proximity of both the forecast and a benchmark to the observations. Example benchmarks include climatology or a naïve forecast. Hydrological ensemble prediction systems (HEPS) are currently transforming the hydrological forecasting environment but in this new field there is little information to guide researchers and operational forecasters on how benchmarks can be best used to evaluate their probabilistic forecasts. In this study, it is identified that the forecast skill calculated can vary depending on the benchmark selected and that the selection of a benchmark for determining forecasting system skill is sensitive to a number of hydrological and system factors. A benchmark intercomparison experiment is then undertaken using the continuous ranked probability score (CRPS), a reference forecasting system and a suite of 23 different methods to derive benchmarks. The benchmarks are assessed within the operational set-up of the European Flood Awareness System (EFAS) to determine those that are 'toughest to beat' and so give the most robust discrimination of forecast skill, particularly for the spatial average fields that EFAS relies upon. Evaluating against an observed discharge proxy the benchmark that has most utility for EFAS and avoids the most naïve skill across different hydrological situations is found to be meteorological persistency. This benchmark uses the latest meteorological observations of precipitation and temperature to drive the hydrological model. Hydrological long term average benchmarks, which are currently used in EFAS, are very easily beaten by the forecasting system and the use of these produces much naïve skill. When decomposed into seasons, the advanced meteorological benchmarks, which make use of meteorological observations from the past 20 years at the same calendar date, have the most skill discrimination. They are also good at discriminating skill in low flows and for all
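
    A sketch of the skill computation described above, assuming the common ensemble CRPS estimator CRPS = mean|x_i - y| - 0.5 * mean|x_i - x_j| and a skill score of 1 - CRPS_forecast / CRPS_benchmark; the numbers are synthetic, not EFAS data.

      import numpy as np

      def ensemble_crps(ensemble, obs):
          x = np.asarray(ensemble, dtype=float)
          term1 = np.mean(np.abs(x - obs))
          term2 = 0.5 * np.mean(np.abs(x[:, None] - x[None, :]))
          return term1 - term2

      rng = np.random.default_rng(0)
      obs = 3.0
      forecast = rng.normal(3.2, 0.5, size=50)   # sharp, nearly unbiased ensemble
      benchmark = rng.normal(2.0, 2.0, size=50)  # a crude climatology-like benchmark

      skill = 1.0 - ensemble_crps(forecast, obs) / ensemble_crps(benchmark, obs)
      print(f"CRPSS vs benchmark: {skill:.2f}")  # positive means the forecast beats the benchmark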

  1. Benchmarking in Czech Higher Education: The Case of Schools of Economics

    ERIC Educational Resources Information Center

    Placek, Michal; Ochrana, František; Pucek, Milan

    2015-01-01

    This article describes the use of benchmarking in universities in the Czech Republic and academics' experiences with it. It is based on research conducted among academics from economics schools in Czech public and private universities. The results identified several issues regarding the utilisation and understanding of benchmarking in the Czech…

  2. Towards Systematic Benchmarking of Climate Model Performance

    NASA Astrophysics Data System (ADS)

    Gleckler, P. J.

    2014-12-01

    The process by which climate models are evaluated has evolved substantially over the past decade, with the Coupled Model Intercomparison Project (CMIP) serving as a centralizing activity for coordinating model experimentation and enabling research. Scientists with a broad spectrum of expertise have contributed to the CMIP model evaluation process, resulting in many hundreds of publications that have served as a key resource for the IPCC process. For several reasons, efforts are now underway to further systematize some aspects of the model evaluation process. First, some model evaluation can now be considered routine and should not require "re-inventing the wheel" or a journal publication simply to update results with newer models. Second, the benefit of CMIP research to model development has not been optimal because the publication of results generally takes several years and is usually not reproducible for benchmarking newer model versions. And third, there are now hundreds of model versions and many thousands of simulations, but there is no community-based mechanism for routinely monitoring model performance changes. An important change in the design of CMIP6 can help address these limitations. CMIP6 will include a small set of standardized experiments as an ongoing exercise (CMIP "DECK": ongoing Diagnostic, Evaluation and Characterization of Klima), so that modeling groups can submit them at any time and not be overly constrained by deadlines. In this presentation, efforts to establish routine benchmarking of existing and future CMIP simulations will be described. To date, some benchmarking tools have been made available to all CMIP modeling groups to enable them to readily compare with CMIP5 simulations during the model development process. A natural extension of this effort is to make results from all CMIP simulations widely available, including the results from newer models as soon as the simulations become available for research. Making the results from routine

  3. Benchmarking clinical photography services in the NHS.

    PubMed

    Arbon, Giles

    2015-01-01

    Benchmarking is used by services across the National Health Service (NHS) through various benchmarking programs. Clinical photography services do not have a program in place and have to rely on ad hoc surveys of other services. A trial benchmarking exercise was undertaken with 13 services in NHS Trusts. This highlights valuable data and comparisons that can be used to benchmark and improve services throughout the profession. PMID:26828540

  4. Internal Quality Assurance Benchmarking. ENQA Workshop Report 20

    ERIC Educational Resources Information Center

    Blackstock, Douglas; Burquel, Nadine; Comet, Nuria; Kajaste, Matti; dos Santos, Sergio Machado; Marcos, Sandra; Moser, Marion; Ponds, Henri; Scheuthle, Harald; Sixto, Luis Carlos Velon

    2012-01-01

    The Internal Quality Assurance group of ENQA (IQA Group) has been organising a yearly seminar for its members since 2007. The main objective is to share experiences concerning the internal quality assurance of work processes in the participating agencies. The overarching theme of the 2011 seminar was how to use benchmarking as a tool for…

  5. Gatemon Benchmarking and Two-Qubit Operation

    NASA Astrophysics Data System (ADS)

    Casparis, Lucas; Larsen, Thorvald; Olsen, Michael; Petersson, Karl; Kuemmeth, Ferdinand; Krogstrup, Peter; Nygard, Jesper; Marcus, Charles

    Recent experiments have demonstrated superconducting transmon qubits with semiconductor nanowire Josephson junctions. These hybrid gatemon qubits utilize field effect tunability singular to semiconductors to allow complete qubit control using gate voltages, potentially a technological advantage over conventional flux-controlled transmons. Here, we present experiments with a two-qubit gatemon circuit. We characterize qubit coherence and stability and use randomized benchmarking to demonstrate single-qubit gate errors of ~0.5 % for all gates, including voltage-controlled Z rotations. We show coherent capacitive coupling between two gatemons and coherent SWAP operations. Finally, we perform a two-qubit controlled-phase gate with an estimated fidelity of ~91 %, demonstrating the potential of gatemon qubits for building scalable quantum processors. We acknowledge financial support from Microsoft Project Q and the Danish National Research Foundation.
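
    As background on how such gate errors are typically extracted (a generic sketch, not the authors' analysis code): randomized benchmarking fits the sequence fidelity F(m) = A p^m + B over sequence length m and converts the decay p to an error per gate r = (1 - p)(d - 1)/d, with d = 2 for a single qubit. The data below are synthetic.

      import numpy as np
      from scipy.optimize import curve_fit

      def rb_decay(m, A, B, p):
          return A * p**m + B

      lengths = np.array([1, 5, 10, 20, 50, 100, 200])
      rng = np.random.default_rng(1)
      fidelity = 0.5 * 0.99**lengths + 0.5 + rng.normal(0, 0.005, lengths.size)  # fake data

      (A, B, p), _ = curve_fit(rb_decay, lengths, fidelity, p0=(0.5, 0.5, 0.98))
      error_per_gate = (1 - p) * (2 - 1) / 2  # d = 2 for a single qubit
      print(f"decay p = {p:.4f}, error per gate ~ {error_per_gate:.2%}")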

  6. Gatemon Benchmarking and Two-Qubit Operations.

    PubMed

    Casparis, L; Larsen, T W; Olsen, M S; Kuemmeth, F; Krogstrup, P; Nygård, J; Petersson, K D; Marcus, C M

    2016-04-15

    Recent experiments have demonstrated superconducting transmon qubits with semiconductor nanowire Josephson junctions. These hybrid gatemon qubits utilize field effect tunability characteristic of semiconductors to allow complete qubit control using gate voltages, potentially a technological advantage over conventional flux-controlled transmons. Here, we present experiments with a two-qubit gatemon circuit. We characterize qubit coherence and stability and use randomized benchmarking to demonstrate single-qubit gate errors below 0.7% for all gates, including voltage-controlled Z rotations. We show coherent capacitive coupling between two gatemons and coherent swap operations. Finally, we perform a two-qubit controlled-phase gate with an estimated fidelity of 91%, demonstrating the potential of gatemon qubits for building scalable quantum processors. PMID:27127949

  7. Gatemon Benchmarking and Two-Qubit Operations

    NASA Astrophysics Data System (ADS)

    Casparis, L.; Larsen, T. W.; Olsen, M. S.; Kuemmeth, F.; Krogstrup, P.; Nygård, J.; Petersson, K. D.; Marcus, C. M.

    2016-04-01

    Recent experiments have demonstrated superconducting transmon qubits with semiconductor nanowire Josephson junctions. These hybrid gatemon qubits utilize field effect tunability characteristic of semiconductors to allow complete qubit control using gate voltages, potentially a technological advantage over conventional flux-controlled transmons. Here, we present experiments with a two-qubit gatemon circuit. We characterize qubit coherence and stability and use randomized benchmarking to demonstrate single-qubit gate errors below 0.7% for all gates, including voltage-controlled Z rotations. We show coherent capacitive coupling between two gatemons and coherent swap operations. Finally, we perform a two-qubit controlled-phase gate with an estimated fidelity of 91%, demonstrating the potential of gatemon qubits for building scalable quantum processors.

  8. BENCHMARKING ORTEC ISOTOPIC MEASUREMENTS AND CALCULATIONS

    SciTech Connect

    Dewberry, R; Raymond Sigg, R; Vito Casella, V; Nitin Bhatt, N

    2008-09-29

    these cases the ISOTOPIC analysis program is especially valuable because it allows a rapid, defensible, reproducible analysis of radioactive content without tedious and repetitive experimental measurement of γ-ray transmission through the sample and container at multiple photon energies. The ISOTOPIC analysis technique is also especially valuable in facility holdup measurements where the acquisition configuration does not fit the accepted generalized geometries for which detector efficiencies have been solved exactly with good calculus. Generally in facility passive γ-ray holdup measurements the acquisition geometry is only approximately reproducible, and the sample (object) is an extensive glovebox or HEPA filter component. In these cases accuracy of analyses is rarely possible; however, demonstrating fissile Pu and U content within criticality safety guidelines yields valuable operating information. Demonstrating such content can be performed with broad assumptions and within broad factors (e.g., 2-8) of conservatism. The ISOTOPIC analysis program yields rapid defensible analyses of content within acceptable uncertainty and within acceptable conservatism without extensive repetitive experimental measurements. In addition to transmission correction determinations based on the mass and composition of objects, the ISOTOPIC program performs finite geometry corrections based on object shape and dimensions. These geometry corrections are based upon finite element summation to approximate exact closed-form calculus. In this report we provide several benchmark comparisons to the same technique provided by the Canberra In Situ Object Counting System (ISOCS) and to the finite thickness calculations described by Russo in reference 10. This report describes the benchmark comparisons we have performed to demonstrate and to document that the ISOTOPIC analysis program yields the results we claim to our customers.

  9. How Benchmarking and Higher Education Came Together

    ERIC Educational Resources Information Center

    Levy, Gary D.; Ronco, Sharron L.

    2012-01-01

    This chapter introduces the concept of benchmarking and how higher education institutions began to use benchmarking for a variety of purposes. Here, benchmarking is defined as a strategic and structured approach whereby an organization compares aspects of its processes and/or outcomes to those of another organization or set of organizations to…

  10. Performance Benchmarking Tsunami Models for NTHMP's Inundation Mapping Activities

    NASA Astrophysics Data System (ADS)

    Horrillo, Juan; Grilli, Stéphan T.; Nicolsky, Dmitry; Roeber, Volker; Zhang, Joseph

    2015-03-01

    The coastal states and territories of the United States (US) are vulnerable to devastating tsunamis from near-field or far-field coseismic and underwater/subaerial landslide sources. Following the catastrophic 2004 Indian Ocean tsunami, the National Tsunami Hazard Mitigation Program (NTHMP) accelerated the development of public safety products for the mitigation of these hazards. In response to this initiative, US coastal states and territories sped up the process of developing/enhancing/adopting tsunami models that can be used for developing inundation maps and evacuation plans. One of NTHMP's requirements is that all operational and inundation-based numerical (O&I) models used for such purposes be properly validated against established standards to ensure the reliability of tsunami inundation maps as well as to achieve a basic level of consistency between parallel efforts. The validation of several O&I models was considered during a workshop held in 2011 at Texas A&M University (Galveston). This validation was performed based on the existing standard (OAR-PMEL-135), which provides a list of benchmark problems (BPs) covering various tsunami processes that models must meet to be deemed acceptable. Here, we summarize key approaches followed, results, and conclusions of the workshop. Eight distinct tsunami models were validated and cross-compared by using a subset of the BPs listed in the OAR-PMEL-135 standard. Of the several BPs available, only two, based on laboratory experiments, are detailed here for the sake of brevity, since they are considered sufficiently comprehensive. Average relative errors associated with expected parameter values such as maximum surface amplitude/runup are estimated. The level of agreement with the reference data, reasons for discrepancies between model results, and some of the limitations are discussed. In general, dispersive models were found to perform better than nondispersive models, but differences were relatively small, in part

  11. Statistical Analysis of NAS Parallel Benchmarks and LINPACK Results

    NASA Technical Reports Server (NTRS)

    Meuer, Hans-Werner; Simon, Horst D.; Strohmeier, Erich; Lasinski, T. A. (Technical Monitor)

    1994-01-01

    In the last three years extensive performance data have been reported for parallel machines both based on the NAS Parallel Benchmarks, and on LINPACK. In this study we have used the reported benchmark results and performed a number of statistical experiments using factor, cluster, and regression analyses. In addition to the performance results of LINPACK and the eight NAS parallel benchmarks, we have also included peak performance of the machine, and the LINPACK n and n(sub 1/2) values. Some of the results and observations can be summarized as follows: 1) All benchmarks are strongly correlated with peak performance. 2) LINPACK and EP each have a unique signature. 3) The remaining NPB can be grouped into three groups as follows: (CG and IS), (LU and SP), and (MG, FT, and BT). Hence three (or four with EP) benchmarks are sufficient to characterize the overall NPB performance. Our poster presentation will follow a standard poster format, and will present the data of our statistical analysis in detail.
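
    The correlation step of such an analysis is easy to illustrate; the sketch below computes a benchmark-by-benchmark correlation matrix over a machines-by-benchmarks results table. The figures are invented; the study used the reported NPB and LINPACK results.

      import numpy as np

      benchmarks = ["LINPACK", "EP", "CG", "IS", "MG"]
      # rows = machines, columns = benchmarks (synthetic performance figures)
      results = np.array([
          [10.0, 2.0, 1.0, 0.9, 1.5],
          [20.0, 4.1, 1.8, 1.7, 3.2],
          [40.0, 8.2, 4.1, 3.8, 6.0],
          [ 5.0, 1.1, 0.6, 0.5, 0.8],
      ])

      corr = np.corrcoef(results, rowvar=False)  # correlations between benchmark columns
      for name, row in zip(benchmarks, corr):
          print(name, np.round(row, 2))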

  12. Analyzing the BBOB results by means of benchmarking concepts.

    PubMed

    Mersmann, O; Preuss, M; Trautmann, H; Bischl, B; Weihs, C

    2015-01-01

    We present methods to answer two basic questions that arise when benchmarking optimization algorithms. The first one is: which algorithm is the "best" one? and the second one is: which algorithm should I use for my real-world problem? Both are connected and neither is easy to answer. We present a theoretical framework for designing and analyzing the raw data of such benchmark experiments. This represents a first step in answering the aforementioned questions. The 2009 and 2010 BBOB benchmark results are analyzed by means of this framework and we derive insight regarding the answers to the two questions. Furthermore, we discuss how to properly aggregate rankings from algorithm evaluations on individual problems into a consensus, its theoretical background and which common pitfalls should be avoided. Finally, we address the grouping of test problems into sets with similar optimizer rankings and investigate whether these are reflected by already proposed test problem characteristics, finding that this is not always the case.
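
    One simple consensus method in the spirit of the ranking-aggregation discussion is a Borda count over per-problem rankings; the paper's framework is more elaborate, and the algorithms and rankings below are made-up example data.

      from collections import defaultdict

      # rankings[problem] = list of algorithms, best first (synthetic example)
      rankings = {
          "f1": ["CMA-ES", "BFGS", "NelderMead"],
          "f2": ["BFGS", "CMA-ES", "NelderMead"],
          "f3": ["CMA-ES", "NelderMead", "BFGS"],
      }

      scores = defaultdict(int)
      for ranking in rankings.values():
          for position, algo in enumerate(ranking):
              scores[algo] += len(ranking) - 1 - position  # Borda points

      consensus = sorted(scores, key=scores.get, reverse=True)
      print(consensus)  # ['CMA-ES', 'BFGS', 'NelderMead']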

  13. Benchmarking neuromorphic systems with Nengo

    PubMed Central

    Bekolay, Trevor; Stewart, Terrence C.; Eliasmith, Chris

    2015-01-01

    Nengo is a software package for designing and simulating large-scale neural models. Nengo is architected such that the same Nengo model can be simulated on any of several Nengo backends with few to no modifications. Backends translate a model to specific platforms, which include GPUs and neuromorphic hardware. Nengo also contains a large test suite that can be run with any backend and focuses primarily on functional performance. We propose that Nengo's large test suite can be used to benchmark neuromorphic hardware's functional performance and simulation speed in an efficient, unbiased, and future-proof manner. We implement four benchmark models and show that Nengo can collect metrics across five different backends that identify situations in which some backends perform more accurately or quickly. PMID:26539076

  14. No free lunch and benchmarks.

    PubMed

    Duéñez-Guzmán, Edgar A; Vose, Michael D

    2013-01-01

    We extend previous results concerning black box search algorithms, presenting new theoretical tools related to no free lunch (NFL) where functions are restricted to some benchmark (that need not be permutation closed), algorithms are restricted to some collection (that need not be permutation closed) or limited to some number of steps, or the performance measure is given. Minimax distinctions are considered from a geometric perspective, and basic results on performance matching are also presented.

  15. Restaurant Energy Use Benchmarking Guideline

    SciTech Connect

    Hedrick, R.; Smith, V.; Field, K.

    2011-07-01

    A significant operational challenge for food service operators is defining energy use benchmark metrics to compare against the performance of individual stores. Without metrics, multiunit operators and managers have difficulty identifying which stores in their portfolios require extra attention to bring their energy performance in line with expectations. This report presents a method whereby multiunit operators may use their own utility data to create suitable metrics for evaluating their operations.
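
    A sketch of the report's basic idea under an assumed normalization (kWh per transaction, purely illustrative): compute the metric per store from utility data and flag stores well above the fleet average.

      stores = {
          "store_a": {"kwh": 310_000, "transactions": 150_000},
          "store_b": {"kwh": 295_000, "transactions": 160_000},
          "store_c": {"kwh": 420_000, "transactions": 140_000},
      }

      metrics = {name: s["kwh"] / s["transactions"] for name, s in stores.items()}
      fleet_avg = sum(metrics.values()) / len(metrics)

      for name, m in sorted(metrics.items(), key=lambda kv: kv[1], reverse=True):
          flag = "  <-- above fleet average" if m > 1.1 * fleet_avg else ""
          print(f"{name}: {m:.2f} kWh/transaction{flag}")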

  16. MPI Multicore Torus Communication Benchmark

    2008-02-05

    The MPI Multicore Torus Communications Benchmark (TorusTest) measures the aggregate bandwidth across all six links from/to any multicore node in a logical torus. It can run in two modes: using a static or a random mapping of tasks to torus locations. The former can be used to achieve optimal mappings; the latter measures the aggregate bandwidths that can be achieved with varying node mappings.
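
    TorusTest itself is not reproduced here, but the underlying measurement can be sketched with mpi4py: time a large two-way exchange between two ranks and convert bytes over seconds to bandwidth. The payload size is arbitrary; run under an MPI launcher, e.g. `mpirun -n 2 python bw_sketch.py`.

      import numpy as np
      from mpi4py import MPI

      comm = MPI.COMM_WORLD
      rank = comm.Get_rank()
      NBYTES = 8 * 1024 * 1024  # 8 MiB payload (arbitrary)
      buf = np.zeros(NBYTES, dtype=np.uint8)

      comm.Barrier()
      start = MPI.Wtime()
      if rank == 0:
          comm.Send(buf, dest=1)
          comm.Recv(buf, source=1)
      elif rank == 1:
          comm.Recv(buf, source=0)
          comm.Send(buf, dest=0)
      elapsed = MPI.Wtime() - start

      if rank == 0:
          print(f"~{2 * NBYTES / elapsed / 1e9:.2f} GB/s")  # two transfers of NBYTES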

  17. HS06 Benchmark for an ARM Server

    NASA Astrophysics Data System (ADS)

    Kluth, Stefan

    2014-06-01

    We benchmarked an ARM cortex-A9 based server system with a four-core CPU running at 1.1 GHz. The system used Ubuntu 12.04 as operating system and the HEPSPEC 2006 (HS06) benchmarking suite was compiled natively with gcc-4.4 on the system. The benchmark was run for various settings of the relevant gcc compiler options. We did not find significant influence from the compiler options on the benchmark result. The final HS06 benchmark result is 10.4.

  18. An introduction to benchmarking in healthcare.

    PubMed

    Benson, H R

    1994-01-01

    Benchmarking--the process of establishing a standard of excellence and comparing a business function or activity, a product, or an enterprise as a whole with that standard--will be used increasingly by healthcare institutions to reduce expenses and simultaneously improve product and service quality. As a component of total quality management, benchmarking is a continuous process by which an organization can measure and compare its own processes with those of organizations that are leaders in a particular area. Benchmarking should be viewed as a part of quality management programs, not as a replacement. There are four kinds of benchmarking: internal, competitive, functional and generic. With internal benchmarking, functions within an organization are compared with each other. Competitive benchmarking partners do business in the same market and provide a direct comparison of products or services. Functional and generic benchmarking are performed with organizations which may have a specific similar function, such as payroll or purchasing, but which otherwise are in a different business. Benchmarking must be a team process because the outcome will involve changing current practices, with effects felt throughout the organization. The team should include members who have subject knowledge; communications and computer proficiency; skills as facilitators and outside contacts; and sponsorship of senior management. Benchmarking requires quantitative measurement of the subject. The process or activity that you are attempting to benchmark will determine the types of measurements used. Benchmarking metrics usually can be classified in one of four categories: productivity, quality, time and cost-related.

  19. Offer/Acceptance Ratio.

    ERIC Educational Resources Information Center

    Collins, Mimi

    1997-01-01

    Explores how human resource professionals, with above average offer/acceptance ratios, streamline their recruitment efforts. Profiles company strategies with internships, internal promotion, cooperative education programs, and how to get candidates to accept offers. Also discusses how to use the offer/acceptance ratio as a measure of program…

  20. The Impact of Previous Schooling Experiences on a Quaker High School's Graduating Students' College Entrance Exam Scores, Parents' Expectations, and College Acceptance Outcomes

    ERIC Educational Resources Information Center

    Galusha, Debbie K.

    2010-01-01

    The purpose of the study is to determine the impact of previous private, public, home, or international schooling experiences on a Quaker high school's graduating students' college entrance composite exam scores, parents' expectations, and college attendance outcomes. The study's results suggest that regardless of previous private, public, home,…

  1. NASA Software Engineering Benchmarking Effort

    NASA Technical Reports Server (NTRS)

    Godfrey, Sally; Rarick, Heather

    2012-01-01

    Benchmarking was very interesting and provided a wealth of information: (1) we did see potential solutions to some of our "top 10" issues; (2) we now have an assessment of where NASA stands in relation to other aerospace/defense groups. We formed new contacts and potential collaborations: (1) several organizations sent us examples of their templates and processes; (2) many of the organizations were interested in future collaboration, such as sharing of training, metrics, Capability Maturity Model Integration (CMMI) appraisers, instructors, etc. We received feedback from some of our contractors/partners: (1) they expressed desires to participate in our training and to provide feedback on procedures; (2) they welcomed the opportunity to provide feedback on working with NASA.

  2. Benchmarking for the competitive marketplace.

    PubMed

    Clarke, R W; Sucher, T O

    1999-07-01

    One would get little argument these days regarding the importance of performance measurement in the health care industry. The traditional approach has been the straightforward use of measurable units such as financial comparisons and clinical indicators (e.g., length of stay). Also we in the health care industry have traditionally benchmarked our performance and strategies against those most like ourselves. Today's competitive market demands a more customer-focused set of performance measures that go beyond traditional approaches such as customer service. The most important task in today's environment is to study the customers' emerging priorities and adjust our business to meet those priorities. PMID:11184882

  3. Benchmarking thermal neutron scattering in graphite

    NASA Astrophysics Data System (ADS)

    Zhou, Tong

    A Slowing-Down-Time experiment was designed and performed at the Oak Ridge National Laboratory (ORNL) by using the Oak Ridge Electron Linear Accelerator (ORELA) as a neutron source to study the neutron thermalization in graphite at room and higher temperatures. The MCNP5 code was utilized to simulate the detector responses and help optimize the experimental design including the size of the graphite assembly, furnace, shielding system and detector position. To facilitate such analysis, MCNP5 version 1.30 was modified to enable perturbation calculation using point detector type tallies. By using the modified MCNP5 code, the sensitivity of the experimental models to the graphite total thermal neutron cross-sections was studied to optimize the design of the experiment. Measurements of the slowing-down-time spectrum in graphite were performed at room temperature for a 70x70x70 cm graphite pile by using a Li-6 scintillator and a U-235 fission counter at different locations. The measurements were directly compared to Monte Carlo simulations that use different graphite thermal neutron scattering cross-section libraries. Simulations based on the ENDF/B-VI graphite library were found to have a 30%-40% disagreement with the measurements. In addition to the graphite SDT experiment, which provided the data in the energy region above the graphite Bragg-cutoff energy, transmission experiments were performed for different types of graphite samples using the NIST 8.9 Å beam (located at NG-6) to investigate the energy region below the Bragg-cutoff energy. Measurements confirmed that reactor grade graphite, which is a two-phase material (crystalline graphite and binder (amorphous-like) carbon), has a different thermal neutron scattering cross section from pyrolytic graphite (crystalline graphite). The experiments presented in this work complement each other and provide an experimental data set which can be used to benchmark graphite thermal neutron scattering cross section libraries that

  4. Pynamic: the Python Dynamic Benchmark

    SciTech Connect

    Lee, G L; Ahn, D H; de Supinski, B R; Gyllenhaal, J C; Miller, P J

    2007-07-10

    Python is widely used in scientific computing to facilitate application development and to support features such as computational steering. Making full use of some of Python's popular features, which improve programmer productivity, leads to applications that access extremely high numbers of dynamically linked libraries (DLLs). As a result, some important Python-based applications severely stress a system's dynamic linking and loading capabilities and also cause significant difficulties for most development environment tools, such as debuggers. Furthermore, using the Python paradigm for large scale MPI-based applications can create significant file IO and further stress tools and operating systems. In this paper, we present Pynamic, the first benchmark program to support configurable emulation of a wide-range of the DLL usage of Python-based applications for large scale systems. Pynamic has already accurately reproduced system software and tool issues encountered by important large Python-based scientific applications on our supercomputers. Pynamic provided insight for our system software and tool vendors, and our application developers, into the impact of several design decisions. As we describe the Pynamic benchmark, we will highlight some of the issues discovered in our large scale system software and tools using Pynamic.

  5. INTEGRAL BENCHMARK DATA FOR NUCLEAR DATA TESTING THROUGH THE ICSBEP AND THE NEWLY ORGANIZED IRPHEP

    SciTech Connect

    J. Blair Briggs; Lori Scott; Yolanda Rugama; Enrico Satori

    2007-04-01

    The status of the International Criticality Safety Benchmark Evaluation Project (ICSBEP) was last reported in a nuclear data conference at the International Conference on Nuclear Data for Science and Technology, ND-2004, in Santa Fe, New Mexico. Since that time the number and type of integral benchmarks have increased significantly. Included in the ICSBEP Handbook are criticality-alarm / shielding and fundamental physics benchmarks in addition to the traditional critical / subcritical benchmark data. Since ND-2004, a reactor physics counterpart to the ICSBEP, the International Reactor Physics Experiment Evaluation Project (IRPhEP), was initiated. The IRPhEP is patterned after the ICSBEP, but focuses on other integral measurements, such as buckling, spectral characteristics, reactivity effects, reactivity coefficients, kinetics measurements, reaction-rate and power distributions, nuclide compositions, and other miscellaneous-type measurements in addition to the critical configuration. The status of these two projects is discussed and selected benchmarks highlighted in this paper.

  6. Computational Chemistry Comparison and Benchmark Database

    National Institute of Standards and Technology Data Gateway

    SRD 101 NIST Computational Chemistry Comparison and Benchmark Database (Web, free access)   The NIST Computational Chemistry Comparison and Benchmark Database is a collection of experimental and ab initio thermochemical properties for a selected set of molecules. The goals are to provide a benchmark set of molecules for the evaluation of ab initio computational methods and allow the comparison between different ab initio computational methods for the prediction of thermochemical properties.

  7. Benchmarking for Excellence and the Nursing Process

    NASA Technical Reports Server (NTRS)

    Sleboda, Claire

    1999-01-01

    Nursing is a service profession. The services provided are essential to life and welfare. Therefore, setting the benchmark for high quality care is fundamental. Exploring the definition of a benchmark value will help to determine a best practice approach. A benchmark is the descriptive statement of a desired level of performance against which quality can be judged. It must be sufficiently well understood by managers and personnel in order that it may serve as a standard against which to measure value.

  8. Data Acceptance Criteria for Standardized Human-Associated Fecal Source Identification Quantitative Real-Time PCR Methods.

    PubMed

    Shanks, Orin C; Kelty, Catherine A; Oshiro, Robin; Haugland, Richard A; Madi, Tania; Brooks, Lauren; Field, Katharine G; Sivaganesan, Mano

    2016-05-01

    There is growing interest in the application of human-associated fecal source identification quantitative real-time PCR (qPCR) technologies for water quality management. The transition from a research tool to a standardized protocol requires a high degree of confidence in data quality across laboratories. Data quality is typically determined through a series of specifications that ensure good experimental practice and the absence of bias in the results due to DNA isolation and amplification interferences. However, there is currently a lack of consensus on how best to evaluate and interpret human fecal source identification qPCR experiments. This is, in part, due to the lack of standardized protocols and information on interlaboratory variability under conditions for data acceptance. The aim of this study is to provide users and reviewers with a complete series of conditions for data acceptance derived from a multiple laboratory data set using standardized procedures. To establish these benchmarks, data from HF183/BacR287 and HumM2 human-associated qPCR methods were generated across 14 laboratories. Each laboratory followed a standardized protocol utilizing the same lot of reference DNA materials, DNA isolation kits, amplification reagents, and test samples to generate comparable data. After removal of outliers, a nested analysis of variance (ANOVA) was used to establish proficiency metrics that include lab-to-lab, replicate testing within a lab, and random error for amplification inhibition and sample processing controls. Other data acceptance measurements included extraneous DNA contamination assessments (no-template and extraction blank controls) and calibration model performance (correlation coefficient, amplification efficiency, and lower limit of quantification). To demonstrate the implementation of the proposed standardized protocols and data acceptance criteria, comparable data from two additional laboratories were reviewed. The data acceptance criteria
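
    The calibration-model checks named above are straightforward to compute; the sketch below fits Cq against log10(concentration) and reports the correlation and the amplification efficiency E = 10^(-1/slope) - 1. The data points and the cited acceptance range are illustrative, not the study's criteria.

      import numpy as np

      log10_conc = np.array([1, 2, 3, 4, 5], dtype=float)  # log10 copies per reaction
      cq = np.array([36.1, 32.8, 29.4, 26.1, 22.8])        # measured quantification cycles

      slope, intercept = np.polyfit(log10_conc, cq, 1)
      predicted = slope * log10_conc + intercept
      r_squared = 1 - np.sum((cq - predicted) ** 2) / np.sum((cq - cq.mean()) ** 2)
      efficiency = 10 ** (-1 / slope) - 1

      print(f"slope {slope:.2f}, R^2 {r_squared:.4f}, efficiency {efficiency:.1%}")
      # A typical acceptance window might bound efficiency near 90-110%.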

  9. Data Acceptance Criteria for Standardized Human-Associated Fecal Source Identification Quantitative Real-Time PCR Methods

    PubMed Central

    Kelty, Catherine A.; Oshiro, Robin; Haugland, Richard A.; Madi, Tania; Brooks, Lauren; Field, Katharine G.; Sivaganesan, Mano

    2016-01-01

    There is growing interest in the application of human-associated fecal source identification quantitative real-time PCR (qPCR) technologies for water quality management. The transition from a research tool to a standardized protocol requires a high degree of confidence in data quality across laboratories. Data quality is typically determined through a series of specifications that ensure good experimental practice and the absence of bias in the results due to DNA isolation and amplification interferences. However, there is currently a lack of consensus on how best to evaluate and interpret human fecal source identification qPCR experiments. This is, in part, due to the lack of standardized protocols and information on interlaboratory variability under conditions for data acceptance. The aim of this study is to provide users and reviewers with a complete series of conditions for data acceptance derived from a multiple laboratory data set using standardized procedures. To establish these benchmarks, data from HF183/BacR287 and HumM2 human-associated qPCR methods were generated across 14 laboratories. Each laboratory followed a standardized protocol utilizing the same lot of reference DNA materials, DNA isolation kits, amplification reagents, and test samples to generate comparable data. After removal of outliers, a nested analysis of variance (ANOVA) was used to establish proficiency metrics that include lab-to-lab, replicate testing within a lab, and random error for amplification inhibition and sample processing controls. Other data acceptance measurements included extraneous DNA contamination assessments (no-template and extraction blank controls) and calibration model performance (correlation coefficient, amplification efficiency, and lower limit of quantification). To demonstrate the implementation of the proposed standardized protocols and data acceptance criteria, comparable data from two additional laboratories were reviewed. The data acceptance criteria

  10. Mindfulness, Acceptance and Catastrophizing in Chronic Pain

    PubMed Central

    de Boer, Maaike J.; Steinhagen, Hannemike E.; Versteegen, Gerbrig J.; Struys, Michel M. R. F.; Sanderman, Robbert

    2014-01-01

    Objectives Catastrophizing is often the primary target of the cognitive-behavioral treatment of chronic pain. Recent literature on acceptance and commitment therapy (ACT) suggests an important role in the pain experience for the concepts mindfulness and acceptance. The aim of this study is to examine the influence of mindfulness and general psychological acceptance on pain-related catastrophizing in patients with chronic pain. Methods A cross-sectional survey was conducted, including 87 chronic pain patients from an academic outpatient pain center. Results The results show that general psychological acceptance (measured with the AAQ-II) is a strong predictor of pain-related catastrophizing, independent of gender, age and pain intensity. Mindfulness (measured with the MAAS) did not predict levels of pain-related catastrophizing. Discussion Acceptance of psychological experiences outside of pain itself is related to catastrophizing. Thus, acceptance seems to play a role in the pain experience and should be part of the treatment of chronic pain. The focus of the ACT treatment of chronic pain does not necessarily have to be on acceptance of pain per se, but may be aimed at acceptance of unwanted experiences in general. Mindfulness in the sense of “acting with awareness” is however not related to catastrophizing. Based on our research findings in comparisons with those of other authors, we recommend a broader conceptualization of mindfulness and the use of a multifaceted questionnaire for mindfulness instead of the unidimensional MAAS. PMID:24489915

  11. Method and system for benchmarking computers

    DOEpatents

    Gustafson, John L.

    1993-09-14

    A testing system and method for benchmarking computer systems. The system includes a store containing a scalable set of tasks to be performed to produce a solution in ever-increasing degrees of resolution as a larger number of the tasks are performed. A timing and control module allots to each computer a fixed benchmarking interval in which to perform the stored tasks. Means are provided for determining, after completion of the benchmarking interval, the degree of progress through the scalable set of tasks and for producing a benchmarking rating relating to the degree of progress for each computer.
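
    The claimed method is easy to sketch: give the machine a fixed time budget, let it work through a scalable task set, and rate it by how far it gets. The stand-in task below (refining a pi estimate term by term) is an assumption chosen for simplicity, not the patent's task set.

      import time

      TIME_BUDGET_S = 1.0

      terms = 0
      estimate = 0.0
      sign = 1.0
      deadline = time.perf_counter() + TIME_BUDGET_S
      while time.perf_counter() < deadline:
          estimate += sign * 4.0 / (2 * terms + 1)  # one more term of the Leibniz series
          sign = -sign
          terms += 1

      # The benchmarking rating is the degree of progress within the fixed interval.
      print(f"completed {terms} terms in {TIME_BUDGET_S} s; pi ~ {estimate:.6f}")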

  12. Benchmarks for acute stroke care delivery

    PubMed Central

    Hall, Ruth E.; Khan, Ferhana; Bayley, Mark T.; Asllani, Eriola; Lindsay, Patrice; Hill, Michael D.; O'Callaghan, Christina; Silver, Frank L.; Kapral, Moira K.

    2013-01-01

    Objective Despite widespread interest in many jurisdictions in monitoring and improving the quality of stroke care delivery, benchmarks for most stroke performance indicators have not been established. The objective of this study was to develop data-derived benchmarks for acute stroke quality indicators. Design Nine key acute stroke quality indicators were selected from the Canadian Stroke Best Practice Performance Measures Manual. Participants A population-based retrospective sample of patients discharged from 142 hospitals in Ontario, Canada, between 1 April 2008 and 31 March 2009 (N = 3191) was used to calculate hospital rates of performance and benchmarks. Intervention The Achievable Benchmark of Care (ABC™) methodology was used to create benchmarks based on the performance of the upper 15% of patients in the top-performing hospitals. Main Outcome Measures Benchmarks were calculated for rates of neuroimaging, carotid imaging, stroke unit admission, dysphagia screening and administration of stroke-related medications. Results The following benchmarks were derived: neuroimaging within 24 h, 98%; admission to a stroke unit, 77%; thrombolysis among patients arriving within 2.5 h, 59%; carotid imaging, 93%; dysphagia screening, 88%; antithrombotic therapy, 98%; anticoagulation for atrial fibrillation, 94%; antihypertensive therapy, 92% and lipid-lowering therapy, 77%. ABC™ acute stroke care benchmarks achieve or exceed the consensus-based targets required by Accreditation Canada, with the exception of dysphagia screening. Conclusions Benchmarks for nine hospital-based acute stroke care quality indicators have been established. These can be used in the development of standards for quality improvement initiatives. PMID:24141011
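
    A hedged sketch of the ABC™ idea as described above: rank hospitals by performance and pool the top performers covering roughly 15% of patients to form the benchmark rate. Refinements in the full method, such as the adjusted performance fraction used to stabilize small denominators, are omitted, and the counts are invented.

      def abc_benchmark(hospitals, top_fraction=0.15):
          """hospitals: list of (numerator_events, eligible_patients) per hospital."""
          total_patients = sum(n for _, n in hospitals)
          ranked = sorted(hospitals, key=lambda h: h[0] / h[1], reverse=True)

          pooled_events = pooled_patients = 0
          for events, patients in ranked:
              pooled_events += events
              pooled_patients += patients
              if pooled_patients >= top_fraction * total_patients:
                  break
          return pooled_events / pooled_patients

      data = [(90, 100), (160, 200), (40, 80), (300, 500), (50, 120)]
      print(f"ABC benchmark rate: {abc_benchmark(data):.1%}")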

  13. The Zoo, Benchmarks & You: How To Reach the Oregon State Benchmarks with Zoo Resources.

    ERIC Educational Resources Information Center

    2002

    This document aligns Oregon state educational benchmarks and standards with Oregon Zoo resources. Benchmark areas examined include English, mathematics, science, social studies, and career and life roles. Brief descriptions of the programs offered by the zoo are presented. (SOE)

  14. Benchmark Evaluation of Fuel Effect and Material Worth Measurements for a Beryllium-Reflected Space Reactor Mockup

    SciTech Connect

    Marshall, Margaret A.; Bess, John D.

    2015-02-01

    The critical configurations of the small, compact critical assembly (SCCA) experiments performed at the Oak Ridge Critical Experiments Facility (ORCEF) in 1962-1965 have been evaluated as acceptable benchmark experiments for inclusion in the International Handbook of Evaluated Criticality Safety Benchmark Experiments. The initial intent of these experiments was to support the design of the Medium Power Reactor Experiment (MPRE) program, whose purpose was to study “power plants for the production of electrical power in space vehicles.” The third configuration in this series of experiments was a beryllium-reflected assembly of stainless-steel-clad, highly enriched uranium (HEU)-O2 fuel, a mockup of a potassium-cooled space power reactor. Reactivity measurements, cadmium ratio spectral measurements, and fission rate measurements were performed through the core and top reflector. Fuel effect worth measurements and neutron moderating and absorbing material worths were also measured in the assembly fuel region. The cadmium ratios, fission rate, and worth measurements were evaluated for inclusion in the International Handbook of Evaluated Criticality Safety Benchmark Experiments. The fuel tube effect and neutron moderating and absorbing material worth measurements are the focus of this paper. Additionally, a measurement of the worth of potassium filling the core region was performed but has not yet been evaluated. Pellets of 93.15 wt.% enriched uranium dioxide (UO2) were stacked in 30.48 cm tall stainless steel fuel tubes (0.3 cm tall end caps). Each fuel tube had 26 pellets with a total mass of 295.8 g UO2 per tube. 253 tubes were arranged in a 1.506-cm triangular lattice. An additional 7-tube cluster critical configuration was also measured but not used for any physics measurements. The core was surrounded on all sides by a beryllium reflector. The fuel effect worths were measured by removing fuel tubes at various radii. An accident scenario

  15. Ground truth and benchmarks for performance evaluation

    NASA Astrophysics Data System (ADS)

    Takeuchi, Ayako; Shneier, Michael; Hong, Tsai Hong; Chang, Tommy; Scrapper, Christopher; Cheok, Geraldine S.

    2003-09-01

    Progress in algorithm development and transfer of results to practical applications such as military robotics requires the setup of standard tasks and standard qualitative and quantitative measurements for performance evaluation and validation. Although the evaluation and validation of algorithms have been discussed for over a decade, the research community still faces a lack of well-defined and standardized methodology. The range of fundamental problems includes a lack of quantifiable measures of performance, a lack of data from state-of-the-art sensors in calibrated real-world environments, and a lack of facilities for conducting realistic experiments. In this research, we propose three methods for creating ground truth databases and benchmarks using multiple sensors. The databases and benchmarks will provide researchers with high quality data from suites of sensors operating in complex environments representing real problems of great relevance to the development of autonomous driving systems. At NIST, we have prototyped a High Mobility Multi-purpose Wheeled Vehicle (HMMWV) system with a suite of sensors including a Riegl ladar, GDRS ladar, stereo CCD, several color cameras, Global Positioning System (GPS), Inertial Navigation System (INS), pan/tilt encoders, and odometry. All sensors are calibrated with respect to each other in space and time. This allows a database of features and terrain elevation to be built. Ground truth for each sensor can then be extracted from the database. The main goal of this research is to provide ground truth databases for researchers and engineers to evaluate algorithms for effectiveness, efficiency, reliability, and robustness, thus advancing the development of algorithms.

  16. Sieve of Eratosthenes benchmarks for the Z8 FORTH microcontroller

    SciTech Connect

    Edwards, R.

    1989-02-01

    This report presents benchmarks for the Z8 FORTH microcontroller system that ORNL uses extensively in proving concepts and developing prototype test equipment for the Smart House Project. The results are based on the sieve of Eratosthenes algorithm, a calculation used extensively to rate computer systems and programming languages. Three benchmark refinements are presented, each showing how the execution speed of a FORTH program can be improved by use of a particular optimization technique. The last version of the FORTH benchmark shows that optimization is worth the effort: it executes 20 times faster than the Gilbreaths' widely published FORTH benchmark program. The National Association of Home Builders Smart House Project is a cooperative research and development effort being undertaken by American home builders and a number of major corporations serving the home building industry. The major goal of the project is to help the participating organizations incorporate advanced technology in communications, energy distribution, and appliance control products for American homes. This information is provided to help project participants use the Z8 FORTH prototyping microcontroller in developing Smart House concepts and equipment. The discussion is technical in nature and assumes some experience with microcontroller devices and the techniques used to develop software for them. 7 refs., 5 tabs.
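
    As context for the refinements above, the calculation being timed is the classic sieve of Eratosthenes. A minimal Python sketch of the algorithm follows (illustration only: the report's optimizations are FORTH-specific, and 8190 is simply the array size commonly associated with the Gilbreaths' benchmark):

      def sieve(limit):
          """Return all primes <= limit by crossing out composites."""
          is_prime = [True] * (limit + 1)
          is_prime[0] = is_prime[1] = False
          p = 2
          while p * p <= limit:
              if is_prime[p]:
                  # Start at p*p: smaller multiples were already crossed
                  # out by smaller primes.
                  for multiple in range(p * p, limit + 1, p):
                      is_prime[multiple] = False
              p += 1
          return [n for n, flag in enumerate(is_prime) if flag]

      print(len(sieve(8190)))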

  17. Acceptability of BCG vaccination.

    PubMed

    Mande, R

    1977-01-01

    The acceptability of BCG vaccination varies a great deal according to the country and to the period when the vaccine is given. The incidence of complications does not always have a direct influence on this acceptability, which depends in very large part on the risk of tuberculosis in a given country at a given time.

  18. Acceptability of blood and blood substitutes.

    PubMed

    Ferguson, E; Prowse, C; Townsend, E; Spence, A; Hilten, J A van; Lowe, K

    2008-03-01

    Alternatives to donor blood have been developed in part to meet increasing demand. However, new biotechnologies are often associated with increased perceptions of risk and low acceptance. This paper reviews the development of alternatives and presents data, from a field-based experiment in the UK and Holland, on the risks and acceptance of donor blood and alternatives (chemical, genetically modified, and bovine). The UK groups perceived all substitutes as riskier than the Dutch groups did. There is a negative association between perceived risk and acceptability. Solutions for increasing acceptance are discussed in terms of implicit attitudes, product naming, and emotional responses.

  19. Benchmarking Outcomes in the Critically Injured Burn Patient

    PubMed Central

    Klein, Matthew B.; Goverman, Jeremy; Hayden, Douglas L.; Fagan, Shawn P.; McDonald-Smith, Grace P.; Alexander, Andrew K.; Gamelli, Richard L.; Gibran, Nicole S.; Finnerty, Celeste C.; Jeschke, Marc G.; Arnoldo, Brett; Wispelwey, Bram; Mindrinos, Michael N.; Xiao, Wenzhong; Honari, Shari E.; Mason, Philip H.; Schoenfeld, David A.; Herndon, David N.; Tompkins, Ronald G.

    2014-01-01

    Objective To determine and compare outcomes with accepted benchmarks in burn care at six academic burn centers. Background Since the 1960s, U.S. morbidity and mortality rates have declined tremendously for burn patients, likely related to improvements in surgical and critical care treatment. We describe the baseline patient characteristics and well-defined outcomes for major burn injuries. Methods We followed 300 adults and 241 children from 2003–2009 through hospitalization using standard operating procedures developed at study onset. We created an extensive database on patient and injury characteristics, anatomic and physiological derangement, clinical treatment, and outcomes. These data were compared with existing benchmarks in burn care. Results Study patients were critically injured, as demonstrated by mean %TBSA (41.2±18.3 for adults and 57.8±18.2 for children) and the presence of inhalation injury in 38% of the adults and 54.8% of the children. Mortality in adults was 14.1% for those less than 55 years old and 38.5% for those age ≥55 years. Mortality in patients less than 17 years old was 7.9%. Overall, the multiple organ failure rate was 27%. When controlling for age and %TBSA, presence of inhalation injury was not significant. Conclusions This study provides the current benchmark for major burn patients. Mortality rates, notwithstanding significant %TBSA and presence of inhalation injury, have significantly declined compared to previous benchmarks. Modern-day surgical and medically intensive management has markedly improved to the point where we can expect patients less than 55 years old with severe burn injuries and inhalation injury to survive these devastating conditions. PMID:24722222

  20. ``Observation, Experiment, and the Future of Physics'' John G. King's acceptance speech for the 2000 Oersted Medal presented by the American Association of Physics Teachers, 18 January 2000

    NASA Astrophysics Data System (ADS)

    King, John G.

    2001-01-01

    Looking at our built world, most physicists see order where many others see magic. This view of order should be available to all, and physics would flourish better in an appreciative society. Despite the remarkable developments in the teaching of physics in the last half century, too many people, whether they've had physics courses or not, don't have an inkling of the power and value of our subject, whose importance ranges from the practical to the psychological. We need to supplement people's experiences in ways that are applicable to different groups, from physics majors to people without formal education. I will describe and explain an ambitious program to stimulate scientific, engineering, and technological interest and understanding through direct observation of a wide range of phenomena and experimentation with them. For the very young: toys, playgrounds, kits, projects. For older students: indoor showcases, projects, and courses taught in intensive form. For all ages: more instructive everyday surroundings with outdoor showcases and large demonstrations.

  1. Benchmarking: A tool to enhance performance

    SciTech Connect

    Munro, J.F.; Kristal, J.; Thompson, G.; Johnson, T.

    1996-12-31

    The Office of Environmental Management is bringing Headquarters and the Field together to implement process improvements throughout the Complex through a systematic process of organizational learning called benchmarking. Simply stated, benchmarking is a process of continuously comparing and measuring practices, processes, or methodologies with those of other private and public organizations. The EM benchmarking program, which began as the result of a recommendation from Xerox Corporation, is building trust and removing barriers to performance enhancement across the DOE organization. The EM benchmarking program is designed to be field-centered, with Headquarters providing facilitative and integrative functions on an "as needed" basis. One of the main goals of the program is to assist Field Offices and their associated M&O/M&I contractors in developing the capabilities to do benchmarking for themselves. In this regard, a central precept is that in order to realize tangible performance benefits, program managers and staff -- the ones closest to the work -- must take ownership of the studies. This avoids the "check the box" mentality associated with some third-party studies. This workshop will provide participants with a basic understanding of why the EM benchmarking team was developed and the nature and scope of its mission. Participants will also begin to understand the types of study levels and the particular methodology the EM benchmarking team is using to conduct studies. The EM benchmarking team will also encourage discussion on ways that DOE (both Headquarters and the Field) can team with its M&O/M&I contractors to conduct additional benchmarking studies. This "introduction to benchmarking" is intended to create a desire to know more and a greater appreciation of how benchmarking processes could be creatively employed to enhance performance.

  2. Benchmarking Multipacting Simulations in VORPAL

    SciTech Connect

    C. Nieter, C. Roark, P. Stoltz, K. Tian

    2009-05-01

    We will present the results of benchmarking simulations run to test the ability of VORPAL to model multipacting processes in Superconducting Radio Frequency structures. VORPAL is an electromagnetic (FDTD) particle-in-cell simulation code originally developed for applications in plasma and beam physics. The addition of conformal boundaries and algorithms for secondary electron emission allow VORPAL to be applied to multipacting processes. We start with simulations of multipacting between parallel plates where there are well understood theoretical predictions for the frequency bands where multipacting is expected to occur. We reproduce the predicted multipacting bands and demonstrate departures from the theoretical predictions when a more sophisticated model of secondary emission is used. Simulations of existing cavity structures developed at Jefferson National Laboratories will also be presented where we compare results from VORPAL to experimental data.

  3. Evaluation of Plutonium Hemisphere Critical Experiments Partially Reflected by Steel and Oil

    SciTech Connect

    John D. Bess

    2012-01-01

    A series of 15 critical experiments performed at the Rocky Flats Critical Mass Laboratory in the late 1960s were evaluated and then determined to represent acceptable benchmark experiments for the validation of calculational methods. This series of experiments was part of a larger set of experiments performed to evaluate operational safety margins at the Rocky Flats Plant. The experiments consisted of bare plutonium metal hemishells reflected by steel hemishells of increasing thickness and motor oil. The hemishell assembly was suspended within dual aluminum tanks. Criticality was achieved by pumping oil into the tanks such that effectively infinite reflection was achieved in all directions except directly above the assembly; then the critical oil height was recorded. The results of these experiments had been initially ignored because early computational methods had been inadequate to analyze partially-reflected configurations. The dominant uncertainties include the uncertainty in the average plutonium density and the composition of materials in the gaps between the plutonium hemishells. Simple and detailed benchmark models were developed. Eigenvalue calculations using MCNP5 and ENDF/B-VII.0 were within 2σ of the benchmark values. This benchmark evaluation has been added to the ICSBEP Handbook.

  4. HyspIRI Low Latency Concept and Benchmarks

    NASA Technical Reports Server (NTRS)

    Mandl, Dan

    2010-01-01

    Topics include HyspIRI low latency data ops concept, HyspIRI data flow, ongoing efforts, experiment with Web Coverage Processing Service (WCPS) approach to injecting new algorithms into SensorWeb, low fidelity HyspIRI IPM testbed, compute cloud testbed, open cloud testbed environment, Global Lambda Integrated Facility (GLIF) and OCC collaboration with Starlight, delay tolerant network (DTN) protocol benchmarking, and EO-1 configuration for preliminary DTN prototype.

  5. HPC Analytics Support. Requirements for Uncertainty Quantification Benchmarks

    SciTech Connect

    Paulson, Patrick R.; Purohit, Sumit; Rodriguez, Luke R.

    2015-05-01

    This report outlines techniques for extending benchmark generation products so they support uncertainty quantification by benchmarked systems. We describe how uncertainty quantification requirements can be presented to candidate analytical tools supporting SPARQL. We describe benchmark data sets for evaluating uncertainty quantification, as well as an approach for using our benchmark generator to produce such data sets.

  6. Benchmarking Learning and Teaching: Developing a Method

    ERIC Educational Resources Information Center

    Henderson-Smart, Cheryl; Winning, Tracey; Gerzina, Tania; King, Shalinie; Hyde, Sarah

    2006-01-01

    Purpose: To develop a method for benchmarking teaching and learning in response to an institutional need to validate a new program in Dentistry at the University of Sydney, Australia. Design/methodology/approach: After a collaborative partner, University of Adelaide, was identified, the areas of teaching and learning to be benchmarked, PBL…

  7. Benchmark Assessment for Improved Learning. AACC Report

    ERIC Educational Resources Information Center

    Herman, Joan L.; Osmundson, Ellen; Dietel, Ronald

    2010-01-01

    This report describes the purposes of benchmark assessments and provides recommendations for selecting and using benchmark assessments--addressing validity, alignment, reliability, fairness and bias, accessibility, instructional sensitivity, utility, and reporting issues. We also present recommendations on building capacity to support schools'…

  8. Beyond Benchmarking: Value-Adding Metrics

    ERIC Educational Resources Information Center

    Fitz-enz, Jac

    2007-01-01

    HR metrics has grown up a bit over the past two decades, moving away from simple benchmarking practices and toward a more inclusive approach to measuring institutional performance and progress. In this article, the acknowledged "father" of human capital performance benchmarking provides an overview of several aspects of today's HR metrics…

  9. The He + H2+ --> HeH+ + H reaction: Ab initio studies of the potential energy surface, benchmark time-independent quantum dynamics in an extended energy range and comparison with experiments

    NASA Astrophysics Data System (ADS)

    De Fazio, Dario; de Castro-Vitores, Miguel; Aguado, Alfredo; Aquilanti, Vincenzo; Cavalli, Simonetta

    2012-12-01

    In this work we critically revise several aspects of previous ab initio quantum chemistry studies [P. Palmieri et al., Mol. Phys. 98, 1835 (2000), 10.1080/00268970009483387; C. N. Ramachandran et al., Chem. Phys. Lett. 469, 26 (2009), 10.1016/j.cplett.2008.12.035] of the HeH2+ system. New diatomic curves for the H2+ and HeH+ molecular ions, which provide vibrational frequencies at a near-spectroscopic level of accuracy, have been generated to test the quality of the diatomic terms employed in the previous analytical fittings. The reliability of the global potential energy surfaces has also been tested by performing benchmark quantum scattering calculations within the time-independent approach in an extended interval of energies. In particular, the total integral cross sections have been calculated in the total collision energy range 0.955-2.400 eV for the scattering of the He atom by the ortho- and para-hydrogen molecular ion. The energy profiles of the total integral cross sections for selected vibro-rotational states of H2+ (v = 0, ..., 5 and j = 1, ..., 7) show a strong rotational enhancement for the lower vibrational states which becomes weaker as the vibrational quantum number increases. A comparison with several available experimental data sets is presented and discussed.

  10. Benchmark 1 - Nonlinear strain path forming limit of a reverse draw: Part A: Benchmark description

    NASA Astrophysics Data System (ADS)

    Benchmark-1 Committee

    2013-12-01

    The objective of this benchmark is to demonstrate the predictability of forming limits under nonlinear strain paths for a draw panel with a non-axisymmetric reversed dome-shape at the center. It is important to recognize that treating strain forming limits as though they were static during the deformation process may not lead to successful predictions of this benchmark, due to the nonlinearity of the strain paths involved in this benchmark. The benchmark tool is designed to enable a two-stage draw/reverse draw continuous forming process. Three typical sheet materials, AA5182-O Aluminum, and DP600 and TRIP780 Steels, are selected for this benchmark study.

  11. Acceptance procedures: Microfilm printer

    NASA Technical Reports Server (NTRS)

    Lockwood, H. E.

    1973-01-01

    Acceptance tests were made for a special order automatic additive color microfilm printer. Tests include film capacity, film transport, resolution, illumination uniformity, exposure range checks, and color cuing considerations.

  12. Two-dimensional benchmark calculations for PNL-30 through PNL-35

    SciTech Connect

    Mosteller, R.D.

    1997-12-01

    Interest in critical experiments with lattices of mixed-oxide (MOX) fuel pins has been revived by the possibility that light water reactors will be used for the disposition of weapons-grade plutonium. A series of six experiments with MOX lattices, designated PNL-30 through PNL-35, was performed at Pacific Northwest Laboratories in 1975 and 1976, and a set of benchmark specifications for these experiments subsequently was adopted by the Cross Section Evaluation Working Group (CSEWG). Although there appear to be some problems with these experiments, they remain the only CSEWG benchmarks for MOX lattices.

  13. The IAEA Coordinated Research Program on HTGR Reactor Physics, Thermal-hydraulics and Depletion Uncertainty Analysis: Description of the Benchmark Test Cases and Phases

    SciTech Connect

    Frederik Reitsma; Gerhard Strydom; Bismark Tyobeka; Kostadin Ivanov

    2012-10-01

    The continued development of High Temperature Gas Cooled Reactors (HTGRs) requires verification of design and safety features with reliable high-fidelity physics models and robust, efficient, and accurate codes. The uncertainties in the HTR analysis tools are today typically assessed with sensitivity analysis, and then a few important input uncertainties (typically based on a PIRT process) are varied in the analysis to find a spread in the parameter of importance. However, one wishes to apply a more fundamental approach to determine the predictive capability and accuracies of coupled neutronics/thermal-hydraulics and depletion simulations used for reactor design and safety assessment. Today there is broader acceptance of the use of uncertainty analysis, even in safety studies, and in some cases regulators have accepted it as a replacement for the traditional conservative analysis. Finally, there is also a renewed focus on supplying reliable covariance data (nuclear data uncertainties) that can then be used in uncertainty methods. Uncertainty and sensitivity studies are therefore becoming an essential component of any significant effort in data and simulation improvement. In order to address uncertainty in analysis and methods in the HTGR community, the IAEA launched a Coordinated Research Project (CRP) on HTGR Uncertainty Analysis in Modelling early in 2012. The project is built on the experience of the OECD/NEA Light Water Reactor (LWR) Uncertainty Analysis in Best-Estimate Modelling (UAM) benchmark activity, but focuses specifically on the peculiarities of HTGR designs and their simulation requirements. Two benchmark problems were defined: the prismatic type design is represented by the MHTGR-350 design from General Atomics (GA), while a 250 MW modular pebble bed design, similar to the INET (China) and indirect-cycle PBMR (South Africa) designs, is also included. In the paper more detail on the benchmark cases, the different specific phases and tasks and the latest

  14. Metrics and Benchmarks for Visualization

    NASA Technical Reports Server (NTRS)

    Uselton, Samuel P.; Lasinski, T. A. (Technical Monitor)

    1995-01-01

    What is a "good" visualization? How can the quality of a visualization be measured? How can one tell whether one visualization is "better" than another? I claim that the true quality of a visualization can only be measured in the context of a particular purpose. The same image generated from the same data may be excellent for one purpose and abysmal for another. A good measure of visualization quality will correspond to the performance of users in accomplishing the intended purpose, so the "gold standard" is user testing. As a user of visualization software (or at least a consultant to such users) I don't expect visualization software to have been tested in this way for every possible use. In fact, scientific visualization (as distinct from more "production oriented" uses of visualization) will continually encounter new data, new questions, and new purposes; user testing can never keep up. Users need software they can trust, and advice on appropriate visualizations for particular purposes. Considering the following four processes, and their impact on visualization trustworthiness, reveals important work needed to create worthwhile metrics and benchmarks for visualization. These four processes are (1) complete system testing (user-in-loop), (2) software testing, (3) software design, and (4) information dissemination. Additional information is contained in the original extended abstract.

  15. Benchmarking Measures of Network Influence

    PubMed Central

    Bramson, Aaron; Vandermarliere, Benjamin

    2016-01-01

    Identifying key agents for the transmission of diseases (ideas, technology, etc.) across social networks has predominantly relied on measures of centrality on a static base network or a temporally flattened graph of agent interactions. Various measures have been proposed as the best trackers of influence, such as degree centrality, betweenness, and k-shell, depending on the structure of the connectivity. We consider SIR and SIS propagation dynamics on a temporally-extruded network of observed interactions and measure the conditional marginal spread as the change in the magnitude of the infection given the removal of each agent at each time: its temporal knockout (TKO) score. We argue that this TKO score is an effective benchmark measure for evaluating the accuracy of other, often more practical, measures of influence. We find that none of the network measures applied to the induced flat graphs are accurate predictors of network propagation influence on the systems studied; however, temporal networks and the TKO measure provide the requisite targets for the search for effective predictive measures. PMID:27670635
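
    As a toy illustration of the TKO idea (not the paper's implementation), the sketch below simulates spread over a time-ordered list of contacts and scores an agent by the drop in expected outbreak size when that agent is knocked out. It uses a simplified SI contagion rather than the paper's full SIR/SIS dynamics, and all names and parameters are made up:

      import random

      def expected_spread(events, seed, beta=0.5, removed=frozenset(), trials=500):
          # events: time-ordered undirected contacts (t, u, v).
          # Returns the mean number of agents ever infected, with the
          # agents in `removed` knocked out of the network.
          total = 0
          for _ in range(trials):
              infected = set() if seed in removed else {seed}
              for _, u, v in events:
                  if u in removed or v in removed:
                      continue
                  if (u in infected) != (v in infected) and random.random() < beta:
                      infected.update((u, v))
              total += len(infected)
          return total / trials

      def temporal_knockout(events, seed, agent):
          # TKO score: how much the expected spread shrinks when `agent`
          # is removed from the temporal network.
          return (expected_spread(events, seed)
                  - expected_spread(events, seed, removed=frozenset({agent})))

      contacts = [(1, "a", "b"), (2, "b", "c"), (3, "c", "d"), (4, "b", "d")]
      print(temporal_knockout(contacts, seed="a", agent="b"))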

  16. Benchmark problems in computational aeroacoustics

    NASA Technical Reports Server (NTRS)

    Porter-Locklear, Freda

    1994-01-01

    A recent directive at NASA Langley is aimed at numerically predicting principal noise sources. During my summer stay, I worked with the high-order ENO code, developed by Dr. Harold Atkins, for solving the unsteady compressible Navier-Stokes equations as it applies to computational aeroacoustics (CAA). A CAA workshop, composed of six categories of benchmark problems, has been organized to test various numerical properties of code. My task was to determine the robustness of Atkins' code for these test problems. In one category, we tested the nonlinear wave propagation of the code for the one-dimensional Euler equations, with initial pressure, density, and velocity conditions. Using freestream boundary conditions, our results were plausible. In another category, we solved the linearized two-dimensional Euler equations to test the effectiveness of radiation boundary conditions. Here we utilized MAPLE to compute eigenvalues and eigenvectors of the Jacobian given the variable and flux vectors. We experienced a minor problem with inflow and outflow boundary conditions. Next, we solved the quasi-one-dimensional unsteady flow equations with an incoming acoustic wave of amplitude 10(exp -6). The small-amplitude sound wave was incident on a convergent-divergent nozzle. After finding a steady-state solution and then marching forward, our solution indicated that after 30 periods the acoustic wave had dissipated (a period is the time required for a sound wave to travel from one end of the nozzle to the other).
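
    For readers unfamiliar with the first test category, the sketch below integrates the one-dimensional Euler equations with a first-order Lax-Friedrichs scheme -- far simpler than the high-order ENO code used in the work, but enough to show the nonlinear wave-propagation setup: conserved variables, flux function, CFL-limited time step, and copy-style freestream boundaries. The initial pulse and all parameters are illustrative:

      import numpy as np

      gamma = 1.4

      def flux(U):
          rho, mom, E = U
          u = mom / rho
          p = (gamma - 1.0) * (E - 0.5 * rho * u**2)
          return np.array([mom, mom * u + p, u * (E + p)])

      # Initial condition: a smooth density/pressure pulse at rest.
      nx, L = 400, 1.0
      x = np.linspace(0.0, L, nx)
      rho = 1.0 + 0.2 * np.exp(-((x - 0.5) / 0.05) ** 2)
      u = np.zeros(nx)
      p = rho ** gamma                      # isentropic pulse
      U = np.array([rho, rho * u, p / (gamma - 1.0) + 0.5 * rho * u**2])

      t, t_end, cfl, dx = 0.0, 0.2, 0.4, L / (nx - 1)
      while t < t_end:
          a = np.sqrt(gamma * p / rho)      # local sound speed
          dt = cfl * dx / np.max(np.abs(u) + a)
          F = flux(U)
          # Lax-Friedrichs update on interior points; copy (freestream)
          # boundary conditions at both ends.
          U[:, 1:-1] = (0.5 * (U[:, :-2] + U[:, 2:])
                        - 0.5 * dt / dx * (F[:, 2:] - F[:, :-2]))
          U[:, 0], U[:, -1] = U[:, 1], U[:, -2]
          rho, u = U[0], U[1] / U[0]
          p = (gamma - 1.0) * (U[2] - 0.5 * rho * u**2)
          t += dt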

  17. Benchmarking Measures of Network Influence

    NASA Astrophysics Data System (ADS)

    Bramson, Aaron; Vandermarliere, Benjamin

    2016-09-01

    Identifying key agents for the transmission of diseases (ideas, technology, etc.) across social networks has predominantly relied on measures of centrality on a static base network or a temporally flattened graph of agent interactions. Various measures have been proposed as the best trackers of influence, such as degree centrality, betweenness, and k-shell, depending on the structure of the connectivity. We consider SIR and SIS propagation dynamics on a temporally-extruded network of observed interactions and measure the conditional marginal spread as the change in the magnitude of the infection given the removal of each agent at each time: its temporal knockout (TKO) score. We argue that this TKO score is an effective benchmark measure for evaluating the accuracy of other, often more practical, measures of influence. We find that none of the network measures applied to the induced flat graphs are accurate predictors of network propagation influence on the systems studied; however, temporal networks and the TKO measure provide the requisite targets for the search for effective predictive measures.

  18. Benchmarking for Bayesian Reinforcement Learning

    PubMed Central

    Ernst, Damien; Couëtoux, Adrien

    2016-01-01

    In the Bayesian Reinforcement Learning (BRL) setting, agents try to maximise the collected rewards while interacting with their environment, using some prior knowledge that is accessed beforehand. Many BRL algorithms have already been proposed, but the benchmarks used to compare them are only relevant for specific cases. The paper addresses this problem and provides a new BRL comparison methodology along with the corresponding open source library. In this methodology, a comparison criterion is defined that measures the performance of algorithms on large sets of Markov Decision Processes (MDPs) drawn from some probability distributions. In order to enable the comparison of non-anytime algorithms, our methodology also includes a detailed analysis of the computation time requirement of each algorithm. Our library is released with all source code and documentation: it includes three test problems, each of which has two different prior distributions, and seven state-of-the-art RL algorithms. Finally, our library is illustrated by comparing all the available algorithms, and the results are discussed. PMID:27304891
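
    A minimal sketch of the comparison criterion described above, assuming a toy two-state MDP prior and treating an "algorithm" as a fixed policy; the library's actual problems, priors, and algorithms are far richer, and every name and number below is hypothetical:

      import random, time

      def draw_mdp(rng):
          # Toy prior over MDPs: one transition parameter, two action rewards.
          return {"p": rng.random(), "r": [rng.random(), rng.random()]}

      def run_episode(mdp, policy, rng, horizon=50):
          state, total = 0, 0.0
          for _ in range(horizon):
              action = policy(state)
              total += mdp["r"][action]
              state = 1 if rng.random() < mdp["p"] else 0
          return total

      def score(algorithm, prior_draws=1000, seed=0):
          # BRL-style criterion: mean return over many MDPs drawn from the
          # prior, reported together with the computation time.
          rng = random.Random(seed)
          start = time.perf_counter()
          returns = [run_episode(draw_mdp(rng), algorithm, rng)
                     for _ in range(prior_draws)]
          return sum(returns) / prior_draws, time.perf_counter() - start

      print(score(lambda state: 0))   # trivial baseline "algorithm"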

  19. Benchmark problems in computational aeroacoustics

    NASA Astrophysics Data System (ADS)

    Porter-Locklear, Freda

    1994-12-01

    A recent directive at NASA Langley is aimed at numerically predicting principal noise sources. During my summer stay, I worked with the high-order ENO code, developed by Dr. Harold Atkins, for solving the unsteady compressible Navier-Stokes equations as it applies to computational aeroacoustics (CAA). A CAA workshop, composed of six categories of benchmark problems, has been organized to test various numerical properties of code. My task was to determine the robustness of Atkins' code for these test problems. In one category, we tested the nonlinear wave propagation of the code for the one-dimensional Euler equations, with initial pressure, density, and velocity conditions. Using freestream boundary conditions, our results were plausible. In another category, we solved the linearized two-dimensional Euler equations to test the effectiveness of radiation boundary conditions. Here we utilized MAPLE to compute eigenvalues and eigenvectors of the Jacobian given the variable and flux vectors. We experienced a minor problem with inflow and outflow boundary conditions. Next, we solved the quasi-one-dimensional unsteady flow equations with an incoming acoustic wave of amplitude 10(exp -6). The small-amplitude sound wave was incident on a convergent-divergent nozzle. After finding a steady-state solution and then marching forward, our solution indicated that after 30 periods the acoustic wave had dissipated (a period is the time required for a sound wave to travel from one end of the nozzle to the other).

  20. Developing integrated benchmarks for DOE performance measurement

    SciTech Connect

    Barancik, J.I.; Kramer, C.F.; Thode, Jr. H.C.

    1992-09-30

    The objectives of this task were to describe and evaluate selected existing sources of information on occupational safety and health -- with emphasis on hazard and exposure assessment, abatement, training, reporting, and control -- identifying exposure and outcome data in preparation for developing DOE performance benchmarks. Existing resources and methodologies were assessed for their potential use as practical performance benchmarks. Strengths and limitations of current data resources were identified. Guidelines were outlined for developing new or improved performance factors, which then could become the basis for selecting performance benchmarks. Data bases for non-DOE comparison populations were identified so that DOE performance could be assessed relative to non-DOE occupational and industrial groups. Systems approaches were described which can be used to link hazards and exposure, event occurrence, and adverse outcome factors, as needed to generate valid, reliable, and predictive performance benchmarks. Data bases were identified which contain information relevant to one or more performance assessment categories. A list of 72 potential performance benchmarks was prepared to illustrate the kinds of information that can be produced through a benchmark development program. Current information resources which may be used to develop potential performance benchmarks are limited. There is a need to develop an occupational safety and health information and data system in DOE which is capable of incorporating demonstrated and documented performance benchmarks prior to, or concurrent with, the development of hardware and software. A key to the success of this systems approach is rigorous development and demonstration of performance benchmark equivalents to users of such data before system hardware and software commitments are institutionalized.

  1. MODEL PREDICTION RESULTS FOR 2008 ULTRASONIC BENCHMARK PROBLEMS

    SciTech Connect

    Kim, Hak-Joon; Song, Sung-Jin

    2009-03-03

    The World Federation of NDE Centers (WFNDEC) has addressed two types of problems for the 2008 ultrasonic benchmark problems: effects of surface curvatures on the ultrasonic responses of flat-bottomed holes, and prediction of side-drilled hole responses at various depths in a steel block. To solve this year's ultrasonic benchmark problems, multi-Gaussian beam models were adopted for calculation of the insonifying fields on the flat-bottomed holes and the side-drilled holes. The Kirchhoff approximation and the separation of variables method were applied for calculation of the far-field scattering amplitudes of flat-bottomed holes and side-drilled holes, respectively. In this paper, we present a comparison of the model predictions to the experiments for side-drilled holes and discuss the effect of interface curvatures on ultrasonic responses by comparing the peak-to-peak amplitudes of the flat-bottomed hole responses with different interface curvatures.

  2. Benchmarking Nonlinear Turbulence Simulations on Alcator C-Mod

    SciTech Connect

    M.H. Redi; C.L. Fiore; W. Dorland; M.J. Greenwald; G.W. Hammett; K. Hill; D. McCune; D.R. Mikkelsen; G. Rewoldt; J.E. Rice

    2004-06-22

    Linear simulations of plasma microturbulence are used with recent radial profiles of toroidal velocity from similar plasmas to consider nonlinear microturbulence simulations and observed transport analysis on Alcator C-Mod. We focus on internal transport barrier (ITB) formation in fully equilibrated H-mode plasmas with nearly flat velocity profiles. Velocity profile data, transport analysis and linear growth rates are combined to integrate data and simulation, and explore the effects of toroidal velocity on benchmarking simulations. Areas of interest for future nonlinear simulations are identified. A good gyrokinetic benchmark is found in the plasma core, without extensive nonlinear simulations. RF-heated C-Mod H-mode experiments, which exhibit an ITB, have been studied with the massively parallel code GS2 towards validation of gyrokinetic microturbulence models. New, linear, gyrokinetic calculations are reported and discussed in connection with transport analysis near the ITB trigger time of shot No.1001220016.

  3. The International Criticality Safety Benchmark Evaluation Project on the Internet

    SciTech Connect

    Briggs, J.B.; Brennan, S.A.; Scott, L.

    2000-07-01

    The International Criticality Safety Benchmark Evaluation Project (ICSBEP) was initiated in October 1992 by the US Department of Energy's (DOE's) defense programs and is documented in the Transactions of numerous American Nuclear Society and International Criticality Safety Conferences. The work of the ICSBEP is documented as an Organization for Economic Cooperation and Development (OECD) handbook, International Handbook of Evaluated Criticality Safety Benchmark Experiments. The ICSBEP Internet site was established in 1996; its address is http://icsbep.inel.gov/icsbep. A copy of the ICSBEP home page is shown in Fig. 1. The ICSBEP Internet site contains five primary links, and internal sublinks to other relevant sites are also provided. A brief description of each of the five primary ICSBEP Internet site links is given.

  4. Use of Simulation to Study Nurses Acceptance and Non-Acceptance of Clinical Decision Support Suggestions

    PubMed Central

    Sousa, Vanessa E. C.; Lopez, Karen Dunn; Febretti, Alessandro; Stifter, Janet; Yao, Yingwei; Johnson, Andrew; Wilkie, Diana J.; Keenan, Gail M.

    2015-01-01

    Our long-term goal is to ensure nurse clinical decision support (CDS) works as intended before full deployment in clinical practice. As part of a broader effort, this pilot explores factors influencing acceptance/non-acceptance of 8 CDS suggestions displayed through selecting a blinking red button in an electronic health record (EHR) based nursing plan of care software prototype. A diverse sample of 21 nurses participated in this high-fidelity clinical simulation experience and completed a questionnaire to assess reasons for accepting/not accepting the CDS suggestions. Of 168 total suggestions displayed during the experiment (8 for each of the 21 nurses), 123 (73.2%) were accepted and 45 (26.8%) were not accepted. The mode number of acceptances by nurses was 7 of 8, with only 2 of 21 nurses accepting all. The main reason for CDS acceptance was the nurse's belief that the suggestions were good for the patient (100%), with other features being secondarily reinforcing. Reasons for non-acceptance were less clear, with under half of the subjects indicating low confidence in the evidence. This study provides preliminary evidence that high-quality simulation and targeted questionnaires about specific CDS selections offer a cost-effective means for testing before full deployment in clinical practice. PMID:26361268

  5. Toxicological benchmarks for screening potential contaminants of concern for effects on terrestrial plants: 1994 revision

    SciTech Connect

    Will, M.E.; Suter, G.W. II

    1994-09-01

    One of the initial stages in ecological risk assessment for hazardous waste sites is screening contaminants to determine which of them are worthy of further consideration as contaminants of potential concern. This process is termed contaminant screening. It is performed by comparing measured ambient concentrations of chemicals to benchmark concentrations. Currently, no standard benchmark concentrations exist for assessing contaminants in soil with respect to their toxicity to plants. This report presents a standard method for deriving benchmarks for this purpose (phytotoxicity benchmarks), a set of data concerning effects of chemicals in soil or soil solution on plants, and a set of phytotoxicity benchmarks for 38 chemicals potentially associated with United States Department of Energy (DOE) sites. In addition, background information on the phytotoxicity and occurrence of the chemicals in soils is presented, and literature describing the experiments from which data were drawn for benchmark derivation is reviewed. Chemicals that are found in soil at concentrations exceeding both the phytotoxicity benchmark and the background concentration for the soil type should be considered contaminants of potential concern.
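
    The screening rule in the final sentence reduces to a two-way comparison; a one-line sketch (the concentrations are placeholders, not actual benchmark values):

      def is_copc(measured, benchmark, background):
          # Contaminant of potential concern if the measured soil
          # concentration exceeds both the phytotoxicity benchmark and
          # the background concentration for the soil type.
          return measured > benchmark and measured > background

      print(is_copc(measured=12.0, benchmark=5.0, background=3.0))   # True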

  6. Toxicological Benchmarks for Screening Potential Contaminants of Concern for Effects on Terrestrial Plants

    SciTech Connect

    Suter, G.W. II

    1993-01-01

    One of the initial stages in ecological risk assessment for hazardous waste sites is screening contaminants to determine which of them are worthy of further consideration as contaminants of potential concern. This process is termed contaminant screening. It is performed by comparing measured ambient concentrations of chemicals to benchmark concentrations. Currently, no standard benchmark concentrations exist for assessing contaminants in soil with respect to their toxicity to plants. This report presents a standard method for deriving benchmarks for this purpose (phytotoxicity benchmarks), a set of data concerning effects of chemicals in soil or soil solution on plants, and a set of phytotoxicity benchmarks for 38 chemicals potentially associated with United States Department of Energy (DOE) sites. In addition, background information on the phytotoxicity and occurrence of the chemicals in soils is presented, and literature describing the experiments from which data were drawn for benchmark derivation is reviewed. Chemicals that are found in soil at concentrations exceeding both the phytotoxicity benchmark and the background concentration for the soil type should be considered contaminants of potential concern.

  7. Geant4 Computing Performance Benchmarking and Monitoring

    DOE PAGES

    Dotti, Andrea; Elvira, V. Daniel; Folger, Gunter; Genser, Krzysztof; Jun, Soon Yung; Kowalkowski, James B.; Paterno, Marc

    2015-12-23

    Performance evaluation and analysis of large scale computing applications is essential for optimal use of resources. As detector simulation is one of the most compute intensive tasks and Geant4 is the simulation toolkit most widely used in contemporary high energy physics (HEP) experiments, it is important to monitor Geant4 through its development cycle for changes in computing performance and to identify problems and opportunities for code improvements. All Geant4 development and public releases are being profiled with a set of applications that utilize different input event samples, physics parameters, and detector configurations. Results from multiple benchmarking runs are compared to previous public and development reference releases to monitor CPU and memory usage. Observed changes are evaluated and correlated with code modifications. Besides the full summary of call stack and memory footprint, a detailed call graph analysis is available to Geant4 developers for further analysis. The set of software tools used in the performance evaluation procedure, both in sequential and multi-threaded modes, include FAST, IgProf and Open|Speedshop. In conclusion, the scalability of the CPU time and memory performance in multi-threaded application is evaluated by measuring event throughput and memory gain as a function of the number of threads for selected event samples.
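
    The two scaling metrics named in the last sentence can be sketched under commonly used definitions (assumed here, not taken from the Geant4 profiling tools): throughput as events processed per second, and memory gain as the memory N independent processes would need divided by what the N-thread run actually uses:

      def throughput(events, seconds):
          # Events processed per second for a benchmarking run.
          return events / seconds

      def memory_gain(mem_one_process_mb, mem_n_threads_mb, n_threads):
          # >1 means the threads share memory that separate processes
          # would each have to duplicate.
          return (n_threads * mem_one_process_mb) / mem_n_threads_mb

      print(throughput(10000, 250.0))         # illustrative numbers
      print(memory_gain(1200.0, 2100.0, 8))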

  8. Geant4 Computing Performance Benchmarking and Monitoring

    SciTech Connect

    Dotti, Andrea; Elvira, V. Daniel; Folger, Gunter; Genser, Krzysztof; Jun, Soon Yung; Kowalkowski, James B.; Paterno, Marc

    2015-12-23

    Performance evaluation and analysis of large scale computing applications is essential for optimal use of resources. As detector simulation is one of the most compute intensive tasks and Geant4 is the simulation toolkit most widely used in contemporary high energy physics (HEP) experiments, it is important to monitor Geant4 through its development cycle for changes in computing performance and to identify problems and opportunities for code improvements. All Geant4 development and public releases are being profiled with a set of applications that utilize different input event samples, physics parameters, and detector configurations. Results from multiple benchmarking runs are compared to previous public and development reference releases to monitor CPU and memory usage. Observed changes are evaluated and correlated with code modifications. Besides the full summary of call stack and memory footprint, a detailed call graph analysis is available to Geant4 developers for further analysis. The set of software tools used in the performance evaluation procedure, both in sequential and multi-threaded modes, include FAST, IgProf and Open|Speedshop. In conclusion, the scalability of the CPU time and memory performance in multi-threaded application is evaluated by measuring event throughput and memory gain as a function of the number of threads for selected event samples.

  9. Geant4 Computing Performance Benchmarking and Monitoring

    NASA Astrophysics Data System (ADS)

    Dotti, Andrea; Elvira, V. Daniel; Folger, Gunter; Genser, Krzysztof; Jun, Soon Yung; Kowalkowski, James B.; Paterno, Marc

    2015-12-01

    Performance evaluation and analysis of large scale computing applications is essential for optimal use of resources. As detector simulation is one of the most compute intensive tasks and Geant4 is the simulation toolkit most widely used in contemporary high energy physics (HEP) experiments, it is important to monitor Geant4 through its development cycle for changes in computing performance and to identify problems and opportunities for code improvements. All Geant4 development and public releases are being profiled with a set of applications that utilize different input event samples, physics parameters, and detector configurations. Results from multiple benchmarking runs are compared to previous public and development reference releases to monitor CPU and memory usage. Observed changes are evaluated and correlated with code modifications. Besides the full summary of call stack and memory footprint, a detailed call graph analysis is available to Geant4 developers for further analysis. The set of software tools used in the performance evaluation procedure, both in sequential and multi-threaded modes, include FAST, IgProf and Open|Speedshop. The scalability of the CPU time and memory performance in multi-threaded application is evaluated by measuring event throughput and memory gain as a function of the number of threads for selected event samples.

  10. Updates to the integrated protein-protein interaction benchmarks: Docking benchmark version 5 and affinity benchmark version 2

    PubMed Central

    Vreven, Thom; Moal, Iain H.; Vangone, Anna; Pierce, Brian G.; Kastritis, Panagiotis L.; Torchala, Mieczyslaw; Chaleil, Raphael; Jiménez-García, Brian; Bates, Paul A.; Fernandez-Recio, Juan; Bonvin, Alexandre M.J.J.; Weng, Zhiping

    2015-01-01

    We present an updated and integrated version of our widely used protein-protein docking and binding affinity benchmarks. The benchmarks consist of non-redundant, high quality structures of protein-protein complexes along with the unbound structures of their components. Fifty-five new complexes were added to the docking benchmark, 35 of which have experimentally-measured binding affinities. These updated docking and affinity benchmarks now contain 230 and 179 entries, respectively. In particular, the number of antibody-antigen complexes has increased significantly, by 67% and 74% in the docking and affinity benchmarks, respectively. We tested previously developed docking and affinity prediction algorithms on the new cases. Considering only the top ten docking predictions per benchmark case, a prediction accuracy of 38% is achieved on all 55 cases, and up to 50% for the 32 rigid-body cases only. Predicted affinity scores are found to correlate with experimental binding energies up to r=0.52 overall, and r=0.72 for the rigid complexes. PMID:26231283

  11. Benchmarking of Optical Dimerizer Systems

    PubMed Central

    2015-01-01

    Optical dimerizers are a powerful new class of optogenetic tools that allow light-inducible control of protein–protein interactions. Such tools have been useful for regulating cellular pathways and processes with high spatiotemporal resolution in live cells, and a growing number of dimerizer systems are available. As these systems have been characterized by different groups using different methods, it has been difficult for users to compare their properties. Here, we set about to systematically benchmark the properties of four optical dimerizer systems, CRY2/CIB1, TULIPs, phyB/PIF3, and phyB/PIF6. Using a yeast transcriptional assay, we find significant differences in light sensitivity and fold-activation levels between the red light regulated systems but similar responses between the CRY2/CIB and TULIP systems. Further comparison of the ability of the CRY2/CIB1 and TULIP systems to regulate a yeast MAPK signaling pathway also showed similar responses, with slightly less background activity in the dark observed with CRY2/CIB. In the process of developing this work, we also generated an improved blue-light-regulated transcriptional system using CRY2/CIB in yeast. In addition, we demonstrate successful application of the CRY2/CIB dimerizers using a membrane-tethered CRY2, which may allow for better local control of protein interactions. Taken together, this work allows for a better understanding of the capacities of these different dimerization systems and demonstrates new uses of these dimerizers to control signaling and transcription in yeast. PMID:25350266

  12. Benchmarking of optical dimerizer systems.

    PubMed

    Pathak, Gopal P; Strickland, Devin; Vrana, Justin D; Tucker, Chandra L

    2014-11-21

    Optical dimerizers are a powerful new class of optogenetic tools that allow light-inducible control of protein-protein interactions. Such tools have been useful for regulating cellular pathways and processes with high spatiotemporal resolution in live cells, and a growing number of dimerizer systems are available. As these systems have been characterized by different groups using different methods, it has been difficult for users to compare their properties. Here, we set about to systematically benchmark the properties of four optical dimerizer systems, CRY2/CIB1, TULIPs, phyB/PIF3, and phyB/PIF6. Using a yeast transcriptional assay, we find significant differences in light sensitivity and fold-activation levels between the red light regulated systems but similar responses between the CRY2/CIB and TULIP systems. Further comparison of the ability of the CRY2/CIB1 and TULIP systems to regulate a yeast MAPK signaling pathway also showed similar responses, with slightly less background activity in the dark observed with CRY2/CIB. In the process of developing this work, we also generated an improved blue-light-regulated transcriptional system using CRY2/CIB in yeast. In addition, we demonstrate successful application of the CRY2/CIB dimerizers using a membrane-tethered CRY2, which may allow for better local control of protein interactions. Taken together, this work allows for a better understanding of the capacities of these different dimerization systems and demonstrates new uses of these dimerizers to control signaling and transcription in yeast. PMID:25350266

  13. DOE Commercial Building Benchmark Models: Preprint

    SciTech Connect

    Torcelini, P.; Deru, M.; Griffith, B.; Benne, K.; Halverson, M.; Winiarski, D.; Crawley, D. B.

    2008-07-01

    To provide a consistent baseline of comparison and save time conducting such simulations, the U.S. Department of Energy (DOE) has developed a set of standard benchmark building models. This paper will provide an executive summary overview of these benchmark buildings, and how they can save building analysts valuable time. Fully documented and implemented to use with the EnergyPlus energy simulation program, the benchmark models are publicly available and new versions will be created to maintain compatibility with new releases of EnergyPlus. The benchmark buildings will form the basis for research on specific building technologies, energy code development, appliance standards, and measurement of progress toward DOE energy goals. Having a common starting point allows us to better share and compare research results and move forward to make more energy efficient buildings.

  14. Benchmark calculations from summarized data: an example

    SciTech Connect

    Crump, K. S.; Teeguarden, Justin G.

    2009-03-01

    Benchmark calculations often are made from data extracted from publications. Such data may not be in a form most appropriate for benchmark analysis, and, as a result, suboptimal and/or non-standard benchmark analyses are often applied. This problem can be mitigated in some cases using Monte Carlo computational methods that allow the likelihood of the published data to be calculated while still using an appropriate benchmark dose (BMD) definition. Such an approach is illustrated herein using data from a study of workers exposed to styrene, in which a hybrid BMD calculation is implemented from dose-response data reported only as means and standard deviations of ratios of scores on neuropsychological tests from exposed subjects to corresponding scores from matched controls. The likelihood of the data is computed using a combination of analytic and Monte Carlo integration methods.
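
    A hedged sketch of the general idea: when a publication reports only a mean and a standard deviation, the likelihood of those summaries under candidate model parameters can be estimated by simulating many same-size datasets and smoothing the resulting summary statistics. Everything below -- the normal model, the sample size, the numbers -- is an illustrative assumption, not the paper's hybrid BMD implementation:

      import numpy as np
      from scipy import stats

      def summary_likelihood(obs_mean, obs_sd, n, model_mu, model_sigma,
                             sims=20000, seed=1):
          # Monte Carlo estimate of the joint density of (sample mean,
          # sample SD) at the published values, given candidate parameters.
          rng = np.random.default_rng(seed)
          draws = rng.normal(model_mu, model_sigma, size=(sims, n))
          summaries = np.vstack([draws.mean(axis=1),
                                 draws.std(axis=1, ddof=1)])
          kde = stats.gaussian_kde(summaries)
          return float(kde([[obs_mean], [obs_sd]]))

      # e.g. a group of n=25 exposed/control score ratios published only
      # as mean 0.93 and SD 0.18 (made-up numbers).
      print(summary_likelihood(0.93, 0.18, n=25, model_mu=0.95, model_sigma=0.2))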

  15. Benchmarking antimicrobial drug use in hospitals.

    PubMed

    Ibrahim, Omar M; Polk, Ron E

    2012-04-01

    Measuring and monitoring antibiotic use in hospitals is believed to be an important component of the strategies available to antimicrobial stewardship programs to address acquired antimicrobial resistance. Recent efforts to organize large numbers of hospitals into networks allow for interhospital comparisons of a variety of healthcare processes and outcomes, a process often called 'benchmarking'. For comparisons of antimicrobial use to be valid, usage figures must be risk-adjusted to account for differences in patient mix and hospital characteristics. The purpose of this review is to describe recent methods to benchmark antimicrobial drug use and to critically assess the potential advantages and the remaining challenges. While many methodological challenges remain, and the clinical outcomes resulting from benchmarking programs have yet to be determined, recent developments suggest that benchmarking antimicrobial drug use will become an important component of antimicrobial stewardship program activities.

  16. Toward Scalable Benchmarks for Mass Storage Systems

    NASA Technical Reports Server (NTRS)

    Miller, Ethan L.

    1996-01-01

    This paper presents guidelines for the design of a mass storage system benchmark suite, along with preliminary suggestions for programs to be included. The benchmarks will measure both peak and sustained performance of the system as well as predicting both short- and long-term behavior. These benchmarks should be both portable and scalable so they may be used on storage systems from tens of gigabytes to petabytes or more. By developing a standard set of benchmarks that reflect real user workload, we hope to encourage system designers and users to publish performance figures that can be compared with those of other systems. This will allow users to choose the system that best meets their needs and give designers a tool with which they can measure the performance effects of improvements to their systems.
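
    As a flavor of what such a benchmark measures, here is a toy sustained sequential-write probe; a real suite would also vary access patterns and file sizes, exercise reads and metadata operations, and run long enough to reach steady state. The path is a placeholder:

      import os, time

      def write_throughput(path, total_mb=256, chunk_mb=4):
          # Stream total_mb of data in chunk_mb pieces and time the whole
          # run, fsync included, so the figure reflects the storage system
          # rather than the page cache. Returns MB/s.
          chunk = os.urandom(chunk_mb * 1024 * 1024)
          start = time.perf_counter()
          with open(path, "wb") as f:
              for _ in range(total_mb // chunk_mb):
                  f.write(chunk)
              f.flush()
              os.fsync(f.fileno())
          return total_mb / (time.perf_counter() - start)

      print(write_throughput("/tmp/bench.dat"))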

  17. NAS Grid Benchmarks. 1.0

    NASA Technical Reports Server (NTRS)

    VanderWijngaart, Rob; Frumkin, Michael; Biegel, Bryan A. (Technical Monitor)

    2002-01-01

    We provide a paper-and-pencil specification of a benchmark suite for computational grids. It is based on the NAS (NASA Advanced Supercomputing) Parallel Benchmarks (NPB) and is called the NAS Grid Benchmarks (NGB). NGB problems are presented as data flow graphs encapsulating an instance of a slightly modified NPB task in each graph node, which communicates with other nodes by sending/receiving initialization data. Like NPB, NGB specifies several different classes (problem sizes). In this report we describe classes S, W, and A, and provide verification values for each. The implementor has the freedom to choose any language, grid environment, security model, fault tolerance/error correction mechanism, etc., as long as the resulting implementation passes the verification test and reports the turnaround time of the benchmark.
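
    A toy rendering of the execution model: each node runs once its upstream nodes have delivered their data, in a topological sweep over the graph. The node names below reuse NPB task names for flavor, but the graph shape and the "solver" are stand-ins, not the actual NGB classes:

      # Each node lists the upstream nodes it receives data from.
      graph = {"BT": [], "SP": ["BT"], "LU": ["BT"], "MG": ["SP", "LU"]}

      def run(node, results):
          # Stand-in for launching an NPB-like task on the node's inputs.
          inputs = [results[dep] for dep in graph[node]]
          return f"{node}({', '.join(inputs)})"

      results, remaining = {}, set(graph)
      while remaining:                      # assumes the graph is acyclic
          ready = [n for n in remaining if all(d in results for d in graph[n])]
          for n in ready:
              results[n] = run(n, results)
          remaining -= set(ready)

      print(results["MG"])                  # MG(SP(BT()), LU(BT()))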

  18. Simple Benchmark Specifications for Space Radiation Protection

    NASA Technical Reports Server (NTRS)

    Singleterry, Robert C. Jr.; Aghara, Sukesh K.

    2013-01-01

    This report defines space radiation benchmark specifications. The specifications start with simple, monoenergetic, mono-directional particles on slabs and progress to human models in spacecraft. This report specifies the models and sources needed, and what the team performing the benchmark must produce in its report. Also included are brief descriptions of how OLTARIS, the NASA Langley website for space radiation analysis, performs its analysis.

  19. Implementation of NAS Parallel Benchmarks in Java

    NASA Technical Reports Server (NTRS)

    Frumkin, Michael; Schultz, Matthew; Jin, Hao-Qiang; Yan, Jerry

    2000-01-01

    A number of features make Java an attractive but debatable choice for High Performance Computing (HPC). In order to gauge the applicability of Java to Computational Fluid Dynamics (CFD), we have implemented the NAS Parallel Benchmarks in Java. The performance and scalability of the benchmarks point out the areas where improvement in Java compiler technology and in Java thread implementation would move Java closer to Fortran in the competition for CFD applications.

  20. Benchmarking of Graphite Reflected Critical Assemblies of UO2

    SciTech Connect

    Margaret A. Marshall; John D. Bess

    2011-11-01

    A series of experiments was carried out in 1963 at the Oak Ridge National Laboratory Critical Experiments Facility (ORCEF) for use in space reactor research programs. A core containing 93.2% enriched UO2 fuel rods was used in these experiments. The first part of the experimental series consisted of 253 tightly packed fuel rods (1.27 cm triangular pitch) with graphite reflectors [1], the second part used 253 graphite-reflected fuel rods organized in a 1.506 cm triangular pitch [2], and the final part of the experimental series consisted of 253 beryllium-reflected fuel rods with a 1.506 cm triangular pitch [3]. Fission rate distribution and cadmium ratio measurements were taken for all three parts of the experimental series. Reactivity coefficient measurements were taken for various materials placed in the beryllium-reflected core. The first part of this experimental series has been evaluated for inclusion in the International Reactor Physics Experiment Evaluation Project (IRPhEP) [4] and International Criticality Safety Benchmark Evaluation Project (ICSBEP) Handbooks [5], and is discussed below. These experiments are of interest as benchmarks because they support the validation of compact reactor designs with characteristics similar to the design parameters for space nuclear fission surface power systems. [6]

  1. Benchmarking for Cost Improvement. Final report

    SciTech Connect

    Not Available

    1993-09-01

    The US Department of Energy's (DOE) Office of Environmental Restoration and Waste Management (EM) conducted the Benchmarking for Cost Improvement initiative with three objectives: Pilot test benchmarking as an EM cost improvement tool; identify areas for cost improvement and recommend actions to address these areas; provide a framework for future cost improvement. The benchmarking initiative featured the use of four principal methods (program classification, nationwide cost improvement survey, paired cost comparison and component benchmarking). Interested parties contributed during both the design and execution phases. The benchmarking initiative was conducted on an accelerated basis. Of necessity, it considered only a limited set of data that may not be fully representative of the diverse and complex conditions found at the many DOE installations. The initiative generated preliminary data about cost differences and it found a high degree of convergence on several issues. Based on this convergence, the report recommends cost improvement strategies and actions. This report describes the steps taken as part of the benchmarking initiative and discusses the findings and recommended actions for achieving cost improvement. The results and summary recommendations, reported below, are organized by the study objectives.

  2. Machine characterization and benchmark performance prediction

    NASA Technical Reports Server (NTRS)

    Saavedra-Barrera, Rafael H.

    1988-01-01

    From runs of standard benchmarks or benchmark suites, it is possible neither to characterize a machine nor to predict the run time of other benchmarks which have not been run. A new approach to benchmarking and machine characterization is reported. The creation and use of a machine analyzer is described, which measures the performance of a given machine on FORTRAN source language constructs. The machine analyzer yields a set of parameters which characterize the machine and spotlight its strong and weak points. Also described is a program analyzer, which analyzes FORTRAN programs and determines the frequency of execution of each of the same set of source language operations. It is then shown that by combining a machine characterization and a program characterization, we are able to predict with good accuracy the run time of a given benchmark on a given machine. Characterizations are provided for the Cray X-MP/48, Cyber 205, IBM 3090/200, Amdahl 5840, Convex C-1, VAX 8600, VAX 11/785, VAX 11/780, SUN 3/50, and IBM RT-PC/125, and for the following benchmark programs or suites: Los Alamos (BMK8A1), Baskett, Linpack, Livermore Loops, Mandelbrot Set, NAS Kernels, Shell Sort, Smith, Whetstone, and Sieve of Eratosthenes.
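
    To make the combination step concrete, here is a minimal sketch of the additive timing model the abstract describes; all operation names, timings, and counts are hypothetical placeholders, not the authors' measured parameters.

      # Sketch of the prediction step: predicted run time = sum over source-
      # language operations of (seconds per operation on this machine) x
      # (times the program executes that operation). All names and numbers
      # are hypothetical placeholders.
      machine_params = {   # from a machine analyzer (seconds per operation)
          "fadd": 85e-9, "fmul": 120e-9, "fdiv": 640e-9, "mem_ref": 150e-9,
      }
      program_profile = {  # from a program analyzer (executed counts)
          "fadd": 4.1e8, "fmul": 3.9e8, "fdiv": 2.0e6, "mem_ref": 9.5e8,
      }
      predicted = sum(machine_params[op] * n for op, n in program_profile.items())
      print(f"predicted run time: {predicted:.2f} s")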

  3. Action-Oriented Benchmarking: Concepts and Tools

    SciTech Connect

    California Energy Commission; Mathew, Paul; Mills, Evan; Mathew, Paul; Piette, Mary Ann; Bourassa, Norman; Brook, Martha

    2008-02-13

    Most energy benchmarking tools provide static feedback on how one building compares to a larger set of loosely similar buildings, without providing information at the end-use level or on what can be done to reduce consumption, cost, or emissions. In this article--Part 1 of a two-part series--we describe an 'action-oriented benchmarking' approach, which extends whole-building energy benchmarking to include analysis of system and component energy use metrics and features. Action-oriented benchmarking thereby allows users to generate more meaningful metrics and to identify, screen and prioritize potential efficiency improvements. This opportunity assessment process can then be used to inform and optimize a full-scale audit or commissioning process. We introduce a new web-based action-oriented benchmarking system and associated software tool, EnergyIQ. The benchmarking methods, visualizations, and user interface design are informed by an end-user needs assessment survey and best-practice guidelines from ASHRAE.
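
    As a rough illustration of what end-use-level benchmarking involves, the sketch below compares per-end-use energy intensities against hypothetical peer medians; it does not reproduce EnergyIQ's actual metrics or data.

      # Hedged sketch of end-use benchmarking: compare a building's end-use
      # intensities (kWh/ft2-yr) against peer medians and flag the gaps as
      # candidate efficiency opportunities. All values are invented.
      floor_area_ft2 = 50_000
      end_use_kwh = {"lighting": 450_000, "cooling": 380_000, "plugs": 290_000}
      peer_median = {"lighting": 6.0, "cooling": 7.5, "plugs": 5.5}

      for end_use, kwh in sorted(end_use_kwh.items(),
                                 key=lambda kv: kv[1], reverse=True):
          intensity = kwh / floor_area_ft2
          flag = "  <-- investigate" if intensity > peer_median[end_use] else ""
          print(f"{end_use:>9}: {intensity:5.2f} kWh/ft2-yr "
                f"(peer median {peer_median[end_use]:.2f}){flag}")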

  4. Benchmarking ENDF/B-VII.1, JENDL-4.0 and JEFF-3.1

    SciTech Connect

    Van Der Marck, S. C.

    2012-07-01

    Three nuclear data libraries have been tested extensively using criticality safety benchmark calculations. The three libraries are the new release of the US library ENDF/B-VII.1 (2011), the new release of the Japanese library JENDL-4.0 (2011), and the OECD/NEA library JEFF-3.1 (2006). All calculations were performed with the continuous-energy Monte Carlo code MCNP (version 4C3, as well as version 6-beta1). Around 2000 benchmark cases from the International Handbook of Criticality Safety Benchmark Experiments (ICSBEP) were used. The results were analyzed per ICSBEP category, and per element. Overall, the three libraries show similar performance on most criticality safety benchmarks. The largest differences are probably caused by elements such as Be, C, Fe, Zr, W. (authors)
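
    The per-category analysis described above reduces to bookkeeping over calculated-to-benchmark k-eff ratios (C/E values); the sketch below illustrates that aggregation with invented numbers, not the paper's results.

      # Hypothetical aggregation of criticality results: for each (category,
      # library) pair, summarize the calculated-over-expected (C/E) k-eff values.
      from statistics import mean, stdev

      results = {  # (ICSBEP category, library) -> list of C/E values (invented)
          ("LEU-COMP-THERM", "ENDF/B-VII.1"): [0.9991, 1.0004, 0.9998],
          ("LEU-COMP-THERM", "JENDL-4.0"):    [1.0003, 1.0010, 0.9995],
          ("MIX-MET-FAST",   "ENDF/B-VII.1"): [1.0021, 1.0015, 1.0030],
      }
      for (category, library), ce in sorted(results.items()):
          print(f"{category:15} {library:13} mean C/E = {mean(ce):.4f} "
                f"+/- {stdev(ce):.4f} (n = {len(ce)})")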

  5. Thermo-hydro-mechanical-chemical processes in fractured-porous media: Benchmarks and examples

    NASA Astrophysics Data System (ADS)

    Kolditz, O.; Shao, H.; Görke, U.; Kalbacher, T.; Bauer, S.; McDermott, C. I.; Wang, W.

    2012-12-01

    The book comprises an assembly of benchmarks and examples for porous media mechanics collected over the last twenty years. Analysis of thermo-hydro-mechanical-chemical (THMC) processes is essential to many applications in environmental engineering, such as geological waste deposition, geothermal energy utilisation, carbon capture and storage, water resources management, hydrology, and even climate change. In order to assess the feasibility as well as the safety of geotechnical applications, process-based modelling is the only tool for putting numbers to, i.e. quantifying, future scenarios. This places a great responsibility on the reliability of computational tools. Benchmarking is an appropriate methodology to verify the quality of modelling tools based on best practices. Moreover, benchmarking and code comparison foster community efforts. The benchmark book is part of the OpenGeoSys initiative - an open source project to share knowledge and experience in environmental analysis and scientific computation.

  6. Smaller hospitals accept advertising.

    PubMed

    Mackesy, R

    1988-07-01

    Administrators at small- and medium-sized hospitals gradually have accepted the role of marketing in their organizations, albeit at a much slower rate than larger institutions. This update of a 1983 survey tracks the increasing competitiveness, complexity and specialization of providing health care and of advertising a small hospital's services. PMID:10288550

  7. Students Accepted on Probation.

    ERIC Educational Resources Information Center

    Lorberbaum, Caroline S.

    This report is a justification of the Dalton Junior College admissions policy designed to help students who had had academic and/or social difficulties at other schools. These students were accepted on probation, their problems carefully analyzed, and much effort devoted to those with low academic potential. They received extensive academic and…

  8. Approaches to acceptable risk

    SciTech Connect

    Whipple, C.

    1997-04-30

    Several alternative approaches to address the question "How safe is safe enough?" are reviewed and an attempt is made to apply the reasoning behind these approaches to the issue of acceptability of radiation exposures received in space. The approaches to the issue of the acceptability of technological risk described here are primarily analytical, and are drawn from examples in the management of environmental health risks. These include risk-based approaches, in which specific quantitative risk targets determine the acceptability of an activity, and cost-benefit and decision analysis, which generally focus on the estimation and evaluation of risks, benefits and costs, in a framework that balances these factors against each other. These analytical methods tend by their quantitative nature to emphasize the magnitude of risks, costs and alternatives, and to downplay other factors, especially those that are not easily expressed in quantitative terms, that affect acceptance or rejection of risk. Such other factors include the issues of risk perceptions and how and by whom risk decisions are made.

  9. Why was Relativity Accepted?

    NASA Astrophysics Data System (ADS)

    Brush, S. G.

    Historians of science have published many studies of the reception of Einstein's special and general theories of relativity. Based on a review of these studies, and my own research on the role of the light-bending prediction in the reception of general relativity, I discuss the role of three kinds of reasons for accepting relativity: (1) empirical predictions and explanations; (2) social-psychological factors; and (3) aesthetic-mathematical factors. According to the historical studies, acceptance was a three-stage process. First, a few leading scientists adopted the special theory for aesthetic-mathematical reasons. In the second stage, their enthusiastic advocacy persuaded other scientists to work on the theory and apply it to problems currently of interest in atomic physics. The special theory was accepted by many German physicists by 1910 and had begun to attract some interest in other countries. In the third stage, the confirmation of Einstein's light-bending prediction attracted much public attention and forced all physicists to take the general theory of relativity seriously. In addition to light-bending, the explanation of the advance of Mercury's perihelion was considered strong evidence by theoretical physicists. The American astronomers who conducted successful tests of general relativity became defenders of the theory. There is little evidence that relativity was 'socially constructed', but its initial acceptance was facilitated by the prestige and resources of its advocates.

  10. Verification, validation, and benchmarking report for TRIMHX: A three dimensional hexagonal transient diffusion theory code

    SciTech Connect

    Le, T.T.

    1992-03-01

    TRIMHX is a fundamental reactor analysis tool in use at the Savannah River Site (SRS) and is an integral part of the Generalized Reactor Analysis Subsystem (GRASS). TRIMHX solves the time-dependent multigroup neutron diffusion equation in two- and three-dimensional hexagonal geometry by standard and coarse-mesh finite difference methods. The TRIMHX implementation assumes the solution to this equation can be discretized in space, energy, and time; these are industry-accepted approaches which can be found in many nuclear engineering texts. This report concerns the verification and validation of TRIMHX, a transient two- and three-dimensional hex-z diffusion theory code. The validation was performed to determine the accuracy of the code, and the verification was performed to determine that the code correctly implements the theory and that all the subroutines function as required. For TRIMHX, the validation requirement was satisfied by comparing the results of the code with experiments and benchmarking the code against other standard or validated code results. The verification requirement was satisfied indirectly, since it is neither practical nor necessary to reverify a large code like TRIMHX line by line: the extensive operational history of TRIMHX, in conjunction with comparisons against many numerical experiments (exact solutions) and other diffusion theory codes, is sufficient to establish that the code functions as intended. This report summarizes four sets of experiments performed in 1974, 1977, and 1988, two DIF3D/TRIMHX comparison problems performed in 1991, a DIF3D/FX2-TH/TRIMHX comparison problem produced for this report, and the comparison of TRIMHX/GRIMHX initial static calculations. The results of these experiments show that TRIMHX was correctly implemented and is ready for release into SCMS production mode.
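
    For readers unfamiliar with the underlying numerics, a toy one-group, one-dimensional analogue of an implicit time-differenced diffusion solve is sketched below; TRIMHX itself is multigroup and hexagonal-3D, so this shows only the discretization idea, with made-up constants.

      # Toy analogue: one-group slab diffusion with backward Euler time
      # stepping and a standard finite-difference Laplacian; all constants
      # are invented and vacuum boundaries are assumed.
      import numpy as np

      nx, dx, dt = 50, 1.0, 1e-5            # cells, cell width (cm), step (s)
      D, sig_a, nu_sigf, v = 1.2, 0.10, 0.105, 2.2e5   # one-group constants

      # Implicit system: [1/(v dt) - D*Laplacian + sig_a - nu_sigf] phi_new
      #                  = phi_old / (v dt)
      A = np.zeros((nx, nx))
      for i in range(nx):
          A[i, i] = 1.0/(v*dt) + 2.0*D/dx**2 + sig_a - nu_sigf
          if i > 0:
              A[i, i-1] = -D/dx**2
          if i < nx - 1:
              A[i, i+1] = -D/dx**2

      phi = np.ones(nx)                     # flat initial flux guess
      for _ in range(100):                  # march the transient to t = 1 ms
          phi = np.linalg.solve(A, phi/(v*dt))
      print("peak flux at t = 1 ms:", phi.max())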

  11. A novel video dataset for change detection benchmarking.

    PubMed

    Goyette, Nil; Jodoin, Pierre-Marc; Porikli, Fatih; Konrad, Janusz; Ishwar, Prakash

    2014-11-01

    Change detection is one of the most commonly encountered low-level tasks in computer vision and video processing. A plethora of algorithms have been developed to date, yet no widely accepted, realistic, large-scale video data set exists for benchmarking different methods. Presented here is a unique change detection video data set consisting of nearly 90 000 frames in 31 video sequences representing six categories selected to cover a wide range of challenges in two modalities (color and thermal infrared). A distinguishing characteristic of this benchmark video data set is that each frame is meticulously annotated by hand for ground-truth foreground, background, and shadow area boundaries, an effort that goes much beyond a simple binary label denoting the presence of change. This enables objective and precise quantitative comparison and ranking of video-based change detection algorithms. This paper discusses various aspects of the new data set, quantitative performance metrics used, and comparative results for over two dozen change detection algorithms. It draws important conclusions on solved and remaining issues in change detection, and describes future challenges for the scientific community. The data set, evaluation tools, and algorithm rankings are available to the public on a website and will be updated with feedback from academia and industry in the future.
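
    Pixel-level ground truth of this kind is usually scored with precision, recall, and F-measure; a minimal sketch of that computation follows (it is not the dataset's official evaluation tool).

      # Pixel-level scoring against hand-annotated ground truth: precision,
      # recall, and F-measure from binary masks.
      import numpy as np

      def evaluate(predicted, ground_truth):
          """Both arguments are boolean masks (True = changed pixel)."""
          tp = np.logical_and(predicted, ground_truth).sum()
          fp = np.logical_and(predicted, ~ground_truth).sum()
          fn = np.logical_and(~predicted, ground_truth).sum()
          precision = tp / (tp + fp) if tp + fp else 0.0
          recall = tp / (tp + fn) if tp + fn else 0.0
          f_measure = (2 * precision * recall / (precision + recall)
                       if precision + recall else 0.0)
          return precision, recall, f_measure

      rng = np.random.default_rng(0)        # synthetic masks for the demo
      truth = rng.random((240, 320)) < 0.1
      guess = np.logical_or(truth, rng.random((240, 320)) < 0.02)
      print("precision, recall, F:", evaluate(guess, truth))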

  12. Criticality Benchmark Analysis of the HTTR Annular Startup Core Configurations

    SciTech Connect

    John D. Bess

    2009-11-01

    One of the high priority benchmarking activities for corroborating the Next Generation Nuclear Plant (NGNP) Project and Very High Temperature Reactor (VHTR) Program is evaluation of Japan's existing High Temperature Engineering Test Reactor (HTTR). The HTTR is a 30 MWt engineering test reactor utilizing graphite moderation, helium coolant, and prismatic TRISO fuel. A large amount of critical reactor physics data is available for validation efforts of High Temperature Gas-cooled Reactors (HTGRs). Previous international reactor physics benchmarking activities provided a collation of mixed results that inaccurately predicted actual experimental performance [1]. Reevaluations were performed by the Japanese to reduce the discrepancy between actual and computationally-determined critical configurations [2-3]. Current efforts at the Idaho National Laboratory (INL) involve development of reactor physics benchmark models in conjunction with the International Reactor Physics Experiment Evaluation Project (IRPhEP) for use with verification and validation methods in the VHTR Program. Annular cores demonstrate inherent safety characteristics that are of interest in developing future HTGRs.

  13. Benchmarking Gas Path Diagnostic Methods: A Public Approach

    NASA Technical Reports Server (NTRS)

    Simon, Donald L.; Bird, Jeff; Davison, Craig; Volponi, Al; Iverson, R. Eugene

    2008-01-01

    Recent technology reviews have identified the need for objective assessments of engine health management (EHM) technology. The need is two-fold: technology developers require relevant data and problems to design and validate new algorithms and techniques while engine system integrators and operators need practical tools to direct development and then evaluate the effectiveness of proposed solutions. This paper presents a publicly available gas path diagnostic benchmark problem that has been developed by the Propulsion and Power Systems Panel of The Technical Cooperation Program (TTCP) to help address these needs. The problem is coded in MATLAB (The MathWorks, Inc.) and coupled with a non-linear turbofan engine simulation to produce "snap-shot" measurements, with relevant noise levels, as if collected from a fleet of engines over their lifetime of use. Each engine within the fleet will experience unique operating and deterioration profiles, and may encounter randomly occurring relevant gas path faults including sensor, actuator and component faults. The challenge to the EHM community is to develop gas path diagnostic algorithms to reliably perform fault detection and isolation. An example solution to the benchmark problem is provided along with associated evaluation metrics. A plan is presented to disseminate this benchmark problem to the engine health management technical community and invite technology solutions.
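
    A minimal flavor of the fault-detection task such a benchmark poses, assuming hypothetical sensor names, nominal values, and noise levels, might look like this:

      # Compare snap-shot measurements with a nominal engine model and flag
      # sensors whose normalized residual exceeds a threshold. All sensor
      # names, nominal values, and noise levels are hypothetical.
      nominal = {"EGT_K": 900.0, "N1_rpm": 9500.0, "fuel_pph": 2800.0}
      sigma = {"EGT_K": 4.0, "N1_rpm": 20.0, "fuel_pph": 15.0}
      snapshot = {"EGT_K": 927.0, "N1_rpm": 9512.0, "fuel_pph": 2804.0}

      THRESHOLD = 3.0                       # flag residuals beyond 3 sigma
      for sensor, measured in snapshot.items():
          z = (measured - nominal[sensor]) / sigma[sensor]
          print(f"{sensor:>9}: z = {z:+6.2f}  "
                f"{'FAULT?' if abs(z) > THRESHOLD else 'ok'}")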

  14. Performance Evaluation and Benchmarking of Intelligent Systems

    SciTech Connect

    del Pobil, Angel; Madhavan, Raj; Bonsignorio, Fabio

    2009-10-01

    Performance Evaluation and Benchmarking of Intelligent Systems presents research dedicated to the subject of performance evaluation and benchmarking of intelligent systems by drawing from the experiences and insights of leading experts gained both through theoretical development and practical implementation of intelligent systems in a variety of diverse application domains. This contributed volume offers a detailed and coherent picture of state-of-the-art, recent developments, and further research areas in intelligent systems. The chapters cover a broad range of applications, such as assistive robotics, planetary surveying, urban search and rescue, and line tracking for automotive assembly. Subsystems or components described in this book include human-robot interaction, multi-robot coordination, communications, perception, and mapping. Chapters are also devoted to simulation support and open source software for cognitive platforms, providing examples of the type of enabling underlying technologies that can help intelligent systems to propagate and increase in capabilities. Performance Evaluation and Benchmarking of Intelligent Systems serves as a professional reference for researchers and practitioners in the field. This book is also applicable to advanced courses for graduate level students and robotics professionals in a wide range of engineering and related disciplines including computer science, automotive, healthcare, manufacturing, and service robotics.

  15. Social Acceptance of Wind: A Brief Overview (Presentation)

    SciTech Connect

    Lantz, E.

    2015-01-01

    This presentation discusses concepts and trends in social acceptance of wind energy, profiles recent research findings, and discusses mitigation strategies intended to resolve wind power social acceptance challenges, as informed by published research and the experiences of individuals participating in the International Energy Agency's Working Group on Social Acceptance of Wind Energy.

  16. ACT in Context: An Exploration of Experiential Acceptance

    ERIC Educational Resources Information Center

    Block-Lerner, Jennifer; Wulfert, Edelgard; Moses, Erica

    2009-01-01

    Experiential acceptance, which involves "having," or "allowing" private experiences, has recently gained much attention in the cognitive-behavioral literature. Acceptance, however, may be considered a common factor among psychotherapeutic traditions. The purposes of this paper are to examine the historical roots of acceptance and to discuss the…

  17. Monte Carlo code criticality benchmark comparisons for waste packaging

    SciTech Connect

    Alesso, H.P.; Annese, C.E.; Buck, R.M.; Pearson, J.S.; Lloyd, W.R.

    1992-07-01

    COG is a new point-wise Monte Carlo code being developed and tested at Lawrence Livermore National Laboratory (LLNL). It solves the Boltzmann equation for the transport of neutrons and photons. The objective of this paper is to report on COG results for criticality benchmark experiments both on a Cray mainframe and on a HP 9000 workstation. COG has recently been ported to workstations to improve its accessibility to a wider community of users. COG has some similarities to a number of other computer codes used in the shielding and criticality community. The recently introduced high performance reduced instruction set (RISC) UNIX workstations provide computational power that approaches that of mainframes at a fraction of the cost. A version of COG is currently being developed for the Hewlett Packard 9000/730 computer with a UNIX operating system. Subsequent porting operations will move COG to SUN, DEC, and IBM workstations. In addition, a CAD system for preparation of the geometry input for COG is being developed. In July 1977, Babcock & Wilcox Co. (B&W) was awarded a contract to conduct a series of critical experiments that simulated close-packed storage of LWR-type fuel. These experiments provided data for benchmarking and validating calculational methods used in predicting K-effective of nuclear fuel storage in close-packed, neutron poisoned arrays. Low enriched UO2 fuel pins in water-moderated lattices in fuel storage represent a challenging criticality calculation for Monte Carlo codes, particularly when the fuel pins extend out of the water. COG and KENO calculational results of these criticality benchmark experiments are presented.
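
    To make the criticality calculation concrete, the sketch below runs a toy analog Monte Carlo k-eff estimate in an infinite homogeneous medium; production codes such as COG and KENO also track geometry, energy, and angle, and the collision probabilities used here are invented.

      # Toy analog Monte Carlo k-eff estimate in an infinite homogeneous
      # medium; per generation, k = fission neutrons produced / histories run.
      import random

      P_FISSION, P_CAPTURE, NU = 0.40, 0.45, 2.45   # remainder scatters

      def run_generation(n_start):
          produced = 0.0
          for _ in range(n_start):
              while True:                   # follow one neutron to absorption
                  r = random.random()
                  if r < P_FISSION:
                      produced += NU        # expected yield (a real code samples
                      break                 # an integer number of secondaries)
                  elif r < P_FISSION + P_CAPTURE:
                      break                 # sterile capture ends the history
                  # otherwise the neutron scattered; keep following it
          return produced / n_start         # per-generation k estimate

      random.seed(1)
      ks = [run_generation(20_000) for _ in range(10)]
      print("k-eff ~", sum(ks)/len(ks))     # analytic value here: ~1.153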

  19. Benchmarking transportation logistics practices for effective system planning

    SciTech Connect

    Thrower, A.W.; Dravo, A.N.; Keister, M.

    2007-07-01

    This paper presents preliminary findings of an Office of Civilian Radioactive Waste Management (OCRWM) benchmarking project to identify best practices for logistics enterprises. The results will help OCRWM's Office of Logistics Management (OLM) design and implement a system to move spent nuclear fuel (SNF) and high-level radioactive waste (HLW) to the Yucca Mountain repository for disposal when that facility is licensed and built. This report suggests topics for additional study. The project team looked at three Federal radioactive material logistics operations that are widely viewed to be successful: (1) the Waste Isolation Pilot Plant (WIPP) in Carlsbad, New Mexico; (2) the Naval Nuclear Propulsion Program (NNPP); and (3) domestic and foreign research reactor (FRR) SNF acceptance programs. (authors)

  20. Raising Quality and Achievement. A College Guide to Benchmarking.

    ERIC Educational Resources Information Center

    Owen, Jane

    This booklet introduces the principles and practices of benchmarking as a way of raising quality and achievement at further education colleges in Britain. Section 1 defines the concept of benchmarking. Section 2 explains what benchmarking is not and the steps that should be taken before benchmarking is initiated. The following aspects and…

  1. 42 CFR 440.330 - Benchmark health benefits coverage.

    Code of Federal Regulations, 2010 CFR

    2010-10-01

    ... 42 Public Health 4 2010-10-01 2010-10-01 false Benchmark health benefits coverage. 440.330 Section 440.330 Public Health CENTERS FOR MEDICARE & MEDICAID SERVICES, DEPARTMENT OF HEALTH AND HUMAN... Benchmark-Equivalent Coverage § 440.330 Benchmark health benefits coverage. Benchmark coverage is...

  2. 42 CFR 440.330 - Benchmark health benefits coverage.

    Code of Federal Regulations, 2012 CFR

    2012-10-01

    ... 42 Public Health 4 2012-10-01 2012-10-01 false Benchmark health benefits coverage. 440.330 Section 440.330 Public Health CENTERS FOR MEDICARE & MEDICAID SERVICES, DEPARTMENT OF HEALTH AND HUMAN... Benchmark-Equivalent Coverage § 440.330 Benchmark health benefits coverage. Benchmark coverage is...

  3. 42 CFR 440.330 - Benchmark health benefits coverage.

    Code of Federal Regulations, 2011 CFR

    2011-10-01

    ... 42 Public Health 4 2011-10-01 2011-10-01 false Benchmark health benefits coverage. 440.330 Section 440.330 Public Health CENTERS FOR MEDICARE & MEDICAID SERVICES, DEPARTMENT OF HEALTH AND HUMAN... Benchmark-Equivalent Coverage § 440.330 Benchmark health benefits coverage. Benchmark coverage is...

  4. 42 CFR 414.1255 - Benchmarks for cost measures.

    Code of Federal Regulations, 2013 CFR

    2013-10-01

    ... 42 Public Health 3 2013-10-01 2013-10-01 false Benchmarks for cost measures. 414.1255 Section 414... Payment Modifier Under the Physician Fee Schedule § 414.1255 Benchmarks for cost measures. The benchmark...-based payment modifier. In calculating the national benchmark, groups of physicians' performance...

  5. 45 CFR 156.110 - EHB-benchmark plan standards.

    Code of Federal Regulations, 2013 CFR

    2013-10-01

    ... 45 Public Welfare 1 2013-10-01 2013-10-01 false EHB-benchmark plan standards. 156.110 Section 156... Essential Health Benefits Package § 156.110 EHB-benchmark plan standards. An EHB-benchmark plan must meet..., including oral and vision care. (b) Coverage in each benefit category. A base-benchmark plan not...

  6. 45 CFR 156.110 - EHB-benchmark plan standards.

    Code of Federal Regulations, 2014 CFR

    2014-10-01

    ... 45 Public Welfare 1 2014-10-01 2014-10-01 false EHB-benchmark plan standards. 156.110 Section 156... Essential Health Benefits Package § 156.110 EHB-benchmark plan standards. An EHB-benchmark plan must meet..., including oral and vision care. (b) Coverage in each benefit category. A base-benchmark plan not...

  7. Trinity Acceptance Tests Performance Summary.

    SciTech Connect

    Rajan, Mahesh

    2015-12-01

    Ensuring real applications perform well on Trinity is key to success. The acceptance tests comprise four components: ASC applications, Sustained System Performance (SSP), extra-large mini-application problems, and micro-benchmarks.
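
    Composite metrics of the SSP flavor are commonly built by aggregating per-application performance ratios, often with a geometric mean; the sketch below is purely illustrative and is not the Trinity SSP formula.

      # Illustrative composite score: geometric mean of per-application
      # performance ratios (measured rate / reference rate). Application
      # names and numbers are invented.
      from math import prod

      ratios = {"app_hydro": 1.35, "app_particle": 1.10, "app_mesh": 0.95}
      score = prod(ratios.values()) ** (1.0 / len(ratios))
      print(f"composite score: {score:.3f}")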

  8. Acceptability of human risk.

    PubMed Central

    Kasperson, R E

    1983-01-01

    This paper has three objectives: to explore the nature of the problem implicit in the term "risk acceptability," to examine the possible contributions of scientific information to risk standard-setting, and to argue that societal response is best guided by considerations of process rather than formal methods of analysis. Most technological risks are not accepted but are imposed. There is also little reason to expect consensus among individuals on their tolerance of risk. Moreover, debates about risk levels are often at base debates over the adequacy of the institutions which manage the risks. Scientific information can contribute three broad types of analyses to risk-setting deliberations: contextual analysis, equity assessment, and public preference analysis. More effective risk-setting decisions will involve attention to the process used, particularly in regard to the requirements of procedural justice and democratic responsibility. PMID:6418541

  9. DICE: Database for the International Criticality Safety Benchmark Evaluation Program Handbook

    SciTech Connect

    Nouri, Ali; Nagel, Pierre; Briggs, J. Blair; Ivanova, Tatiana

    2003-09-15

    The 2002 edition of the 'International Handbook of Evaluated Criticality Safety Benchmark Experiments' (ICSBEP Handbook) spans more than 26 000 pages and contains 330 evaluations with benchmark specifications for 2881 critical or near-critical configurations. With such a large content, it became evident that the users needed more than a broad and qualitative classification of experiments to make efficient use of the ICSBEP Handbook. This paper describes the features of the Database for the International Handbook of Evaluated Criticality Safety Benchmark Experiments (DICE), which is a database for the ICSBEP Handbook. The DICE program contains a relational database loaded with selected information from each configuration and a user interface that enables one to query the database and to extract specific parameters. Summary descriptions of each experimental configuration can also be obtained. In addition, plotting capabilities provide the means of comparing neutron spectra and sensitivity coefficients for a set of configurations.
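
    A miniature of a DICE-style relational query is sketched below; the schema and rows are invented for illustration and do not reproduce the actual DICE data model.

      # Miniature relational query over benchmark configurations, with an
      # invented schema and invented rows.
      import sqlite3

      con = sqlite3.connect(":memory:")
      con.execute("""CREATE TABLE configuration (
          evaluation TEXT, case_no INTEGER, fuel TEXT,
          spectrum TEXT, keff REAL, keff_unc REAL)""")
      con.executemany("INSERT INTO configuration VALUES (?,?,?,?,?,?)", [
          ("LEU-COMP-THERM-008", 1, "UO2",      "thermal", 1.0007, 0.0016),
          ("PU-MET-FAST-001",    1, "Pu metal", "fast",    1.0000, 0.0020),
      ])
      query = ("SELECT evaluation, case_no, keff, keff_unc "
               "FROM configuration WHERE spectrum = ?")
      for row in con.execute(query, ("thermal",)):
          print(row)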

  10. MCNP calculations for Russian criticality-safety benchmarks

    SciTech Connect

    Capell, B.M.; Mosteller, R.D.; Pelowitz, D.B.

    1996-12-31

    The current edition of the International Handbook of Evaluated Criticality Safety Benchmark Experiments contains evaluations of 20 critical experiments performed and evaluated by the Institute for Experimental Physics of the Russian Federal Nuclear Center (VNIIEF) at Arzamas-16 and 16 critical experiments performed and evaluated by the Institute for Technical Physics of the Russian Federal Nuclear Center (VNIITF) at Chelyabinsk-70. These fast-spectrum experiments are of particular interest for data testing of ENDF/B-VI because they contain uranium metal systems of intermediate enrichment as well as uranium and plutonium metal systems with reflectors such as graphite, stainless steel, polyethylene, beryllium, and beryllium oxide. This paper presents the first published results for such systems using cross-section libraries based on ENDF/B-VI.

  11. Benchmarking ENDF/B-VII.1, JENDL-4.0 and JEFF-3.1.1 with MCNP6

    NASA Astrophysics Data System (ADS)

    van der Marck, Steven C.

    2012-12-01

    Recent releases of three major world nuclear reaction data libraries, ENDF/B-VII.1, JENDL-4.0, and JEFF-3.1.1, have been tested extensively using benchmark calculations. The calculations were performed with the latest release of the continuous energy Monte Carlo neutronics code MCNP, i.e. MCNP6. Three types of benchmarks were used, viz. criticality safety benchmarks, (fusion) shielding benchmarks, and reference systems for which the effective delayed neutron fraction is reported. For criticality safety, more than 2000 benchmarks from the International Handbook of Criticality Safety Benchmark Experiments were used. Benchmarks from all categories were used, ranging from low-enriched uranium, compound fuel, thermal spectrum ones (LEU-COMP-THERM), to mixed uranium-plutonium, metallic fuel, fast spectrum ones (MIX-MET-FAST). For fusion shielding many benchmarks were based on IAEA specifications for the Oktavian experiments (for Al, Co, Cr, Cu, LiF, Mn, Mo, Si, Ti, W, Zr), Fusion Neutronics Source in Japan (for Be, C, N, O, Fe, Pb), and Pulsed Sphere experiments at Lawrence Livermore National Laboratory (for 6Li, 7Li, Be, C, N, O, Mg, Al, Ti, Fe, Pb, D2O, H2O, concrete, polyethylene and teflon). The new functionality in MCNP6 to calculate the effective delayed neutron fraction was tested by comparison with more than thirty measurements in widely varying systems. Among these were measurements in the Tank Critical Assembly (TCA in Japan) and IPEN/MB-01 (Brazil), both with a thermal spectrum, two cores in Masurca (France) and three cores in the Fast Critical Assembly (FCA, Japan), all with fast spectra. The performance of the three libraries, in combination with MCNP6, is shown to be good. The results for the LEU-COMP-THERM category are on average very close to the benchmark value. Also for most other categories the results are satisfactory. Deviations from the benchmark values do occur in certain benchmark series, or in isolated cases within benchmark series. Such

  12. Benchmarking local healthcare-associated infections: available benchmarks and interpretation challenges.

    PubMed

    El-Saed, Aiman; Balkhy, Hanan H; Weber, David J

    2013-10-01

    Growing numbers of healthcare facilities are routinely collecting standardized data on healthcare-associated infection (HAI), which can be used not only to track internal performance but also to compare local data to national and international benchmarks. Benchmarking overall (crude) HAI surveillance metrics without accounting or adjusting for potential confounders can result in misleading conclusions. Methods commonly used to provide risk-adjusted metrics include multivariate logistic regression analysis, stratification, indirect standardization, and restrictions. The characteristics of recognized benchmarks worldwide, including their advantages and limitations, are described. The choice of the right benchmark for the data from the Gulf Cooperation Council (GCC) states is challenging. The chosen benchmark should have similar data collection and presentation methods. Additionally, differences in surveillance environments, including regulations, should be taken into consideration when considering such a benchmark. The GCC center for infection control took some steps to unify HAI surveillance systems in the region. GCC hospitals still need to overcome legislative and logistic difficulties in sharing data to create their own benchmark. The availability of a regional GCC benchmark may better enable health care workers and researchers to obtain more accurate and realistic comparisons.
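
    Indirect standardization, one of the adjustment methods listed above, reduces to an observed-over-expected ratio; the sketch below computes a standardized infection ratio (SIR) from invented stratum data.

      # Standardized infection ratio (SIR): observed infections divided by
      # the count expected if reference (benchmark) rates applied to the
      # local mix of device-days. Stratum names, rates, and counts invented.
      local_device_days = {"medical_icu": 4200, "surgical_icu": 3100}
      benchmark_rate_per_1000 = {"medical_icu": 2.0, "surgical_icu": 3.2}
      observed = 21

      expected = sum(days * benchmark_rate_per_1000[unit] / 1000.0
                     for unit, days in local_device_days.items())
      sir = observed / expected
      print(f"expected = {expected:.1f}, SIR = {sir:.2f} "
            f"({'above' if sir > 1 else 'at or below'} benchmark)")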

  13. The VTE Benchmarking Model: Benchmarking Quality Performance in Vocational Technical Education.

    ERIC Educational Resources Information Center

    Losh, Charles

    1993-01-01

    Discusses benchmarking--finding and implementing the best practices--in business and industry and describes a model that can be used in vocational-technical education. Suggests that benchmarking is a tool that can be used by vocational-technical educators as they strive for excellence. (JOW)

  14. Test Nationally, Benchmark Locally: Using Local DIBELS Benchmarks to Predict Performance on the Pssa

    ERIC Educational Resources Information Center

    Ferchalk, Matthew R.

    2013-01-01

    The Dynamic Indicators of Basic Early Literacy Skills (DIBELS) benchmarks are frequently used to make important decisions regarding student performance. More information, however, is needed to understand if the nationally-derived benchmarks created by the DIBELS system provide the most accurate criterion for evaluating reading proficiency. The…

  15. Benchmark 2 - Springback of a draw / re-draw panel: Part C: Benchmark analysis

    NASA Astrophysics Data System (ADS)

    Carsley, John E.; Xia, Cedric; Yang, Lianxiang; Stoughton, Thomas B.; Xu, Siguang; Hartfield-Wünsch, Susan E.; Li, Jingjing

    2013-12-01

    Benchmark analysis is summarized for DP600 and AA 5182-O. Nine simulation results submitted for this benchmark study are compared to the physical measurement results. The details on the codes, friction parameters, mesh technology, CPU, and material models are also summarized at the end of this report with the participant information details.

  16. Environmental radiation: risk benchmarks or benchmarking risk assessment.

    PubMed

    Bates, Matthew E; Valverde, L James; Vogel, John T; Linkov, Igor

    2011-07-01

    In the wake of the compound March 2011 nuclear disaster at the Fukushima I nuclear power plant in Japan, international public dialogue has repeatedly turned to questions of the accuracy of current risk assessment processes to assess nuclear risks and the adequacy of existing regulatory risk thresholds to protect us from nuclear harm. We confront these issues with an emphasis on learning from the incident in Japan for future US policy discussions. Without delving into a broader philosophical discussion of the general social acceptance of the risk, the relative adequacy of existing US Nuclear Regulatory Commission (NRC) risk thresholds is assessed in comparison with the risk thresholds of federal agencies not currently under heightened public scrutiny. Existing NRC thresholds are found to be among the most conservative in the comparison, suggesting that the agency's current regulatory framework is consistent with larger societal ideals. In turning to risk assessment methodologies, the disaster in Japan does indicate room for growth. Emerging lessons seem to indicate an opportunity to enhance resilience through systemic levels of risk aggregation. Specifically, we believe bringing systemic reasoning to the risk management process requires a framework that (i) is able to represent risk-based knowledge and information about a panoply of threats; (ii) provides a systemic understanding (and representation) of the natural and built environments of interest and their dependencies; and (iii) allows for the rational and coherent valuation of a range of outcome variables of interest, both tangible and intangible. Rather than revisiting the thresholds themselves, we see the goal of future nuclear risk management in adopting and implementing risk assessment techniques that systemically evaluate large-scale socio-technical systems with a view toward enhancing resilience and minimizing the potential for surprise. PMID:21608107

  17. Age and Acceptance of Euthanasia.

    ERIC Educational Resources Information Center

    Ward, Russell A.

    1980-01-01

    Study explores relationship between age (and sex and race) and acceptance of euthanasia. Women and non-Whites were less accepting because of religiosity. Among older people less acceptance was attributable to their lesser education and greater religiosity. Results suggest that quality of life in old age affects acceptability of euthanasia. (Author)

  18. Action-Oriented Benchmarking: Using the CEUS Database to Benchmark Commercial Buildings in California

    SciTech Connect

    Mathew, Paul; Mills, Evan; Bourassa, Norman; Brook, Martha

    2008-02-01

    The 2006 Commercial End Use Survey (CEUS) database developed by the California Energy Commission is a far richer source of energy end-use data for non-residential buildings than has previously been available and opens the possibility of creating new and more powerful energy benchmarking processes and tools. In this article--Part 2 of a two-part series--we describe the methodology and selected results from an action-oriented benchmarking approach using the new CEUS database. This approach goes beyond whole-building energy benchmarking to more advanced end-use and component-level benchmarking that enables users to identify and prioritize specific energy efficiency opportunities - an improvement on benchmarking tools typically in use today.

  19. Toxicological benchmarks for wildlife: 1994 Revision

    SciTech Connect

    Opresko, D.M.; Sample, B.E.; Suter, G.W. II

    1994-09-01

    The process by which ecological risks of environmental contaminants are evaluated is two-tiered. The first tier is a screening assessment where concentrations of contaminants in the environment are compared to toxicological benchmarks which represent concentrations of chemicals in environmental media (water, sediment, soil, food, etc.) that are presumed to be nonhazardous to the surrounding biota. The second tier is a baseline ecological risk assessment where toxicological benchmarks are one of several lines of evidence used to support or refute the presence of ecological effects. The report presents toxicological benchmarks for assessment of effects of 76 chemicals on 8 representative mammalian wildlife species and 31 chemicals on 9 avian wildlife species. The chemicals are some of those that occur at United States Department of Energy waste sites; the wildlife species were chosen because they are widely distributed and provide a representative range of body sizes and diets. Further descriptions of the chosen wildlife species and chemicals are provided in the report. The benchmarks presented in this report represent values believed to be nonhazardous for the listed wildlife species. These benchmarks consider only contaminant exposure through oral ingestion of contaminated media; exposure through inhalation or direct dermal contact is not considered in this report.
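
    The first-tier screening comparison reduces to a hazard quotient, the estimated exposure divided by the benchmark; a sketch with invented doses and benchmark values follows.

      # First-tier screening: hazard quotient (HQ) = estimated oral dose /
      # wildlife benchmark; HQ > 1 retains the chemical-species pair for the
      # baseline assessment. All doses and benchmarks below are invented.
      screening = {  # (chemical, species): (dose, benchmark), mg/kg-day
          ("cadmium", "short-tailed shrew"): (1.8, 0.77),
          ("zinc",    "red fox"):            (9.0, 45.0),
      }
      for (chemical, species), (dose, benchmark) in screening.items():
          hq = dose / benchmark
          note = "  -> retain for baseline assessment" if hq > 1 else ""
          print(f"{chemical:8} / {species:18} HQ = {hq:5.2f}{note}")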

  20. Quantum benchmark for teleportation and storage of squeezed states.

    PubMed

    Adesso, Gerardo; Chiribella, Giulio

    2008-05-01

    We provide a quantum benchmark for teleportation and storage of single-mode squeezed states with zero displacement and a completely unknown degree of squeezing along a given direction. For pure squeezed input states, a fidelity higher than 81.5% has to be attained in order to outperform any classical strategy based on an estimation of the unknown squeezing and repreparation of squeezed states. For squeezed thermal input states, we derive an upper and a lower bound on the classical average fidelity, which tighten for moderate degrees of mixedness. These results enable a critical discussion of recent experiments with squeezed light.

  1. Preliminary Benchmark Evaluation of Japan’s High Temperature Engineering Test Reactor

    SciTech Connect

    John Darrell Bess

    2009-05-01

    A benchmark model of the initial fully-loaded start-up core critical of Japan’s High Temperature Engineering Test Reactor (HTTR) was developed to provide data in support of ongoing validation efforts of the Very High Temperature Reactor Program using publicly available resources. The HTTR is a 30 MWt test reactor utilizing graphite moderation, helium coolant, and prismatic TRISO fuel. The benchmark was modeled using MCNP5 with various neutron cross-section libraries. An uncertainty evaluation was performed by perturbing the benchmark model and comparing the resultant eigenvalues. The calculated eigenvalues are approximately 2-3% greater than expected with an uncertainty of ±0.70%. The primary sources of uncertainty are the impurities in the core and reflector graphite. The release of additional HTTR data could effectively reduce the benchmark model uncertainties and bias. Sensitivity of the results to the graphite impurity content might imply that further evaluation of the graphite content could significantly improve calculated results. Proper characterization of graphite for future Next Generation Nuclear Plant reactor designs will improve computational modeling capabilities. Current benchmarking activities include evaluation of the annular HTTR cores and assessment of the remaining start-up core physics experiments, including reactivity effects, reactivity coefficient, and reaction-rate distribution measurements. Long term benchmarking goals might include analyses of the hot zero-power critical, rise-to-power tests, and other irradiation, safety, and technical evaluations performed with the HTTR.

  2. Benchmarking computational fluid dynamics models for lava flow simulation

    NASA Astrophysics Data System (ADS)

    Dietterich, Hannah; Lev, Einat; Chen, Jiangzhi

    2016-04-01

    Numerical simulations of lava flow emplacement are valuable for assessing lava flow hazards, forecasting active flows, interpreting past eruptions, and understanding the controls on lava flow behavior. Existing lava flow models vary in simplifying assumptions, physics, dimensionality, and the degree to which they have been validated against analytical solutions, experiments, and natural observations. In order to assess existing models and guide the development of new codes, we conduct a benchmarking study of computational fluid dynamics models for lava flow emplacement, including VolcFlow, OpenFOAM, FLOW-3D, and COMSOL. Using the new benchmark scenarios defined in Cordonnier et al. (Geol Soc SP, 2015) as a guide, we model viscous, cooling, and solidifying flows over horizontal and sloping surfaces, topographic obstacles, and digital elevation models of natural topography. We compare model results to analytical theory, analogue and molten basalt experiments, and measurements from natural lava flows. Overall, the models accurately simulate viscous flow with some variability in flow thickness where flows intersect obstacles. OpenFOAM, COMSOL, and FLOW-3D can each reproduce experimental measurements of cooling viscous flows, and FLOW-3D simulations with temperature-dependent rheology match results from molten basalt experiments. We can apply these models to reconstruct past lava flows in Hawai'i and Saudi Arabia using parameters assembled from morphology, textural analysis, and eruption observations as natural test cases. Our study highlights the strengths and weaknesses of each code, including accuracy and computational costs, and provides insights regarding code selection.

  3. Energy benchmarking of South Australian WWTPs.

    PubMed

    Krampe, J

    2013-01-01

    Optimising the energy consumption and energy generation of wastewater treatment plants (WWTPs) is a topic of increasing importance for water utilities in times of rising energy costs and pressures to reduce greenhouse gas (GHG) emissions. Assessing the energy efficiency and energy optimisation of a WWTP is a difficult task as most plants vary greatly in size, process layout and other influencing factors. To overcome these limitations it is necessary to compare energy efficiency against a statistically relevant base to identify shortfalls and optimisation potential. Such energy benchmarks have been successfully developed and used in central Europe over the last two decades. This paper demonstrates how the latest available energy benchmarks from Germany have been applied to 24 WWTPs in South Australia. It shows how energy benchmarking can be used to identify shortfalls in current performance, prioritise detailed energy assessments and help inform decisions on capital investment.
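
    Applying such benchmarks is essentially a normalization exercise; the sketch below compares specific energy consumption (kWh per population equivalent per year) against hypothetical size-class targets, not the actual German benchmark values.

      # Normalize plant energy use to kWh per population equivalent (PE)
      # per year and compare against size-class targets. Plant data and
      # target values are illustrative only.
      plants = [            # (name, kWh per year, population equivalents)
          ("Plant A", 2_900_000, 85_000),
          ("Plant B",   510_000, 11_000),
      ]
      target_kwh_per_pe = {"large": 30.0, "small": 45.0}

      for name, kwh, pe in plants:
          specific = kwh / pe
          target = target_kwh_per_pe["large" if pe > 50_000 else "small"]
          verdict = "optimisation potential" if specific > target else "on target"
          print(f"{name}: {specific:5.1f} kWh/PE-yr vs target {target:.1f} ({verdict})")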

  4. Benchmark field study of deep neutron penetration

    SciTech Connect

    Morgan, J.F.; Sale, K. ); Gold, R.; Roberts, J.H.; Preston, C.C. )

    1991-06-10

    A unique benchmark neutron field has been established at the Lawrence Livermore National Laboratory (LLNL) to study deep-penetration neutron transport. At LLNL, a tandem accelerator is used to generate a monoenergetic neutron source that permits investigation of deep neutron penetration under conditions that are virtually ideal to model, namely the transport of monoenergetic neutrons through a single material in a simple geometry. General features of the Lawrence Tandem (LATAN) benchmark field are described with emphasis on neutron source characteristics and room-return background. The single material chosen for the first benchmark, LATAN-1, is a steel representative of Light Water Reactor (LWR) pressure vessels (PV). Also included is a brief description of the Little Boy replica, a critical reactor assembly designed to mimic the radiation doses from the atomic bomb dropped on Hiroshima, and its use in neutron spectrometry. 18 refs.

  5. New Test Set for Video Quality Benchmarking

    NASA Astrophysics Data System (ADS)

    Raventos, Joaquin

    A new test set design and benchmarking approach (US Patent pending) allows a "standard observer" to assess the end-to-end image quality characteristics of video imaging systems operating in day time or low-light conditions. It uses randomized targets based on extensive application of Photometry, Geometrical Optics, and Digital Media. The benchmarking takes into account the target's contrast sensitivity, its color characteristics, and several aspects of human vision such as visual acuity and dynamic response. The standard observer is part of the "extended video imaging system" (EVIS). The new test set allows image quality benchmarking by a panel of standard observers at the same time. The new approach shows that an unbiased assessment can be guaranteed. Manufacturers, system integrators, and end users will assess end-to-end performance by simulating a choice of different colors, luminance levels, and dynamic conditions in the laboratory or in permanent video systems installations.

  6. Standardized benchmarking in the quest for orthologs.

    PubMed

    Altenhoff, Adrian M; Boeckmann, Brigitte; Capella-Gutierrez, Salvador; Dalquen, Daniel A; DeLuca, Todd; Forslund, Kristoffer; Huerta-Cepas, Jaime; Linard, Benjamin; Pereira, Cécile; Pryszcz, Leszek P; Schreiber, Fabian; da Silva, Alan Sousa; Szklarczyk, Damian; Train, Clément-Marie; Bork, Peer; Lecompte, Odile; von Mering, Christian; Xenarios, Ioannis; Sjölander, Kimmen; Jensen, Lars Juhl; Martin, Maria J; Muffato, Matthieu; Gabaldón, Toni; Lewis, Suzanna E; Thomas, Paul D; Sonnhammer, Erik; Dessimoz, Christophe

    2016-05-01

    Achieving high accuracy in orthology inference is essential for many comparative, evolutionary and functional genomic analyses, yet the true evolutionary history of genes is generally unknown and orthologs are used for very different applications across phyla, requiring different precision-recall trade-offs. As a result, it is difficult to assess the performance of orthology inference methods. Here, we present a community effort to establish standards and an automated web-based service to facilitate orthology benchmarking. Using this service, we characterize 15 well-established inference methods and resources on a battery of 20 different benchmarks. Standardized benchmarking provides a way for users to identify the most effective methods for the problem at hand, sets a minimum requirement for new tools and resources, and guides the development of more accurate orthology inference methods. PMID:27043882

  7. NAS Parallel Benchmarks, Multi-Zone Versions

    NASA Technical Reports Server (NTRS)

    vanderWijngaart, Rob F.; Jin, Haoqiang

    2003-01-01

    We describe an extension of the NAS Parallel Benchmarks (NPB) suite that involves solving the application benchmarks LU, BT and SP on collections of loosely coupled discretization meshes. The solutions on the meshes are updated independently, but after each time step they exchange boundary value information. This strategy, which is common among structured-mesh production flow solver codes in use at NASA Ames and elsewhere, provides relatively easily exploitable coarse-grain parallelism between meshes. Since the individual application benchmarks also allow fine-grain parallelism themselves, this NPB extension, named NPB Multi-Zone (NPB-MZ), is a good candidate for testing hybrid and multi-level parallelization tools and strategies.

  9. The benchmark analysis of gastric, colorectal and rectal cancer pathways: toward establishing standardized clinical pathway in the cancer care.

    PubMed

    Ryu, Munemasa; Hamano, Masaaki; Nakagawara, Akira; Shinoda, Masayuki; Shimizu, Hideaki; Miura, Takeshi; Yoshida, Isao; Nemoto, Atsushi; Yoshikawa, Aki

    2011-01-01

    Most clinical pathways for treating cancers in Japan are based on individual physicians' personal experiences rather than on an empirical analysis of clinical data, such as benchmark comparisons with other hospitals. Therefore, these pathways are far from being standardized. By comparing detailed clinical data from five cancer centers, we have observed various differences among hospitals. By conducting benchmark analyses, providing detailed feedback to the participating hospitals and by repeating the benchmark a year later, we strive to develop more standardized clinical pathways for the treatment of cancers. The Cancer Quality Initiative was launched in 2007 by five cancer centers. Using diagnosis procedure combination data, the member hospitals benchmarked their pre-operative and post-operative lengths of stay, the duration of antibiotic administration and the post-operative fasting duration for gastric, colon and rectal cancers. The benchmark was conducted by disclosing hospital identities and performed using 2007 and 2008 data. In the 2007 benchmark, substantial differences were shown among the five hospitals in the treatment of gastric, colon and rectal cancers. After providing the 2007 results to the participating hospitals and organizing several brainstorming discussions, significant improvements were observed in the 2008 data study. The benchmark analysis of clinical data is extremely useful in promoting more standardized care and, thus, in improving the quality of cancer treatment in Japan. By repeating the benchmark analyses, we can offer truly evidence-based, higher-quality standardized cancer treatment to our patients.

  10. Baby-Crying Acceptance

    NASA Astrophysics Data System (ADS)

    Martins, Tiago; de Magalhães, Sérgio Tenreiro

    A baby's crying is its most important means of communication. Crying monitors developed to date do not by themselves ensure the child's safety: these devices need to be complemented by a means of communicating the results to caregivers, which involves digital processing of the information available in the crying. The survey carried out made it possible to understand the level of adoption, in the continental territory of Portugal, of a technology able to perform such digital processing. The Technology Acceptance Model (TAM) was used as the theoretical framework. The statistical analysis showed that there is a good probability of acceptance of such a system.

  11. Two-dimensional benchmark calculations for PNL-30 through PNL-35

    SciTech Connect

    Mosteller, R.D.

    1997-09-01

    Interest in critical experiments with lattices of mixed-oxide (MOX) fuel pins has been revived by the possibility that light water reactors will be used for disposition of weapons-grade plutonium. A series of six experiments with MOX lattices, designated PNL-30 through PNL-35, was performed at Pacific Northwest Laboratories in 1975 and 1976, and a set of benchmark specifications for these experiments subsequently was adopted by the Cross Section Evaluation Working Group (CSEWG). Although there appear to be some problems with these experiments, they remain the only CSEWG benchmarks for MOX lattices. The number of fuel pins in these experiments is relatively low, corresponding to fewer than 4 typical pressurized-water-reactor fuel assemblies. Accordingly, they are more appropriate as benchmarks for lattice-physics codes than for reactor-core simulator codes. Unfortunately, the CSEWG specifications retain the full three-dimensional (3D) detail of the experiments, while lattice-physics codes almost universally are limited to two dimensions (2D). This paper proposes an extension of the benchmark specifications to include a 2D model, and it justifies that extension by comparing results from the MCNP Monte Carlo code for the 2D and 3D specifications.

  12. Toxicological benchmarks for wildlife: 1996 Revision

    SciTech Connect

    Sample, B.E.; Opresko, D.M.; Suter, G.W., II

    1996-06-01

    The purpose of this report is to present toxicological benchmarks for assessment of effects of certain chemicals on mammalian and avian wildlife species. Publication of this document meets a milestone for the Environmental Restoration (ER) Risk Assessment Program. This document provides the ER Program with toxicological benchmarks that may be used as comparative tools in screening assessments as well as lines of evidence to support or refute the presence of ecological effects in ecological risk assessments. The chemicals considered in this report are some that occur at US DOE waste sites, and the wildlife species evaluated herein were chosen because they represent a range of body sizes and diets.

  13. Benchmarks for the point kinetics equations

    SciTech Connect

    Ganapol, B.; Picca, P.; Previti, A.; Mostacci, D.

    2013-07-01

    A new numerical algorithm is presented for the solution of the point kinetics equations (PKEs), whose accurate solution has been sought for over 60 years. The method couples the simplest of finite difference methods, backward Euler, with Richardson extrapolation, also called acceleration. From this coupling, a series of benchmarks has emerged. These include cases from the literature as well as several new ones. The novelty of this presentation lies in the breadth of reactivity insertions considered, covering both prescribed and feedback reactivities, and the extreme 8- to 9-digit accuracy achievable. The benchmarks presented are to provide guidance to those who wish to develop further numerical improvements. (authors)
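
    A minimal sketch of the scheme the abstract describes, for a single precursor group with invented constants: backward Euler steps at two step sizes, combined by one Richardson extrapolation to cancel the leading first-order error.

      # One-precursor-group point kinetics: dn/dt = ((rho-beta)/Lambda) n +
      # lambda c, dc/dt = (beta/Lambda) n - lambda c, stepped implicitly.
      import numpy as np

      BETA, LAM_GEN, DECAY, RHO = 0.0065, 1e-4, 0.08, 0.003   # illustrative

      def backward_euler(t_end, dt):
          """Neutron density n(t_end), starting from equilibrium at n = 1."""
          y = np.array([1.0, BETA / (DECAY * LAM_GEN)])   # [n, precursor c]
          A = np.array([[(RHO - BETA) / LAM_GEN, DECAY],
                        [BETA / LAM_GEN,        -DECAY]])
          M = np.eye(2) - dt * A                # implicit update matrix
          for _ in range(round(t_end / dt)):
              y = np.linalg.solve(M, y)
          return y[0]

      coarse = backward_euler(1.0, 1e-3)
      fine = backward_euler(1.0, 5e-4)
      extrapolated = 2.0 * fine - coarse        # first-order Richardson step
      print(f"n(1 s): coarse={coarse:.6f}  fine={fine:.6f}  "
            f"extrapolated={extrapolated:.6f}")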

  14. Los Alamos National Laboratory computer benchmarking 1982

    SciTech Connect

    Martin, J.L.

    1983-06-01

    Evaluating the performance of computing machinery is a continual effort of the Computer Research and Applications Group of the Los Alamos National Laboratory. This report summarizes the results of the group's benchmarking activities performed between October 1981 and September 1982, presenting compilation and execution times as well as megaflop rates for a set of benchmark codes. Tests were performed on the following computers: Cray Research, Inc. (CRI) Cray-1S; Control Data Corporation (CDC) 7600, 6600, Cyber 73, Cyber 825, Cyber 835, Cyber 855, and Cyber 205; Digital Equipment Corporation (DEC) VAX 11/780 and VAX 11/782; and Apollo Computer, Inc., Apollo.

  15. FDNS CFD Code Benchmark for RBCC Ejector Mode Operation

    NASA Technical Reports Server (NTRS)

    Holt, James B.; Ruf, Joe

    1999-01-01

    Computational Fluid Dynamics (CFD) analysis results are compared with benchmark-quality test data from the Propulsion Engineering Research Center's (PERC) Rocket Based Combined Cycle (RBCC) experiments to verify fluid dynamic code and application procedures. RBCC engine flowpath development will rely on CFD applications to capture the multi-dimensional fluid dynamic interactions and to quantify their effect on the RBCC system performance. Therefore, the accuracy of these CFD codes must be determined through detailed comparisons with test data. The PERC experiments build upon the well-known 1968 rocket-ejector experiments of Odegaard and Stroup by employing advanced optical and laser-based diagnostics to evaluate mixing and secondary combustion. The Finite Difference Navier Stokes (FDNS) code was used to model the fluid dynamics of the PERC RBCC ejector mode configuration. Analyses were performed for both Diffusion and Afterburning (DAB) and Simultaneous Mixing and Combustion (SMC) test conditions. Results from both the 2D and the 3D models are presented.

  16. Experimental power density distribution benchmark in the TRIGA Mark II reactor

    SciTech Connect

    Snoj, L.; Stancar, Z.; Radulovic, V.; Podvratnik, M.; Zerovnik, G.; Trkov, A.; Barbot, L.; Domergue, C.; Destouches, C.

    2012-07-01

    In order to improve the power calibration process and to benchmark the existing computational model of the TRIGA Mark II reactor at the Jožef Stefan Institute (JSI), a bilateral project was started as part of the agreement between the French Commissariat à l'énergie atomique et aux énergies alternatives (CEA) and the Ministry of Higher Education, Science and Technology of Slovenia. One of the objectives of the project was to analyze and improve the power calibration process of the JSI TRIGA reactor (procedural improvement and uncertainty reduction) by using absolutely calibrated CEA fission chambers (FCs). This is one of the few available power density distribution benchmarks for testing not only the fission rate distribution but also the absolute values of the fission rates. Our preliminary calculations indicate that the total experimental uncertainty of the measured reaction rate is sufficiently low that the experiments could be considered benchmark experiments. (authors)

  17. Object-Oriented Implementation of the NAS Parallel Benchmarks using Charm++

    NASA Technical Reports Server (NTRS)

    Krishnan, Sanjeev; Bhandarkar, Milind; Kale, Laxmikant V.

    1996-01-01

    This report describes experiences with implementing the NAS Computational Fluid Dynamics benchmarks using a parallel object-oriented language, Charm++. Our main objective in implementing the NAS CFD kernel benchmarks was to develop a code that could be used to easily experiment with different domain decomposition strategies and dynamic load balancing. We also wished to leverage the object-orientation provided by the Charm++ parallel object-oriented language, to develop reusable abstractions that would simplify the process of developing parallel applications. We first describe the Charm++ parallel programming model and the parallel object array abstraction, then go into detail about each of the Scalar Pentadiagonal (SP) and Lower/Upper Triangular (LU) benchmarks, along with performance results. Finally we conclude with an evaluation of the methodology used.

  18. Computer acceptance of older adults.

    PubMed

    Nägle, Sibylle; Schmidt, Ludger

    2012-01-01

    Even though computers play a massive role in everyday life in modern societies, older adults, and especially older women, are less likely to use a computer, and they perform fewer activities on it than younger adults. To get a better understanding of the factors affecting older adults' intention towards and usage of computers, the Unified Theory of Acceptance and Use of Technology (UTAUT) was applied as part of a more extensive study with 52 users and non-users of computers, ranging in age from 50 to 90 years. The model covers various aspects of computer usage in old age via four key constructs, namely performance expectancy, effort expectancy, social influences, and facilitating conditions, as well as the moderating variables gender, age, experience, and voluntariness of use. Interestingly, next to performance expectancy, facilitating conditions showed the strongest correlation with use as well as with intention. Effort expectancy showed no significant correlation with the intention of older adults to use a computer.

  19. Overview of TPC Benchmark E: The Next Generation of OLTP Benchmarks

    NASA Astrophysics Data System (ADS)

    Hogan, Trish

    Set to replace the aging TPC-C, the TPC Benchmark E is the next generation OLTP benchmark, which more accurately models client database usage. TPC-E addresses the shortcomings of TPC-C. It has a much more complex workload, requires the use of RAID-protected storage, generates much less I/O, and is much cheaper and easier to set up, run, and audit. After a period of overlap, it is expected that TPC-E will become the de facto OLTP benchmark.

  20. Criticality benchmark guide for light-water-reactor fuel in transportation and storage packages

    SciTech Connect

    Lichtenwalter, J.J.; Bowman, S.M.; DeHart, M.D.; Hopper, C.M.

    1997-03-01

    This report is designed as a guide for performing criticality benchmark calculations for light-water-reactor (LWR) fuel applications. The guide provides documentation of 180 criticality experiments with geometries, materials, and neutron interaction characteristics representative of transportation packages containing LWR fuel or uranium oxide pellets or powder. These experiments should benefit the U.S. Nuclear Regulatory Commission (NRC) staff and licensees in validation of computational methods used in LWR fuel storage and transportation concerns. The experiments are classified by key parameters such as enrichment, water/fuel volume, hydrogen-to-fissile ratio (H/X), and lattice pitch. Groups of experiments with common features such as separator plates, shielding walls, and soluble boron are also identified. In addition, a sample validation using these experiments and a statistical analysis of the results are provided. Recommendations for selecting suitable experiments and determination of calculational bias and uncertainty are presented as part of this benchmark guide.
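
    The bias and uncertainty determination mentioned at the end of the abstract can be illustrated with a minimal sketch. The k-eff values below are hypothetical, and the simple subtraction of a fixed margin stands in for the guide's more rigorous statistical treatment (e.g., tolerance limits).

        import numpy as np

        # Hypothetical calculated k-eff values for benchmark experiments that are
        # critical by construction (expected k-eff = 1)
        k_calc = np.array([0.9962, 0.9987, 1.0004, 0.9978, 0.9991, 0.9969])

        bias = k_calc.mean() - 1.0   # mean deviation from criticality
        sigma = k_calc.std(ddof=1)   # sample standard deviation
        margin = 0.05                # illustrative administrative margin
        usl = 1.0 + bias - margin - 2.0 * sigma  # simplified upper subcritical limit
        print(f"bias = {bias:+.4f}, sigma = {sigma:.4f}, USL = {usl:.4f}")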

  1. NAS Parallel Benchmarks Results 3-95

    NASA Technical Reports Server (NTRS)

    Saini, Subhash; Bailey, David H.; Walter, Howard (Technical Monitor)

    1995-01-01

    The NAS Parallel Benchmarks (NPB) were developed in 1991 at NASA Ames Research Center to study the performance of parallel supercomputers. The eight benchmark problems are specified in a "pencil and paper" fashion, i.e., the complete details of the problem are given in a NAS technical document. Except for a few restrictions, benchmark implementors are free to select the language constructs and implementation techniques best suited for a particular system. In this paper, we present new NPB performance results for the following systems: (a) Parallel-Vector Processors: CRAY C90, CRAY T90 and Fujitsu VPP500; (b) Highly Parallel Processors: CRAY T3D, IBM SP2-WN (Wide Nodes), and IBM SP2-TN2 (Thin Nodes 2); and (c) Symmetric Multiprocessors: Convex Exemplar SPP1000, CRAY J90, DEC Alpha Server 8400 5/300, and SGI Power Challenge XL (75 MHz). We also present sustained performance per dollar for Class B LU, SP and BT benchmarks, and we mention future NAS plans for the NPB.

  2. Canadian Language Benchmarks 2000: Theoretical Framework.

    ERIC Educational Resources Information Center

    Pawlikowska-Smith, Grazyna

    This document provides an in-depth study and support of the "Canadian Language Benchmarks 2000" (CLB 2000). In order to make the CLB 2000 usable, the competencies and standards were considerably compressed and simplified, and much of the in-depth discussion of language ability or proficiency was omitted at publication. This document includes: (1)…

  3. Benchmarking 2011: Trends in Education Philanthropy

    ERIC Educational Resources Information Center

    Grantmakers for Education, 2011

    2011-01-01

    The analysis in "Benchmarking 2011" is based on data from an unduplicated sample of 184 education grantmaking organizations--approximately two-thirds of Grantmakers for Education's (GFE's) network of grantmakers--who responded to an online survey consisting of fixed-choice and open-ended questions. Because a different subset of funders elects to…

  4. Benchmarking 2010: Trends in Education Philanthropy

    ERIC Educational Resources Information Center

    Bearman, Jessica

    2010-01-01

    "Benchmarking 2010" offers insights into the current priorities, practices and concerns of education grantmakers. The report is divided into five sections: (1) Mapping the Education Grantmaking Landscape; (2) 2010 Funding Priorities; (3) Strategies for Leveraging Greater Impact; (4) Identifying Significant Trends in Education Funding; and (5)…

  5. What Is the Impact of Subject Benchmarking?

    ERIC Educational Resources Information Center

    Pidcock, Steve

    2006-01-01

    The introduction of subject benchmarking led to fears of increased external intervention in the activities of universities and a more restrictive view of institutional autonomy, accompanied by an undermining of the academic profession, particularly through the perceived threat of the introduction of a national curriculum for higher education. For…

  6. Benchmarking Year Five Students' Reading Abilities

    ERIC Educational Resources Information Center

    Lim, Chang Kuan; Eng, Lin Siew; Mohamed, Abdul Rashid

    2014-01-01

    Reading and understanding a written text is one of the most important skills in English learning. This study attempts to benchmark Year Five students' reading abilities in fifteen rural schools in a district in Malaysia. The objectives of this study are to develop a set of standardised written reading comprehension and a set of indicators to inform…

  7. Issues in Benchmarking and Assessing Institutional Engagement

    ERIC Educational Resources Information Center

    Furco, Andrew; Miller, William

    2009-01-01

    The process of assessing and benchmarking community engagement can take many forms. To date, more than two dozen assessment tools for measuring community engagement institutionalization have been published. These tools vary substantially in purpose, level of complexity, scope, process, structure, and focus. While some instruments are designed to…

  8. Alberta K-12 ESL Proficiency Benchmarks

    ERIC Educational Resources Information Center

    Salmon, Kathy; Ettrich, Mike

    2012-01-01

    The Alberta K-12 ESL Proficiency Benchmarks are organized by division: kindergarten, grades 1-3, grades 4-6, grades 7-9, and grades 10-12. They are descriptors of language proficiency in listening, speaking, reading, and writing. The descriptors are arranged in a continuum of seven language competences across five proficiency levels. Several…

  9. Benchmark Generation using Domain Specific Modeling

    SciTech Connect

    Bui, Ngoc B.; Zhu, Liming; Gorton, Ian; Liu, Yan

    2007-08-01

    Performance benchmarks are domain-specific applications that are specialized to a certain set of technologies and platforms. The development of a benchmark application requires mapping the performance-specific domain concepts to an implementation and producing complex technology- and platform-specific code. Domain Specific Modeling (DSM) promises to bridge the gap between application domains and implementations by allowing designers to specify solutions in domain-specific abstractions and semantics through Domain Specific Languages (DSLs). This allows a final implementation to be generated automatically from high-level models. The modeling and task-automation benefits obtained from this approach usually justify the upfront cost involved. This paper employs a DSM-based approach to create a new DSL, DSLBench, for benchmark generation. DSLBench and its associated code generation facilities allow the design and generation of a completely deployable benchmark application for performance testing from a high-level model. DSLBench is implemented using the Microsoft Domain Specific Language toolkit and is integrated with Visual Studio 2005 Team Suite as a plug-in to provide extra modeling capabilities for performance testing. We illustrate the approach using a case study based on .NET and C#.

  10. 2010 Recruiting Benchmarks Survey. Research Brief

    ERIC Educational Resources Information Center

    National Association of Colleges and Employers (NJ1), 2010

    2010-01-01

    The National Association of Colleges and Employers conducted its annual survey of employer members from June 15, 2010 to August 15, 2010, to benchmark data relevant to college recruiting. From a base of 861 employers holding organizational membership, there were 268 responses for a response rate of 31 percent. Following are some of the major…

  11. MHEC Survey Establishes Midwest Property Insurance Benchmarks.

    ERIC Educational Resources Information Center

    Midwestern Higher Education Commission Risk Management Institute Research Bulletin, 1994

    1994-01-01

    This publication presents the results of a survey of over 200 midwestern colleges and universities on their property insurance programs and establishes benchmarks to help these institutions evaluate their insurance programs. Findings included the following: (1) 51 percent of respondents currently purchase their property insurance as part of a…

  12. Sequenced Benchmarks for K-8 Science.

    ERIC Educational Resources Information Center

    Kendall, John S.; DeFrees, Keri L.; Richardson, Amy

    This document describes science benchmarks for grades K-8 in Earth and Space Science, Life Science, and Physical Science. Each subject area is divided into topics followed by a short content description and grade level information. Source documents for this paper included science content guides from California, Ohio, South Carolina, and South…

  13. Standardised Benchmarking in the Quest for Orthologs

    PubMed Central

    Altenhoff, Adrian M.; Boeckmann, Brigitte; Capella-Gutierrez, Salvador; Dalquen, Daniel A.; DeLuca, Todd; Forslund, Kristoffer; Huerta-Cepas, Jaime; Linard, Benjamin; Pereira, Cécile; Pryszcz, Leszek P.; Schreiber, Fabian; Sousa da Silva, Alan; Szklarczyk, Damian; Train, Clément-Marie; Bork, Peer; Lecompte, Odile; von Mering, Christian; Xenarios, Ioannis; Sjölander, Kimmen; Juhl Jensen, Lars; Martin, Maria J.; Muffato, Matthieu; Gabaldón, Toni; Lewis, Suzanna E.; Thomas, Paul D.; Sonnhammer, Erik; Dessimoz, Christophe

    2016-01-01

    The identification of evolutionarily related genes across different species—orthologs in particular—forms the backbone of many comparative, evolutionary, and functional genomic analyses. Achieving high accuracy in orthology inference is thus essential. Yet the true evolutionary history of genes, required to ascertain orthology, is generally unknown. Furthermore, orthologs are used for very different applications across different phyla, with different requirements in terms of the precision-recall trade-off. As a result, assessing the performance of orthology inference methods remains difficult for both users and method developers. Here, we present a community effort to establish standards in orthology benchmarking and to facilitate it through an automated web-based service (http://orthology.benchmarkservice.org). Using this new service, we characterise the performance of 15 well-established orthology inference methods and resources on a battery of 20 different benchmarks. Standardised benchmarking provides a way for users to identify the most effective methods for the problem at hand, sets a minimal requirement for new tools and resources, and guides the development of more accurate orthology inference methods. PMID:27043882
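
    The precision-recall trade-off at the heart of these benchmarks reduces to simple set accounting over predicted and reference ortholog pairs; the gene pairs below are hypothetical.

        # Hypothetical predicted vs. reference ortholog pairs (gene-ID tuples)
        predicted = {("g1", "h1"), ("g2", "h2"), ("g3", "h9")}
        reference = {("g1", "h1"), ("g2", "h2"), ("g4", "h4")}

        true_positives = len(predicted & reference)
        precision = true_positives / len(predicted)  # fraction of predictions that are correct
        recall = true_positives / len(reference)     # fraction of true orthologs recovered
        print(f"precision = {precision:.2f}, recall = {recall:.2f}")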

  14. Administrative benchmarks for Medicare, Medicaid HMOs.

    PubMed

    1998-11-01

    Plus, check out benchmark data on Medicare and Medicaid administrative costs. Every provider knows that HMOs take a slice of the Medicare or Medicaid premium for their administrative costs before they determine provider capitation. But how much does administration really cost? Here's some PMPM data from a study by the Sherlock Company.

  15. Benchmarking Peer Production Mechanisms, Processes & Practices

    ERIC Educational Resources Information Center

    Fischer, Thomas; Kretschmer, Thomas

    2008-01-01

    This deliverable identifies key approaches for quality management in peer production by benchmarking peer production practices and processes in other areas. (Contains 29 footnotes, 13 figures and 2 tables.)[This report has been authored with contributions of: Kaisa Honkonen-Ratinen, Matti Auvinen, David Riley, Jose Pinzon, Thomas Fischer, Thomas…

  16. Benchmark Generation and Simulation at Extreme Scale

    SciTech Connect

    Lagadapati, Mahesh; Mueller, Frank; Engelmann, Christian

    2016-01-01

    The path to extreme scale high-performance computing (HPC) poses several challenges related to power, performance, resilience, productivity, programmability, data movement, and data management. Investigating the performance of parallel applications at scale on future architectures and the performance impact of different architectural choices is an important component of HPC hardware/software co-design. Simulations using models of future HPC systems and communication traces from applications running on existing HPC systems can offer an insight into the performance of future architectures. This work targets technology developed for scalable application tracing of communication events. It focuses on extreme-scale simulation of HPC applications and their communication behavior via lightweight parallel discrete event simulation for performance estimation and evaluation. Instead of simply replaying a trace within a simulator, this work promotes the generation of a benchmark from traces. This benchmark is subsequently exposed to simulation using models to reflect the performance characteristics of future-generation HPC systems. This technique provides a number of benefits, such as eliminating the data intensive trace replay and enabling simulations at different scales. The presented work features novel software co-design aspects, combining the ScalaTrace tool to generate scalable trace files, the ScalaBenchGen tool to generate the benchmark, and the xSim tool to assess the benchmark characteristics within a simulator.

  17. Benchmark Problems for Spacecraft Formation Flying Missions

    NASA Technical Reports Server (NTRS)

    Carpenter, J. Russell; Leitner, Jesse A.; Burns, Richard D.; Folta, David C.

    2003-01-01

    To provide high-level focus to distributed space system flight dynamics and control research, several benchmark problems are suggested. These problems are not specific to any current or proposed mission, but instead are intended to capture high-level features that would be generic to many similar missions.

  18. Benchmarking in Universities: League Tables Revisited

    ERIC Educational Resources Information Center

    Turner, David

    2005-01-01

    This paper examines the practice of benchmarking universities using a "league table" approach. Taking the example of the "Sunday Times University League Table", the author reanalyses the descriptive data on UK universities. Using a linear programming technique, data envelope analysis (DEA), the author uses the re-analysis to demonstrate the major…

  19. Benchmarking the ATLAS software through the Kit Validation engine

    NASA Astrophysics Data System (ADS)

    De Salvo, Alessandro; Brasolin, Franco

    2010-04-01

    The measurement of the experiment software performance is a very important metric in order to choose the most effective resources to be used and to discover the bottlenecks of the code implementation. In this work we present the benchmark techniques used to measure the ATLAS software performance through the ATLAS offline testing engine Kit Validation and the online portal Global Kit Validation. The performance measurements, the data collection, the online analysis and display of the results will be presented. The results of the measurement on different platforms and architectures will be shown, giving a full report on the CPU power and memory consumption of the Monte Carlo generation, simulation, digitization and reconstruction of the most CPU-intensive channels. The impact of the multi-core computing on the ATLAS software performance will also be presented, comparing the behavior of different architectures when increasing the number of concurrent processes. The benchmark techniques described in this paper have been used in the HEPiX group since the beginning of 2008 to help defining the performance metrics for the High Energy Physics applications, based on the real experiment software.

  20. Benchmarking the operational search accuracy of a national identification system

    NASA Astrophysics Data System (ADS)

    Suman, Ambika; Whitaker, Geoff

    2005-03-01

    This paper reports on some of the challenges associated with setting up and conducting a full operational benchmark of a palm and fingerprint identification system, based on PITO's own recent experience in this field. The tests described were undertaken as part of the overall evaluation of suppliers tendering for a multi-million-pound contract to deliver a new national automated fingerprint service for the UK (known as IDENT1), as a successor to the existing systems in England and Wales and in Scotland. The emphasis throughout was on 'operationally' representative testing, and it was this that determined the design and scale of the tests, which PITO believes are the largest such tests of a national AFIS ever undertaken. The knowledge gained from performing these benchmark tests has provided PITO with extremely valuable experience in both the theoretical and practical issues surrounding the design and conduct of operational tests on large-scale identification systems, and it is these issues that are discussed in this paper.

  1. VENUS-2 Experimental Benchmark Analysis

    SciTech Connect

    Pavlovichev, A.M.

    2001-09-28

    The VENUS critical facility is a zero-power reactor located at SCK-CEN, Mol, Belgium, which for the VENUS-2 experiment utilized a mixed-oxide core with near-weapons-grade plutonium. In addition to the VENUS-2 core, computational variants based on each fuel type in the VENUS-2 core (3.3 wt.% UO{sub 2}, 4.0 wt.% UO{sub 2}, and 2.0/2.7 wt.% MOX) were also calculated. The VENUS-2 critical configuration and cell variants have been calculated with MCU-REA, a continuous-energy Monte Carlo code system developed at the Russian Research Center ''Kurchatov Institute'' that is used extensively in the Fissile Materials Disposition Program. The calculations yielded a k{sub eff} of 0.99652 {+-} 0.00025 and relative pin powers within 2% of the experimental values for UO{sub 2} pins and within 3% for MOX pins.

  2. RESULTS FOR THE INTERMEDIATE-SPECTRUM ZEUS BENCHMARK OBTAINED WITH NEW 63,65Cu CROSS-SECTION EVALUATIONS

    SciTech Connect

    Sobes, Vladimir; Leal, Luiz C

    2014-01-01

    The four HEU, intermediate-spectrum, copper-reflected Zeus experiments have shown discrepant results between measurement and calculation for the last several major releases of the ENDF library. The four benchmarks show a trend in reported C/E values with increasing energy of average lethargy causing fission. Recently, ORNL has made improvements to the evaluations of three key isotopes involved in the benchmark cases in question: an updated evaluation for 235U and new evaluations of 63,65Cu. This paper presents the benchmarking results for the four intermediate-spectrum Zeus cases using the three updated evaluations.
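
    The spectral trend in C/E described above can be quantified with a simple fit of C/E against the energy of average lethargy causing fission (EALF); the four values below are hypothetical placeholders, not the paper's results.

        import numpy as np

        # Hypothetical k-eff C/E values and EALF (eV) for four intermediate-spectrum cases
        ealf = np.array([0.1, 0.5, 2.0, 9.0])
        ce = np.array([1.0012, 1.0028, 1.0051, 1.0079])

        # Fit C/E against log10(EALF) to express the trend as a slope per decade
        slope, intercept = np.polyfit(np.log10(ealf), ce, 1)
        print(f"C/E trend: {slope:+.4f} per decade of EALF")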

  3. Toxicological Benchmarks for Screening Potential Contaminants of Concern for Effects on Soil and Litter Invertebrates and Heterotrophic Process

    SciTech Connect

    Will, M.E.

    1994-01-01

    This report presents a standard method for deriving benchmarks for the purpose of ''contaminant screening,'' performed by comparing measured ambient concentrations of chemicals with the benchmarks. The work was performed under Work Breakdown Structure 1.4.12.2.3.04.07.02 (Activity Data Sheet 8304). In addition, this report presents sets of data concerning the effects of chemicals in soil on invertebrates and soil microbial processes, benchmarks for chemicals potentially associated with United States Department of Energy sites, and literature describing the experiments from which data were drawn for benchmark derivation.
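
    The screening comparison the report standardizes amounts to flagging analytes whose measured concentration exceeds the corresponding benchmark; the sketch below uses hypothetical benchmarks and measurements (mg/kg soil).

        # Hypothetical soil-invertebrate benchmarks and measured site concentrations (mg/kg)
        benchmarks = {"Cu": 60.0, "Zn": 120.0, "Pb": 50.0}
        measured = {"Cu": 85.0, "Zn": 40.0, "Pb": 55.0}

        # An analyte becomes a contaminant of potential concern when it exceeds its benchmark
        copcs = [a for a, c in measured.items() if c > benchmarks[a]]
        print("Contaminants of potential concern:", copcs)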

  4. A chemical EOR benchmark study of different reservoir simulators

    NASA Astrophysics Data System (ADS)

    Goudarzi, Ali; Delshad, Mojdeh; Sepehrnoori, Kamy

    2016-09-01

    Interest in chemical EOR processes has intensified in recent years due to advancements in chemical formulations and injection techniques. Injecting polymer (P), surfactant/polymer (SP), and alkaline/surfactant/polymer (ASP) solutions improves sweep and displacement efficiencies, with the aim of increasing oil production in both secondary and tertiary floods. Chemical flooding is attracting interest for a range of challenging situations, including high-temperature reservoirs, formations with extreme salinity and hardness, naturally fractured carbonates, and sandstone reservoirs with heavy and viscous crude oils. More oil reservoirs are reaching maturity, where secondary polymer floods and tertiary surfactant methods become increasingly important. This significance has added to the industry's interest in using reservoir simulators as tools for reservoir evaluation and management, to minimize costs and increase process efficiency. Reservoir simulators with special features are needed to represent the coupled chemical and physical processes present in chemical EOR. The simulators must first be validated against well-controlled laboratory and pilot-scale experiments to reliably predict full-field implementations. The available laboratory-scale data include (1) phase behavior and rheological data and (2) results of secondary and tertiary coreflood experiments for P, SP, and ASP floods under reservoir conditions, i.e., chemical retentions, pressure drop, and oil recovery. Data collected from corefloods are used as benchmark tests comparing numerical reservoir simulators with chemical EOR modeling capabilities, such as STARS of CMG, ECLIPSE-100 of Schlumberger, and REVEAL of Petroleum Experts. The research UTCHEM simulator from The University of Texas at Austin is also included, since it has been the benchmark for chemical flooding simulation for over 25 years. The results of this benchmark comparison will be utilized to improve

  5. Finding a benchmark for monitoring hospital cleanliness.

    PubMed

    Mulvey, D; Redding, P; Robertson, C; Woodall, C; Kingsmore, P; Bedwell, D; Dancer, S J

    2011-01-01

    This study evaluated three methods for monitoring hospital cleanliness. The aim was to find a benchmark that could indicate risk to patients from a contaminated environment. We performed visual monitoring, ATP bioluminescence and microbiological screening of five clinical surfaces before and after detergent-based cleaning on two wards over a four-week period. Five additional sites that were not featured in the routine domestic specification were also sampled. Measurements from all three methods were integrated and compared in order to choose appropriate levels for routine monitoring. We found that visual assessment reflected neither ATP values nor environmental contamination with microbial flora, including Staphylococcus aureus and meticillin-resistant S. aureus (MRSA). There was a relationship between microbial growth categories and the proportion of ATP values exceeding a chosen benchmark, but neither reliably predicted the presence of S. aureus or MRSA. ATP values were occasionally diverse. Detergent-based cleaning reduced levels of organic soil by 32% (95% confidence interval: 16-44%; P<0.001) but did not necessarily eliminate indicator staphylococci, some of which survived the cleaning process. An ATP benchmark value of 100 relative light units offered the closest correlation with microbial growth levels <2.5 cfu/cm(2) (receiver operating characteristic (ROC) curve: sensitivity 57%, specificity 57%). In conclusion, microbiological and ATP monitoring confirmed environmental contamination and persistence of hospital pathogens, and measured the effect of current cleaning practices on the environment. This study has provided provisional benchmarks to assist with future assessment of hospital cleanliness. Further work is required to refine the practical sampling strategy and choice of benchmarks. PMID:21129820
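
    Scoring a candidate ATP benchmark against the microbiological classification, as the reported sensitivity and specificity do, can be sketched as follows; the paired measurements are hypothetical stand-ins for the study's data.

        import numpy as np

        # Hypothetical paired surface measurements: ATP (relative light units)
        # and aerobic growth (cfu/cm^2) at the same sites
        atp = np.array([40, 150, 80, 220, 95, 300, 60, 130])
        cfu = np.array([1.0, 4.0, 2.0, 6.0, 3.1, 8.5, 0.5, 2.2])

        test_pos = atp > 100    # "fail" under the candidate ATP benchmark
        cond_pos = cfu >= 2.5   # "contaminated" under the microbial growth cut-off

        sensitivity = (test_pos & cond_pos).sum() / cond_pos.sum()
        specificity = (~test_pos & ~cond_pos).sum() / (~cond_pos).sum()
        print(f"sensitivity = {sensitivity:.2f}, specificity = {specificity:.2f}")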

  6. Benchmark 1 - Failure Prediction after Cup Drawing, Reverse Redrawing and Expansion Part A: Benchmark Description

    NASA Astrophysics Data System (ADS)

    Watson, Martin; Dick, Robert; Huang, Y. Helen; Lockley, Andrew; Cardoso, Rui; Santos, Abel

    2016-08-01

    This Benchmark is designed to predict the fracture of a food can after drawing, reverse redrawing and expansion. The aim is to assess different sheet metal forming difficulties such as plastic anisotropic earing and failure models (strain and stress based Forming Limit Diagrams) under complex nonlinear strain paths. To study these effects, two distinct materials, TH330 steel (unstoved) and AA5352 aluminum alloy are considered in this Benchmark. Problem description, material properties, and simulation reports with experimental data are summarized.

  7. Using Benchmarking To Influence Tuition and Fee Decisions.

    ERIC Educational Resources Information Center

    Hubbell, Loren W. Loomis; Massa, Robert J.; Lapovsky, Lucie

    2002-01-01

    Discusses the use of benchmarking in managing enrollment. Using a case study, illustrates how benchmarking can help administrators develop strategies for planning and implementing admissions and pricing practices. (EV)

  8. 42 CFR 425.602 - Establishing the benchmark.

    Code of Federal Regulations, 2014 CFR

    2014-10-01

    ... computing an ACO's fixed historical benchmark that is adjusted for historical growth and beneficiary... program. (2) Makes separate expenditure calculations for each of the following populations of... making up the historical benchmark, determines national growth rates and trends expenditures for...

  9. 42 CFR 425.602 - Establishing the benchmark.

    Code of Federal Regulations, 2012 CFR

    2012-10-01

    ... computing an ACO's fixed historical benchmark that is adjusted for historical growth and beneficiary... program. (2) Makes separate expenditure calculations for each of the following populations of... making up the historical benchmark, determines national growth rates and trends expenditures for...

  10. The Impact Hydrocode Benchmark and Validation Project: Initial Results

    NASA Astrophysics Data System (ADS)

    Pierazzo, E.; Artemieva, N.; Asphaug, E.; Cazamias, J.; Coker, R.; Collins, G. S.; Gisler, G.; Holsapple, K. A.; Housen, K. R.; Ivanov, B.; Johnson, C.; Korycansky, D. G.; Melosh, H. J.; Taylor, E. A.; Turtle, E. P.; Wünnemann, K.

    2007-03-01

    This work presents initial results of a validation and benchmarking effort from the impact cratering and explosion community. Several impact codes routinely used to model impact and explosion events are being compared using simple benchmark tests.

  11. Results Oriented Benchmarking: The Evolution of Benchmarking at NASA from Competitive Comparisons to World Class Space Partnerships

    NASA Technical Reports Server (NTRS)

    Bell, Michael A.

    1999-01-01

    Informal benchmarking using personal or professional networks has taken place for many years at the Kennedy Space Center (KSC). The National Aeronautics and Space Administration (NASA) recognized early on the need to formalize the benchmarking process for better utilization of resources and improved benchmarking performance. The need to compete in a faster, better, cheaper environment has been the catalyst for formalizing these efforts. A pioneering benchmarking consortium was chartered at KSC in January 1994. The consortium, known as the Kennedy Benchmarking Clearinghouse (KBC), is a collaborative effort of NASA and all major KSC contractors. The charter of this consortium is to facilitate effective benchmarking and leverage the resulting quality improvements across KSC. The KBC acts as a resource with experienced facilitators and a proven process. One of the initial actions of the KBC was to develop a holistic methodology for Center-wide benchmarking. This approach integrates the best features of proven benchmarking models (i.e., Camp, Spendolini, Watson, and Balm). This cost-effective alternative to conventional benchmarking approaches has provided a foundation for consistent benchmarking at KSC through the development of common terminology, tools, and techniques. Through these efforts, a foundation and infrastructure have been built which allow short-duration benchmarking studies yielding results gleaned from world-class partners that can be readily implemented. The KBC has been recognized with the Silver Medal Award (in the applied research category) from the International Benchmarking Clearinghouse.

  12. Middleware Evaluation and Benchmarking for Use in Mission Operations Centers

    NASA Technical Reports Server (NTRS)

    Antonucci, Rob; Waktola, Waka

    2005-01-01

    Middleware technologies have been promoted as timesaving, cost-cutting alternatives to the point-to-point communication used in traditional mission operations systems. However, missions have been slow to adopt the new technology. The lack of existing middleware-based missions has given rise to uncertainty about middleware's ability to perform in an operational setting. Most mission architects are also unfamiliar with the technology and do not know the benefits and detriments of architectural choices - or even what choices are available. We will present the findings of a study that evaluated several middleware options specifically for use in a mission operations system. We will address some common misconceptions regarding the applicability of middleware-based architectures, and we will identify the design decisions and tradeoffs that must be made when choosing a middleware solution. The Middleware Comparison and Benchmark Study was conducted at NASA Goddard Space Flight Center to comprehensively evaluate candidate middleware products, compare and contrast the performance of middleware solutions with the traditional point-to-point socket approach, and assess data delivery and reliability strategies. The study focused on requirements of the Global Precipitation Measurement (GPM) mission, validating the potential use of middleware in the GPM mission ground system. The study was jointly funded by GPM and the Goddard Mission Services Evolution Center (GMSEC), a virtual organization for providing mission-enabling solutions and promoting the use of appropriate new technologies for mission support. The study was broken into two phases. To perform the generic middleware benchmarking and performance analysis, a network was created with data producers and consumers passing data between themselves. The benchmark monitored the delay, throughput, and reliability of the data as the characteristics were changed. Measurements were taken under a variety of topologies, data demands
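
    The delay and throughput measurements such a benchmark collects follow a simple timing pattern, sketched below with a stand-in transport; a real study would wrap the middleware's own publish or send call (the function names here are hypothetical).

        import time

        def benchmark(send, n_msgs, payload):
            """Measure mean per-message delay and overall throughput of a transport call."""
            delays = []
            t0 = time.perf_counter()
            for _ in range(n_msgs):
                t = time.perf_counter()
                send(payload)
                delays.append(time.perf_counter() - t)
            elapsed = time.perf_counter() - t0
            return sum(delays) / n_msgs, n_msgs / elapsed

        # Stand-in transport: a no-op in place of a middleware publish call
        mean_delay, throughput = benchmark(lambda msg: None, 10_000, b"x" * 1024)
        print(f"mean delay {mean_delay * 1e6:.1f} us, throughput {throughput:.0f} msg/s")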

  13. Experience With the SCALE Criticality Safety Cross Section Libraries

    SciTech Connect

    Bowman, S.M.

    2000-08-21

    This report provides detailed information on the SCALE criticality safety cross-section libraries. Areas covered include the origins of the libraries, the data on which they are based, how they were generated, past experience and validations, and performance comparisons with measured critical experiments and numerical benchmarks. The performance of the SCALE criticality safety cross-section libraries on various types of fissile systems is examined in detail. Most of the performance areas are demonstrated by examining the performance of the libraries against critical experiments to show general trends and weaknesses. In areas where directly applicable critical experiments do not exist, performance is examined based on general knowledge of the strengths and weaknesses of the cross sections. In this case, experience in the use of the cross sections and comparisons with the results of other libraries on the same systems are relied on to establish the acceptability of applying a particular SCALE library to a particular fissile system. This report should aid in establishing when a SCALE cross-section library would be expected to perform acceptably and where there are known or suspected deficiencies that would cause the calculations to be less reliable. To determine the acceptability of a library for a particular application, the calculational bias of the library should be established by directly applicable critical experiments.

  14. Benchmarking and Threshold Standards in Higher Education. Staff and Educational Development Series.

    ERIC Educational Resources Information Center

    Smith, Helen, Ed.; Armstrong, Michael, Ed.; Brown, Sally, Ed.

    This book explores the issues involved in developing standards in higher education, examining the practical issues involved in benchmarking and offering a critical analysis of the problems associated with this developmental tool. The book focuses primarily on experience in the United Kingdom (UK), but looks also at international activity in this…

  15. International E-Benchmarking: Flexible Peer Development of Authentic Learning Principles in Higher Education

    ERIC Educational Resources Information Center

    Leppisaari, Irja; Vainio, Leena; Herrington, Jan; Im, Yeonwook

    2011-01-01

    More and more, social technologies and virtual work methods are facilitating new ways of crossing boundaries in professional development and international collaborations. This paper examines the peer development of higher education teachers through the experiences of the IVBM project (International Virtual Benchmarking, 2009-2010). The…

  16. Winning Strategy: Set Benchmarks of Early Success to Build Momentum for the Long Term

    ERIC Educational Resources Information Center

    Spiro, Jody

    2012-01-01

    Change is a highly personal experience. Everyone participating in the effort has different reactions to change, different concerns, and different motivations for being involved. The smart change leader sets benchmarks along the way so there are guideposts and pause points instead of an endless change process. "Early wins"--a term used to describe…

  17. 29 CFR 1952.163 - Compliance staffing benchmarks.

    Code of Federal Regulations, 2010 CFR

    2010-07-01

    ... 29 Labor 9 2010-07-01 2010-07-01 false Compliance staffing benchmarks. 1952.163 Section 1952.163... Compliance staffing benchmarks. Under the terms of the 1978 Court Order in AFL-CIO v. Marshall, compliance staffing levels (benchmarks) necessary for a “fully effective” enforcement program were required to...

  18. 29 CFR 1952.113 - Compliance staffing benchmarks.

    Code of Federal Regulations, 2010 CFR

    2010-07-01

    ... 29 Labor 9 2010-07-01 2010-07-01 false Compliance staffing benchmarks. 1952.113 Section 1952.113... Compliance staffing benchmarks. Under the terms of the 1978 Court Order in AFL-CIO v. Marshall, compliance staffing levels (benchmarks) necessary for a “fully effective” enforcement program were required to...

  19. 29 CFR 1952.213 - Compliance staffing benchmarks.

    Code of Federal Regulations, 2010 CFR

    2010-07-01

    ... 29 Labor 9 2010-07-01 2010-07-01 false Compliance staffing benchmarks. 1952.213 Section 1952.213... Compliance staffing benchmarks. Under the terms of the 1978 Court Order in AFL-CIO v. Marshall compliance staffing levels (benchmarks) necessary for a “fully effective” enforcement program were required to...

  20. 29 CFR 1952.343 - Compliance staffing benchmarks.

    Code of Federal Regulations, 2010 CFR

    2010-07-01

    ... 29 Labor 9 2010-07-01 2010-07-01 false Compliance staffing benchmarks. 1952.343 Section 1952.343... Compliance staffing benchmarks. Under the terms of the 1978 Court Order in AFL-CIO v. Marshall, Compliance staffing levels (benchmarks) necessary for a “fully effective” enforcement program were required to...

  1. 29 CFR 1952.373 - Compliance staffing benchmarks.

    Code of Federal Regulations, 2010 CFR

    2010-07-01

    ... 29 Labor 9 2010-07-01 2010-07-01 false Compliance staffing benchmarks. 1952.373 Section 1952.373... Compliance staffing benchmarks. Under the terms of the 1978 Court Order in AFL-CIO v. Marshall compliance staffing levels (benchmarks) necessary for a “fully effective” enforcement program were required to...

  2. 29 CFR 1952.93 - Compliance staffing benchmarks.

    Code of Federal Regulations, 2010 CFR

    2010-07-01

    ... 29 Labor 9 2010-07-01 2010-07-01 false Compliance staffing benchmarks. 1952.93 Section 1952.93....93 Compliance staffing benchmarks. Under the terms of the 1978 Court Order in AFL-CIO v. Marshall compliance staffing levels (benchmarks) necessary for a “fully effective” enforcement program were...

  3. 29 CFR 1952.353 - Compliance staffing benchmarks.

    Code of Federal Regulations, 2010 CFR

    2010-07-01

    ... 29 Labor 9 2010-07-01 2010-07-01 false Compliance staffing benchmarks. 1952.353 Section 1952.353... Compliance staffing benchmarks. Under the terms of the 1978 Court Order in AFL-CIO v. Marshall, compliance staffing levels (benchmarks) necessary for a “fully effective” enforcement program were required to...

  4. 29 CFR 1952.233 - Compliance staffing benchmarks.

    Code of Federal Regulations, 2010 CFR

    2010-07-01

    ... 29 Labor 9 2010-07-01 2010-07-01 false Compliance staffing benchmarks. 1952.233 Section 1952.233... Compliance staffing benchmarks. Under the terms of the 1978 Court Order in AFL-CIO v. Marshall compliance staffing levels (benchmarks) necessary for a “fully effective” enforcement program were required to...

  5. 29 CFR 1952.223 - Compliance staffing benchmarks.

    Code of Federal Regulations, 2010 CFR

    2010-07-01

    ... 29 Labor 9 2010-07-01 2010-07-01 false Compliance staffing benchmarks. 1952.223 Section 1952.223... Compliance staffing benchmarks. Under the terms of the 1978 Court Order in AFL-CIO v. Marshall compliance staffing levels (benchmarks) necessary for a “fully effective” enforcement program were required to...

  6. 29 CFR 1952.293 - Compliance staffing benchmarks.

    Code of Federal Regulations, 2010 CFR

    2010-07-01

    ... 29 Labor 9 2010-07-01 2010-07-01 false Compliance staffing benchmarks. 1952.293 Section 1952.293... Compliance staffing benchmarks. Under the terms of the 1978 Court Order in AFL-CIO v. Marshall compliance staffing levels (benchmarks) necessary for a “fully effective” enforcement program were required to...

  7. 47 CFR 69.108 - Transport rate benchmark.

    Code of Federal Regulations, 2010 CFR

    2010-10-01

    ... 47 Telecommunication 3 2010-10-01 2010-10-01 false Transport rate benchmark. 69.108 Section 69.108... Computation of Charges § 69.108 Transport rate benchmark. (a) For transport charges computed in accordance with this subpart, the DS3-to-DS1 benchmark ratio shall be calculated as follows: the telephone...

  8. 29 CFR 1952.323 - Compliance staffing benchmarks.

    Code of Federal Regulations, 2010 CFR

    2010-07-01

    ... 29 Labor 9 2010-07-01 2010-07-01 false Compliance staffing benchmarks. 1952.323 Section 1952.323... Compliance staffing benchmarks. Under the terms of the 1978 Court Order in AFL-CIO v. Marshall compliance staffing levels (benchmarks) necessary for a “fully effective” enforcement program were required to...

  9. Discovering and Implementing Best Practices to Strengthen SEAs: Collaborative Benchmarking

    ERIC Educational Resources Information Center

    Building State Capacity and Productivity Center, 2013

    2013-01-01

    This paper is written for state educational agency (SEA) leaders who are considering the benefits of collaborative benchmarking, and it addresses the following questions: (1) What does benchmarking of best practices entail?; (2) How does "collaborative benchmarking" enhance the process?; (3) How do SEAs control the process so that "their" needs…

  10. 29 CFR 1952.163 - Compliance staffing benchmarks.

    Code of Federal Regulations, 2011 CFR

    2011-07-01

    ... 29 Labor 9 2011-07-01 2011-07-01 false Compliance staffing benchmarks. 1952.163 Section 1952.163... Compliance staffing benchmarks. Under the terms of the 1978 Court Order in AFL-CIO v. Marshall, compliance staffing levels (benchmarks) necessary for a “fully effective” enforcement program were required to...

  11. 29 CFR 1952.93 - Compliance staffing benchmarks.

    Code of Federal Regulations, 2013 CFR

    2013-07-01

    ... 29 Labor 9 2013-07-01 2013-07-01 false Compliance staffing benchmarks. 1952.93 Section 1952.93....93 Compliance staffing benchmarks. Under the terms of the 1978 Court Order in AFL-CIO v. Marshall compliance staffing levels (benchmarks) necessary for a “fully effective” enforcement program were...

  12. 29 CFR 1952.323 - Compliance staffing benchmarks.

    Code of Federal Regulations, 2012 CFR

    2012-07-01

    ... 29 Labor 9 2012-07-01 2012-07-01 false Compliance staffing benchmarks. 1952.323 Section 1952.323... Compliance staffing benchmarks. Under the terms of the 1978 Court Order in AFL-CIO v. Marshall compliance staffing levels (benchmarks) necessary for a “fully effective” enforcement program were required to...

  13. 29 CFR 1952.353 - Compliance staffing benchmarks.

    Code of Federal Regulations, 2012 CFR

    2012-07-01

    ... 29 Labor 9 2012-07-01 2012-07-01 false Compliance staffing benchmarks. 1952.353 Section 1952.353... Compliance staffing benchmarks. Under the terms of the 1978 Court Order in AFL-CIO v. Marshall, compliance staffing levels (benchmarks) necessary for a “fully effective” enforcement program were required to...

  14. 29 CFR 1952.353 - Compliance staffing benchmarks.

    Code of Federal Regulations, 2013 CFR

    2013-07-01

    ... 29 Labor 9 2013-07-01 2013-07-01 false Compliance staffing benchmarks. 1952.353 Section 1952.353... Compliance staffing benchmarks. Under the terms of the 1978 Court Order in AFL-CIO v. Marshall, compliance staffing levels (benchmarks) necessary for a “fully effective” enforcement program were required to...

  15. 29 CFR 1952.113 - Compliance staffing benchmarks.

    Code of Federal Regulations, 2013 CFR

    2013-07-01

    ... 29 Labor 9 2013-07-01 2013-07-01 false Compliance staffing benchmarks. 1952.113 Section 1952.113... Compliance staffing benchmarks. Under the terms of the 1978 Court Order in AFL-CIO v. Marshall, compliance staffing levels (benchmarks) necessary for a “fully effective” enforcement program were required to...

  16. 29 CFR 1952.223 - Compliance staffing benchmarks.

    Code of Federal Regulations, 2013 CFR

    2013-07-01

    ... 29 Labor 9 2013-07-01 2013-07-01 false Compliance staffing benchmarks. 1952.223 Section 1952.223... Compliance staffing benchmarks. Under the terms of the 1978 Court Order in AFL-CIO v. Marshall compliance staffing levels (benchmarks) necessary for a “fully effective” enforcement program were required to...

  17. 29 CFR 1952.353 - Compliance staffing benchmarks.

    Code of Federal Regulations, 2014 CFR

    2014-07-01

    ... 29 Labor 9 2014-07-01 2014-07-01 false Compliance staffing benchmarks. 1952.353 Section 1952.353... Compliance staffing benchmarks. Under the terms of the 1978 Court Order in AFL-CIO v. Marshall, compliance staffing levels (benchmarks) necessary for a “fully effective” enforcement program were required to...

  18. 29 CFR 1952.323 - Compliance staffing benchmarks.

    Code of Federal Regulations, 2013 CFR

    2013-07-01

    ... 29 Labor 9 2013-07-01 2013-07-01 false Compliance staffing benchmarks. 1952.323 Section 1952.323... Compliance staffing benchmarks. Under the terms of the 1978 Court Order in AFL-CIO v. Marshall compliance staffing levels (benchmarks) necessary for a “fully effective” enforcement program were required to...

  19. 29 CFR 1952.213 - Compliance staffing benchmarks.

    Code of Federal Regulations, 2012 CFR

    2012-07-01

    ... 29 Labor 9 2012-07-01 2012-07-01 false Compliance staffing benchmarks. 1952.213 Section 1952.213... Compliance staffing benchmarks. Under the terms of the 1978 Court Order in AFL-CIO v. Marshall compliance staffing levels (benchmarks) necessary for a “fully effective” enforcement program were required to...

  20. 29 CFR 1952.293 - Compliance staffing benchmarks.

    Code of Federal Regulations, 2014 CFR

    2014-07-01

    ... 29 Labor 9 2014-07-01 2014-07-01 false Compliance staffing benchmarks. 1952.293 Section 1952.293... Compliance staffing benchmarks. Under the terms of the 1978 Court Order in AFL-CIO v. Marshall compliance staffing levels (benchmarks) necessary for a “fully effective” enforcement program were required to...

  1. 45 CFR 156.100 - State selection of benchmark.

    Code of Federal Regulations, 2014 CFR

    2014-10-01

    ... 45 Public Welfare 1 2014-10-01 2014-10-01 false State selection of benchmark. 156.100 Section 156... Essential Health Benefits Package § 156.100 State selection of benchmark. Each State may identify a single EHB-benchmark plan according to the selection criteria described below: (a) State selection of...

  2. 29 CFR 1952.343 - Compliance staffing benchmarks.

    Code of Federal Regulations, 2011 CFR

    2011-07-01

    ... 29 Labor 9 2011-07-01 2011-07-01 false Compliance staffing benchmarks. 1952.343 Section 1952.343... Compliance staffing benchmarks. Under the terms of the 1978 Court Order in AFL-CIO v. Marshall, Compliance staffing levels (benchmarks) necessary for a “fully effective” enforcement program were required to...

  3. 29 CFR 1952.373 - Compliance staffing benchmarks.

    Code of Federal Regulations, 2011 CFR

    2011-07-01

    ... 29 Labor 9 2011-07-01 2011-07-01 false Compliance staffing benchmarks. 1952.373 Section 1952.373... Compliance staffing benchmarks. Under the terms of the 1978 Court Order in AFL-CIO v. Marshall compliance staffing levels (benchmarks) necessary for a “fully effective” enforcement program were required to...

  4. 29 CFR 1952.293 - Compliance staffing benchmarks.

    Code of Federal Regulations, 2012 CFR

    2012-07-01

    ... 29 Labor 9 2012-07-01 2012-07-01 false Compliance staffing benchmarks. 1952.293 Section 1952.293... Compliance staffing benchmarks. Under the terms of the 1978 Court Order in AFL-CIO v. Marshall compliance staffing levels (benchmarks) necessary for a “fully effective” enforcement program were required to...

  5. 29 CFR 1952.113 - Compliance staffing benchmarks.

    Code of Federal Regulations, 2014 CFR

    2014-07-01

    ... 29 Labor 9 2014-07-01 2014-07-01 false Compliance staffing benchmarks. 1952.113 Section 1952.113... Compliance staffing benchmarks. Under the terms of the 1978 Court Order in AFL-CIO v. Marshall, compliance staffing levels (benchmarks) necessary for a “fully effective” enforcement program were required to...

  6. 29 CFR 1952.233 - Compliance staffing benchmarks.

    Code of Federal Regulations, 2014 CFR

    2014-07-01

    ... 29 Labor 9 2014-07-01 2014-07-01 false Compliance staffing benchmarks. 1952.233 Section 1952.233... Compliance staffing benchmarks. Under the terms of the 1978 Court Order in AFL-CIO v. Marshall compliance staffing levels (benchmarks) necessary for a “fully effective” enforcement program were required to...

  7. 29 CFR 1952.343 - Compliance staffing benchmarks.

    Code of Federal Regulations, 2013 CFR

    2013-07-01

    ... 29 Labor 9 2013-07-01 2013-07-01 false Compliance staffing benchmarks. 1952.343 Section 1952.343... Compliance staffing benchmarks. Under the terms of the 1978 Court Order in AFL-CIO v. Marshall, Compliance staffing levels (benchmarks) necessary for a “fully effective” enforcement program were required to...

  8. 29 CFR 1952.293 - Compliance staffing benchmarks.

    Code of Federal Regulations, 2013 CFR

    2013-07-01

    ... 29 Labor 9 2013-07-01 2013-07-01 false Compliance staffing benchmarks. 1952.293 Section 1952.293... Compliance staffing benchmarks. Under the terms of the 1978 Court Order in AFL-CIO v. Marshall compliance staffing levels (benchmarks) necessary for a “fully effective” enforcement program were required to...

  9. 29 CFR 1952.93 - Compliance staffing benchmarks.

    Code of Federal Regulations, 2011 CFR

    2011-07-01

    ... 29 Labor 9 2011-07-01 2011-07-01 false Compliance staffing benchmarks. 1952.93 Section 1952.93....93 Compliance staffing benchmarks. Under the terms of the 1978 Court Order in AFL-CIO v. Marshall compliance staffing levels (benchmarks) necessary for a “fully effective” enforcement program were...

  10. 29 CFR 1952.323 - Compliance staffing benchmarks.

    Code of Federal Regulations, 2011 CFR

    2011-07-01

    ... 29 Labor 9 2011-07-01 2011-07-01 false Compliance staffing benchmarks. 1952.323 Section 1952.323... Compliance staffing benchmarks. Under the terms of the 1978 Court Order in AFL-CIO v. Marshall compliance staffing levels (benchmarks) necessary for a “fully effective” enforcement program were required to...

  11. 41 CFR 60-300.45 - Benchmarks for hiring.

    Code of Federal Regulations, 2014 CFR

    2014-07-01

    ... 41 Public Contracts and Property Management 1 2014-07-01 2014-07-01 false Benchmarks for hiring... VETERANS, AND ARMED FORCES SERVICE MEDAL VETERANS Affirmative Action Program § 60-300.45 Benchmarks for hiring. The benchmark is not a rigid and inflexible quota which must be met, nor is it to be...

  12. 47 CFR 69.108 - Transport rate benchmark.

    Code of Federal Regulations, 2012 CFR

    2012-10-01

    ... 47 Telecommunication 3 2012-10-01 2012-10-01 false Transport rate benchmark. 69.108 Section 69.108... Computation of Charges § 69.108 Transport rate benchmark. (a) For transport charges computed in accordance with this subpart, the DS3-to-DS1 benchmark ratio shall be calculated as follows: the telephone...

  13. 47 CFR 69.108 - Transport rate benchmark.

    Code of Federal Regulations, 2011 CFR

    2011-10-01

    ... 47 Telecommunication 3 2011-10-01 2011-10-01 false Transport rate benchmark. 69.108 Section 69.108... Computation of Charges § 69.108 Transport rate benchmark. (a) For transport charges computed in accordance with this subpart, the DS3-to-DS1 benchmark ratio shall be calculated as follows: the telephone...

  14. 29 CFR 1952.93 - Compliance staffing benchmarks.

    Code of Federal Regulations, 2012 CFR

    2012-07-01

    ... 29 Labor 9 2012-07-01 2012-07-01 false Compliance staffing benchmarks. 1952.93 Section 1952.93....93 Compliance staffing benchmarks. Under the terms of the 1978 Court Order in AFL-CIO v. Marshall compliance staffing levels (benchmarks) necessary for a “fully effective” enforcement program were...

  15. 29 CFR 1952.343 - Compliance staffing benchmarks.

    Code of Federal Regulations, 2014 CFR

    2014-07-01

    ... 29 Labor 9 2014-07-01 2014-07-01 false Compliance staffing benchmarks. 1952.343 Section 1952.343... Compliance staffing benchmarks. Under the terms of the 1978 Court Order in AFL-CIO v. Marshall, Compliance staffing levels (benchmarks) necessary for a “fully effective” enforcement program were required to...

  16. 29 CFR 1952.113 - Compliance staffing benchmarks.

    Code of Federal Regulations, 2011 CFR

    2011-07-01

    ... 29 Labor 9 2011-07-01 2011-07-01 false Compliance staffing benchmarks. 1952.113 Section 1952.113... Compliance staffing benchmarks. Under the terms of the 1978 Court Order in AFL-CIO v. Marshall, compliance staffing levels (benchmarks) necessary for a “fully effective” enforcement program were required to...

  17. 29 CFR 1952.213 - Compliance staffing benchmarks.

    Code of Federal Regulations, 2014 CFR

    2014-07-01

    ... 29 Labor 9 2014-07-01 2014-07-01 false Compliance staffing benchmarks. 1952.213 Section 1952.213... Compliance staffing benchmarks. Under the terms of the 1978 Court Order in AFL-CIO v. Marshall compliance staffing levels (benchmarks) necessary for a “fully effective” enforcement program were required to...

  18. 29 CFR 1952.373 - Compliance staffing benchmarks.

    Code of Federal Regulations, 2013 CFR

    2013-07-01

    ... 29 Labor 9 2013-07-01 2013-07-01 false Compliance staffing benchmarks. 1952.373 Section 1952.373... Compliance staffing benchmarks. Under the terms of the 1978 Court Order in AFL-CIO v. Marshall compliance staffing levels (benchmarks) necessary for a “fully effective” enforcement program were required to...

  19. 29 CFR 1952.93 - Compliance staffing benchmarks.

    Code of Federal Regulations, 2014 CFR

    2014-07-01

    ... 29 Labor 9 2014-07-01 2014-07-01 false Compliance staffing benchmarks. 1952.93 Section 1952.93....93 Compliance staffing benchmarks. Under the terms of the 1978 Court Order in AFL-CIO v. Marshall compliance staffing levels (benchmarks) necessary for a “fully effective” enforcement program were...

  20. 47 CFR 69.108 - Transport rate benchmark.

    Code of Federal Regulations, 2014 CFR

    2014-10-01

    ... 47 Telecommunication 3 2014-10-01 2014-10-01 false Transport rate benchmark. 69.108 Section 69.108... Computation of Charges § 69.108 Transport rate benchmark. (a) For transport charges computed in accordance with this subpart, the DS3-to-DS1 benchmark ratio shall be calculated as follows: the telephone...