Science.gov

Sample records for acceptable benchmark experiment

  1. Benchmark physics experiments for SP-100

    NASA Astrophysics Data System (ADS)

    Olsen, David N.; Carpenter, Stuart G.; Grasseschi, Gary L.; Smith, Dale M.

    A space nuclear power system (SNPS) benchmark reactor physics program was performed at Argonne's Zero Power Physics Reactor (ZPPR). Two uranium-fuelled, BeO-reflected reactors were assembled to test 300 kWe conceptual designs considered for the SP-100. The major difference between configurations was the reactivity control concept. Program goals were to aid designers in evaluating SP-100 designs and to provide guidance in defining a series of engineering mockup criticals to be performed in support of the ground engineering test. ZPPR-16 was a short program aimed at providing basic physics data for cores representing early SP-100 designs. All measurement results from the experimental program are available. Initial analysis, using standard deterministic methods, shows significant errors when compared against the measurements. Calculational difficulties are compounded by the need to model a natural B4C/graphite room-return shield used in the ZPPR experiments.
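
    Validation studies of this kind commonly quantify agreement through the calculated-to-experimental ratio and the corresponding reactivity bias; a standard formulation (generic validation practice, not taken from this record) is

        C/E = k_{\mathrm{eff}}^{\mathrm{calc}} / k_{\mathrm{eff}}^{\mathrm{exp}}, \qquad
        \Delta\rho = \frac{1}{k_{\mathrm{eff}}^{\mathrm{exp}}} - \frac{1}{k_{\mathrm{eff}}^{\mathrm{calc}}}

    where a C/E far from unity signals the kind of significant error reported above.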

  2. Companies' opinions and acceptance of global food safety initiative benchmarks after implementation.

    PubMed

    Crandall, Phil; Van Loo, Ellen J; O'Bryan, Corliss A; Mauromoustakos, Andy; Yiannas, Frank; Dyenson, Natalie; Berdnik, Irina

    2012-09-01

    International attention has been focused on minimizing costs that may unnecessarily raise food prices. One important aspect to consider is the redundant and overlapping costs of food safety audits. The Global Food Safety Initiative (GFSI) has devised benchmarked schemes based on existing international food safety standards for use as a unifying standard accepted by many retailers. The present study was conducted to evaluate the impact of the decision made by Walmart Stores (Bentonville, AR) to require their suppliers to become GFSI compliant. An online survey of 174 retail suppliers was conducted to assess food suppliers' opinions of this requirement and the benefits suppliers realized when they transitioned from their previous food safety systems. The most common reason for becoming GFSI compliant was to meet customers' requirements; thus, supplier implementation of the GFSI standards was not entirely voluntary. Other reasons given for compliance were enhancing food safety and remaining competitive. About 54% of food processing plants using GFSI benchmarked schemes followed the guidelines of Safe Quality Food 2000 and 37% followed those of the British Retail Consortium. At the supplier level, 58% followed Safe Quality Food 2000 and 31% followed the British Retail Consortium. Respondents reported that the certification process took about 10 months. The most common reason for selecting a certain GFSI benchmarked scheme was that it was widely accepted by customers (retailers). Four other common reasons were (i) the standard has a good reputation in the industry, (ii) the standard was recommended by others, (iii) the standard is most often used in the industry, and (iv) the standard was required by one of their customers. Most suppliers agreed that increased safety of their products was required to comply with GFSI benchmarked schemes. They also agreed that the GFSI required a more carefully documented food safety management system, which often required

  3. Benchmarking NMR experiments: A relational database of protein pulse sequences

    NASA Astrophysics Data System (ADS)

    Senthamarai, Russell R. P.; Kuprov, Ilya; Pervushin, Konstantin

    2010-03-01

    Systematic benchmarking of multi-dimensional protein NMR experiments is a critical prerequisite for optimal allocation of NMR resources for structural analysis of challenging proteins, e.g. large proteins with limited solubility or proteins prone to aggregation. We propose a set of benchmarking parameters for essential protein NMR experiments organized into a lightweight (single XML file) relational database (RDB), which includes all the necessary auxiliaries (waveforms, decoupling sequences, calibration tables, setup algorithms and an RDB management system). The database is interfaced to the Spinach library (http://spindynamics.org), which enables accurate simulation and benchmarking of NMR experiments on large spin systems. A key feature is the ability to use a single user-specified spin system to simulate the majority of deposited solution-state NMR experiments, thus providing the (hitherto unavailable) unified framework for pulse sequence evaluation. This development enables prediction of the relative sensitivity of deposited implementations of NMR experiments, thus providing a basis for comparison, optimization and, eventually, automation of NMR analysis. The benchmarking is demonstrated with two proteins: the 170-amino-acid I domain of αXβ2 integrin and the 440-amino-acid NS3 helicase.
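
    As a concrete illustration, a single-file XML database of this kind can be queried with nothing more than the Python standard library. The file, element, and attribute names below are hypothetical placeholders, not the actual schema of the deposited RDB:

        # Sketch of querying a pulse-sequence benchmark database stored in a
        # single XML file. Element/attribute names are hypothetical.
        import xml.etree.ElementTree as ET

        tree = ET.parse("protein_pulse_sequences.xml")   # hypothetical file name
        root = tree.getroot()

        # List experiments whose simulated relative sensitivity exceeds a threshold.
        for exp in root.iter("experiment"):              # hypothetical element name
            name = exp.get("name")
            sens = float(exp.findtext("sensitivity", default="0"))
            if sens > 0.5:
                print(f"{name}: relative sensitivity {sens:.2f}")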

  4. Providing Nuclear Criticality Safety Analysis Education through Benchmark Experiment Evaluation

    SciTech Connect

    John D. Bess; J. Blair Briggs; David W. Nigg

    2009-11-01

    One of the challenges facing today's new workforce of nuclear criticality safety engineers is the need to assess nuclear systems and establish safety guidelines without having received significant experience or hands-on training prior to graduation. Participation in the International Criticality Safety Benchmark Evaluation Project (ICSBEP) and/or the International Reactor Physics Experiment Evaluation Project (IRPhEP) provides students and young professionals the opportunity to gain experience and enhance critical engineering skills.

  5. Benchmark Evaluation of the Medium-Power Reactor Experiment Program Critical Configurations

    SciTech Connect

    Margaret A. Marshall; John D. Bess

    2013-02-01

    A series of small, compact critical assembly (SCCA) experiments was performed in 1962-1965 at the Oak Ridge National Laboratory Critical Experiments Facility (ORCEF) for the Medium-Power Reactor Experiment (MPRE) program. The MPRE was a stainless-steel-clad, highly enriched uranium (HEU)-O2-fuelled, BeO-reflected reactor designed to provide electrical power to space vehicles. Cooling and heat transfer were to be achieved by boiling potassium in the reactor core and passing the vapor directly through a turbine. Graphite- and beryllium-reflected assemblies were constructed at ORCEF to verify the critical mass and to provide power distribution and other reactor physics measurements needed to validate reactor calculations and reactor physics methods. The experimental series was broken into three parts, with the third portion of the experiments representing the beryllium-reflected measurements. The latter experiments are of interest for validating current reactor design efforts for a fission surface power reactor. The entire series has been evaluated as acceptable benchmark experiments and submitted for publication in the International Handbook of Evaluated Criticality Safety Benchmark Experiments and in the International Handbook of Evaluated Reactor Physics Benchmark Experiments.

  6. TRIGA Mark II Criticality Benchmark Experiment with Burned Fuel

    SciTech Connect

    Persic, Andreja; Ravnik, Matjaz; Zagar, Tomaz

    2000-12-15

    The experimental results of criticality benchmark experiments performed at the Jozef Stefan Institute TRIGA Mark II reactor are presented. The experiments were performed with partly burned fuel in two compact and uniform core configurations, in the same arrangements as were used in the fresh-fuel criticality benchmark experiment performed in 1991. In the experiments, both core configurations contained only 12 wt% U-ZrH fuel with 20% enriched uranium. The first experimental core contained 43 fuel elements with an average burnup of 1.22 MWd, or 2.8% of the 235U burned. The last experimental core configuration was composed of 48 fuel elements with an average burnup of 1.15 MWd, or 2.6% of the 235U burned. The experimental determination of k_eff for both core configurations, one subcritical and one critical, is presented. Burnup for all fuel elements was calculated in a two-dimensional, four-group diffusion approximation using the TRIGLAV code. The burnup of several fuel elements was also measured by the reactivity method.
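
    For reference, the reactivity quoted in such k_eff determinations follows the standard definition (with 1 pcm = 10^-5 in reactivity units):

        \rho = \frac{k_{\mathrm{eff}} - 1}{k_{\mathrm{eff}}}

    so a slightly subcritical core, with k_eff just below 1, corresponds to a small negative reactivity.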

  7. Benchmark enclosure fire suppression experiments - phase 1 test report.

    SciTech Connect

    Figueroa, Victor G.; Nichols, Robert Thomas; Blanchat, Thomas K.

    2007-06-01

    A series of fire benchmark water-suppression tests was performed that may provide guidance for dispersal systems for the protection of high-value assets. The test results provide the boundary and temporal data necessary for water spray suppression model development and validation. A review of fire suppression is presented for both gaseous suppression and water mist fire suppression. The experimental setup and procedure for gathering water suppression performance data are shown. Characteristics of the nozzles used in the testing are presented. Results of the experiments are discussed.

  8. SILENE Benchmark Critical Experiments for Criticality Accident Alarm Systems

    SciTech Connect

    Miller, Thomas Martin; Reynolds, Kevin H.

    2011-01-01

    In October 2010 a series of benchmark experiments was conducted at the Commissariat à l'Energie Atomique et aux Energies Alternatives (CEA) Valduc SILENE [1] facility. These experiments were a joint effort between the US Department of Energy (DOE) and the French CEA. The purpose of these experiments was to create three benchmarks for the verification and validation of radiation transport codes and evaluated nuclear data used in the analysis of criticality accident alarm systems (CAASs). This presentation will discuss the geometric configuration of these experiments and the quantities that were measured, and will present some preliminary comparisons between the measured data and calculations. This series consisted of three single-pulsed experiments with the SILENE reactor. During the first experiment the reactor was bare (unshielded), but during the second and third experiments it was shielded by lead and polyethylene, respectively. During each experiment several neutron activation foils and thermoluminescent dosimeters (TLDs) were placed around the reactor, and some of these detectors were themselves shielded from the reactor by high-density magnetite and barite concrete, standard concrete, and/or BoroBond. All the concrete was provided by CEA Saclay, and the BoroBond was provided by the Y-12 National Security Complex. Figure 1 is a picture of the SILENE reactor cell configured for pulse 1. Also included in these experiments were measurements of the neutron and photon spectra with two BICRON BC-501A liquid scintillators. These two detectors were provided and operated by CEA Valduc. They were set up just outside the SILENE reactor cell with additional lead shielding to prevent the detectors from being saturated. The final detectors involved in the experiments were two different types of CAAS detectors. The Babcock International Group provided three CIDAS CAAS detectors, which measured photon dose and dose rate with a Geiger-Mueller tube. CIDAS detectors are currently in

  9. Pulse height spectrum measurement experiment for code benchmarking: first results

    SciTech Connect

    Sale, K E; Hall, J M; Brown, C M

    2000-10-27

    The authors have completed a set of gamma-ray pulse height benchmark experiments using a high-purity germanium detector to measure absolute counting rate spectra from 60Co, 137Cs and 57Co isotopic sources. The detector was carefully shielded and collimated so that the geometry of the system was completely known. The measured absolute pulse-height counting rate spectra as a function of detector position relative to the source are compared to energy deposit spectra calculated using the Monte Carlo radiation transport code COG. They present here a small subset of their results. The agreement between the calculated and measured spectra and known sources of discrepancies will be discussed.
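
    A minimal sketch of the kind of binwise comparison described, measured counting rates against a Monte Carlo prediction, could look as follows (file names and data are placeholders, not the experiment's):

        # Compare a measured pulse-height spectrum with a calculated one,
        # channel by channel, via the C/E (calculated-over-experimental) ratio.
        import numpy as np

        measured = np.loadtxt("measured_spectrum.txt")    # counts/s per channel (placeholder)
        calculated = np.loadtxt("cog_spectrum.txt")       # predicted counts/s per channel (placeholder)

        mask = measured > 0                               # skip empty channels
        ce = np.full_like(measured, np.nan)
        ce[mask] = calculated[mask] / measured[mask]

        print(f"mean C/E over populated channels: {np.nanmean(ce):.3f}")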

  10. Benchmark Data Through The International Reactor Physics Experiment Evaluation Project (IRPHEP)

    SciTech Connect

    J. Blair Briggs; Enrico Sartori

    2005-09-01

    The International Reactor Physics Experiment Evaluation Project (IRPhEP) was initiated by the Organization for Economic Cooperation and Development (OECD) Nuclear Energy Agency’s (NEA) Nuclear Science Committee (NSC) in June of 2002. The IRPhEP focuses on the derivation of internationally peer-reviewed benchmark models for several types of integral measurements, in addition to the critical configuration. While the benchmarks produced by the IRPhEP are of primary interest to the Reactor Physics Community, many of the benchmarks can be of significant value to the Criticality Safety and Nuclear Data Communities. Benchmarks that support the Next Generation Nuclear Plant (NGNP), for example, also support fuel manufacture, handling, transportation, and storage activities and could challenge current analytical methods. The IRPhEP is patterned after the International Criticality Safety Benchmark Evaluation Project (ICSBEP) and is closely coordinated with it. This paper highlights the benchmarks currently being prepared by the IRPhEP that are also of interest to the Criticality Safety Community. The different types of measurements and associated benchmarks that can be expected in the first publication and beyond are described. The protocol for inclusion of IRPhEP benchmarks as ICSBEP benchmarks, and vice versa, is detailed. The format for IRPhEP benchmark evaluations is described as an extension of the ICSBEP format. Benchmarks produced by the IRPhEP add a new dimension to criticality safety benchmarking efforts and expand the collection of available integral benchmarks for nuclear data testing. The first publication of the "International Handbook of Evaluated Reactor Physics Benchmark Experiments" is scheduled for January of 2006.

  11. Analogue experiments as benchmarks for models of lava flow emplacement

    NASA Astrophysics Data System (ADS)

    Garel, F.; Kaminski, E. C.; Tait, S.; Limare, A.

    2013-12-01

    During an effusive volcanic eruption, crisis management is mainly based on predicting the advance and velocity of the lava flow. The spreading of a lava flow, seen as a gravity current, depends on its "effective rheology" and on the effusion rate. Fast-computing models have arisen in the past decade in order to predict, in near real time, lava flow path and rate of advance. This type of model, crucial to mitigate volcanic hazards and organize potential evacuations, has mainly been compared a posteriori to real cases of emplaced lava flows. The input parameters of such simulations applied to natural eruptions, especially effusion rate and topography, are often not known precisely, and are difficult to evaluate after the eruption. It is therefore not straightforward to identify the causes of discrepancies between model outputs and observed lava emplacement, whereas the comparison of models with controlled laboratory experiments is easier. The challenge for numerical simulations of lava flow emplacement is to model the simultaneous advance and thermal structure of viscous lava flows. To provide original constraints later to be used in benchmark numerical simulations, we have performed lab-scale experiments investigating the cooling of isoviscous gravity currents. The simplest experimental set-up is as follows: silicone oil, whose viscosity (around 5 Pa.s) varies by less than a factor of 2 in the temperature range studied, is injected from a point source onto a horizontal plate and spreads axisymmetrically. The oil is injected hot and progressively cools down to ambient temperature away from the source. Once the flow is developed, it presents a stationary radial thermal structure whose characteristics depend on the input flow rate. In addition to the experimental observations, we have developed a theoretical model (Garel et al., JGR, 2012) confirming the relationship between supply rate, flow advance and stationary surface thermal structure. We also provide
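
    For context, the isothermal limit against which such cooling experiments are usually set is the classical axisymmetric viscous gravity current fed at constant volume flux Q (Huppert, 1982), whose front radius grows as

        r_N(t) = 0.715 \left( \frac{g\, Q^3}{3\nu} \right)^{1/8} t^{1/2}

    where \nu is the kinematic viscosity; since the oil is nearly isoviscous, the spreading should stay close to this scaling while the thermal structure develops.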

  12. The temporal dynamics of emotional acceptance: Experience, expression, and physiology.

    PubMed

    Dan-Glauser, Elise S; Gross, James J

    2015-05-01

    Emotional acceptance has begun to attract considerable attention from researchers and clinicians alike. It is not yet clear, however, what effects emotional acceptance has on early emotion response dynamics. To address this question, participants (N = 37) were shown emotional pictures and cued either to simply attend to them, or to accept or suppress their emotional responses. Continuous measures of emotion experience, expressive behavior, and autonomic responses were obtained. Results indicated that, compared to no regulation, acceptance led to more positive emotions, transiently enhanced expressivity, and lowered respiratory rate. Compared to suppression, acceptance led to more positive emotions, stronger expressivity, and smaller changes in heart rate, blood pressure, and pulse amplitude, as well as greater oxygenation. Acceptance and suppression thus have opposite effects on emotional response dynamics. Because acceptance enhances positive emotion experience and expression, this strategy may be particularly useful in facilitating social interactions. PMID:25782407

  13. Prior Computer Experience and Technology Acceptance

    ERIC Educational Resources Information Center

    Varma, Sonali

    2010-01-01

    Prior computer experience with information technology has been identified as a key variable (Lee, Kozar, & Larsen, 2003) that can influence an individual's future use of newer computer technology. The lack of a theory driven approach to measuring prior experience has however led to conceptually different factors being used interchangeably in…

  14. INTEGRAL BENCHMARKS AVAILABLE THROUGH THE INTERNATIONAL REACTOR PHYSICS EXPERIMENT EVALUATION PROJECT AND THE INTERNATIONAL CRITICALITY SAFETY BENCHMARK EVALUATION PROJECT

    SciTech Connect

    J. Blair Briggs; Lori Scott; Enrico Sartori; Yolanda Rugama

    2008-09-01

    Interest in high-quality integral benchmark data is increasing as efforts to quantify and reduce calculational uncertainties accelerate to meet the demands of next generation reactor and advanced fuel cycle concepts. The International Reactor Physics Experiment Evaluation Project (IRPhEP) and the International Criticality Safety Benchmark Evaluation Project (ICSBEP) continue to expand their efforts and broaden their scope to identify, evaluate, and provide integral benchmark data for method and data validation. Benchmark model specifications provided by these two projects are used heavily by the international reactor physics, nuclear data, and criticality safety communities. Thus far, 14 countries have contributed to the IRPhEP, and 20 have contributed to the ICSBEP. The status of the IRPhEP and ICSBEP is discussed in this paper, and the future of the two projects is outlined. Selected benchmarks that have been added to the IRPhEP and ICSBEP handbooks since PHYSOR’06 are highlighted.

  15. Community-based benchmarking of the CMIP DECK experiments

    NASA Astrophysics Data System (ADS)

    Gleckler, P. J.

    2015-12-01

    A diversity of community-based efforts are independently developing "diagnostic packages" with little or no coordination between them. A short list of examples includes NCAR's Climate Variability Diagnostics Package (CVDP), ORNL's International Land Model Benchmarking (ILAMB), LBNL's Toolkit for Extreme Climate Analysis (TECA), PCMDI's Metrics Package (PMP), the EU EMBRACE ESMValTool, the WGNE MJO diagnostics package, and CFMIP diagnostics. The full value of these efforts cannot be realized without some coordination. As a first step, a WCRP effort has initiated a catalog to document candidate packages that could potentially be applied in a "repeat-use" fashion to all simulations contributed to the CMIP DECK (Diagnostic, Evaluation and Characterization of Klima) experiments. Some coordination of community-based diagnostics has the additional potential to improve how CMIP modeling groups analyze their simulations during model development. The fact that most modeling groups now maintain a "CMIP compliant" data stream means that in principle, without much effort, they could readily adopt a set of well organized diagnostic capabilities specifically designed to operate on CMIP DECK experiments. Ultimately, a detailed listing of and access to analysis codes that are demonstrated to work "out of the box" with CMIP data could enable model developers (and others) to select those codes they wish to implement in-house, potentially enabling more systematic evaluation during the model development process.
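
    For illustration, the "CMIP compliant" convention is what makes such repeat-use diagnostics cheap to write; a minimal sketch (placeholder file path, but CMIP-standard variable names) of an area-weighted global-mean diagnostic:

        # Area-weighted global mean of surface air temperature ("tas")
        # from a CMIP-convention netCDF file.
        import numpy as np
        import xarray as xr

        ds = xr.open_dataset("tas_Amon_MODEL_historical_r1i1p1f1.nc")  # placeholder path
        tas = ds["tas"]                                    # CMIP standard variable name

        weights = np.cos(np.deg2rad(ds["lat"]))            # latitude area weights
        global_mean = tas.weighted(weights).mean(dim=("lat", "lon"))

        print(global_mean.isel(time=0).values)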

  16. TRIGA Mark II benchmark experiment; Part II: Pulse operation

    SciTech Connect

    Mele, I.; Ravnik, M.; Trkov, A.

    1994-01-01

    Experimental results of pulse parameters and control rod worth measurements at the TRIGA Mark II reactor in Ljubljana are presented. The measurements were performed with a completely fresh, uniform, and compact core. Only standard fuel elements with 12 wt% uranium were used. Special efforts were made to obtain reliable and accurate results under well-defined experimental conditions, and it is proposed to use the results as a benchmark test case for TRIGA reactors.

  17. Experiment vs simulation RT WFNDEC 2014 benchmark: CIVA results

    SciTech Connect

    Tisseur, D.; Costin, M.; Rattoni, B.; Vienne, C.; Vabre, A.; Cattiaux, G.; Sollier, T.

    2015-03-31

    The French Alternative Energies and Atomic Energy Commission (CEA) has for many years developed the CIVA software dedicated to the simulation of NDE techniques such as Radiographic Testing (RT). RT modelling is achieved in CIVA using a combination of a deterministic approach based on ray tracing for transmission beam simulation and a Monte Carlo model for the scattered beam computation. Furthermore, CIVA includes various detector models, in particular common X-ray films and photostimulable phosphor plates. This communication presents the results obtained with the configurations proposed in the World Federation of NDE Centers (WFNDEC) 2014 RT modelling benchmark with the RT models implemented in the CIVA software.

  18. Experiment vs simulation RT WFNDEC 2014 benchmark: CIVA results

    NASA Astrophysics Data System (ADS)

    Tisseur, D.; Costin, M.; Rattoni, B.; Vienne, C.; Vabre, A.; Cattiaux, G.; Sollier, T.

    2015-03-01

    The French Alternative Energies and Atomic Energy Commission (CEA) has for many years developed the CIVA software dedicated to the simulation of NDE techniques such as Radiographic Testing (RT). RT modelling is achieved in CIVA using a combination of a deterministic approach based on ray tracing for transmission beam simulation and a Monte Carlo model for the scattered beam computation. Furthermore, CIVA includes various detector models, in particular common X-ray films and photostimulable phosphor plates. This communication presents the results obtained with the configurations proposed in the World Federation of NDE Centers (WFNDEC) 2014 RT modelling benchmark with the RT models implemented in the CIVA software.
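
    The deterministic transmission part of such RT models is essentially Beer-Lambert attenuation accumulated along each ray from source to detector pixel; in its simplest monochromatic form

        I = I_0 \exp\left( -\sum_i \mu_i\, t_i \right)

    where \mu_i and t_i are the linear attenuation coefficient and traversed thickness of each material i along the ray, with the scattered contribution then added from the Monte Carlo computation.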

  19. MCNP benchmark analyses of critical experiments for the Space Nuclear Thermal Propulsion program

    SciTech Connect

    Selcow, E.C.; Cerbone, R.J.; Ludewig, H.; Mughabghab, S.F.; Schmidt, E.; Todosow, M.; Parma, E.J.; Ball, R.M.; Hoovler, G.S.

    1993-01-15

    Benchmark analyses have been performed of Particle Bed Reactor (PBR) critical experiments (CX) using the MCNP radiation transport code. The experiments were conducted at the Sandia National Laboratory reactor facility in support of the Space Nuclear Thermal Propulsion (SNTP) program. The test reactor is a nineteen-element, water-moderated and water-reflected thermal system. A series of integral experiments was carried out to test the capabilities of the radiation transport codes to predict the performance of PBR systems. MCNP was selected as the preferred radiation analysis tool for the benchmark experiments. Comparison between experimental and calculational results indicates close agreement. This paper describes the analyses of benchmark experiments designed to quantify the accuracy of the MCNP radiation transport code for predicting the performance characteristics of PBR reactors.

  20. MCNP benchmark analyses of critical experiments for the Space Nuclear Thermal Propulsion program

    SciTech Connect

    Selcow, E.C.; Cerbone, R.J.; Ludewig, H.; Mughabghab, S.F.; Schmidt, E.; Todosow, M.; Parma, E.J.; Ball, R.M.; Hoovler, G.S.

    1993-06-01

    Benchmark analyses have been performed of Particle Bed Reactor (PBR) critical experiments (CX) using the MCNP radiation transport code. The experiments were conducted at the Sandia National Laboratory reactor facility in support of the Space Nuclear Thermal Propulsion (SNTP) program. The test reactor is a nineteen-element, water-moderated and water-reflected thermal system. A series of integral experiments was carried out to test the capabilities of the radiation transport codes to predict the performance of PBR systems. MCNP was selected as the preferred radiation analysis tool for the benchmark experiments. Comparison between experimental and calculational results indicates very good agreement. This paper describes the analyses of benchmark experiments designed to quantify the accuracy of the MCNP radiation transport code for predicting the performance characteristics of PBR reactors.

  1. Automatically generated acceptance test: A software reliability experiment

    NASA Technical Reports Server (NTRS)

    Protzel, Peter W.

    1988-01-01

    This study presents results of a software reliability experiment investigating the feasibility of a new error detection method. The method can be used as an acceptance test and is solely based on empirical data about the behavior of internal states of a program. The experimental design uses the existing environment of a multi-version experiment previously conducted at the NASA Langley Research Center, in which the launch interceptor problem is used as a model. This allows the controlled experimental investigation of versions with well-known single and multiple faults, and the availability of an oracle permits the determination of the error detection performance of the test. Fault interaction phenomena are observed that have an amplifying effect on the number of error occurrences. Preliminary results indicate that all faults examined so far are detected by the acceptance test. This shows promise for further investigations, and for the employment of this test method on other applications.
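
    The core idea, learning an empirical envelope of internal-state values and flagging departures from it, can be sketched in a few lines (an illustration of the general approach, not the paper's implementation):

        # Learn the range each monitored internal variable took in trusted runs,
        # then flag any state that leaves the learned envelope.
        def learn_envelope(training_states):
            """training_states: iterable of dicts mapping variable name -> value."""
            envelope = {}
            for state in training_states:
                for var, value in state.items():
                    lo, hi = envelope.get(var, (value, value))
                    envelope[var] = (min(lo, value), max(hi, value))
            return envelope

        def acceptance_test(state, envelope):
            """Return the variables whose values fall outside the envelope."""
            return [var for var, value in state.items()
                    if var in envelope
                    and not (envelope[var][0] <= value <= envelope[var][1])]

        env = learn_envelope([{"x": 1.0, "y": 2.0}, {"x": 1.5, "y": 1.8}])
        print(acceptance_test({"x": 9.9, "y": 1.9}, env))   # -> ['x']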

  2. Experiences of familial acceptance-rejection among transwomen of color.

    PubMed

    Koken, Juline A; Bimbi, David S; Parsons, Jeffrey T

    2009-12-01

    Because of the stigma associated with transgenderism, many transwomen (biological males who identify as female or transgender) experience rejection or abuse at the hands of their parents and primary caregivers as children and adolescents. The Parental Acceptance-Rejection (PAR) theory indicates that a child's experience of rejection may have a significant impact on their adult life. The purpose of this study was to conduct a qualitative analysis, guided by PAR theory, of adult transwomen of color's experiences with caregivers. Twenty transwomen of color completed semi-structured interviews exploring the reaction of their parents and primary caregivers to their gender. While many participants reported that at least one parent or close family member responded with warmth and acceptance, the majority confronted hostility and aggression; reports of neglect and undifferentiated rejection were also common. Many transwomen were forced out of their homes as adolescents or chose to leave, increasing their risk of homelessness, poverty, and associated negative sequelae. Future research is needed to explore how families come to terms with having a transgender child and how best to promote acceptance of such children. PMID:20001144

  3. Accuracy requirements and benchmark experiments for CFD validation

    NASA Technical Reports Server (NTRS)

    Marvin, Joseph G.

    1988-01-01

    The role of experiment in the development of Computational Fluid Dynamics (CFD) for aerodynamic flow prediction is discussed. CFD verification is a concept that depends on closely coordinated planning between computational and experimental disciplines. Because code applications are becoming more complex and their use in design more feasible, it no longer suffices to use experimental data from surface or integral measurements alone to provide the required verification. Flow physics and modeling, flow field, and boundary condition measurements are emerging as critical data. Four types of experiments are introduced and examples given that meet the challenge of validation: flow physics experiments; flow modeling experiments; calibration experiments; and verification experiments. Measurement and accuracy requirements for each of these differ and are discussed. A comprehensive program of validation is described, some examples given, and it is concluded that the future prospects are encouraging.

  4. Developing Evidence for Action on the Postgraduate Experience: An Effective Local Instrument to Move beyond Benchmarking

    ERIC Educational Resources Information Center

    Sampson, K. A.; Johnston, L.; Comer, K.; Brogt, E.

    2016-01-01

    Summative and benchmarking surveys to measure the postgraduate student research experience are well reported in the literature. While useful, we argue that local instruments that provide formative resources with an academic development focus are also required. If higher education institutions are to move beyond the identification of issues and…

  5. RANS Modeling of Benchmark Shockwave / Boundary Layer Interaction Experiments

    NASA Technical Reports Server (NTRS)

    Georgiadis, Nick; Vyas, Manan; Yoder, Dennis

    2010-01-01

    This presentation summarizes the computations of a set of shock wave / turbulent boundary layer interaction (SWTBLI) test cases using the Wind-US code, as part of the 2010 American Institute of Aeronautics and Astronautics (AIAA) shock / boundary layer interaction workshop. The experiments involve supersonic flows in wind tunnels with a shock generator that directs an oblique shock wave toward the boundary layer along one of the walls of the wind tunnel. The Wind-US calculations utilized structured grid computations performed in Reynolds-averaged Navier-Stokes mode. Three turbulence models were investigated: the Spalart-Allmaras one-equation model, the Menter Shear Stress Transport (SST) k-ω two-equation model, and an explicit algebraic stress k-ω formulation. Effects of grid resolution and upwinding scheme were also considered. The results from the CFD calculations are compared to particle image velocimetry (PIV) data from the experiments. As expected, turbulence model effects dominated the accuracy of the solutions, with upwinding scheme selection having minimal effects.

  6. Benchmark experiments on neutron streaming through JET Torus Hall penetrations

    NASA Astrophysics Data System (ADS)

    Batistoni, P.; Conroy, S.; Lilley, S.; Naish, J.; Obryk, B.; Popovichev, S.; Stamatelatos, I.; Syme, B.; Vasilopoulou, T.; JET contributors

    2015-05-01

    Neutronics experiments are performed at JET for validating, in a real fusion environment, the neutronics codes and nuclear data applied in ITER nuclear analyses. In particular, the neutron fluence through the penetrations of the JET torus hall is measured and compared with calculations to assess the capability of state-of-the-art numerical tools to correctly predict the radiation streaming in the ITER biological shield penetrations up to large distances from the neutron source, in large and complex geometries. Neutron streaming experiments started in 2012 when several hundred very sensitive thermo-luminescence detectors (TLDs), enriched to different levels in 6LiF/7LiF, were used to measure the neutron and gamma dose separately. Lessons learnt from this first experiment led to significant improvements in the experimental arrangements, reducing the effects of the directional neutron source and of TLD self-shielding. Here we report the results of measurements performed during the 2013-2014 JET campaign. Data from new positions, at further locations in the South West labyrinth and down to the Torus Hall basement through the air duct chimney, were obtained up to about 40 m from the plasma neutron source. In order to avoid interference between TLDs due to self-shielding effects, only TLDs containing natural lithium and 99.97% 7Li were used. All TLDs were located in the centre of large polyethylene (PE) moderators, with natLi and 7Li crystals evenly arranged within two PE containers, one in horizontal and the other in vertical orientation, to investigate the shadowing effect in the directional neutron field. All TLDs were calibrated in the quantities of air kerma and neutron fluence. This improved experimental arrangement led to reduced statistical spread in the experimental data. The Monte Carlo N-Particle (MCNP) code was used to calculate the air kerma due to neutrons and the neutron fluence at detector positions, using a JET model validated up to the

  7. Simulation of underwater explosion benchmark experiments with ALE3D

    SciTech Connect

    Couch, R.; Faux, D.

    1997-05-19

    Some code improvements have been made during the course of this study. One immediately obvious need was for more flexibility in the constitutive representation for materials in shell elements. To remedy this situation, a model with a tabular representation of stress versus strain and rate dependent effects was implemented. This was required in order to obtain reasonable results in the IED cylinder simulation. Another deficiency was in the ability to extract and plot variables associated with shell elements. The pipe whip analysis required the development of a scheme to tally and plot time dependent shell quantities such as stresses and strains. This capability had previously existed only for solid elements. Work was initiated to provide the same range of plotting capability for structural elements that exist with the DYNA3D/TAURUS tools. One of the characteristics of these problems is the disparity in zoning required in the vicinity of the charge and bubble compared to that needed in the far field. This disparity can cause the equipotential relaxation logic to provide a less than optimal solution. Various approaches were utilized to bias the relaxation to obtain more optimal meshing during relaxation. Extensions of these techniques have been developed to provide more powerful options, but more work still needs to be done. The results presented here are representative of what can be produced with an ALE code structured like ALE3D. They are not necessarily the best results that could have been obtained. More experience in assessing sensitivities to meshing and boundary conditions would be very useful. A number of code deficiencies discovered in the course of this work have been corrected and are available for any future investigations.

  8. Linac code benchmarking of HALODYN and PARMILA based on beam experiments

    NASA Astrophysics Data System (ADS)

    Yin, X.; Bayer, W.; Hofmann, I.

    2016-01-01

    As part of the 'High Intensity Pulsed Proton Injector' (HIPPI) project in the European Framework Programme, a program for the comparison and benchmarking of 3D Particle-In-Cell (PIC) linac codes against experiment has been implemented. HALODYN and PARMILA are two of the codes involved in this program. In this study, the initial Twiss parameters were obtained from the results of beam experiments conducted with the GSI UNILAC at low beam current. Furthermore, beam dynamics simulations of the Alvarez Drift Tube Linac (DTL) section were performed with the HALODYN and PARMILA codes and benchmarked against the same beam experiments. These simulation results show reasonable agreement with the experimental results for the low-beam-current case. The similarities and differences between the experimental and simulated results are analyzed quantitatively. In addition, various physical aspects of the simulation codes and the linac design strategy are also discussed.
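
    The Twiss parameters mentioned above specify the transverse phase-space ellipse occupied by the beam; in the standard Courant-Snyder form

        \epsilon = \gamma x^2 + 2\alpha x x' + \beta x'^2, \qquad \beta\gamma - \alpha^2 = 1

    where \epsilon is the emittance and (x, x') a particle's position and divergence, so matching the initial \alpha, \beta, \gamma to the measured beam fixes the starting distribution of the PIC simulations.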

  9. Shielding Benchmark Experiments Through Concrete and Iron with High-Energy Proton and Heavy Ion Accelerators

    NASA Astrophysics Data System (ADS)

    Nakamura, T.; Sasaki, M.; Nunomiya, T.; Nakao, N.; Kim, E.; Kurosawa, T.; Taniguchi, S.; Iwase, H.; Uwamino, Y.; Shibata, T.; Ito, S.; Fukumura, A.; Perry, D. R.; Wright, P.

    The deep penetration of neutrons through thick shields has become a very serious problem in the shielding design of high-energy, high-intensity accelerator facilities. In design calculations, Monte Carlo transport through thick shields suffers from large statistical errors, and the basic nuclear data and models used in the existing Monte Carlo codes are not well evaluated because very few experimental data exist. Deep-penetration shielding benchmark experiments are therefore strongly needed to investigate the calculational accuracy. Under these circumstances, we performed the following two shielding experiments through concrete and iron: one with the 800 MeV proton accelerator of the Rutherford Appleton Laboratory (RAL), England, and the other with a high-energy heavy-ion accelerator of the National Institute of Radiological Sciences (NIRS), Japan. Here these two shielding benchmark experiments are outlined.
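
    In such deep-penetration problems the dose attenuation is often summarized by a simple point-kernel expression of the Moyer-model type (quoted here as the generic form, not from this record):

        H(d) \approx \frac{H_0}{r^2} \exp\left( -\frac{d}{\lambda} \right)

    where d is the shield thickness, r the source-detector distance and \lambda the effective attenuation length that benchmark experiments like these are designed to pin down.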

  10. Evaluation of HEU-Beryllium Benchmark Experiments to Improve Computational Analysis of Space Reactors

    SciTech Connect

    Bess, John; Bledsoe, Keith C; Rearden, Bradley T

    2011-01-01

    An assessment was previously performed to evaluate modeling capabilities and quantify preliminary biases and uncertainties associated with the modeling methods and data utilized in designing a nuclear reactor such as a beryllium-reflected, highly-enriched-uranium (HEU)-O2 fission surface power (FSP) system for space nuclear power. The conclusion of the previous study was that current capabilities could preclude the necessity of a cold critical test of the FSP; however, additional testing would reduce uncertainties in the beryllium and uranium cross-section data and the overall uncertainty in the computational models. A series of critical experiments using HEU metal were performed in the 1960s and 1970s in support of criticality safety operations at the Y-12 Plant. Of the hundreds of experiments, three were identified as fast-fission configurations reflected by beryllium metal. These experiments have been evaluated as benchmarks for inclusion in the International Handbook of Evaluated Criticality Safety Benchmark Experiments (IHECSBE). Further evaluation of the benchmark experiments was performed using the sensitivity and uncertainty analysis capabilities of SCALE 6. The data adjustment methods of SCALE 6 have been employed in the validation of an example FSP design model to reduce the uncertainty due to the beryllium cross section data.

  11. Evaluation of HEU-Beryllium Benchmark Experiments to Improve Computational Analysis of Space Reactors

    SciTech Connect

    John D. Bess; Keith C. Bledsoe; Bradley T. Rearden

    2011-02-01

    An assessment was previously performed to evaluate modeling capabilities and quantify preliminary biases and uncertainties associated with the modeling methods and data utilized in designing a nuclear reactor such as a beryllium-reflected, highly-enriched-uranium (HEU)-O2 fission surface power (FSP) system for space nuclear power. The conclusion of the previous study was that current capabilities could preclude the necessity of a cold critical test of the FSP; however, additional testing would reduce uncertainties in the beryllium and uranium cross-section data and the overall uncertainty in the computational models. A series of critical experiments using HEU metal were performed in the 1960s and 1970s in support of criticality safety operations at the Y-12 Plant. Of the hundreds of experiments, three were identified as fast-fission configurations reflected by beryllium metal. These experiments have been evaluated as benchmarks for inclusion in the International Handbook of Evaluated Criticality Safety Benchmark Experiments (IHECSBE). Further evaluation of the benchmark experiments was performed using the sensitivity and uncertainty analysis capabilities of SCALE 6. The data adjustment methods of SCALE 6 have been employed in the validation of an example FSP design model to reduce the uncertainty due to the beryllium cross section data.
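
    The first-order uncertainty propagation underlying such sensitivity/uncertainty analyses is the standard "sandwich" formula,

        \left( \frac{\sigma_k}{k} \right)^2 = S^{T} C\, S

    where S is the profile of k_eff sensitivities to the cross sections and C their relative covariance matrix; data adjustment then uses the benchmark measurements to constrain the data and reduce this uncertainty.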

  12. Integral Data Benchmark of HENDL2.0/MG Compared with Neutronics Shielding Experiments

    NASA Astrophysics Data System (ADS)

    Jiang, Jieqiong; Xu, Dezheng; Zheng, Shanliang; He, Zhaozhong; Hu, Yanglin; Li, Jingjing; Zou, Jun; Zeng, Qin; Chen, Mingliang; Wang, Minghuang

    2009-10-01

    HENDL2.0, the latest version of the hybrid evaluated nuclear data library, was developed based upon evaluated data from FENDL2.1 and ENDF/B-VII. To qualify and validate the working library, an integral test of the neutron production data of HENDL2.0 was performed with a series of existing spherical shell benchmark experiments (V, Be, Fe, Pb, Cr, Mn, Cu, Al, Si, Co, Zr, Nb, Mo, W and Ti). These experiments were simulated numerically using HENDL2.0/MG and the home-developed code VisualBUS. Calculations were also conducted with FENDL2.1/MG and with FENDL2.1/MC, based on the continuous-energy Monte Carlo code MCNP/4C. Benchmark results for the neutron production data, obtained by comparison and analysis of the neutron leakage spectra and the integral test, are presented in this paper.

  13. Material Activation Benchmark Experiments at the NuMI Hadron Absorber Hall in Fermilab

    SciTech Connect

    Matsumura, H.; Matsuda, N.; Kasugai, Y.; Toyoda, A.; Yashima, H.; Sekimoto, S.; Iwase, H.; Oishi, K.; Sakamoto, Y.; Nakashima, H.; Leveling, A.; Boehnlein, D.; Lauten, G.; Mokhov, N.; Vaziri, K.

    2014-06-15

    In our previous study, double and mirror-symmetric activation peaks found for Al and Au arranged spatially on the back of the hadron absorber of the NuMI beamline in Fermilab were considerably higher than those expected purely from muon-induced reactions. From material activation benchmark experiments, we conclude that this activation is due to hadrons with energies greater than 3 GeV that had passed downstream through small gaps in the hadron absorber.

  14. Graphite and Beryllium Reflector Critical Assemblies of UO2 (Benchmark Experiments 2 and 3)

    SciTech Connect

    Margaret A. Marshall; John D. Bess

    2012-11-01

    A series of experiments was carried out in 1962-65 at the Oak Ridge National Laboratory Critical Experiments Facility (ORCEF) for use in space reactor research programs. A core containing 93.2 wt% enriched UO2 fuel rods was used in these experiments. The first part of the experimental series consisted of 252 tightly-packed fuel rods (1.27-cm triangular pitch) with graphite reflectors [1], the second part used 252 graphite-reflected fuel rods organized in a 1.506-cm triangular-pitch array [2], and the final part of the experimental series consisted of 253 beryllium-reflected fuel rods in a 1.506-cm-triangular-pitch configuration and in a 7-tube-cluster configuration [3]. Fission rate distribution and cadmium ratio measurements were taken for all three parts of the experimental series. Reactivity coefficient measurements were taken for various materials placed in the beryllium-reflected core. All three experiments in the series have been evaluated for inclusion in the International Reactor Physics Experiment Evaluation Project (IRPhEP) [4] and International Criticality Safety Benchmark Evaluation Project (ICSBEP) [5] Handbooks. The evaluation of the first experiment in the series was discussed at the 2011 ANS Winter Meeting [6]. The evaluations of the second and third experiments are discussed below. These experiments are of interest as benchmarks because they support the validation of compact reactor designs with characteristics similar to the design parameters of space nuclear fission surface power systems [7].
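
    The cadmium ratio quoted in these measurements is the standard spectral index: the activity of a bare detector divided by that of an otherwise identical cadmium-covered one,

        R_{\mathrm{Cd}} = \frac{A_{\mathrm{bare}}}{A_{\mathrm{Cd}}}

    with large values indicating a well-thermalized neutron spectrum at the measurement position.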

  15. Activation measurements for the E.C. bulk shield benchmark experiment

    NASA Astrophysics Data System (ADS)

    Angelone, M.; Arpesella, C.; Martone, M.; Pillon, Mario

    1995-03-01

    The use of absolute radiometric techniques in the E.C. bulk shield experiment at the 14 MeV Frascati Neutron Generator (FNG) is reported. In this application the activity level is in some cases too low to be measured at the Frascati counting station. In these cases the radiometric measurements are performed using the low-background HPGe detectors located at the underground laboratory of Gran Sasso d'Italia. The use of these detectors enhances the FNG capability of performing bulk shield benchmark experiments by allowing the measurement of very low activation levels.
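
    The measured activities in such radiometric techniques relate to the reaction rate through the standard activation-decay expression for an irradiation of duration t_irr followed by a cooling time t_c:

        A = N \sigma \phi \left( 1 - e^{-\lambda t_{\mathrm{irr}}} \right) e^{-\lambda t_c}

    where N is the number of target atoms, \sigma\phi the spectrum-averaged reaction rate per atom and \lambda the decay constant of the activation product.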

  16. TRIGA Mark II benchmark experiment; Part I: Steady-state operation

    SciTech Connect

    Mele, I.; Ravnik, M.; Trkov, A.

    1994-01-01

    The experimental results of startup tests after reconstruction and modification of the TRIGA Mark II reactor in Ljubljana are presented. The experiments were performed with a completely fresh, compact, and uniform core. The operating conditions were well defined and controlled, so that the results can be used as a benchmark test case for TRIGA reactor calculations. Both steady-state and pulse mode operation were tested. In this paper, the following steady-state experiments are treated: critical core and excess reactivity, control rod worths, fuel element reactivity worth distribution, fuel temperature distribution, and fuel temperature reactivity coefficient.

  17. Benchmark experiments for validation of reaction rates determination in reactor dosimetry

    NASA Astrophysics Data System (ADS)

    Rataj, J.; Huml, O.; Heraltova, L.; Bily, T.

    2014-11-01

    The precision of Monte Carlo calculations of neutron dosimetry quantities depends strongly on the precision of the predicted reaction rates. A research reactor is a very useful tool for validating a code's ability to calculate such quantities, as it can provide environments with various types of neutron energy spectra. In particular, a zero-power research reactor with well-defined core geometry and neutronic properties enables precise comparison between experimental and calculated data. Thus, at the VR-1 zero-power research reactor, a set of benchmark experiments was proposed and carried out to verify the ability of the MCNP Monte Carlo code to predict reaction rates correctly. For that purpose two frequently used reactions were chosen: He-3(n,p)H-3 and Au-197(n,γ)Au-198. The benchmark consists of response measurements of a small He-3 gas-filled detector in various positions of the reactor core and of gold wires activated inside the core or in its vicinity. The reaction rates were calculated with the MCNP5 code using a detailed model of the VR-1 reactor which had been validated for neutronic calculations at the reactor. The paper describes in detail the experimental set-up of the benchmark and the MCNP model of the VR-1 reactor, and provides a comparison between experimental and calculated data.
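
    The quantity compared in this benchmark, the reaction rate per target nucleus, is the reaction cross section folded with the local neutron flux spectrum:

        R = \int_0^{\infty} \sigma(E)\, \phi(E)\, \mathrm{d}E

    so agreement tests both the transport calculation of \phi(E) and the underlying cross-section data.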

  18. Progress of Integral Experiments in Benchmark Fission Assemblies for a Blanket of Hybrid Reactor

    NASA Astrophysics Data System (ADS)

    Liu, R.; Zhu, T. H.; Yan, X. S.; Lu, X. X.; Jiang, L.; Wang, M.; Han, Z. J.; Wen, Z. W.; Lin, J. F.; Yang, Y. W.

    2014-04-01

    This article describes recent progress in integral neutronics experiments in benchmark fission assemblies for the blanket design of a hybrid reactor. The spherical assemblies consist of three layers of depleted uranium shells and several layers of polyethylene shells. With the D-T neutron source placed at the centre of the assemblies, the plutonium production rates, uranium fission rates and leakage neutron spectra are measured. The measured results are compared with those calculated using the MCNP-4B code and ENDF/B-VI library data.

  19. Recent accelerator experiments updates in Shielding INtegral Benchmark Archive Database (SINBAD)

    NASA Astrophysics Data System (ADS)

    Kodeli, I.; Sartori, E.; Kirk, B.

    2006-06-01

    SINBAD is an internationally established set of radiation shielding and dosimetry data from experiments relevant to reactor shielding, fusion blanket neutronics and accelerator shielding. In addition to the characterization of the radiation source, it describes shielding materials, instrumentation and the relevant detectors. The experimental results, be they dose, reaction rates or unfolded spectra, are presented in tabular ASCII form that can easily be exported to different computer environments for further use. Most sets in SINBAD also contain the computer model used for the interpretation of the experiment and, where available, results from uncertainty analysis. The set of primary documents used for the benchmark compilation and evaluation is provided in computer-readable form. SINBAD is available free of charge from RSICC and from the NEA Data Bank.

  20. Quality in E-Learning--A Conceptual Framework Based on Experiences from Three International Benchmarking Projects

    ERIC Educational Resources Information Center

    Ossiannilsson, E.; Landgren, L.

    2012-01-01

    Between 2008 and 2010, Lund University took part in three international benchmarking projects, "E-xcellence+," the "eLearning Benchmarking Exercise 2009," and the "First Dual-Mode Distance Learning Benchmarking Club." A comparison of these models revealed a rather high level of correspondence. From this finding and from desktop studies of the…

  1. Risk decision making in operational safety management - experience from the Nordic benchmark study

    SciTech Connect

    Holmberg, J.; Pulkkinen, U.; Poern, K.; Shen, K.

    1994-12-01

    The Technical Research Centre of Finland (VTT) and Studsvik AB, Sweden, have simulated the decision making of the Swedish Nuclear Power Inspectorate and a power company by applying decision models in a benchmark study. Based on the experience from the benchmark study, a decision analysis framework to be used in safety-related problems is outlined. With this framework, both the power companies and the safety authorities could be provided with a more rigorous, systematic approach to their decision making. A decision-analytic approach provides a structure for identifying the information requirements of problem solving. Thus it could serve as a discussion forum between the authorities and the utilities. In this context, probabilistic safety assessment (PSA) has the crucial role of expressing the plant safety status in terms of reactor core damage accident probability and of risk contributions from various accident precursors. However, a decision under uncertainty should not be based solely on probabilities, particularly when the event in question is a rare one and its probability of occurrence is estimated by means of different kinds of approximations. 26 refs., 4 figs., 4 tabs.
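
    The point about not relying on probabilities alone is the standard decision-analytic one: alternatives are ranked by expected utility over the possible plant states,

        \mathrm{EU}(a) = \sum_s p(s)\, u(a, s)

    where p(s) is the (PSA-derived) probability of state s and u(a, s) the utility of action a in that state, so rare, poorly estimated probabilities enter the decision only together with their consequences.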

  2. Integral Reactor Physics Benchmarks - the International Criticality Safety Benchmark Evaluation Project (icsbep) and the International Reactor Physics Experiment Evaluation Project (irphep)

    NASA Astrophysics Data System (ADS)

    Briggs, J. Blair; Nigg, David W.; Sartori, Enrico

    2006-04-01

    Since the beginning of the nuclear industry, thousands of integral experiments related to reactor physics and criticality safety have been performed. Many of these experiments can be used as benchmarks for validation of calculational techniques and improvements to nuclear data. However, many were performed in direct support of operations and thus were not performed with a high degree of quality assurance and were not well documented. For years, common validation practice included the tedious process of researching integral experiment data scattered throughout journals, transactions, reports, and logbooks. Two projects have been established to help streamline the validation process and preserve valuable integral data: the International Criticality Safety Benchmark Evaluation Project (ICSBEP) and the International Reactor Physics Experiment Evaluation Project (IRPhEP). The two projects are closely coordinated to avoid duplication of effort and to leverage limited resources to achieve a common goal. A short history of these two projects and their common purpose are discussed in this paper. Accomplishments of the ICSBEP are highlighted and the future of the two projects outlined.

  3. A parallel high-order accurate finite element nonlinear Stokes ice sheet model and benchmark experiments

    SciTech Connect

    Leng, Wei; Ju, Lili; Gunzburger, Max; Price, Stephen; Ringler, Todd

    2012-01-01

    The numerical modeling of glacier and ice sheet evolution is a subject of growing interest, in part because of the potential for models to inform estimates of global sea level change. This paper focuses on the development of a numerical model that determines the velocity and pressure fields within an ice sheet. Our numerical model features a high-fidelity mathematical model involving the nonlinear Stokes system and combinations of no-sliding and sliding basal boundary conditions, high-order accurate finite element discretizations based on variable resolution grids, and highly scalable parallel solution strategies, all of which contribute to a numerical model that can achieve accurate velocity and pressure approximations in a highly efficient manner. We demonstrate the accuracy and efficiency of our model by analytical solution tests, established ice sheet benchmark experiments, and comparisons with other well-established ice sheet models.
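
    The nonlinear Stokes system referred to here takes, in its usual glaciological form with Glen's flow law, the shape

        -\nabla \cdot \left[ 2\mu\, \dot{\varepsilon}(\mathbf{u}) \right] + \nabla p = \rho \mathbf{g}, \qquad
        \nabla \cdot \mathbf{u} = 0, \qquad
        \mu = \tfrac{1}{2} A^{-1/n}\, \dot{\varepsilon}_e^{(1-n)/n}

    where \dot{\varepsilon}(\mathbf{u}) is the strain-rate tensor, \dot{\varepsilon}_e its effective invariant, A the temperature-dependent rate factor and n \approx 3, making the effective viscosity strongly strain-rate dependent and the system nonlinear.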

  4. The stainless steel bulk shielding benchmark experiment at the Frascati Neutron Generator (FNG)

    NASA Astrophysics Data System (ADS)

    Batistoni, P.; Angelone, M.; Martone, M.; Petrizzi, L.; Pillon, M.; Rado, V.; Santamarina, A.; Abidi, I.; Gastaldi, G.; Joyer, P.; Marquette, J. P.; Martini, M.

    1994-09-01

    In the framework of the European Technology Program for NET/ITER, ENEA (Ente Nazionale per le Nuove Tecnologie, l'Energia e l'Ambiente), Frascati and CEA (Commissariat à l'Energie Atomique), Cadarache, are collaborating on a bulk shielding benchmark experiment using the 14 MeV Frascati Neutron Generator (FNG). The aim of the experiment is to obtain accurate experimental data for improving the nuclear database and methods used in the shielding designs, through a rigorous analysis of the results. The experiment consists of the irradiation of a stainless steel block by 14 MeV neutrons. The neutron flux and spectra at different depths, up to 65 cm inside the block, are measured by fission chambers and activation foils characterized by different energy response ranges. The γ-ray dose measurements are performed with ionization chambers and thermo-luminescent dosimeters (TLD). The first results are presented, as well as the comparison with calculations using the cross section library EFF (European Fusion File).

  5. Developing Attitudes of Acceptance toward Lesbian, Gay, and Bisexual Peers: Enlightenment, Contact, and the College Experience

    ERIC Educational Resources Information Center

    Engberg, Mark E.; Hurtado, Sylvia; Smith, Gilia C.

    2007-01-01

    This study proposes an empirically based model with a strong theoretical foundation in higher education and social psychology to better understand how the college experience influences the development of attitudes of acceptance towards lesbian, gay, and bisexual (LGB) persons. Our results demonstrated that students develop more accepting attitudes…

  6. Benchmarking atomic physics models for magnetically confined fusion plasma physics experiments

    NASA Astrophysics Data System (ADS)

    May, M. J.; Finkenthal, M.; Soukhanovskii, V.; Stutman, D.; Moos, H. W.; Pacella, D.; Mazzitelli, G.; Fournier, K.; Goldstein, W.; Gregory, B.

    1999-01-01

    In present magnetically confined fusion devices, high and intermediate Z impurities are either puffed into the plasma for divertor radiative cooling experiments or are sputtered from the high Z plasma-facing armor. The beneficial cooling of the edge as well as the detrimental radiative losses from the core of these impurities can be properly understood only if the atomic physics used in the modeling of the cooling curves is very accurate. To this end, a comprehensive experimental and theoretical analysis of some relevant impurities is undertaken. Gases (Ne, Ar, Kr, and Xe) are puffed and nongases are introduced through laser ablation into the FTU tokamak plasma. The charge state distributions and total density of these impurities are determined from spatial scans of several photometrically calibrated vacuum ultraviolet and x-ray spectrographs (3-1600 Å), the Multiple Ionization State Transport (MIST) code, and a collisional radiative model. The radiative power losses are measured with bolometry, and the emissivity profiles are measured by a visible bremsstrahlung array. The ionization balance, excitation physics, and the radiative cooling curves are computed from the Hebrew University Lawrence Livermore atomic code (HULLAC) and are benchmarked by these experiments. (Supported by U.S. DOE Grant No. DE-FG02-86ER53214 at JHU and Contract No. W-7405-ENG-48 at LLNL.)
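
    As context for the cooling-curve comparisons described above, the power radiated by an impurity species is conventionally written in terms of a cooling rate coefficient; a generic form (our notation, not the paper's) is:

    $$ P_{\mathrm{rad}} = n_e\, n_Z\, L_Z(T_e), $$

    where n_e is the electron density, n_Z the impurity density, and L_Z(T_e) the cooling curve, computed in this work with HULLAC and benchmarked against the measured radiative losses.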

  7. Research Reactor Benchmarks

    SciTech Connect

    Ravnik, Matjaz; Jeraj, Robert

    2003-09-15

    A criticality benchmark experiment performed at the Jozef Stefan Institute TRIGA Mark II research reactor is described. This experiment and its evaluation are given as examples of benchmark experiments at research reactors. For this reason, the differences and possible problems compared to other benchmark experiments are particularly emphasized. General guidelines for performing criticality benchmarks in research reactors are given. The criticality benchmark experiment was performed in a normal operating reactor core using commercially available fresh 20% enriched fuel elements containing 12 wt% uranium in uranium-zirconium hydride fuel material. Experimental conditions to minimize experimental errors and to enhance computer modeling accuracy are described. Uncertainties in the multiplication factor due to fuel composition and geometry data are analyzed by sensitivity analysis. The simplifications in the benchmark model compared to the actual geometry are evaluated. Sample benchmark calculations with the MCNP and KENO Monte Carlo codes are given.
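
    The sensitivity analysis mentioned above follows the usual first-order propagation of input uncertainties into the multiplication factor; schematically (a standard relation, not quoted from this evaluation):

    $$ \left(\frac{\Delta k}{k}\right)^2 \approx \sum_i \left(S_i\,\frac{\Delta x_i}{x_i}\right)^2, \qquad S_i = \frac{x_i}{k}\,\frac{\partial k}{\partial x_i}, $$

    where the x_i are input parameters such as fuel composition and geometry data, and the S_i are their sensitivity coefficients.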

  8. Criticality experiments and benchmarks for cross section evaluation: the neptunium case

    NASA Astrophysics Data System (ADS)

    Leong, L. S.; Tassan-Got, L.; Audouin, L.; Paradela, C.; Wilson, J. N.; Tarrio, D.; Berthier, B.; Duran, I.; Le Naour, C.; Stéphan, C.

    2013-03-01

    The 237Np neutron-induced fission cross section has recently been measured over a large energy range (from eV to GeV) at the n_TOF facility at CERN. Compared to previous measurements, the n_TOF fission cross section appears to be higher by 5-7% beyond the fission threshold. To check the relevance of the n_TOF data, we analyze a criticality experiment performed at Los Alamos with a 6 kg sphere of 237Np, surrounded by enriched 235U so as to approach criticality with fast neutrons. The calculated multiplication factor k_eff is in better agreement with the experiment (the deviation of 750 pcm is reduced to 250 pcm) when the ENDF/B-VII.0 evaluation of the 237Np fission cross section is replaced by the n_TOF data. We also explore the hypothesis, invoked by some authors to explain the 750 pcm deviation, of deficiencies in the 235U inelastic cross section; however, the large distortion of the inelastic cross section that this would require is incompatible with existing measurements. Likewise, the average neutron multiplicity (nu-bar) of 237Np can hardly be incriminated, given the high accuracy of the existing data. Fission rate ratios and averaged fission cross sections measured in several fast neutron fields seem to give contradictory results on the validation of the 237Np cross section, but at least one of the benchmark experiments, in which the active deposits were well calibrated for the number of atoms, favors the n_TOF data set. These outcomes support the hypothesis of a higher fission cross section for 237Np.
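
    For readers outside the field, the pcm (per cent mille, 10⁻⁵) deviations quoted above compare calculated and measured multiplication factors through the reactivity; a common convention is:

    $$ \rho = \frac{k_{\mathrm{eff}} - 1}{k_{\mathrm{eff}}}, \qquad \Delta\rho\,[\mathrm{pcm}] = \left(\rho_{\mathrm{calc}} - \rho_{\mathrm{exp}}\right)\times 10^5, $$

    so the reported improvement from 750 pcm to 250 pcm corresponds to the calculated k_eff moving roughly 500 pcm (about 0.005 in k) closer to the measured value.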

  9. Monte Carlo Simulation of the TRIGA Mark II Benchmark Experiment with Burned Fuel

    SciTech Connect

    Jeraj, Robert; Zagar, Tomaz; Ravnik, Matjaz

    2002-03-15

    Monte Carlo calculations of a criticality experiment with burned fuel on the TRIGA Mark II research reactor are presented. The main objective was to incorporate burned fuel composition calculated with the WIMSD4 deterministic code into the MCNP4B Monte Carlo code and compare the calculated k_eff with the measurements. The criticality experiment was performed in 1998 at the "Jozef Stefan" Institute TRIGA Mark II reactor in Ljubljana, Slovenia, with the same fuel elements and loading pattern as in the TRIGA criticality benchmark experiment with fresh fuel performed in 1991. The only difference was that in 1998 the fuel elements had an average burnup of ~3%, corresponding to 1.3 MWd of energy produced in the core between 1991 and 1998. The fuel element burnup accumulated during 1991-1998 was calculated with TRIGLAV, an in-house-developed two-dimensional multigroup diffusion fuel management code. The burned fuel isotopic composition was calculated with the WIMSD4 code and compared to ORIGEN2 calculations. An extensive comparison of burned fuel material composition was performed for both codes for burnups up to 20% burned 235U, and the differences were evaluated in terms of reactivity. The WIMSD4 and ORIGEN2 results agreed well for all isotopes important in reactivity calculations, giving increased confidence in the WIMSD4 calculation of the burned fuel material composition. The k_eff calculated with the combined WIMSD4 and MCNP4B calculations showed good agreement with the experimental values. This shows that linking WIMSD4 with MCNP4B for criticality calculations with burned fuel is feasible and gives reliable results.
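
    The WIMSD4-to-MCNP4B link described above amounts to translating a burned-fuel isotopic inventory into an MCNP material card. The sketch below illustrates the idea with a hypothetical composition; the helper function, ZAIDs, and fractions are our illustration, not data from the paper (MCNP conventionally interprets negative entries as weight fractions):

    ```python
    # Minimal sketch: turn an isotopic inventory (weight fractions) into an
    # MCNP material card. Negative entries denote weight fractions in MCNP.
    def mcnp_material_card(number, composition, library=".70c"):
        """composition: dict mapping ZAID (ZZAAA) to weight fraction."""
        lines = [f"m{number}"]
        for zaid, wfrac in sorted(composition.items()):
            lines.append(f"     {zaid}{library}  -{wfrac:.6e}")
        return "\n".join(lines)

    # Hypothetical burned U-ZrH fuel composition (illustrative numbers only).
    burned_fuel = {
        92235: 0.115,   # 235U, depleted relative to the fresh-fuel fraction
        92238: 0.480,   # 238U
        94239: 0.002,   # 239Pu bred during irradiation
        40000: 0.403,   # zirconium (elemental ZAID)
    }
    print(mcnp_material_card(1, burned_fuel))
    ```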

  10. 2-D Circulation Control Airfoil Benchmark Experiments Intended for CFD Code Validation

    NASA Technical Reports Server (NTRS)

    Englar, Robert J.; Jones, Gregory S.; Allan, Brian G.; Lin, John C.

    2009-01-01

    A current NASA Research Announcement (NRA) project being conducted by Georgia Tech Research Institute (GTRI) personnel and NASA collaborators includes the development of Circulation Control (CC) blown airfoils to improve subsonic aircraft high-lift and cruise performance. The emphasis of this program is the development of CC active flow control concepts for high-lift augmentation, drag control, and cruise efficiency. The project's collaboration includes work by NASA research engineers; the CFD validation and flow-physics experimental research are part of NASA's systematic approach to developing design and optimization tools for CC applications to fixed-wing aircraft. The design space for Cruise Efficient Short Take Off and Landing (CESTOL) type aircraft focuses on geometries that depend on advanced flow control technologies, including Circulation Control aerodynamics. The ability to consistently predict advanced aircraft performance requires improvements in design tools to include these advanced concepts. Validation of these tools will be based on experimental methods applied to complex flows that go beyond conventional aircraft modeling techniques. This paper focuses on recent and ongoing benchmark high-lift experiments and CFD efforts intended to provide 2-D CFD validation data sets related to NASA's CESTOL study. Both the experimental data and related CFD predictions are discussed.

  11. Labeling Sexual Victimization Experiences: The Role of Sexism, Rape Myth Acceptance, and Tolerance for Sexual Harassment.

    PubMed

    LeMaire, Kelly L; Oswald, Debra L; Russell, Brenda L

    2016-01-01

    This study investigated whether attitudinal variables, such as benevolent and hostile sexism toward men and women, female rape myth acceptance, and tolerance of sexual harassment are related to women labeling their sexual assault experiences as rape. In a sample of 276 female college students, 71 (25.7%) reported at least one experience that met the operational definition of rape, although only 46.5% of those women labeled the experience "rape." Benevolent sexism, tolerance of sexual harassment, and rape myth acceptance, but not hostile sexism, significantly predicted labeling of previous sexual assault experiences by the victims. Specifically, those with more benevolent sexist attitudes toward both men and women, greater rape myth acceptance, and more tolerant attitudes of sexual harassment were less likely to label their past sexual assault experience as rape. The results are discussed for their clinical and theoretical implications. PMID:26832168

  12. Identification of Integral Benchmarks for Nuclear Data Testing Using DICE (Database for the International Handbook of Evaluated Criticality Safety Benchmark Experiments)

    SciTech Connect

    J. Blair Briggs; A. Nichole Ellis; Yolanda Rugama; Nicolas Soppera; Manuel Bossant

    2011-08-01

    Typical users of the International Criticality Safety Benchmark Evaluation Project (ICSBEP) Handbook have specific criteria to which they desire to find matching experiments. Depending on the application, those criteria may consist of any combination of physical or chemical characteristics and/or various neutronic parameters. The ICSBEP Handbook contains a structured format helping the user narrow the search for experiments of interest. However, with nearly 4300 different experimental configurations and the ever-increasing addition of experimental data, the necessity of performing multiple-criteria searches has rendered these features insufficient. As a result, a relational database was created with information extracted from the ICSBEP Handbook. A user interface was designed by the OECD and DOE to allow interrogation of this database. The database and the corresponding user interface are referred to as DICE. DICE currently offers the capability to perform multiple-criteria searches that go beyond simple fuel, physical form, and spectra, and include expanded general information, fuel form, moderator/coolant, neutron-absorbing material, cladding, reflector, separator, geometry, benchmark results, spectra, and neutron balance parameters. DICE also includes the capability to display graphical representations of neutron spectra, detailed neutron balance, sensitivity coefficients for capture, fission, elastic scattering, inelastic scattering, nu-bar and mu-bar, as well as several other features.
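
    Functionally, the multiple-criteria searches DICE supports resemble relational queries over evaluated-benchmark metadata. The sketch below shows the pattern against a made-up schema; the table and column names are hypothetical, not DICE's actual design, and the keff values are illustrative (the case names are real ICSBEP identifiers):

    ```python
    import sqlite3

    # Hypothetical schema loosely mirroring the search axes described above.
    conn = sqlite3.connect(":memory:")
    conn.execute("""CREATE TABLE benchmark (
        id INTEGER PRIMARY KEY, name TEXT, fuel_form TEXT,
        moderator TEXT, reflector TEXT, spectrum TEXT, keff REAL)""")
    conn.executemany(
        "INSERT INTO benchmark VALUES (?, ?, ?, ?, ?, ?, ?)",
        [(1, "HEU-SOL-THERM-001", "solution", "water", "none", "thermal", 1.0004),
         (2, "PU-MET-FAST-001",   "metal",    "none",  "none", "fast",    1.0000)])

    # A multiple-criteria search: thermal-spectrum, water-moderated solution systems.
    rows = conn.execute(
        """SELECT name, keff FROM benchmark
           WHERE fuel_form = ? AND moderator = ? AND spectrum = ?""",
        ("solution", "water", "thermal")).fetchall()
    print(rows)
    ```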

  13. Sexual harassment: early adolescents' self-reports of experiences and acceptance.

    PubMed

    Roscoe, B; Strouse, J S; Goodwin, M P

    1994-01-01

    Considerable attention has been focused on sexual harassment experiences and attitudes of older adolescents and adults. Recently, educational and judicial institutions have recognized that harassment also occurs among junior and senior high school students. The primary aim of this project was to gather information regarding early adolescents' experiences with and acceptance of sexual harassment behaviors. Results indicate a considerable proportion of females (50%) and males (37%) have been victims of sexual harassment perpetrated by their peers, even though their acceptance of these behaviors is quite low. Suggestions for a sexual harassment educational program for early adolescents are presented. PMID:7832018

  14. Perceived Acceptance From Outsiders Shapes Security in Romantic Relationships: The Overgeneralization of Extradyadic Experiences.

    PubMed

    Lemay, Edward P; Razzak, Suad

    2016-05-01

    Romantic relationships unfold in the context of people's other interpersonal relationships, and processes that occur in those other relationships have been shown to affect the functioning of romantic relationships. In accordance with this perspective, two dyadic daily report studies demonstrated that people generalize experiences of interpersonal acceptance and rejection from other people onto their romantic partners. Participants felt more confident that they were valued by their romantic partners on days they experienced acceptance, relative to rejection, from outsiders. In addition, this overgeneralization of daily extradyadic acceptance and rejection had prospective effects on romantic relationship security the following day, was independent of the romantic partner's actual relationship evaluations on each day, was partially mediated by daily self-esteem, and predicted daily enactment of prosocial and antisocial behaviors toward romantic partners. These results suggest that overgeneralization of daily acceptance and rejection from outsiders shapes the functioning of romantic relationships. PMID:27029573

  15. Overview of the 2014 Edition of the International Handbook of Evaluated Reactor Physics Benchmark Experiments (IRPhEP Handbook)

    SciTech Connect

    John D. Bess; J. Blair Briggs; Jim Gulliford; Ian Hill

    2014-10-01

    The International Reactor Physics Experiment Evaluation Project (IRPhEP) is a widely recognized, world-class program. The work of the IRPhEP is documented in the International Handbook of Evaluated Reactor Physics Benchmark Experiments (IRPhEP Handbook). Integral data from the IRPhEP Handbook are used by reactor safety and design, nuclear data, criticality safety, and analytical methods development specialists worldwide to perform necessary validations of their calculational techniques. The IRPhEP Handbook is among the most frequently cited references in the nuclear industry and is expected to be a valuable resource for future decades.

  16. The Effects of Mass Media Exposure on Acceptance of Violence against Women: A Field Experiment.

    ERIC Educational Resources Information Center

    Malamuth, Neil M.; Check, James V. P.

    1981-01-01

    Students (N=271) were subjects in an experiment on the effects of exposure to films that portray sexual violence as having positive consequences. Results indicated that exposure to the films portraying violent sexuality increased male subjects' acceptance of interpersonal violence against women. Females exhibited tendencies in the opposite…

  17. The Influence of Provider Communication Behaviors on Parental Vaccine Acceptance and Visit Experience

    PubMed Central

    Opel, Douglas J.; Mangione-Smith, Rita; Robinson, Jeffrey D.; Heritage, John; DeVere, Victoria; Salas, Halle S.; Zhou, Chuan; Taylor, James A.

    2015-01-01

    Objectives We investigated how provider vaccine communication behaviors influence parental vaccination acceptance and visit experience. Methods In a cross-sectional observational study, we videotaped provider–parent vaccine discussions (n = 111). We coded visits for the format providers used for initiating the vaccine discussion (participatory vs presumptive), parental verbal resistance to vaccines after provider initiation (yes vs no), and provider pursuit of recommendations in the face of parental resistance (pursuit vs mitigated or no pursuit). Main outcomes were parental verbal acceptance of recommended vaccines at visit’s end (all vs ≥ 1 refusal) and parental visit experience (highly vs lower rated). Results In multivariable models, participatory (vs presumptive) initiation formats were associated with decreased odds of accepting all vaccines at visit’s end (adjusted odds ratio [AOR] = 0.04; 95% confidence interval [CI] = 0.01, 0.15) and increased odds of a highly rated visit experience (AOR = 17.3; 95% CI = 1.5, 200.3). Conclusions In the context of 2 general communication formats used by providers to initiate vaccine discussions, there appears to be an inverse relationship between parental acceptance of vaccines and visit experience. Further exploration of this inverse relationship in longitudinal studies is needed. PMID:25790386

  18. Surgeons' experiences of receiving peer benchmarked feedback using patient-reported outcome measures: a qualitative study

    PubMed Central

    2014-01-01

    Background The use of patient-reported outcome measures (PROMs) to provide healthcare professionals with peer benchmarked feedback is growing. However, there is little evidence on the opinions of professionals on the value of this information in practice. The purpose of this research is to explore surgeons' experiences of receiving peer benchmarked PROMs feedback and to examine whether this information led to changes in their practice. Methods This qualitative research employed a Framework approach. Semi-structured interviews were undertaken with surgeons who received peer benchmarked PROMs feedback. The participants included eleven consultant orthopaedic surgeons in the Republic of Ireland. Results Five themes were identified: conceptual, methodological, practical, attitudinal, and impact. A typology was developed based on the attitudinal and impact themes from which three distinct groups emerged. 'Advocates' had positive attitudes towards PROMs and confirmed that the information promoted a self-reflective process. 'Converts' were uncertain about the value of PROMs, which reduced their inclination to use the data. 'Sceptics' had negative attitudes towards PROMs and claimed that the information had no impact on their behaviour. The conceptual, methodological and practical factors were linked to the typology. Conclusion Surgeons had mixed opinions on the value of peer benchmarked PROMs data. Many appreciated the feedback as it reassured them that their practice was similar to their peers. However, PROMs information alone was considered insufficient to help identify opportunities for quality improvements. The reasons for the observed reluctance of participants to embrace PROMs can be categorised into conceptual, methodological, and practical factors. Policy makers and researchers need to increase professionals' awareness of the numerous purposes and benefits of using PROMs, challenge the current methods to measure performance using PROMs, and reduce

  19. Self-Compassion Promotes Personal Improvement From Regret Experiences via Acceptance.

    PubMed

    Zhang, Jia Wei; Chen, Serena

    2016-02-01

    Why do some people report more personal improvement from their regret experiences than others? Three studies examined whether self-compassion promotes personal improvement derived from recalled regret experiences. In Study 1, we coded anonymous regret descriptions posted on a blog website. People who spontaneously described their regret with greater self-compassion were also judged as having expressed more personal improvement. In Study 2, higher trait self-compassion predicted greater self-reported and observer-rated personal improvement derived from recalled regret experiences. In Study 3, people induced to take a self-compassionate perspective toward a recalled regret experience reported greater acceptance, forgiveness, and personal improvement. A multiple mediation analysis comparing acceptance and forgiveness showed self-compassion led to greater personal improvement, in part, through heightened acceptance. Furthermore, self-compassion's effects on personal improvement were distinct from self-esteem and were not explained by adaptive emotional responses. Overall, the results suggest that self-compassion spurs positive adjustment in the face of regrets. PMID:26791595

  20. Benchmark Evaluation of HTR-PROTEUS Pebble Bed Experimental Program

    SciTech Connect

    Bess, John D.; Montierth, Leland; Köberl, Oliver; Snoj, Luka

    2014-10-09

    Benchmark models were developed to evaluate 11 critical core configurations of the HTR-PROTEUS pebble bed experimental program. Various additional reactor physics measurements were performed as part of this program; currently only a total of 37 absorber rod worth measurements have been evaluated as acceptable benchmark experiments for Cores 4, 9, and 10. Dominant uncertainties in the experimental keff for all core configurations come from uncertainties in the ²³⁵U enrichment of the fuel, impurities in the moderator pebbles, and the density and impurity content of the radial reflector. Calculations of keff with MCNP5 and ENDF/B-VII.0 neutron nuclear data are greater than the benchmark values but within 1% and also within the 3σ uncertainty, except for Core 4, which is the only randomly packed pebble configuration. Repeated calculations of keff with MCNP6.1 and ENDF/B-VII.1 are lower than the benchmark values and within 1% (~3σ) except for Cores 5 and 9, which calculate lower than the benchmark eigenvalues within 4σ. The primary difference between the two nuclear data libraries is the adjustment of the absorption cross section of graphite. Simulations of the absorber rod worth measurements are within 3σ of the benchmark experiment values. The complete benchmark evaluation details are available in the 2014 edition of the International Handbook of Evaluated Reactor Physics Benchmark Experiments.
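
    The acceptance logic quoted above (calculations within 1% of the benchmark value and within its 3σ uncertainty) reduces to a simple comparison; a minimal sketch with illustrative numbers, not the evaluation's actual data:

    ```python
    def agrees(calc, bench, sigma, n_sigma=3, rel_tol=0.01):
        """True if a calculated keff is within n_sigma benchmark uncertainties
        and within a relative tolerance of the benchmark value."""
        return (abs(calc - bench) <= n_sigma * sigma
                and abs(calc - bench) / bench <= rel_tol)

    # Illustrative values only (not from the HTR-PROTEUS evaluation):
    print(agrees(calc=1.0065, bench=1.0000, sigma=0.0030))  # True: within 3 sigma and 1%
    ```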

  2. Core Benchmarks Descriptions

    SciTech Connect

    Pavlovichev, A.M.

    2001-05-24

    Current regulations require that the design of new fuel cycles for nuclear power installations be accompanied by a calculational justification performed with certified computer codes. This guarantees that the calculational results obtained will be within the limits of the declared uncertainties indicated in the certificate for the corresponding computer code issued by Gosatomnadzor of the Russian Federation (GAN). The formal justification of the declared uncertainties is the comparison of calculational results obtained with a commercial code against the results of experiments, or against calculational tests computed, with a defined uncertainty, by certified precision codes such as MCU. The current level of international cooperation is enlarging the bank of experimental and calculational benchmarks acceptable for the certification of commercial codes used to design fuel loadings with MOX fuel. In particular, work is practically finished on forming a list of calculational benchmarks for the certification of the TVS-M code as applied to MOX fuel assembly calculations. The results of these activities are presented.

  3. Physics of Colloids in Space--Plus (PCS+) Experiment Completed Flight Acceptance Testing

    NASA Technical Reports Server (NTRS)

    Doherty, Michael P.

    2004-01-01

    The Physics of Colloids in Space--Plus (PCS+) experiment successfully completed system-level flight acceptance testing in the fall of 2003. This testing included electromagnetic interference (EMI) testing, vibration testing, and thermal testing. PCS+, an Expedite the Process of Experiments to Space Station (EXPRESS) Rack payload, will deploy a second set of colloid samples within the PCS flight hardware system that flew on the International Space Station (ISS) from April 2001 to June 2002. PCS+ is slated to return to the ISS in late 2004 or early 2005.

  4. The effects of video compression on acceptability of images for monitoring life sciences experiments

    NASA Technical Reports Server (NTRS)

    Haines, Richard F.; Chuang, Sherry L.

    1992-01-01

    Future manned space operations for Space Station Freedom will call for a variety of carefully planned multimedia digital communications, including full-frame-rate color video, to support remote operations of scientific experiments. This paper presents the results of an investigation to determine if video compression is a viable solution to transmission bandwidth constraints. It reports on the impact of different levels of compression and associated calculational parameters on image acceptability to investigators in life-sciences research at ARC. Three nonhuman life-sciences disciplines (plant, rodent, and primate biology) were selected for this study. A total of 33 subjects viewed experimental scenes in their own scientific disciplines. Ten plant scientists viewed still images of wheat stalks at various stages of growth. Each image was compressed to four different compression levels using the Joint Photographic Experts Group (JPEG) standard algorithm, and the images were presented in random order. Twelve and eleven staff members viewed 30-sec videotaped segments showing small rodents and a small primate, respectively. Each segment was repeated at four different compression levels in random order using an inverse cosine transform (ICT) algorithm. Each viewer made a series of subjective image-quality ratings. There was a significant difference in image ratings according to the type of scene viewed within disciplines; thus, ratings were scene dependent. Image (still and motion) acceptability does, in fact, vary according to compression level. The JPEG still-image-compression levels, even with the large range of 5:1 to 120:1 in this study, yielded equally high levels of acceptability. In contrast, the ICT algorithm for motion compression yielded a sharp decline in acceptability below 768 kb/sec. Therefore, if video compression is to be used as a solution for overcoming transmission bandwidth constraints, the effective management of the ratio and compression parameters
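
    A modern analogue of the still-image condition in this study is easy to reproduce: encode the same frame at several JPEG quality settings and record the resulting compression ratios for a rating experiment. A minimal sketch using Pillow follows; the file names are placeholders, and note that Pillow's quality parameter is not the same scale as the 5:1-120:1 ratios reported above:

    ```python
    import os
    from PIL import Image

    src = "wheat_stalks.png"          # placeholder input frame
    raw_size = os.path.getsize(src)
    img = Image.open(src).convert("RGB")

    for quality in (95, 75, 50, 10):  # sweep from mild to aggressive compression
        out = f"wheat_q{quality}.jpg"
        img.save(out, "JPEG", quality=quality)
        ratio = raw_size / os.path.getsize(out)
        print(f"quality={quality:3d}  approx. compression ratio {ratio:.1f}:1")
    ```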

  5. Integrating CFD, CAA, and Experiments Towards Benchmark Datasets for Airframe Noise Problems

    NASA Technical Reports Server (NTRS)

    Choudhari, Meelan M.; Yamamoto, Kazuomi

    2012-01-01

    Airframe noise corresponds to the acoustic radiation due to turbulent flow in the vicinity of airframe components such as high-lift devices and landing gears. The combination of geometric complexity, high Reynolds number turbulence, multiple regions of separation, and a strong coupling with adjacent physical components makes the problem of airframe noise highly challenging. Since 2010, the American Institute of Aeronautics and Astronautics has organized an ongoing series of workshops devoted to Benchmark Problems for Airframe Noise Computations (BANC). The BANC workshops are aimed at enabling systematic progress in the understanding and high-fidelity prediction of airframe noise via collaborative investigations that integrate state-of-the-art computational fluid dynamics, computational aeroacoustics, and in-depth, holistic, multi-facility measurements targeting a selected set of canonical yet realistic configurations. This paper provides a brief summary of the BANC effort, including its technical objectives, strategy, and selective outcomes thus far.

  6. Trust, confidence, procedural fairness, outcome fairness, moral conviction, and the acceptance of GM field experiments.

    PubMed

    Siegrist, Michael; Connor, Melanie; Keller, Carmen

    2012-08-01

    In 2005, Swiss citizens endorsed a moratorium on gene technology, resulting in the prohibition of the commercial cultivation of genetically modified crops and the growth of genetically modified animals until 2013. However, scientific research was not affected by this moratorium, and in 2008, GMO field experiments were conducted that allowed us to examine the factors that influence their acceptance by the public. In this study, trust and confidence items were analyzed using principal component analysis. The analysis revealed the following three factors: "economy/health and environment" (value similarity based trust), "trust and honesty of industry and scientists" (value similarity based trust), and "competence" (confidence). The results of a regression analysis showed that all three factors significantly influenced the acceptance of GM field experiments. Furthermore, risk communication scholars have suggested that fairness also plays an important role in the acceptance of environmental hazards. We, therefore, included measures for outcome fairness and procedural fairness in our model. However, the impact of fairness may be moderated by moral conviction. That is, fairness may be significant for people for whom GMO is not an important issue, but not for people for whom GMO is an important issue. The regression analysis showed that, in addition to the trust and confidence factors, moral conviction, outcome fairness, and procedural fairness were significant predictors. The results suggest that the influence of procedural fairness is even stronger for persons having high moral convictions compared with persons having low moral convictions. PMID:22150405

  7. Benchmarking HRD.

    ERIC Educational Resources Information Center

    Ford, Donald J.

    1993-01-01

    Discusses benchmarking, the continuous process of measuring one's products, services, and practices against those recognized as leaders in that field to identify areas for improvement. Examines ways in which benchmarking can benefit human resources functions. (JOW)

  8. Benchmark experiments and numerical modelling of the columnar-equiaxed dendritic growth in the transparent alloy Neopentylglycol-(d)Camphor

    NASA Astrophysics Data System (ADS)

    Sturz, L.; Wu, M.; Zimmermann, G.; Ludwig, A.; Ahmadein, M.

    2015-06-01

    Solidification benchmark experiments on columnar and equiaxed dendritic growth, as well as on the columnar-equiaxed transition (CET), have been carried out under diffusion-dominated conditions for heat and mass transfer in a low-gravity environment. The system under investigation is the transparent organic alloy Neopentylglycol-37.5wt.-%(d)Camphor, processed aboard a TEXUS sounding rocket flight. Solidification was observed by standard optical methods, in addition to measurements of the thermal fields within the sheet-like experimental cells of 1 mm thickness. The dendrite tip kinetics, primary dendrite arm spacing, temporal and spatial temperature evolution, columnar tip velocity, and the critical parameters at the CET have been analysed. Here we focus on a detailed comparison of the experiment "TRACE" with a 5-phase volume-averaging model, to validate the numerical model and to give insight into the corresponding physical mechanisms and parameters leading to the CET. The results are discussed in terms of sensitivity to numerical parameters.
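
    For orientation, the dendrite tip kinetics analysed in experiments of this kind are often summarized by a power-law fit of tip velocity to undercooling (our gloss of common practice; the paper's actual model is the cited 5-phase volume-averaging formulation):

    $$ v_{\mathrm{tip}} = a\,(\Delta T)^{b}, $$

    where ΔT is the tip undercooling and a, b are alloy-dependent fit constants.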

  9. Public acceptability of population-level interventions to reduce alcohol consumption: a discrete choice experiment.

    PubMed

    Pechey, Rachel; Burge, Peter; Mentzakis, Emmanouil; Suhrcke, Marc; Marteau, Theresa M

    2014-07-01

    Public acceptability influences policy action, but the most acceptable policies are not always the most effective. This discrete choice experiment provides a novel investigation of the acceptability of different interventions to reduce alcohol consumption and the effect of information on expected effectiveness, using a UK general population sample of 1202 adults. Policy options included high, medium and low intensity versions of: Minimum Unit Pricing (MUP) for alcohol; reducing numbers of alcohol retail outlets; and regulating alcohol advertising. Outcomes of interventions were predicted for: alcohol-related crimes; alcohol-related hospital admissions; and heavy drinkers. First, the models obtained were used to predict preferences if expected outcomes of interventions were not taken into account. In such models around half of participants or more were predicted to prefer the status quo over implementing outlet reductions or higher intensity MUP. Second, preferences were predicted when information on expected outcomes was considered, with most participants now choosing any given intervention over the status quo. Acceptability of MUP interventions increased by the greatest extent: from 43% to 63% preferring MUP of £1 to the status quo. Respondents' own drinking behaviour also influenced preferences, with around 90% of non-drinkers being predicted to choose all interventions over the status quo, and with more moderate than heavy drinkers favouring a given policy over the status quo. Importantly, the study findings suggest public acceptability of alcohol interventions is dependent on both the nature of the policy and its expected effectiveness. Policy-makers struggling to mobilise support for hitherto unpopular but promising policies should consider giving greater prominence to their expected outcomes. PMID:24858928
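
    In a discrete choice experiment of this kind, the conditional logit model gives the probability of choosing alternative i from choice set C as a function of attributes x and price p, and marginal willingness-to-pay (or willingness-to-accept) follows as a ratio of coefficients; these are standard results, stated here in our notation rather than the paper's:

    $$ P(i \mid C) = \frac{\exp(\beta' x_i + \beta_p\, p_i)}{\sum_{j \in C} \exp(\beta' x_j + \beta_p\, p_j)}, \qquad \mathrm{WTP}_k = -\frac{\beta_k}{\beta_p}. $$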

  10. TESTING AND ACCEPTANCE OF FUEL PLATES FOR RERTR FUEL DEVELOPMENT EXPERIMENTS

    SciTech Connect

    J.M. Wight; G.A. Moore; S.C. Taylor

    2008-10-01

    This paper discusses how candidate fuel plates for RERTR Fuel Development experiments are examined and tested for acceptance prior to reactor insertion. These tests include destructive and nondestructive examinations (DE and NDE). The DE includes blister annealing for dispersion fuel plates, bend testing of adjacent cladding, and microscopic examination of archive fuel plates. The NDE includes ultrasonic (UT) scanning and radiography. UT tests include an ultrasonic scan for areas of "debonds" and a high-frequency ultrasonic scan to determine the "minimum cladding" over the fuel. Radiography inspections include identifying fuel outside of the maximum fuel zone and measurements and calculations for fuel density. Details of each test are provided and acceptance criteria are defined. These tests help to provide a high level of confidence that the fuel plate will perform in the reactor without a breach in the cladding.
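
    The fuel-density measurements from radiography mentioned above rest on the usual exponential attenuation relation, from which areal density can be recovered; schematically (standard radiographic densitometry, not the report's specific procedure):

    $$ I = I_0\, e^{-\mu_m (\rho t)} \quad\Rightarrow\quad \rho t = \frac{1}{\mu_m}\,\ln\frac{I_0}{I}, $$

    where I_0 and I are the incident and transmitted intensities, μ_m the mass attenuation coefficient, and ρt the areal density of the fuel meat.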

  11. ENDF/B-VII.1 Neutron Cross Section Data Testing with Critical Assembly Benchmarks and Reactor Experiments

    SciTech Connect

    G. Palmiotti

    2011-12-01

    The ENDF/B-VII.1 library is the latest revision to the United States' Evaluated Nuclear Data File (ENDF). The ENDF library is currently in its seventh generation, with ENDF/B-VII.0 being released in 2006. This revision expands upon that library, including the addition of new evaluated files (was 393 neutron files previously, now 418 including replacement of elemental vanadium and zinc evaluations with isotopic evaluations) and extension or updating of many existing neutron data files. Complete details are provided in the companion paper [1]. This paper focuses on how accurately application libraries may be expected to perform in criticality calculations with these data. Continuous energy cross section libraries, suitable for use with the MCNP Monte Carlo transport code, have been generated and applied to a suite of nearly one thousand critical benchmark assemblies defined in the International Criticality Safety Benchmark Evaluation Project's International Handbook of Evaluated Criticality Safety Benchmark Experiments. This suite covers uranium and plutonium fuel systems in a variety of forms such as metallic, oxide or solution, and under a variety of spectral conditions, including unmoderated (i.e., bare), metal reflected and water or other light element reflected. Assembly eigenvalues that were accurately predicted with ENDF/B-VII.0 cross sections such as unmoderated and uranium reflected 235U and 239Pu assemblies, HEU solution systems and LEU oxide lattice systems that mimic commercial PWR configurations continue to be accurately calculated with ENDF/B-VII.1 cross sections, and deficiencies in predicted eigenvalues for assemblies containing selected materials, including titanium, manganese, cadmium and tungsten are greatly reduced. Improvements are also confirmed for selected actinide reaction rates such as 236U capture. Other deficiencies, such as the overprediction of Pu solution system critical eigenvalues and a decreasing trend in calculated eigenvalue for

  12. ENDF/B-VII.1 Neutron Cross Section Data Testing with Critical Assembly Benchmarks and Reactor Experiments

    NASA Astrophysics Data System (ADS)

    Kahler, A. C.; MacFarlane, R. E.; Mosteller, R. D.; Kiedrowski, B. C.; Frankle, S. C.; Chadwick, M. B.; McKnight, R. D.; Lell, R. M.; Palmiotti, G.; Hiruta, H.; Herman, M.; Arcilla, R.; Mughabghab, S. F.; Sublet, J. C.; Trkov, A.; Trumbull, T. H.; Dunn, M.

    2011-12-01

    The ENDF/B-VII.1 library is the latest revision to the United States' Evaluated Nuclear Data File (ENDF). The ENDF library is currently in its seventh generation, with ENDF/B-VII.0 being released in 2006. This revision expands upon that library, including the addition of new evaluated files (was 393 neutron files previously, now 423 including replacement of elemental vanadium and zinc evaluations with isotopic evaluations) and extension or updating of many existing neutron data files. Complete details are provided in the companion paper [M. B. Chadwick et al., "ENDF/B-VII.1 Nuclear Data for Science and Technology: Cross Sections, Covariances, Fission Product Yields and Decay Data," Nuclear Data Sheets, 112, 2887 (2011)]. This paper focuses on how accurately application libraries may be expected to perform in criticality calculations with these data. Continuous energy cross section libraries, suitable for use with the MCNP Monte Carlo transport code, have been generated and applied to a suite of nearly one thousand critical benchmark assemblies defined in the International Criticality Safety Benchmark Evaluation Project's International Handbook of Evaluated Criticality Safety Benchmark Experiments. This suite covers uranium and plutonium fuel systems in a variety of forms such as metallic, oxide or solution, and under a variety of spectral conditions, including unmoderated (i.e., bare), metal reflected and water or other light element reflected. Assembly eigenvalues that were accurately predicted with ENDF/B-VII.0 cross sections such as unmoderated and uranium reflected 235U and 239Pu assemblies, HEU solution systems and LEU oxide lattice systems that mimic commercial PWR configurations continue to be accurately calculated with ENDF/B-VII.1 cross sections, and deficiencies in predicted eigenvalues for assemblies containing selected materials, including titanium, manganese, cadmium and tungsten are greatly reduced. Improvements are also confirmed for selected

  13. ENDF/B-VII.1 Neutron Cross Section Data Testing with Critical Assembly Benchmarks and Reactor Experiments

    SciTech Connect

    Kahler, A. C.; MacFarlane, R. E.; Mosteller, R. D.; Kiedrowski, B. C.; Frankle, S. C.; Chadwick, M. B.; McKnight, R. D.; Lell, R. M.; Palmiotti, G.; Hiruta, H.; Herman, M.; Arcilla, R.; Mughabghab, S. F.; Sublet, J. C.; Trkov, A.; Trumbull, T. H.; Dunn, M.

    2011-12-01

    The ENDF/B-VII.1 library is the latest revision to the United States' Evaluated Nuclear Data File (ENDF). The ENDF library is currently in its seventh generation, with ENDF/B-VII.0 being released in 2006. This revision expands upon that library, including the addition of new evaluated files (was 393 neutron files previously, now 423 including replacement of elemental vanadium and zinc evaluations with isotopic evaluations) and extension or updating of many existing neutron data files. Complete details are provided in the companion paper [M. B. Chadwick et al., 'ENDF/B-VII.1 Nuclear Data for Science and Technology: Cross Sections, Covariances, Fission Product Yields and Decay Data,' Nuclear Data Sheets, 112, 2887 (2011)]. This paper focuses on how accurately application libraries may be expected to perform in criticality calculations with these data. Continuous energy cross section libraries, suitable for use with the MCNP Monte Carlo transport code, have been generated and applied to a suite of nearly one thousand critical benchmark assemblies defined in the International Criticality Safety Benchmark Evaluation Project's International Handbook of Evaluated Criticality Safety Benchmark Experiments. This suite covers uranium and plutonium fuel systems in a variety of forms such as metallic, oxide or solution, and under a variety of spectral conditions, including unmoderated (i.e., bare), metal reflected and water or other light element reflected. Assembly eigenvalues that were accurately predicted with ENDF/B-VII.0 cross sections such as unmoderated and uranium reflected 235U and 239Pu assemblies, HEU solution systems and LEU oxide lattice systems that mimic commercial PWR configurations continue to be accurately calculated with ENDF/B-VII.1 cross sections, and deficiencies in predicted eigenvalues for assemblies containing selected materials, including titanium, manganese, cadmium and tungsten are greatly reduced. Improvements are also confirmed for

  14. The journey to accepting support: how parents of profoundly disabled children experience support in their lives.

    PubMed

    Brett, Jane

    2004-10-01

    Advances in medical knowledge and care have extended the lives of children with profound and multiple disabilities. In most cases it is the parents who meet the often complex and continual needs of their child with disabilities in their own home. This study explored the experience of support in the lives of such parents. The interpretive, hermeneutic phenomenology of Heidegger was employed to create a detailed and authentic account of the parents' experiences of support. Five interrelated themes emerged from data from in-depth interviews with six parents randomly selected from a purposive sample in a special school setting. The themes were: parents' feelings about support, the journey to accepting support, support as a loss, disability and the parent and the supportive relationship. Understanding the experience of support from the parent's perspective may lead to a consideration of flexible systems that challenge practice to ensure that supporters listen, learn, develop and deliver support in ways that are helpful. PMID:15537108

  15. Benchmark of the FLUKA model of crystal channeling against the UA9-H8 experiment

    NASA Astrophysics Data System (ADS)

    Schoofs, P.; Cerutti, F.; Ferrari, A.; Smirnov, G.

    2015-07-01

    Channeling in bent crystals is increasingly considered as an option for the collimation of high-energy particle beams. The installation of crystals in the LHC has taken place during this past year and aims at demonstrating the feasibility of crystal collimation and a possible cleaning efficiency improvement. The performance of CERN collimation insertions is evaluated with the Monte Carlo code FLUKA, which is capable of simulating energy deposition in collimators as well as beam loss monitor signals. A new model of crystal channeling was developed specifically so that similar simulations can be conducted in the case of crystal-assisted collimation. In this paper, the most recent results of this model are brought forward in the framework of a joint activity inside the UA9 collaboration to benchmark the different simulation tools available. The performance of crystal STF 45, produced at INFN Ferrara, was measured at the H8 beamline at CERN in 2010 and serves as the basis for the comparison. Distributions of deflected particles are shown to be in very good agreement with experimental data. Calculated dechanneling lengths and crystal performance in the transition region between amorphous regime and volume reflection are also close to the measured ones.
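
    For orientation, channeling models of this kind are typically characterized by the Lindhard critical angle, below which a particle entering the crystal planes can be captured into the channeling regime (a textbook relation, not a statement about FLUKA internals):

    $$ \theta_c = \sqrt{\frac{2 U_0}{p v}}, $$

    where U_0 is the planar potential-well depth and pv the product of the particle's momentum and velocity.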

  16. Three dimensional modeling of Laser-Plasma interaction: benchmarking our predictive modeling tools vs. experiments

    SciTech Connect

    Divol, L; Berger, R; Meezan, N; Froula, D H; Dixit, S; Suter, L; Glenzer, S H

    2007-11-08

    We have developed a new target platform to study Laser Plasma Interaction in ignition-relevant conditions at the Omega laser facility (LLE/Rochester) [1]. By shooting an interaction beam along the axis of a gas-filled hohlraum heated by up to 17 kJ of heater beam energy, we were able to create a millimeter-scale underdense uniform plasma at electron temperatures above 3 keV. Extensive Thomson scattering measurements allowed us to benchmark our hydrodynamic simulations performed with HYDRA [2]. As a result of this effort, we can use these simulations with much confidence as input parameters for our LPI simulation code pF3d [3]. In this paper, we show that by using accurate hydrodynamic profiles and full three-dimensional simulations, including a realistic modeling of the laser intensity pattern generated by various smoothing options, whole-beam three-dimensional linear kinetic modeling of stimulated Brillouin scattering reproduces the experimental measurements quantitatively (SBS thresholds, reflectivity values, and the absence of measurable SRS). This good agreement was made possible by the recent increase in computing power routinely available for such simulations. These simulations accurately predicted the strong reduction of SBS measured when polarization smoothing is used.

  17. Three-dimensional modeling of laser-plasma interaction: Benchmarking our predictive modeling tools versus experiments

    SciTech Connect

    Divol, L.; Berger, R. L.; Meezan, N. B.; Froula, D. H.; Dixit, S.; Michel, P.; London, R.; Strozzi, D.; Ross, J.; Williams, E. A.; Still, B.; Suter, L. J.; Glenzer, S. H.

    2008-05-15

    New experimental capabilities [Froula et al., Phys. Rev. Lett. 98, 085001 (2007)] have been developed to study laser-plasma interaction (LPI) in ignition-relevant conditions at the Omega laser facility (LLE/Rochester). By shooting an interaction beam along the axis of a gas-filled hohlraum heated by up to 17 kJ of heater beam energy, a millimeter-scale underdense uniform plasma at electron temperatures above 3 keV was created. Extensive Thomson scattering measurements allowed the hydrodynamic simulations performed with HYDRA [Meezan et al., Phys. Plasmas 14, 056304 (2007)] to be benchmarked. As a result of this effort, these simulations can be used with much confidence as input parameters for the LPI simulation code PF3D [Berger et al., Phys. Plasmas 5, 4337 (1998)]. In this paper, it is shown that by using accurate hydrodynamic profiles and full three-dimensional simulations, including a realistic modeling of the laser intensity pattern generated by various smoothing options, whole-beam three-dimensional linear kinetic modeling of stimulated Brillouin scattering (SBS) reproduces the experimental measurements quantitatively (SBS thresholds, reflectivity values, and the absence of measurable stimulated Raman scattering). This good agreement was made possible by the recent increase in computing power routinely available for such simulations. These simulations accurately predicted the strong reduction of SBS measured when polarization smoothing is used.
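
    In the linear kinetic regime discussed above, backscatter reflectivity is conventionally modeled as a seed amplified by a convective gain exponent; schematically (a generic linear-gain relation, not the PF3D implementation itself):

    $$ R \simeq R_{\mathrm{seed}}\, e^{G}, $$

    where G is the spatially integrated SBS gain along the beam path; the measured thresholds correspond to G becoming large enough for R to rise above background.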

  18. TWODANT benchmark. Progress report

    SciTech Connect

    Lee, Sung

    1994-01-11

    The TWODANT (Two-Dimensional, Diffusion-Accelerated, Neutral-Particle Transport) code has been benchmarked against 6 critical experiments (the Jezebel plutonium critical assembly), and its calculated k_eff values have been compared with those of the KENO and MCNP codes.

  19. NASA Controller Acceptability Study 1(CAS-1) Experiment Description and Initial Observations

    NASA Technical Reports Server (NTRS)

    Chamberlain, James P.; Consiglio, Maria C.; Comstock, James R., Jr.; Ghatas, Rania W.; Munoz, Cesar

    2015-01-01

    This paper describes the Controller Acceptability Study 1 (CAS-1) experiment that was conducted by NASA Langley Research Center personnel from January through March 2014 and presents partial CAS-1 results. CAS-1 employed 14 air traffic controller volunteers as research subjects to assess the viability of simulated future unmanned aircraft systems (UAS) operating alongside manned aircraft in moderate-density, moderate-complexity Class E airspace. These simulated UAS were equipped with a prototype pilot-in-the-loop (PITL) Detect and Avoid (DAA) system, specifically the Self-Separation (SS) function of such a system based on Stratway+ software to replace the see-and-avoid capabilities of manned aircraft pilots. A quantitative CAS-1 objective was to determine horizontal miss distance (HMD) values for SS encounters that were most acceptable to air traffic controllers, specifically HMD values that were assessed as neither unsafely small nor disruptively large. HMD values between 0.5 and 3.0 nautical miles (nmi) were assessed for a wide array of encounter geometries between UAS and manned aircraft. The paper includes brief introductory material about DAA systems and their SS functions, followed by descriptions of the CAS-1 simulation environment, prototype PITL SS capability, and experiment design, and concludes with presentation and discussion of partial CAS-1 data and results.
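
    The horizontal miss distance studied in CAS-1 is, geometrically, the horizontal separation at the closest point of approach of two trajectories. A minimal constant-velocity sketch follows; this is our illustration of the geometry, not the Stratway+ algorithm:

    ```python
    # Horizontal miss distance (HMD) at closest point of approach (CPA) for two
    # aircraft flying straight lines at constant velocity in a flat 2-D plane.
    def hmd(rel_pos, rel_vel):
        """rel_pos, rel_vel: (x, y) relative position (nmi) and velocity (nmi/h)."""
        rx, ry = rel_pos
        vx, vy = rel_vel
        v2 = vx * vx + vy * vy
        if v2 == 0.0:                    # no relative motion: separation is constant
            return (rx * rx + ry * ry) ** 0.5
        t_cpa = max(0.0, -(rx * vx + ry * vy) / v2)  # time of closest approach, >= 0
        cx, cy = rx + vx * t_cpa, ry + vy * t_cpa
        return (cx * cx + cy * cy) ** 0.5

    # Head-on encounter offset laterally by 1 nmi, closing at 400 kt:
    print(f"HMD = {hmd((10.0, 1.0), (-400.0, 0.0)):.2f} nmi")  # -> 1.00 nmi
    ```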

  20. Effects of an Educational Experience Incorporating an Inventory of Factors Potentially Influencing Student Acceptance of Biological Evolution

    ERIC Educational Resources Information Center

    Wiles, Jason R.; Alters, Brian

    2011-01-01

    This investigation provides an extensive review of scientific, religious, and otherwise non-scientific factors that may influence student acceptance of biological evolution. We also measure the extent to which students' levels of acceptance changed following an educational experience designed to address an inclusive inventory of factors identified…

  1. Willingness-To-Accept Pharmaceutical Retail Inconvenience: Evidence from a Contingent Choice Experiment

    PubMed Central

    Finlay, Keith; Stoecker, Charles; Cunningham, Scott

    2015-01-01

    Objectives Restrictions on retail purchases of pseudoephedrine are one regulatory approach to reduce the social costs of methamphetamine production and use, but may impose costs on legitimate users of nasal decongestants. This is the first study to evaluate the consumer welfare costs of restricting access to medications. Our objective was to measure the inconvenience cost consumers place on restrictions for cold medication purchases including identification requirements, purchase limits, over-the-counter availability, prescription requirements, and the active ingredient. Methods We conducted a contingent choice experiment with Amazon Mechanical Turk workers that presented participants with randomized, hypothetical product prices and combinations of restrictions that reflect the range of public policies. We used a conditional logit model to calculate willingness-to-accept each restriction. Results Respondents' willingness-to-accept prescription requirements was $14.17 ($9.76–$18.58) and behind-the-counter restrictions was $9.68 ($7.03–$12.33) per box of pseudoephedrine product. Participants were willing to pay $4.09 ($1.66–$6.52) per box to purchase pseudoephedrine-based products over phenylephrine-based products. Conclusions Restricting access to medicines as a means of reducing the social costs of non-medical use can imply large inconvenience costs for legitimate consumers. These results are relevant to discussions of retail access restrictions on other medications. PMID:26024444

  2. 'Feel the Feeling': Psychological practitioners' experience of acceptance and commitment therapy well-being training in the workplace.

    PubMed

    Wardley, Matt Nj; Flaxman, Paul E; Willig, Carla; Gillanders, David

    2016-08-01

    This empirical study investigates psychological practitioners' experience of worksite training in acceptance and commitment therapy using an interpretative phenomenological analysis methodology. Semi-structured interviews were conducted with eight participants, and three themes emerged from the analysis: 'influence of previous experiences', 'self and others', and 'impact and application'. The significance of the experiential nature of the acceptance and commitment therapy training is explored, as well as the dual aspects of developing participants' self-care while also considering their own clinical practice. Consistencies and inconsistencies across acceptance and commitment therapy processes are considered, as well as clinical implications, study limitations, and future research suggestions. PMID:25476570

  3. Rape myth acceptance and judgments of vulnerability to sexual assault: an Internet experiment.

    PubMed

    Bohner, Gerd; Danner, Unna N; Siebler, Frank; Samson, Gary B

    2002-01-01

    Processing strategies in risk assessment were studied in an Internet experiment. Women (N = 399) who were either low or high in rape myth acceptance (RMA) were asked to recall either two or six behaviors that either increase or decrease the risk of being sexually assaulted. Later they judged their personal vulnerability to sexual assault under either no time pressure (no response deadline) or time pressure (response deadline of 5 s). Without time pressure, the results were opposite to previous research: Women low in RMA relied on ease of recall and reported higher vulnerability after recalling few rather than many risk increasing behaviors, or many rather than few risk-decreasing behaviors; women high in RMA relied on the amount of information recalled, which resulted in an opposite pattern of vulnerability judgments. No influences of ease of recall or amount recalled on vulnerability judgments were detected under time pressure. PMID:12455332

  4. Speech recognition acceptance by physicians: A temporal replication of a survey of expectations and experiences.

    PubMed

    Lyons, Joseph P; Sanders, Salvatore A; Fredrick Cesene, Daniel; Palmer, Christopher; Mihalik, Valerie L; Weigel, Tracy

    2016-09-01

    A replication survey of physicians' expectations and experience with speech recognition technology was conducted before and after its implementation. The expectations survey was administered to emergency medicine physicians prior to training with the speech recognition system. The experience survey consisting of similar items was administered after physicians gained speech recognition technology experience. In this study, 82 percent of the physicians were initially optimistic that the use of speech recognition technology with the electronic medical record was a good idea. After using the technology for 6 months, 87 percent of the physicians agreed that speech recognition technology was a good idea. In addition, 72 percent of the physicians in this study had an expectation that the use of speech recognition technology would save time. After use in the clinical environment, 51 percent of the participants reported time savings. The increased acceptance of speech recognition technology by physicians in this study was attributed to improvements in the technology and the electronic medical record. PMID:26187989

  5. A high resolution, broad energy acceptance spectrometer for laser wakefield acceleration experiments.

    PubMed

    Sears, Christopher M S; Cuevas, Sofia Benavides; Schramm, Ulrich; Schmid, Karl; Buck, Alexander; Habs, Dieter; Krausz, Ferenc; Veisz, Laszlo

    2010-07-01

    Laser wakefield experiments present a unique challenge in measuring the resulting electron energy properties due to the large energy range of interest, typically several 100 MeV, and the large electron beam divergence and pointing jitter >1 mrad. In many experiments the energy resolution and accuracy are limited by the convolved transverse spot size and pointing jitter of the beam. In this paper we present an electron energy spectrometer consisting of two magnets designed specifically for laser wakefield experiments. In the primary magnet the field is produced by permanent magnets. A second optional electromagnet can be used to obtain better resolution for electron energies above 75 MeV. The spectrometer has an acceptance of 2.5-400 MeV (E_max/E_min > 100) with a resolution of better than 1% rms for electron energies above 25 MeV. This high resolution is achieved by refocusing electrons in the energy plane and without any postprocessing image deconvolution. Finally, the spectrometer employs two complementary detection mechanisms: (1) absolutely calibrated scintillation screens imaged by cameras outside the vacuum chamber and (2) an array of scintillating fibers coupled to a low-noise charge-coupled device. PMID:20687714
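
    The dispersion underlying such a spectrometer follows from magnetic rigidity: the bending radius of a relativistic electron in the dipole field sets the position-energy mapping on the detection screens (a textbook relation, not the instrument's calibration):

    $$ r = \frac{p}{eB} \approx \frac{E}{e\,c\,B}, $$

    where p is the electron momentum and B the dipole field; the approximation holds for E ≫ m_e c², i.e., over most of the 2.5-400 MeV acceptance quoted above.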

  6. A high resolution, broad energy acceptance spectrometer for laser wakefield acceleration experiments

    SciTech Connect

    Sears, Christopher M. S.; Cuevas, Sofia Benavides; Veisz, Laszlo; Schramm, Ulrich; Schmid, Karl; Buck, Alexander; Habs, Dieter; Krausz, Ferenc

    2010-07-15

    Laser wakefield experiments present a unique challenge in measuring the resulting electron energy properties due to the large energy range of interest, typically several 100 MeV, and the large electron beam divergence and pointing jitter >1 mrad. In many experiments the energy resolution and accuracy are limited by the convolved transverse spot size and pointing jitter of the beam. In this paper we present an electron energy spectrometer consisting of two magnets designed specifically for laser wakefield experiments. In the primary magnet the field is produced by permanent magnets. A second optional electromagnet can be used to obtain better resolution for electron energies above 75 MeV. The spectrometer has an acceptance of 2.5-400 MeV (E_max/E_min > 100) with a resolution of better than 1% rms for electron energies above 25 MeV. This high resolution is achieved by refocusing electrons in the energy plane and without any postprocessing image deconvolution. Finally, the spectrometer employs two complementary detection mechanisms: (1) absolutely calibrated scintillation screens imaged by cameras outside the vacuum chamber and (2) an array of scintillating fibers coupled to a low-noise charge-coupled device.

  7. Verification and validation benchmarks.

    SciTech Connect

    Oberkampf, William Louis; Trucano, Timothy Guy

    2007-02-01

    Verification and validation (V&V) are the primary means to assess the accuracy and reliability of computational simulations. V&V methods and procedures have fundamentally improved the credibility of simulations in several high-consequence fields, such as nuclear reactor safety, underground nuclear waste storage, and nuclear weapon safety. Although the terminology is not uniform across engineering disciplines, code verification deals with assessing the reliability of the software coding, and solution verification deals with assessing the numerical accuracy of the solution to a computational model. Validation addresses the physics modeling accuracy of a computational simulation by comparing the computational results with experimental data. Code verification benchmarks and validation benchmarks have been constructed for a number of years in every field of computational simulation. However, no comprehensive guidelines have been proposed for the construction and use of V&V benchmarks. For example, the field of nuclear reactor safety has not focused on code verification benchmarks, but it has placed great emphasis on developing validation benchmarks. Many of these validation benchmarks are closely related to the operations of actual reactors at near-safety-critical conditions, as opposed to being more fundamental-physics benchmarks. This paper presents recommendations for the effective design and use of code verification benchmarks based on manufactured solutions, classical analytical solutions, and highly accurate numerical solutions. In addition, this paper presents recommendations for the design and use of validation benchmarks, highlighting the careful design of building-block experiments, the estimation of experimental measurement uncertainty for both inputs and outputs to the code, validation metrics, and the role of model calibration in validation. It is argued that the understanding of predictive capability of a computational model is built on the level of
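
    As a concrete illustration of code verification by manufactured solutions, the sketch below (ours, not from the paper) manufactures u(x) = sin(pi x) for the model problem -u'' = f with homogeneous boundary conditions, solves it with a second-order central-difference scheme on two grids, and checks that the observed convergence order is close to 2:

        import numpy as np

        def solve_poisson(n):
            # Manufactured solution u(x) = sin(pi x) on (0,1), u(0) = u(1) = 0;
            # substituting into -u'' = f gives the source f(x) = pi^2 sin(pi x).
            h = 1.0 / (n + 1)
            x = np.linspace(h, 1.0 - h, n)                 # interior nodes
            f = np.pi**2 * np.sin(np.pi * x)
            A = (2.0 * np.eye(n) - np.eye(n, k=1) - np.eye(n, k=-1)) / h**2
            u = np.linalg.solve(A, f)                      # central differences
            return np.abs(u - np.sin(np.pi * x)).max()     # max-norm error

        e_coarse, e_fine = solve_poisson(63), solve_poisson(127)
        # Halving h should reduce the error about 4x for a second-order scheme.
        print("observed order:", np.log2(e_coarse / e_fine))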

  8. Validation of the Serpent 2 code on TRIGA Mark II benchmark experiments.

    PubMed

    Ćalić, Dušan; Žerovnik, Gašper; Trkov, Andrej; Snoj, Luka

    2016-01-01

    The main aim of this paper is the development and validation of a 3D computational model of the TRIGA research reactor using the Serpent 2 code. The calculated parameters were compared to the experimental results and to calculations performed with the MCNP code. The results show that the calculated normalized reaction rates and flux distribution within the core are in good agreement with MCNP and experiment, while in the reflector the flux distribution differs from the measurements by up to 3%. PMID:26516989
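
    Comparisons like these are commonly condensed into calculated-to-experimental (C/E) ratios and deviations expressed in units of the measurement uncertainty. A minimal sketch of that bookkeeping, with invented numbers rather than the TRIGA data:

        import numpy as np

        # Hypothetical normalized reaction rates: measurement with 1-sigma
        # uncertainty and a calculated value per detector position.
        measured   = np.array([1.00, 0.87, 0.64, 0.31])
        sigma      = np.array([0.02, 0.02, 0.02, 0.01])
        calculated = np.array([1.01, 0.88, 0.63, 0.32])

        ce  = calculated / measured              # C/E ratio per position
        dev = (calculated - measured) / sigma    # deviation in standard deviations
        for i, (r, d) in enumerate(zip(ce, dev)):
            print(f"position {i}: C/E = {r:.3f} ({d:+.1f} sigma)")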

  9. Electron-impact ionization of helium: A comprehensive experiment benchmarks theory

    SciTech Connect

    Ren, X.; Pflueger, T.; Senftleben, A.; Xu, S.; Dorn, A.; Ullrich, J.; Bray, I.; Fursa, D.V.; Colgan, J.; Pindzola, M.S.

    2011-05-15

    Single ionization of helium by 70.6-eV electron impact is studied in a comprehensive experiment covering a major part of the entire collision kinematics and the full 4π solid angle for the emitted electron. The absolutely normalized triple-differential experimental cross sections are compared with results from the convergent close-coupling (CCC) and the time-dependent close-coupling (TDCC) theories. Whereas excellent agreement with the TDCC prediction is only found for equal energy sharing, the CCC calculations are in excellent agreement with essentially all experimentally observed dynamical features, including the absolute magnitude of the cross sections.

  10. Acceptability of Financial Incentives for Health Behaviours: A Discrete Choice Experiment

    PubMed Central

    Giles, Emma L.; Becker, Frauke; Ternent, Laura; Sniehotta, Falko F.; McColl, Elaine

    2016-01-01

    Background Healthy behaviours are important determinants of health and disease, but many people find it difficult to perform these behaviours. Systematic reviews support the use of personal financial incentives to encourage healthy behaviours. There is concern that financial incentives may be unacceptable to the public, those delivering services and policymakers, but this has been poorly studied. Without widespread acceptability, financial incentives are unlikely to be widely implemented. We sought to answer two questions: what are the relative preferences of UK adults for attributes of financial incentives for healthy behaviours? Do preferences vary according to the respondents’ socio-demographic characteristics? Methods We conducted an online discrete choice experiment. Participants were adult members of a market research panel living in the UK selected using quota sampling. Preferences were examined for financial incentives for: smoking cessation, regular physical activity, attendance for vaccination, and attendance for screening. Attributes of interest (and their levels) were: type of incentive (none, cash, shopping vouchers or lottery tickets); value of incentive (a continuous variable); schedule of incentive (same value each week, or value increases as behaviour change is sustained); other information provided (none, written information, face-to-face discussion, or both); and recipients (all eligible individuals, people living in low-income households, or pregnant women). Results Cash or shopping voucher incentives were preferred as much as, or more than, no incentive in all cases. Lower value incentives and those offered to all eligible individuals were preferred. Preferences for additional information provided alongside incentives varied between behaviours. Younger participants and men were more likely to prefer incentives. There were no clear differences in preference according to educational attainment. Conclusions Cash or shopping voucher

  11. Challenge of benchmarking simulation codes for the LANL beam-halo experiment.

    SciTech Connect

    Wangler, Thomas P.; Lysenko, W. P.; Qiang, J.; Garnett, R. W.

    2003-01-01

    We compare macroparticle simulations with beam-profile measurements from a proton beam-halo experiment in a study of beam-halo formation in mismatched beams in a 52-quadrupole periodic-focusing channel. The lack of detailed measurement of the initial distribution is an important issue for being able to make reliable predictions of the halo. We have found earlier that different initial distributions with the same Courant-Snyder parameters and emittances produce similar matched-beam profiles, but different mismatched-beam profiles in the transport system. Also, input distributions with greater population in the tails produce larger rates of emittance growth. We have concluded that using only the known Courant-Snyder parameters and emittances as input parameters is insufficient information for reliable simulations of beam halo formed in mismatched beams. The question is how to obtain the best estimate of the input beam distribution needed for more accurate simulations. In this paper, we investigate a new least squares fitting procedure, which is applied to the simulations used to determine the injected beam distribution, in an attempt to obtain a more accurate description of halo formation than from simulation alone.
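
    The fitting idea is generic: treat a few parameters of an assumed input distribution as unknowns and adjust them until the simulated profiles reproduce the measured ones in the least-squares sense. A toy sketch of that loop (the quadratic "simulation" stands in for the macroparticle code and is purely illustrative):

        import numpy as np
        from scipy.optimize import least_squares

        def simulate_profiles(params, s):
            # Stand-in for the macroparticle simulation: maps assumed input-beam
            # parameters to predicted rms beam sizes at scanner locations s.
            size, div, tail = params
            return size + div * s + tail * s**2

        s = np.arange(10, dtype=float)       # scanner index (arbitrary units)
        true = np.array([1.0, 0.3, 0.05])    # "unknown" injected-beam parameters
        rng = np.random.default_rng(0)
        measured = simulate_profiles(true, s) + rng.normal(0.0, 0.05, s.size)

        fit = least_squares(lambda p: simulate_profiles(p, s) - measured,
                            x0=[0.5, 0.5, 0.0])
        print("fitted input-distribution parameters:", fit.x)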

  12. Benchmark Experiments of Thermal Neutron and Capture Gamma-Ray Distributions in Concrete Using {sup 252}Cf

    SciTech Connect

    Asano, Yoshihiro; Sugita, Takeshi; Hirose, Hideyuki; Suzaki, Takenori

    2005-10-15

    The distributions of thermal neutrons and capture gamma rays in ordinary concrete were investigated by using ²⁵²Cf. Two subjects are considered. One is the benchmark experiments for the thermal neutron and the capture gamma-ray distributions in ordinary concrete. The thermal neutron and the capture gamma-ray distributions were measured by using gold-foil activation detectors and thermoluminescence detectors. These were compared with simulations using the discrete ordinates code ANISN with two different group structures of a cross-section library based on the new Japanese evaluation JENDL-3.3, showing reasonable agreement for both the fine and the coarse thermal-neutron group structures. The other is a comparison of simulations with two different cross-section libraries, JENDL-3.3 and ENDF/B-VI, for the deep penetration of neutrons in the concrete, showing close agreement in 0- to 100-cm-thick concrete. However, the differences in flux grow with increasing concrete thickness, reaching approximately a factor of 8 near a thickness of 4 m.
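
    The growth of the library-to-library difference with depth is what simple exponential attenuation predicts: if two libraries imply effective removal coefficients differing by delta, the flux ratio grows as exp(delta * t). A deliberately crude, constant-coefficient sketch that backs delta out of the reported factor of 8 at 4 m:

        import math

        # flux ~ exp(-Sigma_eff * t), so a library difference delta in Sigma_eff
        # produces a flux ratio exp(delta * t) that compounds with thickness.
        delta = math.log(8.0) / 400.0   # cm^-1, implied by a factor of 8 at 400 cm
        for t in (100, 200, 300, 400):
            print(f"t = {t} cm: flux ratio ~ {math.exp(delta * t):.1f}")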

  13. Limitations of Community College Benchmarking and Benchmarks

    ERIC Educational Resources Information Center

    Bers, Trudy H.

    2006-01-01

    This chapter distinguishes between benchmarks and benchmarking, describes a number of data and cultural limitations to benchmarking projects, and suggests that external demands for accountability are the dominant reason for growing interest in benchmarking among community colleges.

  14. Benchmark Evaluation of Start-Up and Zero-Power Measurements at the High-Temperature Engineering Test Reactor

    SciTech Connect

    Bess, John D.; Fujimoto, Nozomu

    2014-10-09

    Benchmark models were developed to evaluate six cold-critical and two warm-critical, zero-power measurements of the HTTR. Additional measurements of a fully-loaded subcritical configuration, core excess reactivity, shutdown margins, six isothermal temperature coefficients, and axial reaction-rate distributions were also evaluated as acceptable benchmark experiments. Insufficient information is publicly available to develop finely-detailed models of the HTTR as much of the design information is still proprietary. However, the uncertainties in the benchmark models are judged to be of sufficient magnitude to encompass any biases and bias uncertainties incurred through the simplification process used to develop the benchmark models. Dominant uncertainties in the experimental keff for all core configurations come from uncertainties in the impurity content of the various graphite blocks that comprise the HTTR. Monte Carlo calculations of keff are between approximately 0.9% and 2.7% greater than the benchmark values. Reevaluation of the HTTR models as additional information becomes available could improve the quality of this benchmark and possibly reduce the computational biases. High-quality characterization of graphite impurities would significantly improve the quality of the HTTR benchmark assessment. Simulations of the other reactor physics measurements are in good agreement with the benchmark experiment values. The complete benchmark evaluation details are available in the 2014 edition of the International Handbook of Evaluated Reactor Physics Benchmark Experiments.
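
    For orientation, a calculated eigenvalue is usually reported against the benchmark value in pcm, in percent, and in units of the combined uncertainty. The numbers in this sketch are hypothetical, chosen only to mimic the 0.9-2.7% range quoted above:

        # Hypothetical keff comparison; values are not from the HTTR evaluation.
        k_benchmark, u_benchmark = 1.0000, 0.0060   # benchmark keff, 1-sigma unc.
        k_calc, u_calc           = 1.0150, 0.0005   # Monte Carlo keff, stat. unc.

        bias = k_calc - k_benchmark
        combined = (u_benchmark**2 + u_calc**2) ** 0.5
        print(f"bias = {bias * 1e5:.0f} pcm ({bias / k_benchmark:.2%}), "
              f"{bias / combined:.1f} combined sigma")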

  15. Benchmark Evaluation of Start-Up and Zero-Power Measurements at the High-Temperature Engineering Test Reactor

    DOE PAGES

    Bess, John D.; Fujimoto, Nozomu

    2014-10-09

    Benchmark models were developed to evaluate six cold-critical and two warm-critical, zero-power measurements of the HTTR. Additional measurements of a fully-loaded subcritical configuration, core excess reactivity, shutdown margins, six isothermal temperature coefficients, and axial reaction-rate distributions were also evaluated as acceptable benchmark experiments. Insufficient information is publicly available to develop finely-detailed models of the HTTR as much of the design information is still proprietary. However, the uncertainties in the benchmark models are judged to be of sufficient magnitude to encompass any biases and bias uncertainties incurred through the simplification process used to develop the benchmark models. Dominant uncertainties in the experimental keff for all core configurations come from uncertainties in the impurity content of the various graphite blocks that comprise the HTTR. Monte Carlo calculations of keff are between approximately 0.9% and 2.7% greater than the benchmark values. Reevaluation of the HTTR models as additional information becomes available could improve the quality of this benchmark and possibly reduce the computational biases. High-quality characterization of graphite impurities would significantly improve the quality of the HTTR benchmark assessment. Simulations of the other reactor physics measurements are in good agreement with the benchmark experiment values. The complete benchmark evaluation details are available in the 2014 edition of the International Handbook of Evaluated Reactor Physics Benchmark Experiments.

  16. High School Students' Perceptions of Evolution Instruction: Acceptance and Evolution Learning Experiences

    ERIC Educational Resources Information Center

    Donnelly, Lisa A.; Kazempour, Mahsa; Amirshokoohi, Aidin

    2009-01-01

    Evolution is an important and sometimes controversial component of high school biology. In this study, we used a mixed methods approach to explore students' evolution acceptance and views of evolution teaching and learning. Students explained their acceptance and rejection of evolution in terms of evidence and conflicts with religion and…

  17. A Quantitative Examination of User Experience as an Antecedent to Student Perception in Technology Acceptance Modeling

    ERIC Educational Resources Information Center

    Butler, Rory

    2013-01-01

    Internet-enabled mobile devices have increased the accessibility of learning content for students. Given the ubiquitous nature of mobile computing technology, a thorough understanding of the acceptance factors that impact a learner's intention to use mobile technology as an augment to their studies is warranted. Student acceptance of mobile…

  18. Effects of an Educational Experience Incorporating an Inventory of Factors Potentially Influencing Student Acceptance of Biological Evolution

    NASA Astrophysics Data System (ADS)

    Wiles, Jason R.; Alters, Brian

    2011-12-01

    This investigation provides an extensive review of scientific, religious, and otherwise non-scientific factors that may influence student acceptance of biological evolution. We also measure the extent to which students' levels of acceptance changed following an educational experience designed to address an inclusive inventory of factors identified as potentially affecting student acceptance of evolution (n = 81, pre-test/post-test; n = 37, one-year longitudinal). Acceptance of evolution was measured using the Measure of Acceptance of the Theory of Evolution (MATE) instrument among participants enrolled in a secondary-level academic programme during the summer prior to their final year of high school and as they transitioned to the post-secondary level. Student acceptance of evolution was measured to be significantly higher than initial levels both immediately following and over one year after the educational experience. Results reported herein carry implications for future quantitative and qualitative research as well as for cross-disciplinary instruction plans related to evolutionary science and non-scientific factors which may influence student understanding of evolution.

  19. Benchmark Evaluation of the NRAD Reactor LEU Core Startup Measurements

    SciTech Connect

    J. D. Bess; T. L. Maddock; M. A. Marshall

    2011-09-01

    The Neutron Radiography (NRAD) reactor is a 250-kW TRIGA (Training, Research, Isotope Production, General Atomics) conversion-type reactor at the Idaho National Laboratory; it is primarily used for neutron radiography analysis of irradiated and unirradiated fuels and materials. The NRAD reactor was converted from HEU to LEU fuel with 60 fuel elements and brought critical on March 31, 2010. This configuration of the NRAD reactor has been evaluated as an acceptable benchmark experiment and is available in the 2011 editions of the International Handbook of Evaluated Criticality Safety Benchmark Experiments (ICSBEP Handbook) and the International Handbook of Evaluated Reactor Physics Benchmark Experiments (IRPhEP Handbook). Significant effort went into precisely characterizing all aspects of the reactor core dimensions and material properties; detailed analyses of reactor parameters minimized experimental uncertainties. The largest contributors to the total benchmark uncertainty were the 234U, 236U, Er, and Hf content in the fuel; the manganese content in the stainless steel cladding; and the unknown level of water saturation in the graphite reflector blocks. A simplified benchmark model of the NRAD reactor was prepared with a keff of 1.0012 ± 0.0029 (1σ). Monte Carlo calculations with MCNP5 and KENO-VI and various neutron cross section libraries were performed and compared with the benchmark eigenvalue for the 60-fuel-element core configuration; all calculated eigenvalues are between 0.3 and 0.8% greater than the benchmark value. Benchmark evaluations of the NRAD reactor are beneficial in understanding biases and uncertainties affecting criticality safety analyses of storage, handling, or transportation applications with LEU-Er-Zr-H fuel.

  20. Benchmarking: A Method for Continuous Quality Improvement in Health

    PubMed Central

    Ettorchi-Tardy, Amina; Levif, Marie; Michel, Philippe

    2012-01-01

    Benchmarking, a management approach for implementing best practices at best cost, is a recent concept in the healthcare system. The objectives of this paper are to better understand the concept and its evolution in the healthcare sector, to propose an operational definition, and to describe some French and international experiences of benchmarking in the healthcare sector. To this end, we reviewed the literature on this approach's emergence in the industrial sector, its evolution, its fields of application and examples of how it has been used in the healthcare sector. Benchmarking is often thought to consist simply of comparing indicators and is not perceived in its entirety, that is, as a tool based on voluntary and active collaboration among several organizations to create a spirit of competition and to apply best practices. The key feature of benchmarking is its integration within a comprehensive and participatory policy of continuous quality improvement (CQI). Conditions for successful benchmarking focus essentially on careful preparation of the process, monitoring of the relevant indicators, staff involvement and inter-organizational visits. Compared to methods previously implemented in France (CQI and collaborative projects), benchmarking has specific features that set it apart as a healthcare innovation. This is especially true for healthcare or medical–social organizations, as the principle of inter-organizational visiting is not part of their culture. Thus, this approach will need to be assessed for feasibility and acceptability before it is more widely promoted. PMID:23634166

  1. Growth and Expansion of the International Criticality Safety Benchmark Evaluation Project and the Newly Organized International Reactor Physics Experiment Evaluation Project

    SciTech Connect

    J. Blair Briggs; Lori Scott; Yolanda Rugama; Enrico Satori

    2007-05-01

    Since ICNC 2003, the International Criticality Safety Benchmark Evaluation Project (ICSBEP) has continued to expand its efforts and broaden its scope. Criticality-alarm / shielding type benchmarks and fundamental physics measurements that are relevant to criticality safety applications are not only included in the scope of the project, but benchmark data are also included in the latest version of the handbook. A considerable number of improvements have been made to the searchable database, DICE, and the criticality-alarm / shielding benchmarks and fundamental physics measurements have been included in the database. There were 12 countries participating on the ICSBEP in 2003. That number has increased to 18 with recent contributions of data and/or resources from Brazil, Czech Republic, Poland, India, Canada, and China. South Africa, Germany, Argentina, and Australia have been invited to participate. Since ICNC 2003, the contents of the “International Handbook of Evaluated Criticality Safety Benchmark Experiments” have increased from 350 evaluations (28,000 pages) containing benchmark specifications for 3070 critical or subcritical configurations to 442 evaluations (over 38,000 pages) containing benchmark specifications for 3957 critical or subcritical configurations, 23 criticality-alarm-placement / shielding configurations with multiple dose points for each, and 20 configurations that have been categorized as fundamental physics measurements that are relevant to criticality safety applications in the 2006 Edition of the ICSBEP Handbook. Approximately 30 new evaluations and 250 additional configurations are expected to be added to the 2007 Edition of the Handbook. Since ICNC 2003, a reactor physics counterpart to the ICSBEP, the International Reactor Physics Experiment Evaluation Project (IRPhEP), was initiated. Beginning in 1999, the IRPhEP was conducted as a pilot activity by the Organisation for Economic Co-operation and Development (OECD) Nuclear Energy

  2. Orientation of Oblique Airborne Image Sets - Experiences from the Isprs/eurosdr Benchmark on Multi-Platform Photogrammetry

    NASA Astrophysics Data System (ADS)

    Gerke, M.; Nex, F.; Remondino, F.; Jacobsen, K.; Kremer, J.; Karel, W.; Hu, H.; Ostrowski, W.

    2016-06-01

    During the last decade the use of airborne multi-camera systems increased significantly. The development in digital camera technology allows mounting several mid- or small-format cameras efficiently onto one platform and thus enables image capture under different angles. Those oblique images turn out to be interesting for a number of applications since lateral parts of elevated objects, like buildings or trees, are visible. However, occlusion or illumination differences might challenge image processing. From an image orientation point of view, those multi-camera systems bring the advantage of a better ray intersection geometry compared to nadir-only image blocks. On the other hand, varying scale, occlusion, and atmospheric influences which are difficult to model pose problems for the image matching and bundle adjustment tasks. In order to understand current limitations of image orientation approaches and the influence of different parameters such as image overlap or GCP distribution, a commonly available dataset was released. The originally captured data comprise a state-of-the-art image block with very high overlap, but in the first stage of the so-called ISPRS/EUROSDR benchmark on multi-platform photogrammetry only a reduced set of images was released. In this paper some first results obtained with this dataset are presented. They refer to different aspects like tie point matching across the viewing directions, the influence of the oblique images on the bundle adjustment, the role of image overlap, and GCP distribution. As far as the tie point matching is concerned, we observed that matching of overlapping images pointing to the same cardinal direction, or between nadir and oblique views in general, is quite successful. Due to the quite different perspective between images of different viewing directions, the standard tie point matching, for instance based on interest points, does not work well. How to address occlusion and ambiguities due to different views onto

  3. Evaluation of the concrete shield compositions from the 2010 criticality accident alarm system benchmark experiments at the CEA Valduc SILENE facility

    SciTech Connect

    Miller, Thomas Martin; Celik, Cihangir; Dunn, Michael E; Wagner, John C; McMahan, Kimberly L; Authier, Nicolas; Jacquet, Xavier; Rousseau, Guillaume; Wolff, Herve; Savanier, Laurence; Baclet, Nathalie; Lee, Yi-kang; Trama, Jean-Christophe; Masse, Veronique; Gagnier, Emmanuel; Naury, Sylvie; Blanc-Tranchant, Patrick; Hunter, Richard; Kim, Soon; Dulik, George Michael; Reynolds, Kevin H.

    2015-01-01

    In October 2010, a series of benchmark experiments were conducted at the French Commissariat a l'Energie Atomique et aux Energies Alternatives (CEA) Valduc SILENE facility. These experiments were a joint effort between the United States Department of Energy Nuclear Criticality Safety Program and the CEA. The purpose of these experiments was to create three benchmarks for the verification and validation of radiation transport codes and evaluated nuclear data used in the analysis of criticality accident alarm systems. This series of experiments consisted of three single-pulsed experiments with the SILENE reactor. For the first experiment, the reactor was bare (unshielded), whereas in the second and third experiments, it was shielded by lead and polyethylene, respectively. The polyethylene shield of the third experiment had a cadmium liner on its internal and external surfaces, which was located vertically near the fuel region of SILENE. During each experiment, several neutron activation foils and thermoluminescent dosimeters (TLDs) were placed around the reactor. Nearly half of the foils and TLDs had additional high-density magnetite concrete, high-density barite concrete, standard concrete, and/or BoroBond shields. CEA Saclay provided all the concrete, and the US Y-12 National Security Complex provided the BoroBond. Measurement data from the experiments were published at the 2011 International Conference on Nuclear Criticality (ICNC 2011) and the 2013 Nuclear Criticality Safety Division (NCSD 2013) topical meeting. Preliminary computational results for the first experiment were presented in the ICNC 2011 paper, which showed poor agreement between the computational results and the measured values of the foils shielded by concrete. Recently the hydrogen content, boron content, and density of these concrete shields were further investigated within the constraints of the previously available data. New computational results for the first experiment are now available that

  4. Exploring Consumer Acceptance of Entomophagy: A Survey and Experiment in Australia and the Netherlands.

    PubMed

    Lensvelt, Eveline J S; Steenbekkers, L P A

    2014-01-01

    Insects are nutritious and suitable for human consumption. In this article an overview of research on consumer acceptance of entomophagy is given. This study furthermore provides insight into which factors are effective in influencing consumer acceptance of entomophagy among Dutch and Australian participants. Based on the findings of this study, information about entomophagy and providing the participants with the opportunity to try insect food both seem to be equally important when trying to positively influence their attitude toward entomophagy. The outcomes of this study show that "educating" consumers about entomophagy should be practiced in its broadest sense. PMID:25105864

  5. FLOWTRAN benchmarking with onset of flow instability data from 1988 Columbia University single-tube OFI experiment

    SciTech Connect

    Chen, K.; Paul, P.K.; Barbour, K.L.

    1990-06-01

    Benchmarking FLOWTRAN, Version 16.2, with an Onset of Significant Voiding (OSV) criterion against measured Onset of Flow Instability (OFI) data from the 1988-89 Columbia University downflow tests has shown that FLOWTRAN with OSV is a conservative OFI predictor. Calculated limiting flow rates based on the Savannah River Site (SRS) OSV criterion were always higher than the measured flow rates at OFI. This work supplements recent FLOWTRAN benchmarking against 1963 downflow tests at Columbia University and 1988 downflow tests at the Heat Transfer Laboratory. These studies provide confidence that using FLOWTRAN with an OSV based criterion for SRS reactor limits analyses will generate operating limits that are conservative with respect to OFI, the criterion selected to prevent fuel damage.

  6. Applications of Integral Benchmark Data

    SciTech Connect

    Giuseppe Palmiotti; Teruhiko Kugo; Fitz Trumble; Albert C. Kahler; Dale Lancaster

    2014-10-09

    The International Reactor Physics Experiment Evaluation Project (IRPhEP) and the International Criticality Safety Benchmark Evaluation Project (ICSBEP) provide evaluated integral benchmark data that may be used for validation of reactor physics / nuclear criticality safety analytical methods and data, nuclear data testing, advanced modeling and simulation, and safety analysis licensing activities. The handbooks produced by these programs are used in over 30 countries. Five example applications are presented in this paper: (1) Use of IRPhEP Data in Uncertainty Analyses and Cross Section Adjustment, (2) Uncertainty Evaluation Methods for Reactor Core Design at JAEA Using Reactor Physics Experimental Data, (3) Application of Benchmarking Data to a Broad Range of Criticality Safety Problems, (4) Cross Section Data Testing with ICSBEP Benchmarks, and (5) Use of the International Handbook of Evaluated Reactor Physics Benchmark Experiments to Support the Power Industry.

  7. DOES CRITICAL MASS DECREASE AS TEMPERATURE INCREASES: A REVIEW OF FIVE BENCHMARK EXPERIMENTS THAT SPAN A RANGE OF ELEVATED TEMPERATURES AND CRITICAL CONFIGURATIONS

    SciTech Connect

    Yates, K.

    2009-06-10

    Five sets of benchmark experiments are reviewed herein that cover a diverse set of fissile system configurations. The review specifically focused on the change in critical mass of these systems at elevated temperatures and the temperature reactivity coefficient (α_T) of the system. Because plutonium-based critical benchmark experiments at varying temperatures were not found at the time this review was prepared, only uranium-based systems are included, as follows: (1) HEU-SOL-THERM-010 - UO₂F₂ solutions with high ²³⁵U enrichment; (2) HEU-COMP-THERM-016 - uranium-graphite blocks with low U concentration; (3) LEU-COMP-THERM-032 - water-moderated lattices of UO₂ with stainless steel cladding and intermediate ²³⁵U enrichment; (4) IEU-COMP-THERM-002 - water-moderated lattices of annular UO₂ with/without absorbers and intermediate ²³⁵U enrichment; and (5) LEU-COMP-THERM-026 - water-moderated lattices of UO₂ at different pitches and low ²³⁵U enrichment. In three of the five benchmarks (1, 3, and 5), modeling of the critical system at room temperature is conservative compared to modeling the system at elevated temperatures, i.e., a greater fissile mass is required at elevated temperature. In one benchmark (4), there was no difference in the fissile mass between the room-temperature system and the system at the examined elevated temperature. In benchmark (2), the system clearly had a negative temperature reactivity coefficient. Some of the high-temperature benchmark experiments were treated with appropriate (and comprehensive) adjustments to the cross-section sets and thermal expansion coefficients, while other experiments were treated with partial adjustments. Regardless of the temperature treatment, modeling the systems at room temperature was found to be conservative for the examined systems, i.e., a smaller critical mass was obtained. While the five benchmarks presented herein demonstrate that, for the
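
    The temperature reactivity coefficient invoked above is conventionally a finite difference of reactivities, rho = (k - 1)/k, taken between two isothermal states. A small sketch with hypothetical keff values (not taken from these benchmarks):

        # Reactivity from keff and a finite-difference temperature coefficient.
        def rho(k):
            return (k - 1.0) / k           # reactivity in dk/k

        k_cold, T_cold = 1.0050, 293.0     # hypothetical cold state (K)
        k_warm, T_warm = 1.0020, 353.0     # hypothetical warm state (K)

        alpha_T = (rho(k_warm) - rho(k_cold)) / (T_warm - T_cold)
        # A negative alpha_T means the room-temperature model is the more
        # reactive, i.e., conservative, one.
        print(f"alpha_T = {alpha_T * 1e5:.1f} pcm/K")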

  8. Testing (Validating?) Cross Sections with ICSBEP Benchmarks

    SciTech Connect

    Kahler, Albert C. III

    2012-06-28

    We discuss how to use critical benchmarks from the International Handbook of Evaluated Criticality Safety Benchmark Experiments to determine the applicability of specific cross sections to the end-user's problem of interest. Particular attention is paid to making sure the selected suite of benchmarks includes the user's range of applicability (ROA).

  9. Phase-covariant quantum benchmarks

    SciTech Connect

    Calsamiglia, J.; Aspachs, M.; Munoz-Tapia, R.; Bagan, E.

    2009-05-15

    We give a quantum benchmark for teleportation and quantum storage experiments suited for pure and mixed test states. The benchmark is based on the average fidelity over a family of phase-covariant states and certifies that an experiment cannot be emulated by a classical setup, i.e., by a measure-and-prepare scheme. We give an analytical solution for qubits, which shows important differences with the standard state-estimation approach, and compute the value of the benchmark for coherent and squeezed states, both pure and mixed.
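
    Written out, the figure of merit is the fidelity averaged over the phase-covariant family; for pure qubit test states that family can be taken as the equatorial states (our notation, sketched from the abstract rather than copied from the paper):

        \bar{F} = \int_0^{2\pi} \frac{d\phi}{2\pi}\,
                  \langle \psi(\phi) | \rho_{\mathrm{out}}(\phi) | \psi(\phi) \rangle,
        \qquad
        |\psi(\phi)\rangle = \frac{1}{\sqrt{2}}\left( |0\rangle + e^{i\phi} |1\rangle \right)

    An experiment certifies genuinely quantum storage or teleportation when the measured average fidelity exceeds the maximum attainable by any measure-and-prepare strategy over the same family.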

  10. Comparison of the PARET/ANL and RELAP5/MOD3 codes for the analysis of IAEA benchmark transients and the SPERT experiments

    SciTech Connect

    Woodruff, W.L.; Hanan, N.A.; Smith, R.S.; Matos, J.E.

    1997-12-01

    The RELAP5/MOD3 code is a coupled kinetics-hydrodynamics code for modelling all components of pressurized water reactor systems. To our knowledge, RELAP5 has not been tested against the SPERT reactivity insertion experiments or more conventional research reactor models such as the 10-MW low-enriched uranium (LEU) benchmark reactor in the International Atomic Energy Agency (IAEA) Guidebook, where loss-of-flow (LOF) and reactivity insertion transients were computed by laboratories in four countries, including Argonne National Laboratory (ANL). The ANL computations used the PARET/ANL code, which has been used extensively for research reactor analysis and compared with the SPERT-I and SPERT-II experiments. RELAP5/MOD3 and PARET/ANL results are compared in this paper. Attempts to compare RELAP5/MOD3 with the SPERT experiments are included.

  11. Large Area Crop Inventory Experiment (LACIE). Review of LACIE methodology, a project evaluation of technical acceptability

    NASA Technical Reports Server (NTRS)

    1976-01-01

    The author has identified the following significant results. Results indicated that the LANDSAT data and the classification technology can estimate the small grains area within a sample segment accurately and reliably enough to meet the LACIE goals. Overall, the LACIE estimates in a 9 x 11 kilometer segment agree well with ground and aircraft determined area within these segments. The estimated c.v. of the random classification error was acceptably small. These analyses confirmed that bias introduced by various factors, such as LANDSAT spatial resolution, lack of spectral resolution, classifier bias, and repeatability, was not excessive in terms of the required performance criterion. Results of these tests did indicate a difficulty in differentiating wheat from other closely related small grains. However, satisfactory wheat area estimates were obtained through the reduction of the small grain area estimates in accordance with relative amounts of these crops as determined from historic data; these procedures are being further refined.

  12. The ORSphere Benchmark Evaluation and Its Potential Impact on Nuclear Criticality Safety

    SciTech Connect

    John D. Bess; Margaret A. Marshall; J. Blair Briggs

    2013-10-01

    In the early 1970s, critical experiments using an unreflected metal sphere of highly enriched uranium (HEU) were performed with the focus to provide a “very accurate description…as an ideal benchmark for calculational methods and cross-section data files.” Two near-critical configurations of the Oak Ridge Sphere (ORSphere) were evaluated as acceptable benchmark experiments for inclusion in the International Handbook of Evaluated Criticality Safety Benchmark Experiments (ICSBEP Handbook). The results from those benchmark experiments were then compared with additional unmoderated and unreflected HEU metal benchmark experiment configurations currently found in the ICSBEP Handbook. For basic geometries (spheres, cylinders, and slabs) the eigenvalues calculated using MCNP5 and ENDF/B-VII.0 were within 3σ of their respective benchmark values. There appears to be generally good agreement between calculated and benchmark values for spherical and slab geometry systems. Cylindrical geometry configurations tended to calculate low, including more complex bare HEU metal systems containing cylinders. The ORSphere experiments do not calculate within their 1σ uncertainty, and there is a possibility that the effect of the measured uncertainties for the GODIVA I benchmark may need to be reevaluated. There is significant scatter in the calculations for the highly-correlated ORCEF cylinder experiments, which are constructed from close-fitting HEU discs and annuli. Selection of a nuclear data library can have a larger impact on calculated eigenvalue results than the variation found within calculations of a given experimental series, such as the ORCEF cylinders, using a single nuclear data set.

  13. Vegetable and Fruit Acceptance during Infancy: Impact of Ontogeny, Genetics, and Early Experiences.

    PubMed

    Mennella, Julie A; Reiter, Ashley R; Daniels, Loran M

    2016-01-01

    Many of the chronic illnesses that plague modern society derive in large part from poor food choices. Thus, it is not surprising that the Dietary Guidelines for Americans, aimed at the population ≥2 y of age, recommends limiting consumption of salt, fat, and simple sugars, all of which have sensory properties that we humans find particularly palatable, and increasing the variety and contribution of fruits and vegetables in the diet, to promote health and prevent disease. Similar recommendations may soon be targeted at even younger Americans: the B-24 Project, led by the US Department of Health and Human Services and the USDA, is currently evaluating evidence to include infants and children from birth to 2 y of age in the dietary guidelines. This article reviews the underinvestigated behavioral phenomena surrounding how to introduce vegetables and fruits into infants' diets, for which there is much medical lore but, to our knowledge, little evidence-based research. Because the chemical senses are the major determinants of whether young children will accept a food (e.g., they eat only what they like), these senses take on even greater importance in understanding the bases for food choices in children. We focus on early life, in contrast with many other studies that attempt to modify food habits in older children and thus may miss sensitive periods that modulate long-term acceptance. Our review also takes into consideration ontogeny and sources of individual differences in taste perception, in particular, the role of genetic variation in bitter taste perception. PMID:26773029

  14. The Role of Age and Motivation for the Experience of Social Acceptance and Rejection

    ERIC Educational Resources Information Center

    Nikitin, Jana; Schoch, Simone; Freund, Alexandra M.

    2014-01-01

    A study with n = 55 younger (18-33 years, M = 23.67) and n = 58 older (61-85 years, M = 71.44) adults investigated age-related differences in social approach and avoidance motivation and their consequences for the experience of social interactions. Results confirmed the hypothesis that a predominant habitual approach motivation in younger adults…

  15. Correlation of nuclear criticality safety computer codes with plutonium benchmark experiments and derivation of subcritical limits. [MGBS, TGAN, KEFF, HRXN, GLASS, ANISN, SPBL, and KENO]

    SciTech Connect

    Clark, H.K.

    1981-10-01

    A compilation of benchmark critical experiments was made for essentially one-dimensional systems containing plutonium. The systems consist of spheres, series of experiments with cylinders and cuboids that permit extrapolation to infinite cylinders and slabs, and large cylinders for which separability of the neutron flux into a product of spatial components is a good approximation. Data from the experiments were placed in a form readily usable as computer code input. Aqueous solutions of Pu(NO₃)₄ are treated as solutions of PuO₂ in nitric acid. The apparent molal volume of PuO₂ as a function of plutonium concentration was derived from analyses of solution density data and was incorporated in the Savannah River Laboratory computer codes along with density tables for nitric acid. The biases of three methods of calculation were established by correlation with the benchmark experiments. The oldest method involves two-group diffusion theory and has been used extensively at the Savannah River Laboratory. The other two involve S_n transport theory with, in one method, Hansen-Roach cross sections and, in the other, cross sections derived from ENDF/B-IV. Subcritical limits were calculated by all three methods. Significant differences were found among the results and between the results and limits currently in the American National Standard for Nuclear Criticality Safety in Operations with Fissionable Materials Outside Reactors (ANSI N16.1), which were calculated by yet another method, despite the normalization of all four methods to the same experimental data. The differences were studied, and a set of subcritical limits was proposed to supplement and in some cases to replace those in the ANSI Standard, which is currently being reviewed.
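
    The apparent-molal-volume treatment has a compact form: one liter of solution at concentration c (mol/L) assigns c * phi_V milliliters to the solute, and the density follows from a mass balance. A sketch with placeholder values (not the evaluated Savannah River data):

        # Solution density from an apparent molal volume phi_V (mL/mol):
        # rho = rho_acid + c * (M - rho_acid * phi_V) / 1000, with c in mol/L.
        # All numeric values below are placeholders for illustration.
        M_PuO2   = 271.0    # g/mol, approximate molar mass of PuO2 (239Pu basis)
        phi_V    = 35.0     # mL/mol, hypothetical apparent molal volume
        rho_acid = 1.10     # g/mL, hypothetical nitric acid density

        def solution_density(c):
            return rho_acid + c * (M_PuO2 - rho_acid * phi_V) / 1000.0

        print(f"rho = {solution_density(0.5):.4f} g/mL at 0.5 mol/L")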

  16. An additional condition for Bell experiments for accepting local realistic theories

    NASA Astrophysics Data System (ADS)

    Nagata, Koji; Nakamura, Tadao

    2013-12-01

    We assume that one source of two uncorrelated spin-carrying particles emits them in a state which can be described as a spin-1/2 bipartite pure uncorrelated state. We consider a Bell-Clauser-Horne-Shimony-Holt (Bell-CHSH) experiment with two orthogonal settings. We propose an additional condition for the state to be reproducible by the property of local realistic theories. We use the proposed measurement theory in order to construct the additional condition (Nagata and Nakamura in Int J Theor Phys 49:162, 2010). The condition is that the local measurement outcome is . Otherwise, such an experiment does not allow for the existence of local realistic theories even in the situation that all Bell-CHSH inequalities hold. We also derive a new set of Bell inequalities when the local measurement outcome is.

  17. Proton Form Factor Puzzle and the CEBAF Large Acceptance Spectrometer (CLAS) two-photon exchange experiment

    NASA Astrophysics Data System (ADS)

    Rimal, Dipak

    The electromagnetic form factors are the most fundamental observables that encode information about the internal structure of the nucleon. The electric (G_E) and magnetic (G_M) form factors contain information about the spatial distribution of the charge and magnetization inside the nucleon. A significant discrepancy exists between the Rosenbluth and the polarization transfer measurements of the electromagnetic form factors of the proton. One possible explanation for the discrepancy is the contribution of two-photon exchange (TPE) effects. Theoretical calculations estimating the magnitude of the TPE effect are highly model dependent, and limited experimental evidence for such effects exists. Experimentally, the TPE effect can be measured by comparing the positron-proton elastic scattering cross section to that of the electron-proton [R = σ(e⁺p)/σ(e⁻p)]. The ratio R was measured over a wide range of kinematics, utilizing a 5.6 GeV primary electron beam produced by the Continuous Electron Beam Accelerator Facility (CEBAF) at Jefferson Lab. This dissertation explored the dependence of R on kinematic variables such as the squared four-momentum transfer (Q²) and the virtual photon polarization parameter (ε). A mixed electron-positron beam was produced from the primary electron beam in experimental Hall B. The mixed beam was scattered from a liquid hydrogen (LH2) target. Both the scattered lepton and the recoil proton were detected by the CEBAF Large Acceptance Spectrometer (CLAS). The elastic events were then identified by using elastic scattering kinematics. This work extracted the Q² dependence of R at high ε (ε > 0.8) and the ε dependence of R at Q² ≈ 0.85 GeV². In these kinematics, our data confirm the validity of the hadronic calculations of the TPE effect by Blunden, Melnitchouk, and Tjon. This hadronic TPE effect, with additional corrections contributed by higher excitations of the intermediate state nucleon, largely

  18. The effects of video compression on acceptability of images for monitoring life sciences' experiments

    NASA Technical Reports Server (NTRS)

    Haines, Richard F.; Chuang, Sherry L.

    1993-01-01

    Current plans indicate that there will be a large number of life science experiments carried out during the thirty-year mission of the Biological Flight Research Laboratory (BFRL) on board Space Station Freedom (SSF). Non-human life science experiments will be performed in the BFRL. Two distinct types of activities have already been identified for this facility: (1) collect, store, distribute, analyze and manage engineering and science data from the Habitats, Glovebox and Centrifuge, and (2) perform a broad range of remote science activities in the Glovebox and Habitat chambers in conjunction with the remotely located principal investigator (PI). These activities require extensive video coverage, viewing and/or recording, and distribution to video displays on board SSF and to the ground. This paper concentrates mainly on the second type of activity. Each of the two BFRL habitat racks is designed to be configurable for either six rodent habitats per rack, four plant habitats per rack, or a combination of the above. Two video cameras will be installed in each habitat with a spare attachment for a third camera when needed. Therefore, a video system that can accommodate up to 12-18 camera inputs per habitat rack must be considered.

  19. Benchmarking in Student Affairs.

    ERIC Educational Resources Information Center

    Mosier, Robert E.; Schwarzmueller, Gary J.

    2002-01-01

    Discusses the use of benchmarking in student affairs, focusing on issues related to student housing. Provides examples of how benchmarking has influenced administrative practice at many institutions. (EV)

  20. Laser-plasma interaction in ignition relevant plasmas: benchmarking our 3D modelling capabilities versus recent experiments

    SciTech Connect

    Divol, L; Froula, D H; Meezan, N; Berger, R; London, R A; Michel, P; Glenzer, S H

    2007-09-27

    We have developed a new target platform to study laser-plasma interaction under ignition-relevant conditions at the Omega laser facility (LLE/Rochester) [1]. By shooting an interaction beam along the axis of a gas-filled hohlraum heated by up to 17 kJ of heater beam energy, we were able to create a millimeter-scale underdense uniform plasma at electron temperatures above 3 keV. Extensive Thomson scattering measurements allowed us to benchmark our hydrodynamic simulations performed with HYDRA [1]. As a result of this effort, we can use these simulations with much confidence as input parameters for our LPI simulation code pF3d [2]. In this paper, we show that by using accurate hydrodynamic profiles and full three-dimensional simulations, including a realistic modeling of the laser intensity pattern generated by various smoothing options, fluid LPI theory reproduces the SBS thresholds and absolute reflectivity values and the absence of measurable SRS. This good agreement was made possible by the recent increase in computing power routinely available for such simulations.

  1. In situ and real time characterization of interface microstructure in 3D alloy solidification: benchmark microgravity experiments in the DECLIC-Directional Solidification Insert on ISS

    NASA Astrophysics Data System (ADS)

    Ramirez, A.; Chen, L.; Bergeon, N.; Billia, B.; Gu, Jiho; Trivedi, R.

    2012-01-01

    Dynamic microstructure formation and selection during solidification processing, which have a major influence on the properties of the elaborated materials, occur during the growth process. In situ observation of the solid-liquid interface morphology evolution is thus necessary. On earth, convection effects dominate in bulk samples and may strongly interact with microstructure dynamics and alter pattern characterization. Series of solidification experiments with 3D cylindrical sample geometry were conducted in succinonitrile (SCN)-0.24 wt% camphor (a transparent model system) in a microgravity environment in the Directional Solidification Insert of the DECLIC facility of CNES (the French space agency) on the International Space Station (ISS). Microgravity enabled homogeneous values of the control parameters over the whole interface, yielding homogeneous patterns suitable for obtaining quantitative benchmark data. First analyses of the characteristics of the pattern (spacing, order, etc.) and of its dynamics in microgravity will be presented.

  2. Impact of Dialectical Behavior Therapy versus Community Treatment by Experts on Emotional Experience, Expression, and Acceptance in Borderline Personality Disorder

    PubMed Central

    Neacsiu, Andrada D.; Lungu, Anita; Harned, Melanie S.; Rizvi, Shireen L.; Linehan, Marsha M.

    2014-01-01

    Evidence suggests that heightened negative affectivity is a prominent feature of Borderline Personality Disorder (BPD) that often leads to maladaptive behaviors. Nevertheless, there is little research examining treatment effects on the experience and expression of specific negative emotions. Dialectical Behavior Therapy (DBT) is an effective treatment for BPD, hypothesized to reduce negative affectivity (Linehan, 1993a). The present study analyzes secondary data from a randomized controlled trial with the aim to assess the unique effectiveness of DBT when compared to Community Treatment by Experts (CTBE) in changing the experience, expression, and acceptance of negative emotions. Suicidal and/or self-injuring women with BPD (n = 101) were randomly assigned to DBT or CTBE for one year of treatment and one year of follow-up. Several indices of emotional experience and expression were assessed. Results indicate that DBT decreased experiential avoidance and expressed anger significantly more than CTBE. No differences between DBT and CTBE were found in improving guilt, shame, anxiety, or anger suppression, trait, and control. These results suggest that DBT has unique effects on improving the expression of anger and experiential avoidance, whereas changes in the experience of specific negative emotions may be accounted for by general factors associated with expert therapy. Implications of the findings are discussed. PMID:24418652

  3. Benchmarks for GADRAS performance validation.

    SciTech Connect

    Mattingly, John K.; Mitchell, Dean James; Rhykerd, Charles L., Jr.

    2009-09-01

    The performance of the Gamma Detector Response and Analysis Software (GADRAS) was validated by comparing GADRAS model results to experimental measurements for a series of benchmark sources. Sources for the benchmark include a plutonium metal sphere, bare and shielded in polyethylene, plutonium oxide in cans, a highly enriched uranium sphere, bare and shielded in polyethylene, a depleted uranium shell and spheres, and a natural uranium sphere. The benchmark experimental data were previously acquired and consist of careful collection of background and calibration source spectra along with the source spectra. The calibration data were fit with GADRAS to determine response functions for the detector in each experiment. A one-dimensional model (pie chart) was constructed for each source based on the dimensions of the benchmark source. The GADRAS code made a forward calculation from each model to predict the radiation spectrum for the detector used in the benchmark experiment. The comparisons between the GADRAS calculation and the experimental measurements are excellent, validating that GADRAS can correctly predict the radiation spectra for these well-defined benchmark sources.

  4. Benchmarking and Performance Measurement.

    ERIC Educational Resources Information Center

    Town, J. Stephen

    This paper defines benchmarking and its relationship to quality management, describes a project which applied the technique in a library context, and explores the relationship between performance measurement and benchmarking. Numerous benchmarking methods contain similar elements: deciding what to benchmark; identifying partners; gathering…

  5. Use of Sensitivity and Uncertainty Analysis in the Design of Reactor Physics and Criticality Benchmark Experiments for Advanced Nuclear Fuel

    SciTech Connect

    Rearden, B.T.; Anderson, W.J.; Harms, G.A.

    2005-08-15

    Framatome ANP, Sandia National Laboratories (SNL), Oak Ridge National Laboratory (ORNL), and the University of Florida are cooperating on the U.S. Department of Energy Nuclear Energy Research Initiative (NERI) project 2001-0124 to design, assemble, execute, analyze, and document a series of critical experiments to validate reactor physics and criticality safety codes for the analysis of commercial power reactor fuels consisting of UO₂ with ²³⁵U enrichments ≥5 wt%. The experiments will be conducted at the SNL Pulsed Reactor Facility. Framatome ANP and SNL produced two series of conceptual experiment designs based on typical parameters, such as fuel-to-moderator ratios, that meet the programmatic requirements of this project within the given restraints on available materials and facilities. ORNL used the Tools for Sensitivity and Uncertainty Analysis Methodology Implementation (TSUNAMI) to assess, from a detailed physics-based perspective, the similarity of the experiment designs to the commercial systems they are intended to validate. Based on the results of the TSUNAMI analysis, one series of experiments was found to be preferable to the other and will provide significant new data for the validation of reactor physics and criticality safety codes.

  6. SAS Code for Calculating Intraclass Correlation Coefficients and Effect Size Benchmarks for Site-Randomized Education Experiments

    ERIC Educational Resources Information Center

    Brandon, Paul R.; Harrison, George M.; Lawton, Brian E.

    2013-01-01

    When evaluators plan site-randomized experiments, they must conduct the appropriate statistical power analyses. These analyses are most likely to be valid when they are based on data from the jurisdictions in which the studies are to be conducted. In this method note, we provide software code, in the form of a SAS macro, for producing statistical…
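
    Although the method note ships its code as a SAS macro, the underlying one-way ANOVA estimator of the intraclass correlation is language independent. A Python sketch for a balanced design (the data are invented):

        import numpy as np

        def icc_oneway(groups):
            # ICC(1) for a balanced design: (MSB - MSW) / (MSB + (n - 1) * MSW),
            # where n is the group size and MSB/MSW are the between-group and
            # within-group mean squares.
            groups = [np.asarray(g, dtype=float) for g in groups]
            k, n = len(groups), len(groups[0])
            grand = np.mean(np.concatenate(groups))
            msb = n * sum((g.mean() - grand) ** 2 for g in groups) / (k - 1)
            msw = sum(((g - g.mean()) ** 2).sum() for g in groups) / (k * (n - 1))
            return (msb - msw) / (msb + (n - 1) * msw)

        # Three hypothetical sites, five student scores each.
        sites = [[72, 75, 71, 70, 74], [80, 82, 79, 81, 83], [65, 66, 64, 67, 63]]
        print(f"ICC(1) = {icc_oneway(sites):.3f}")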

  7. Benchmark experiment for the cross section of the 100Mo(p,2n)99mTc and 100Mo(p,pn)99Mo reactions

    NASA Astrophysics Data System (ADS)

    Takács, S.; Ditrói, F.; Aikawa, M.; Haba, H.; Otuka, N.

    2016-05-01

    As the nuclear medicine community has shown an increasing interest in accelerator-produced 99mTc, the possible alternative direct production routes for producing 99mTc have been investigated intensively. One of these accelerator production routes is based on the 100Mo(p,2n)99mTc reaction. The cross section of this nuclear reaction was studied by several laboratories earlier, but the available datasets are not in good agreement. For large-scale accelerator production of 99mTc based on the 100Mo(p,2n)99mTc reaction, a well-defined excitation function is required to optimise the production process effectively. One of our recent publications pointed out that most of the available experimental excitation functions for the 100Mo(p,2n)99mTc reaction have the same general shape while their amplitudes are different. To confirm the proper amplitude of the excitation function, results of three independent experiments were presented (Takács et al., 2015). In this work we present the results of a thick-target count rate measurement of the Eγ = 140.5 keV gamma line from molybdenum irradiated by an Ep = 17.9 MeV proton beam, as an integral benchmark experiment, to verify the cross section data reported for the 100Mo(p,2n)99mTc and 100Mo(p,pn)99Mo reactions in Takács et al. (2015).
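
    A thick-target count rate serves as an integral benchmark because the measured yield folds the whole excitation function with the stopping power along the slowing-down path, Y = (N_A * f / M) * integral of sigma(E)/S(E) dE from threshold to the beam energy. A numerical sketch with toy sigma(E) and S(E) grids (placeholders, not evaluated 100Mo data):

        import numpy as np

        NA, M, f = 6.022e23, 99.9, 1.0    # Avogadro; g/mol for 100Mo; enrichment
        E = np.linspace(8.0, 17.9, 200)   # MeV, assumed threshold to beam energy
        sigma = 300e-27 * np.exp(-0.5 * ((E - 14.0) / 3.0) ** 2)  # cm^2, toy peak
        S = 30.0 - 0.8 * (E - 8.0)        # MeV cm^2/g, toy mass stopping power

        g = sigma / S
        integral = np.sum(0.5 * (g[1:] + g[:-1]) * np.diff(E))    # trapezoid rule
        print(f"Y ~ {NA * f / M * integral:.2e} atoms per incident proton")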

  8. Nuclear data verification based on Monte Carlo simulations of the LLNL pulsed-sphere benchmark experiments (1979 & 1986) using the Mercury code

    SciTech Connect

    Descalle, M; Pruet, J

    2008-06-09

    Livermore's nuclear data group developed a new verification and validation test suite to ensure the quality of data used in application codes. It is based on models of LLNL's pulsed-sphere fusion shielding benchmark experiments. Simulations were done with Mercury, a 3D particle transport Monte Carlo code using continuous-energy cross-section libraries. Results were compared to measurements of neutron leakage spectra generated by 14 MeV neutrons in 17 target assemblies (a blank target assembly and H{sub 2}O, Teflon, C, N{sub 2}, Al, Si, Ti, Fe, Cu, Ta, W, Au, Pb, {sup 232}Th, {sup 235}U, {sup 238}U, and {sup 239}Pu). We also tested the fidelity of simulations for photon production associated with neutron interactions in the different materials. Gamma-ray leakage energy per neutron was obtained from a simple 1D spherical geometry assembly and compared to three codes (TART, COG, MCNP5) and several versions of the Evaluated Nuclear Data File (ENDF) and Evaluated Nuclear Data Library (ENDL) cross-section libraries. These tests uncovered a number of errors in photon production cross sections and were instrumental in the V&V of different cross-section libraries. Development of the pulsed-sphere tests also uncovered the need for new Mercury capabilities. To enable simulations of neutron time-of-flight experiments, the nuclear data group implemented an improved treatment of biased angular scattering in MCAPM.

  9. The Acceptability of Acupuncture for Low Back Pain: A Qualitative Study of Patient’s Experiences Nested within a Randomised Controlled Trial

    PubMed Central

    Hopton, Ann; Thomas, Kate; MacPherson, Hugh

    2013-01-01

    Introduction The National Institute for Health and Clinical Excellence guidelines recommend acupuncture as a clinically effective treatment for chronic back pain. However, there is insufficient knowledge of what factors contribute to patients’ positive and negative experiences of acupuncture, and how those factors interact in terms of the acceptability of treatment. This study used patient interviews following acupuncture treatment for back pain to identify, understand, and describe the elements that contribute to or detract from the acceptability of treatment. Methods The study used semi-structured interviews. Twelve patients were interviewed using an interview schedule as a sub-study nested within a randomised controlled trial of acupuncture for chronic back pain. The interviews were analysed using thematic analysis. Results and Discussion Three over-arching themes emerged from the analysis. The first, entitled facilitators of acceptability, contained five subthemes: experience of pain relief, improvements in physical activity, relaxation, psychological benefit, and reduced reliance on medication. The second over-arching theme identified barriers to acceptability, which included needle-related discomfort and temporary worsening of symptoms, pressure to continue treatment, and financial cost. The third over-arching theme comprised mediators of acceptability, which included pre-treatment mediators such as expectation and previous experience, and treatment-related mediators of time, therapeutic alliance, lifestyle advice, and the patient’s active involvement in recovery. These themes inform our understanding of the acceptability of acupuncture to patients with low back pain. Conclusion The acceptability of acupuncture treatment for low back pain is complex and multifaceted. The therapeutic relationship between the practitioner and patient emerged as a strong driver for acceptability, and as a useful vehicle to develop the patients’ self-efficacy in pain management in the…

  10. A Uranium Bioremediation Reactive Transport Benchmark

    SciTech Connect

    Yabusaki, Steven B.; Sengor, Sevinc; Fang, Yilin

    2015-06-01

    A reactive transport benchmark problem set has been developed based on in situ uranium bio-immobilization experiments that have been performed at a former uranium mill tailings site in Rifle, Colorado, USA. Acetate-amended groundwater stimulates indigenous microorganisms to catalyze the reduction of U(VI) to a sparingly soluble U(IV) mineral. The interplay between the flow, the acetate loading periods and rates, and the microbially mediated and geochemical reactions leads to dynamic behavior in metal- and sulfate-reducing bacteria, pH, alkalinity, and reactive mineral surfaces. The benchmark is based on an 8.5 m long one-dimensional model domain with constant saturated flow and uniform porosity. The 159-day simulation introduces acetate and bromide through the upgradient boundary in 14-day and 85-day pulses separated by a 10-day interruption. Acetate loading is tripled during the second pulse, which is followed by a 50-day recovery period. Terminal electron accepting processes for goethite, phyllosilicate Fe(III), U(VI), and sulfate are modeled using Monod-type rate laws. The major ion geochemistry modeled includes mineral reactions, as well as aqueous and surface complexation reactions for UO2++, Fe++, and H+. In addition to the dynamics imparted by the transport of the acetate pulses, U(VI) behavior involves the interplay between bioreduction, which is dependent on acetate availability, and speciation-controlled surface complexation, which is dependent on pH, alkalinity, and available surface complexation sites. The general difficulty of this benchmark lies in the large number of reactions (74), multiple rate law formulations, a multisite uranium surface complexation model, and the strong interdependency and sensitivity of the reaction processes. Results are presented for three simulators: HYDROGEOCHEM, PHT3D, and PHREEQC.
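
    The Monod-type rate laws mentioned above make each reduction reaction jointly limited by the electron donor (acetate) and the corresponding terminal electron acceptor. A minimal sketch of such a dual-Monod rate term (parameter names and example values are hypothetical, not taken from the benchmark specification):

        def dual_monod_rate(biomass, donor, acceptor, mu_max, k_donor, k_acceptor):
            # Rate of a microbially mediated redox reaction, hyperbolically
            # limited by the electron donor (acetate) and by the terminal
            # electron acceptor (e.g., Fe(III), U(VI), or sulfate).
            return (mu_max * biomass
                    * donor / (k_donor + donor)
                    * acceptor / (k_acceptor + acceptor))

        # Example: the rate collapses as acetate is depleted between pulses.
        r = dual_monod_rate(biomass=1e-4, donor=5e-3, acceptor=1e-2,
                            mu_max=1e-5, k_donor=1e-4, k_acceptor=1e-3)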

  11. The NAS parallel benchmarks

    NASA Technical Reports Server (NTRS)

    Bailey, David (Editor); Barton, John (Editor); Lasinski, Thomas (Editor); Simon, Horst (Editor)

    1993-01-01

    A new set of benchmarks was developed for the performance evaluation of highly parallel supercomputers. These benchmarks consist of a set of kernels, the 'Parallel Kernels,' and a simulated application benchmark. Together they mimic the computation and data movement characteristics of large scale computational fluid dynamics (CFD) applications. The principal distinguishing feature of these benchmarks is their 'pencil and paper' specification - all details of these benchmarks are specified only algorithmically. In this way many of the difficulties associated with conventional benchmarking approaches on highly parallel systems are avoided.

  12. A Blind Test Experiment in Volcano Geodesy: a Benchmark for Inverse Methods of Ground Deformation and Gravity Data

    NASA Astrophysics Data System (ADS)

    D'Auria, Luca; Fernandez, Jose; Puglisi, Giuseppe; Rivalta, Eleonora; Camacho, Antonio; Nikkhoo, Mehdi; Walter, Thomas

    2016-04-01

    The inversion of ground deformation and gravity data is affected by an intrinsic ambiguity because of the mathematical formulation of the inverse problem. Current methods for the inversion of geodetic data rely on both parametric (i.e., assuming a source geometry) and non-parametric approaches. The former are able to catch the fundamental features of the ground deformation source but, if the assumptions are wrong or oversimplified, they could provide misleading results. On the other hand, the latter class of methods, even if not relying on stringent assumptions, could suffer from artifacts, especially when dealing with poor datasets. In the framework of the EC-FP7 MED-SUV project we aim at comparing different inverse approaches to verify how they cope with the basic goals of Volcano Geodesy: determining the source depth, the source shape (size and geometry), the nature of the source (magmatic/hydrothermal), and hinting at the complexity of the source. Other aspects that are important in volcano monitoring are: volume/mass transfer toward shallow depths, propagation of dikes/sills, and forecasting the opening of eruptive vents. On the basis of similar experiments already done in the fields of seismic tomography and geophysical imaging, we have devised a blind test experiment. Our group was divided into one model design team and several inversion teams. The model design team devised two physical models representing volcanic events at two distinct volcanoes (one stratovolcano and one caldera). They provided the inversion teams with: the topographic reliefs, the calculated deformation field (on a set of simulated GPS stations and as InSAR interferograms), and the gravity change (on a set of simulated campaign stations). The nature of the volcanic events remained unknown to the inversion teams until after the submission of the inversion results. Here we present the preliminary results of this comparison in order to determine which features of the ground deformation and gravity source…

  13. A Blind Test Experiment in Volcano Geodesy: a Benchmark for Inverse Methods of Ground Deformation and Gravity Data

    NASA Astrophysics Data System (ADS)

    D'Auria, L.; Fernandez, J.; Puglisi, G.; Rivalta, E.; Camacho, A. G.; Nikkhoo, M.; Walter, T. R.

    2015-12-01

    The inversion of ground deformation and gravity data is affected by an intrinsic ambiguity because of the mathematical formulation of the inverse problem. Current methods for the inversion of geodetic data rely on both parametric (i.e., assuming a source geometry) and non-parametric approaches. The former are able to catch the fundamental features of the ground deformation source but, if the assumptions are wrong or oversimplified, they could provide misleading results. On the other hand, the latter class of methods, even if not relying on stringent assumptions, could suffer from artifacts, especially when dealing with poor datasets. In the framework of the EC-FP7 MED-SUV project we aim at comparing different inverse approaches to verify how they cope with the basic goals of Volcano Geodesy: determining the source depth, the source shape (size and geometry), the nature of the source (magmatic/hydrothermal), and hinting at the complexity of the source. Other aspects that are important in volcano monitoring are: volume/mass transfer toward shallow depths, propagation of dikes/sills, and forecasting the opening of eruptive vents. On the basis of similar experiments already done in the fields of seismic tomography and geophysical imaging, we have devised a blind test experiment. Our group was divided into one model design team and several inversion teams. The model design team devised two physical models representing volcanic events at two distinct volcanoes (one stratovolcano and one caldera). They provided the inversion teams with: the topographic reliefs, the calculated deformation field (on a set of simulated GPS stations and as InSAR interferograms), and the gravity change (on a set of simulated campaign stations). The nature of the volcanic events remained unknown to the inversion teams until after the submission of the inversion results. Here we present the preliminary results of this comparison in order to determine which features of the ground deformation and gravity source…

  14. NASA Software Engineering Benchmarking Study

    NASA Technical Reports Server (NTRS)

    Rarick, Heather L.; Godfrey, Sara H.; Kelly, John C.; Crumbley, Robert T.; Wifl, Joel M.

    2013-01-01

    …5. Consolidate, collect, and, if needed, develop common processes, principles, and other assets across the Agency in order to provide more consistency in software development and acquisition practices and to reduce the overall cost of maintaining or increasing current NASA CMMI maturity levels. 6. Provide additional support for small projects that includes: (a) guidance for appropriate tailoring of requirements for small projects, (b) availability of suitable tools, including support tool set-up and training, and (c) training for small project personnel, assurance personnel, and technical authorities on the acceptable options for tailoring requirements and performing assurance on small projects. 7. Develop software training classes for the more experienced software engineers using on-line training, videos, or small separate modules of training that can be accommodated as needed throughout a project. 8. Create guidelines to structure non-classroom training opportunities such as mentoring, peer reviews, lessons learned sessions, and on-the-job training. 9. Develop a set of predictive software defect data and a process for assessing software testing metric data against it. 10. Assess Agency-wide licenses for commonly used software tools. 11. Fill the knowledge gap in common software engineering practices for new hires and co-ops. 12. Work through the Science, Technology, Engineering and Mathematics (STEM) program with universities in strengthening education in the use of common software engineering practices and standards. 13. Follow up this benchmark study with a deeper look into what both internal and external organizations perceive as the scope of software assurance, the value they expect to obtain from it, and the shortcomings they experience in the current practice. 14. Continue interactions with the external software engineering environment through collaborations, knowledge sharing, and benchmarking.

  15. Benchmark Data Set for Wheat Growth Models: Field Experiments and AgMIP Multi-Model Simulations.

    NASA Technical Reports Server (NTRS)

    Asseng, S.; Ewert, F.; Martre, P.; Rosenzweig, C.; Jones, J. W.; Hatfield, J. L.; Ruane, A. C.; Boote, K. J.; Thorburn, P.J.; Rotter, R. P.

    2015-01-01

    The data set includes a current representative management treatment from detailed, quality-tested sentinel field experiments with wheat from four contrasting environments: Australia, The Netherlands, India, and Argentina. Measurements include local daily climate data (solar radiation, maximum and minimum temperature, precipitation, surface wind, dew point temperature, relative humidity, and vapor pressure), soil characteristics, frequent growth measurements, nitrogen in crop and soil, crop and soil water, and yield components. Simulations include results from 27 wheat models and a sensitivity analysis with 26 models and 30 years (1981-2010) for each location, for elevated atmospheric CO2 and temperature changes, a heat stress sensitivity analysis at anthesis, and a sensitivity analysis with soil and crop management variations and a Global Climate Model end-century scenario.

  16. Methodology of full-core Monte Carlo calculations with leakage parameter evaluations for benchmark critical experiment analysis

    NASA Astrophysics Data System (ADS)

    Sboev, A. G.; Ilyashenko, A. S.; Vetrova, O. A.

    1997-02-01

    The method of buckling evaluation implemented in the Monte Carlo code MCS is described. This method was applied to a calculational analysis of the well-known light-water experiments TRX-1 and TRX-2. The analysis shows that there is no agreement among Monte Carlo calculations obtained in different ways: the MCS calculations with the given experimental bucklings; the MCS calculations with bucklings evaluated on the basis of full-core MCS direct simulations; the full-core MCNP and MCS direct simulations; and the MCNP and MCS calculations in which the results of cell calculations are corrected by coefficients taking into account the leakage from the core. The buckling values evaluated by full-core MCS calculations also differed from the experimental ones, especially in the case of TRX-1, where the difference corresponds to a 0.5 percent increase in the keff value.
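
    For readers unfamiliar with the role of buckling here: in a leakage-corrected lattice calculation, the buckling B^2 folds core leakage into an infinite-lattice result through the textbook diffusion-theory relation (a standard sketch, not necessarily the exact correction implemented in MCS)

        k_{\mathrm{eff}} = \frac{k_\infty}{1 + M^2 B^2}

    where k_∞ is the infinite-multiplication factor of the cell and M^2 the migration area. An error in the evaluated buckling therefore propagates directly into keff, consistent with the 0.5 percent effect reported for TRX-1.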

  17. Criticality Benchmark Analysis of Water-Reflected Uranium Oxyfluoride Slabs

    SciTech Connect

    Margaret A. Marshall; John D. Bess

    2009-11-01

    A series of twelve experiments was conducted in the mid-1950s at the Oak Ridge National Laboratory Critical Experiments Facility to determine the critical conditions of a semi-infinite water-reflected slab of aqueous uranium oxyfluoride (UO2F2). A different slab thickness was used for each experiment. Results from the twelve experiments recorded in the laboratory notebook were published in Reference 1. Seven of the twelve experiments were determined to be acceptable benchmark experiments for inclusion in the International Handbook of Evaluated Criticality Safety Benchmark Experiments. This evaluation will be available not only to handbook users for the validation of computer codes and integral cross-section data, but also for the reevaluation of experimental data used in the ANSI/ANS-8.1 standard. This evaluation is important as part of the technical basis of the subcritical slab limits in ANSI/ANS-8.1. The original publication of the experimental results was used for the determination of bias and bias uncertainties for subcritical slab limits, as documented in Hugh Clark's paper 'Subcritical Limits for Uranium-235 Systems'.

  18. Benchmarking for Higher Education.

    ERIC Educational Resources Information Center

    Jackson, Norman, Ed.; Lund, Helen, Ed.

    The chapters in this collection explore the concept of benchmarking as it is being used and developed in higher education (HE). Case studies and reviews show how universities in the United Kingdom are using benchmarking to aid in self-regulation and self-improvement. The chapters are: (1) "Introduction to Benchmarking" (Norman Jackson and Helen…

  19. Acceptability of Interventions Delivered Online and Through Mobile Phones for People Who Experience Severe Mental Health Problems: A Systematic Review

    PubMed Central

    Lobban, Fiona; Emsley, Richard; Bucci, Sandra

    2016-01-01

    Background Psychological interventions are recommended for people with severe mental health problems (SMI). However, barriers exist in the provision of these services and access is limited. Therefore, researchers are beginning to develop and deliver interventions online and via mobile phones. Previous research has indicated that interventions delivered in this format are acceptable for people with SMI. However, a comprehensive systematic review is needed to investigate the acceptability of online and mobile phone-delivered interventions for SMI in depth. Objective This systematic review aimed to 1) identify the hypothetical acceptability (acceptability prior to or without the delivery of an intervention) and actual acceptability (acceptability where an intervention was delivered) of online and mobile phone-delivered interventions for SMI, 2) investigate the impact of factors such as demographic and clinical characteristics on acceptability, and 3) identify common participant views in qualitative studies that pinpoint factors influencing acceptability. Methods We conducted a systematic search of the databases PubMed, Embase, PsycINFO, CINAHL, and Web of Science in April 2015, which yielded a total of 8017 search results, with 49 studies meeting the full inclusion criteria. Studies were included if they measured acceptability through participant views, module completion rates, or intervention use. Studies delivering interventions were included if the delivery method was online or via mobile phones. Results The hypothetical acceptability of online and mobile phone-delivered interventions for SMI was relatively low, while actual acceptability tended to be high. Hypothetical acceptability was higher for interventions delivered via text messages than by emails. The majority of studies that assessed the impact of demographic characteristics on acceptability reported no significant relationships between the two. Additionally, actual acceptability was higher when…

  20. ATLAS ACCEPTANCE TEST

    SciTech Connect

    J.C. COCHRANE; J.V. PARKER; ET AL

    2001-06-01

    The acceptance test program for Atlas, a 23 MJ pulsed power facility for use in the Los Alamos High Energy Density Hydrodynamics program, has been completed. Completion of this program officially releases Atlas from the construction phase and readies it for experiments. Details of the acceptance test program results and of machine capabilities for experiments will be presented.

  1. Nomenclatural Benchmarking: The roles of digital typification and telemicroscopy

    Technology Transfer Automated Retrieval System (TEKTRAN)

    The process of nomenclatural benchmarking is the examination of type specimens of all available names to ascertain which currently accepted species the specimen bearing the name falls within. We propose a strategy for addressing four challenges for nomenclatural benchmarking. First, there is the mat...

  2. Enhancing user acceptance of mandated mobile health information systems: the ePOC (electronic point-of-care project) experience.

    PubMed

    Burgess, Lois; Sargent, Jason

    2007-01-01

    From a clinical perspective, the use of mobile technologies, such as Personal Digital Assistants (PDAs), within hospital environments is not new. A paradigm shift, however, is underway towards the acceptance and utility of these systems within mobile-based healthcare environments. Introducing new technologies and associated work practices has intrinsic risks which must be addressed. This paper contends that intervening to address user concerns as they arise throughout the system development lifecycle will lead to greater levels of user acceptance, while ultimately enhancing the deliverability of a system that provides a best fit with end-user needs. It is envisaged that this research will lead to the development of a formalised user acceptance framework based on an agile approach to user acceptance measurement. The results of an ongoing study of user perceptions towards a mandated electronic point-of-care information system in the Northern Illawarra Ambulatory Care Team (TACT) are presented. PMID:17911883

  3. Experiences and Acceptance of Intimate Partner Violence: Associations with STI Symptoms and Ability to Negotiate Sexual Safety among Young Liberian Women

    PubMed Central

    Callands, Tamora A.; Sipsma, Heather L.; Betancourt, Theresa S.; Hansen, Nathan B.

    2013-01-01

    Women who experience intimate partner violence may be at elevated risk for poor sexual health outcomes, including sexually transmitted infections (STIs). This association, however, has not been consistently demonstrated in low-income or post-conflict countries; furthermore, the role that attitudes towards intimate partner violence play in sexual health outcomes and behaviour has rarely been examined. We examined associations between intimate partner violence experiences, accepting attitudes towards physical intimate partner violence, and sexual health and behavioural outcomes among 592 young women in post-conflict Liberia. Participants’ experiences with either moderate or severe physical violence or sexual violence were common. Additionally, accepting attitudes towards physical intimate partner violence were positively associated with reporting STI symptoms, intimate partner violence experiences, and the ability to negotiate safe sex. Findings suggest that for sexual health promotion and risk reduction intervention efforts to achieve full impact, interventions must address the contextual influence of violence, including individual attitudes toward intimate partner violence. PMID:23586393

  4. A framework for benchmarking land models

    SciTech Connect

    Luo, Yiqi; Randerson, J.; Abramowitz, G.; Bacour, C.; Blyth, E.; Carvalhais, N.; Ciais, Philippe; Dalmonech, D.; Fisher, J.B.; Fisher, R.; Friedlingstein, P.; Hibbard, Kathleen A.; Hoffman, F. M.; Huntzinger, Deborah; Jones, C.; Koven, C.; Lawrence, David M.; Li, D.J.; Mahecha, M.; Niu, S.L.; Norby, Richard J.; Piao, S.L.; Qi, X.; Peylin, P.; Prentice, I.C.; Riley, William; Reichstein, M.; Schwalm, C.; Wang, Y.; Xia, J. Y.; Zaehle, S.; Zhou, X. H.

    2012-10-09

    Land models, which have been developed by the modeling community in the past few decades to predict future states of ecosystems and climate, have to be critically evaluated for their skill in simulating ecosystem responses and feedback to climate change. Benchmarking is an emerging procedure to measure the performance of models against a set of defined standards. This paper proposes a benchmarking framework for the evaluation of land model performance and highlights major challenges at this early stage of benchmark analysis. The framework includes (1) targeted aspects of model performance to be evaluated, (2) a set of benchmarks as defined references to test model performance, (3) metrics to measure and compare performance skills among models so as to identify model strengths and deficiencies, and (4) model improvement. Land models are required to simulate the exchange of water, energy, carbon, and sometimes other trace gases between the atmosphere and the land surface, and should be evaluated for their simulations of biophysical processes, biogeochemical cycles, and vegetation dynamics in response to climate change across broad temporal and spatial scales. Thus, one major challenge is to select and define a limited number of benchmarks that effectively evaluate land model performance. The second challenge is to develop metrics for measuring the mismatches between models and benchmarks. The metrics may include (1) a priori thresholds of acceptable model performance and (2) a scoring system to combine data-model mismatches for various processes at different temporal and spatial scales. The benchmark analyses should identify clues of weak model performance to guide future development, thus enabling improved predictions of future states of ecosystems and climate. The near-future research effort should be on the development of a set of widely acceptable benchmarks that can be used to objectively, effectively, and reliably evaluate fundamental properties of land models…
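
    As one concrete, purely illustrative realization of the proposed scoring idea (the function names and the exponential mapping are assumptions, not the framework's prescribed metric), each benchmark variable can be reduced to a bounded skill score and the scores combined with weights reflecting evaluation priorities:

        import numpy as np

        def variable_score(model, obs):
            # RMSE normalized by observed variability, mapped to (0, 1]
            # so a perfect match scores 1 and large mismatches approach 0.
            model, obs = np.asarray(model, float), np.asarray(obs, float)
            rmse = np.sqrt(np.mean((model - obs) ** 2))
            return float(np.exp(-rmse / obs.std()))

        def combined_score(scores, weights):
            # Weighted aggregate over processes (e.g., carbon flux, energy
            # flux, vegetation dynamics); weights encode evaluation priorities.
            w = np.asarray(weights, dtype=float)
            return float(np.dot(np.asarray(scores, dtype=float), w) / w.sum())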

  5. Variations in View of Acceptability: Report on an Experiment. Further Contrastive Papers, Jyvaskyla Contrastive Studies, 6. Reports from the Department of English, No. 7.

    ERIC Educational Resources Information Center

    Rasanen, Anne

    The effect of Finnish language experiences on the way native speakers of English evaluate errors made by Finns in producing English was examined. The study was designed to show the role of the criterion of acceptability in the evaluation process and to establish some of the sociolinguistic and psycholinguistic factors that may affect the…

  6. Benchmark test of 14-MeV neutron-induced gamma-ray production data in JENDL-3.2 and FENDL/E-1.0 through analysis of the OKTAVIAN experiments

    SciTech Connect

    Maekawa, F.; Oyama, F.

    1996-06-01

    Secondary gamma rays play an important role along with neutrons in influencing nuclear design parameters, such as nuclear heating, radiation dose, and material damage on the plasma-facing components, vacuum vessel, and superconducting magnets, of fusion devices. Because evaluated nuclear data libraries are used in the designs, one must examine the accuracy of secondary gamma-ray data in these libraries through benchmark tests of existing experiments. The validity of the data should be confirmed, or problems with the data should be pointed out through these benchmark tests to ensure the quality of the design. Here, gamma-ray production data of carbon, fluorine, aluminum, silicon, titanium, chromium, manganese, cobalt, copper, niobium, molybdenum, tungsten, and lead in JENDL-3.2 and FENDL/E-1.0 induced by 14-MeV neutrons are tested through benchmark analyses of leakage gamma-ray spectrum measurements conducted at the OKTAVIAN deuterium-tritium neutron source facility. The MCNP transport code is used along with the flagging method for detailed analyses of the spectra. As a result, several moderate problems are pointed out for secondary gamma-ray data of titanium, chromium, manganese, and lead in FENDL/E-1.0. Because no fatal errors are found, however, secondary gamma-ray data for the 13 elements in both libraries are reasonably well validated through these benchmark tests as far as 14-MeV neutron incidence is concerned.

  7. The Mindful Way Through the Semester: Evaluating the Impact of Integrating an Acceptance-Based Behavioral Program Into a First-Year Experience Course for Undergraduates.

    PubMed

    Danitz, Sara B; Suvak, Michael K; Orsillo, Susan M

    2016-07-01

    Preventing and reducing depression in first-year college students are crucial areas in need of attention and resources. Programs that are cost-effective and time-efficient, that have replicable benefits across samples, are sorely needed. This study aims to examine whether a previously studied acceptance-based behavioral (ABBT) program, the Mindful Way Through the Semester (MWTS), is effective in comparison to a control condition at decreasing levels of depression and enhancing acceptance and academic values when integrated into a first-year undergraduate experience course. The current study also sought to examine the association between change in acceptance, mindfulness practice, and values practice on outcomes. Two hundred thirteen students were assigned to either the MWTS workshop condition or the control condition (in which the first-year experience curriculum as usual was received). Results revealed that the workshop condition produced larger decreases in depression over the course of the semester relative to the control condition, but only for participants endorsing higher levels of depression at baseline. Further, for participants in the workshop condition, changes in depression were negatively associated with changes in acceptance (i.e., larger increases in acceptance associated with larger decreases in depression), an association that was not statistically significant in the control group. Lastly, for participants in the workshop condition who endorsed higher levels of depression at baseline, mindfulness and values practice was associated with greater reductions in depression. Implications of these findings for future interventions are discussed. PMID:27423165

  8. Exploiting Cloud Radar Doppler Spectra of Mixed-Phase Clouds during ACCEPT Field Experiment to Identify Microphysical Processes

    NASA Astrophysics Data System (ADS)

    Kalesse, H.; Myagkov, A.; Seifert, P.; Buehl, J.

    2015-12-01

    Measurements were taken during the Analysis of the Composition of Clouds with Extended Polarization Techniques (ACCEPT) field experiment in Cabauw, Netherlands, in Fall 2014, where a MIRA-35 cloud radar was operated in simultaneous transmission and simultaneous reception (STSR) mode for obtaining measurements of differential reflectivity (ZDR) and correlation coefficient ρhv.

  9. RISKIND verification and benchmark comparisons

    SciTech Connect

    Biwer, B.M.; Arnish, J.J.; Chen, S.Y.; Kamboj, S.

    1997-08-01

    This report presents verification calculations and benchmark comparisons for RISKIND, a computer code designed to estimate potential radiological consequences and health risks to individuals and the population from exposures associated with the transportation of spent nuclear fuel and other radioactive materials. Spreadsheet calculations were performed to verify the proper operation of the major options and calculational steps in RISKIND. The program is unique in that it combines a variety of well-established models into a comprehensive treatment for assessing risks from the transportation of radioactive materials. Benchmark comparisons with other validated codes that incorporate similar models were also performed. For instance, the external gamma and neutron dose rate curves for a shipping package estimated by RISKIND were compared with those estimated by using the RADTRAN 4 code and NUREG-0170 methodology. Atmospheric dispersion of released material and dose estimates were compared with results from the GENII and CAP88-PC codes. Verification results have shown the program to be performing its intended function correctly. The benchmark results indicate that the predictions made by RISKIND are within acceptable limits when compared with predictions from similar existing models.

  10. Suicide acceptability among U.S. Veterans with active duty experience: results from the 2010 General Social Survey.

    PubMed

    Blosnich, John; Bossarte, Robert

    2013-01-01

    The objective of this study was to examine whether U.S. Veterans more frequently indicate suicide acceptability than non-Veterans. The 2010 General Social Survey, which employed a probability-based sample of U.S. adults, was analyzed by self-reported Veteran status on suicide acceptability in four, separate hypothetical situations regarding ending one's life (i.e., incurable illness, bankruptcy, bringing dishonor/shame upon family, tired of living and ready to die). Veterans were no more likely to endorse suicide as acceptable than their non-Veteran counterparts. Results suggest that attitudes approving of suicide are not different among Veterans in general and non-Veterans. However, future research may need to examine whether subpopulations of Veterans with elevated risk for suicide may report differential attitudes about suicide. PMID:23387403

  11. Custodial Homes, Therapeutic Homes, and Parental Acceptance: Parental Experiences of Autism in Kerala, India and Atlanta, GA USA.

    PubMed

    Sarrett, Jennifer C

    2015-06-01

    The home is a critical place to learn about cultural values of childhood disability, including autism and intellectual disabilities. The current article describes how the introduction of autism into a home and the availability of intervention options change the structure and meaning of a home and reflect parental acceptance of a child's autistic traits. Using ethnographic data from Kerala, India and Atlanta, GA USA, a description of two types of homes are developed: the custodial home, which is primarily focused on caring for basic needs, and the therapeutic home, which is focused on changing a child's autistic traits. The type of home environment is respondent to cultural practices of child rearing in the home and influences daily activities, management, and care in the home. Further, these homes differ in parental acceptance of their autistic children's disabilities, which is critical to understand when engaging in international work related to autism and intellectual disability. It is proposed that parental acceptance can be fostered through the use of neurodiverse notions that encourage autism acceptance. PMID:25772598

  12. Pregnant and Postpartum Women's Experiences and Perspectives on the Acceptability and Feasibility of Copackaged Medicine for Antenatal Care and PMTCT in Lesotho

    PubMed Central

    Gill, Michelle M.; Hoffman, Heather J.; Tiam, Appolinaire; Mohai, Florence M.; Mokone, Majoalane; Isavwa, Anthony; Mohale, Sesomo; Makhohlisa, Matela; Ankrah, Victor; Luo, Chewe; Guay, Laura

    2015-01-01

    Objective. To improve PMTCT and antenatal care-related service delivery, a pack with centrally prepackaged medicine was rolled out to all pregnant women in Lesotho in 2011. This study assessed acceptability and feasibility of this copackaging mechanism for drug delivery among pregnant and postpartum women. Methods. Acceptability and feasibility were assessed in a mixed method, cross-sectional study through structured interviews (SI) and semistructured interviews (SSI) conducted in 2012 and 2013. Results. 290 HIV-negative women and 437 HIV-positive women (n = 727) participated. Nearly all SI participants found prepackaged medicines acceptable, though modifications such as size reduction of the pack were suggested. Positive experiences included that the pack helped women take pills as instructed and contents promoted healthy pregnancies. Negative experiences included inadvertent pregnancy disclosure and discomfort carrying the pack in communities. Implementation was also feasible; 85.2% of SI participants reported adequate counseling time, though 37.8% felt pack use caused clinic delays. SSI participants reported improvement in service quality following pack introduction, due to more comprehensive counseling. Conclusions. A prepackaged drug delivery mechanism for ANC/PMTCT medicines was acceptable and feasible. Findings support continued use of this approach in Lesotho with improved design modifications to reflect the current PMTCT program of lifelong treatment for all HIV-positive pregnant women. PMID:26649193

  13. HLW Return from France to Germany - 15 Years of Experience in Public Acceptance and Technical Aspects - 12149

    SciTech Connect

    Graf, Wilhelm

    2012-07-01

    … Germany over the whole 15-year project running time could be faced efficiently. It has to be concluded that, despite all the problems the anti-nuclear activities have caused so far, all transports of vitrified HLW have been completed successfully by adapting the commonly established safety, security, and public acceptance measures to the special conditions and needs in Germany and by coordinating the activities of all parties involved, though at the expense of high costs for industry and government and a challenging operational complexity. Apart from anticipatory project planning, good communication between all involved industrial parties and the French and German governments was the key to the effective management of such shipments and to minimizing the radiological, economic, environmental, public, and political impact. The future will show how efficiently the experience gained can be used for further return projects, which are still to be realized, since no reprocessed waste has yet been returned from the UK and neither the medium-level nor the low-level radioactive waste has been transferred from France to Germany. (author)

  14. Relationships between early experience to dietary diversity, acceptance of novel flavors, and open field behavior in sheep.

    PubMed

    Villalba, Juan J; Catanese, Francisco; Provenza, Frederick D; Distel, Roberto A

    2012-01-18

    This study determined whether early experiences of sheep with monotonous or diverse diets influence: (1) plasma profiles of cortisol, a hormone involved in stress responses by mammals, before and after an ACTH challenge, (2) the readiness to eat new foods in a new environment, (3) general fearfulness and response to separation, as measured by the open field test (OFT) and stress-induced hyperthermia (SIH), and (4) the link between (2) and (3). Thirty 2-mo-old lambs were randomly assigned to 3 treatments (10 lambs/treatment). Lambs in one treatment (Diversity--DV) received in successive periods of exposure all possible 4-way choice combinations of 2 foods high in energy and 2 foods high in protein from an array of 6 foods: 3 high in energy (beet pulp, oat grain, and a mix of grape pomace:milo [40:60]) and 3 high in protein (soybean meal, alfalfa, corn gluten meal). Lambs in another treatment (DV+T) received the same exposure described for DV, but two phytochemicals, oxalic acid (1.5%) and quebracho tannins (10%), were randomly added within any period of exposure to foods high in energy or to foods high in protein. Lambs in the third treatment (Monotony--MO) received a monotonous balanced ration containing all 6 foods fed to the other groups. After exposure, lambs were offered a choice of the aforementioned 6 foods (DV; DV+T) or the monotonous diet (MO). Lambs were intravenously injected with ACTH 1 h after food presentation and sampled at 1, 2, and 3 h post feeding for determination of plasma cortisol concentrations. Reluctance to eat novel flavored foods (onion-, coconut- and cinnamon-flavored wheat bran), open field behavior, and SIH were assessed in all treatments. Lambs in MO showed greater concentrations of plasma cortisol 1 h after food presentation than lambs in the DV or DV+T treatments (P=0.04). However, the difference was small and no differences among treatments were detected after the ACTH challenge (P>0.1). Lambs in DV consumed more onion-flavored wheat…

  15. Performance Evaluation and Benchmarking of Intelligent Systems

    SciTech Connect

    Madhavan, Raj; Messina, Elena; Tunstel, Edward

    2009-09-01

    To design and develop capable, dependable, and affordable intelligent systems, their performance must be measurable. Scientific methodologies for standardization and benchmarking are crucial for quantitatively evaluating the performance of emerging robotic and intelligent systems technologies. There is currently no accepted standard for quantitatively measuring the performance of these systems against user-defined requirements; furthermore, there is no consensus on what objective evaluation procedures need to be followed to understand the performance of these systems. The lack of reproducible and repeatable test methods has precluded researchers working towards a common goal from exchanging and communicating results, inter-comparing system performance, and leveraging previous work that could otherwise avoid duplication and expedite technology transfer. Currently, this lack of cohesion in the community hinders progress in many domains, such as manufacturing, service, healthcare, and security. By providing the research community with access to standardized tools, reference data sets, and open source libraries of solutions, researchers and consumers will be able to evaluate the cost and benefits associated with intelligent systems and associated technologies. In this vein, the edited book volume addresses performance evaluation and metrics for intelligent systems, in general, while emphasizing the need and solutions for standardized methods. To the knowledge of the editors, there is not a single book on the market that is solely dedicated to the subject of performance evaluation and benchmarking of intelligent systems. Even books that address this topic do so only marginally or are out of date. The research work presented in this volume fills this void by drawing from the experiences and insights of experts gained both through theoretical development and practical implementation of intelligent systems in a variety of diverse application domains. The book presents…

  16. A Seafloor Benchmark for 3-dimensional Geodesy

    NASA Astrophysics Data System (ADS)

    Chadwell, C. D.; Webb, S. C.; Nooner, S. L.

    2014-12-01

    We have developed an inexpensive, permanent seafloor benchmark to increase the longevity of seafloor geodetic measurements. The benchmark provides a physical tie to the sea floor lasting for decades (perhaps longer) on which geodetic sensors can be repeatedly placed and removed with millimeter resolution. Global coordinates estimated with seafloor geodetic techniques will remain attached to the benchmark, allowing for the interchange of sensors as they fail or become obsolete, or for the sensors to be removed and used elsewhere, all the while maintaining a coherent series of positions referenced to the benchmark. The benchmark has been designed to free fall from the sea surface with transponders attached. The transponder can be recalled via an acoustic command sent from the surface to release from the benchmark and freely float to the sea surface for recovery. The duration of the sensor attachment to the benchmark will last from a few days to a few years depending on the specific needs of the experiment. The recovered sensors are then available to be reused at other locations, or again at the same site in the future. Three pins on the sensor frame mate precisely and unambiguously with three grooves on the benchmark. To reoccupy a benchmark, a Remotely Operated Vehicle (ROV) uses its manipulator arm to place the sensor pins into the benchmark grooves. In June 2014 we deployed four benchmarks offshore central Oregon. We used the ROV Jason to successfully demonstrate the removal and replacement of packages onto the benchmark. We will show the benchmark design and its operational capabilities. Presently, models of megathrust slip within the Cascadia Subduction Zone (CSZ) are mostly constrained by the sub-aerial GPS vectors from the Plate Boundary Observatory, a part of Earthscope. More long-lived seafloor geodetic measures are needed to better understand the earthquake and tsunami risk associated with a large rupture of the thrust fault within the Cascadia subduction zone.

  17. Making Benchmark Testing Work

    ERIC Educational Resources Information Center

    Herman, Joan L.; Baker, Eva L.

    2005-01-01

    Many schools are moving to develop benchmark tests to monitor their students' progress toward state standards throughout the academic year. Benchmark tests can provide the ongoing information that schools need to guide instructional programs and to address student learning problems. The authors discuss six criteria that educators can use to…

  18. Thermal Performance Benchmarking (Presentation)

    SciTech Connect

    Moreno, G.

    2014-11-01

    This project will benchmark the thermal characteristics of automotive power electronics and electric motor thermal management systems. Recent vehicle systems will be benchmarked to establish baseline metrics, evaluate advantages and disadvantages of different thermal management systems, and identify areas of improvement to advance the state-of-the-art.

  19. Benchmark Evaluation of Plutonium Nitrate Solution Arrays

    SciTech Connect

    M. A. Marshall; J. D. Bess

    2011-09-01

    In October and November of 1981, thirteen approach-to-critical experiments were performed on a remote split table machine (RSTM) in the Critical Mass Laboratory of Pacific Northwest Laboratory (PNL) in Richland, Washington, using planar arrays of polyethylene bottles filled with plutonium (Pu) nitrate solution. Arrays of up to sixteen bottles were used to measure the critical number of bottles and critical array spacing with a tight-fitting Plexiglas{reg_sign} reflector on all sides of the arrays except the top. Some experiments used Plexiglas shells fitted around each bottle to determine the effect of moderation on criticality. Each bottle contained approximately 2.4 L of Pu(NO3)4 solution with a Pu content of 105 g Pu/L and a free acid molarity H+ of 5.1. The plutonium was of low 240Pu (2.9 wt.%) content. These experiments were performed to fill a gap in experimental data regarding criticality limits for storing and handling arrays of Pu solution in reprocessing facilities. Of the thirteen approach-to-critical experiments, eleven resulted in extrapolations to critical configurations. Four of the approaches were extrapolated to the critical number of bottles; these were not evaluated further due to the large uncertainty associated with the modeling of a fraction of a bottle. The remaining seven approaches were extrapolated to the critical array spacing of 3 x 4 and 4 x 4 arrays; these seven critical configurations were evaluated for inclusion as acceptable benchmark experiments in the International Criticality Safety Benchmark Evaluation Project (ICSBEP) Handbook. Detailed and simple models of these configurations were created, and the associated bias of these simplifications was determined to range from 0.00116 to 0.00162 {+-} 0.00006 Δkeff. Monte Carlo analysis of all models was completed using MCNP5 with ENDF/B-VII.0 neutron cross-section libraries. A thorough uncertainty analysis of all critical, geometric, and material parameters was performed using parameter…

  20. Acceptance in the domestic environment: the experience of senior housing for lesbian, gay, bisexual, and transgender seniors.

    PubMed

    Sullivan, Kathleen M

    2014-01-01

    The social environment impacts the ability of older adults to interact successfully with their community and age-in-place. This study asked, for the first time, residents of existing Lesbian, Gay, Bisexual, and Transgender (LGBT) senior living communities to explain why they chose to live in those communities and what, if any, benefit the community afforded them. Focus groups were conducted at 3 retirement communities. Analysis found common categories across focus groups that explain the phenomenon of LGBT senior housing. Acceptance is paramount for LGBT seniors and social networks expanded, contrary to socioemotional selectivity theory. Providers are encouraged to develop safe spaces for LGBT seniors. PMID:24313822

  1. ICSBEP Benchmarks For Nuclear Data Applications

    NASA Astrophysics Data System (ADS)

    Briggs, J. Blair

    2005-05-01

    The International Criticality Safety Benchmark Evaluation Project (ICSBEP) was initiated in 1992 by the United States Department of Energy. The ICSBEP became an official activity of the Organization for Economic Cooperation and Development (OECD) — Nuclear Energy Agency (NEA) in 1995. Representatives from the United States, United Kingdom, France, Japan, the Russian Federation, Hungary, Republic of Korea, Slovenia, Serbia and Montenegro (formerly Yugoslavia), Kazakhstan, Spain, Israel, Brazil, Poland, and the Czech Republic are now participating. South Africa, India, China, and Germany are considering participation. The purpose of the ICSBEP is to identify, evaluate, verify, and formally document a comprehensive and internationally peer-reviewed set of criticality safety benchmark data. The work of the ICSBEP is published as an OECD handbook entitled "International Handbook of Evaluated Criticality Safety Benchmark Experiments." The 2004 Edition of the Handbook contains benchmark specifications for 3331 critical or subcritical configurations that are intended for use in validation efforts and for testing basic nuclear data. New to the 2004 Edition of the Handbook is a draft criticality alarm / shielding type benchmark that should be finalized in 2005 along with two other similar benchmarks. The Handbook is being used extensively for nuclear data testing and is expected to be a valuable resource for code and data validation and improvement efforts for decades to come. Specific benchmarks that are useful for testing structural materials such as iron, chromium, nickel, and manganese; beryllium; lead; thorium; and 238U are highlighted.

  2. Geothermal Heat Pump Benchmarking Report

    SciTech Connect

    1997-01-17

    A benchmarking study was conducted on behalf of the Department of Energy to determine the critical factors in successful utility geothermal heat pump programs. A successful program is one that has achieved significant market penetration. Successfully marketing geothermal heat pumps has presented some major challenges to the utility industry. However, select utilities have developed programs that generate significant GHP sales. This benchmarking study concludes that there are three factors critical to the success of utility GHP marketing programs: (1) top management marketing commitment; (2) an understanding of the fundamentals of marketing and business development; and (3) an aggressive competitive posture. To generate significant GHP sales, competitive market forces must be used. However, because utilities have functioned only in a regulated arena, these companies and their leaders are unschooled in competitive business practices. Therefore, a lack of experience coupled with an intrinsically non-competitive culture yields an industry environment that impedes the generation of significant GHP sales in many, but not all, utilities.

  3. A performance geodynamo benchmark

    NASA Astrophysics Data System (ADS)

    Matsui, H.; Heien, E. M.

    2014-12-01

    In the last ten years, a number of numerical dynamo models have successfully represented basic characteristics of the geomagnetic field. However, to approach the parameter regime of the Earth's outer core, we need a massively parallel computational environment for extremely large spatial resolutions. Local methods are expected to be more suitable for massively parallel computation because they need less data communication than the spherical harmonics expansion, but only a few groups have reported results of the dynamo benchmark using local methods (Harder and Hansen, 2005; Matsui and Okuda, 2005; Chan et al., 2007) because of the difficulty of treating magnetic boundary conditions with local methods. On the other hand, some numerical dynamo models using the spherical harmonics expansion have performed successfully with thousands of processes. We perform benchmark tests to assess various numerical methods for the next generation of geodynamo simulations. The purpose of the present benchmark test is to assess numerical geodynamo models on a massively parallel computational platform. To compare among as many numerical methods as possible, we consider the model with the insulated magnetic boundary by Christensen et al. (2001) and with the pseudo-vacuum magnetic boundary, because the pseudo-vacuum boundaries are easier to implement with the local methods than the magnetic insulated boundaries. We consider two kinds of benchmarks, the so-called accuracy benchmark and the performance benchmark; here we report the results of the performance benchmark. We run the participating dynamo models under the same computational environment (XSEDE TACC Stampede) and investigate their computational performance. To simplify the problem, we choose the same model and parameter regime as in the accuracy benchmark test, but perform the simulations with spatial resolutions as fine as possible to investigate computational capability…
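
    Performance benchmarks of this kind are usually summarized with standard strong-scaling metrics (generic definitions, not quoted from the study):

        S(p) = \frac{T(1)}{T(p)}, \qquad E(p) = \frac{S(p)}{p}

    where T(p) is the wall-clock time per simulated step on p processes, S(p) the speedup, and E(p) the parallel efficiency. Comparing E(p) across codes at matched resolution exposes how much each method's communication pattern costs at scale.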

  4. Toxicological Benchmarks for Wildlife

    SciTech Connect

    Sample, B.E.; Opresko, D.M.; Suter, G.W.

    1993-01-01

    Ecological risks of environmental contaminants are evaluated by using a two-tiered process. In the first tier, a screening assessment is performed where concentrations of contaminants in the environment are compared to no observed adverse effects level (NOAEL)-based toxicological benchmarks. These benchmarks represent concentrations of chemicals (i.e., concentrations presumed to be nonhazardous to the biota) in environmental media (water, sediment, soil, food, etc.). While exceedance of these benchmarks does not indicate any particular level or type of risk, concentrations below the benchmarks should not result in significant effects. In practice, when contaminant concentrations in food or water resources are less than these toxicological benchmarks, the contaminants may be excluded from further consideration. However, if the concentration of a contaminant exceeds a benchmark, that contaminant should be retained as a contaminant of potential concern (COPC) and investigated further. The second tier in ecological risk assessment, the baseline ecological risk assessment, may use toxicological benchmarks as part of a weight-of-evidence approach (Suter 1993). Under this approach, NOAEL-based toxicological benchmarks are one of several lines of evidence used to support or refute the presence of ecological effects. Other sources of evidence include media toxicity tests, surveys of biota (abundance and diversity), measures of contaminant body burdens, and biomarkers. This report presents NOAEL- and lowest observed adverse effects level (LOAEL)-based toxicological benchmarks for assessment of effects of 85 chemicals on 9 representative mammalian wildlife species (short-tailed shrew, little brown bat, meadow vole, white-footed mouse, cottontail rabbit, mink, red fox, and whitetail deer) or 11 avian wildlife species (American robin, rough-winged swallow, American woodcock, wild turkey, belted kingfisher, great blue heron, barred owl, barn owl, Cooper's hawk, and red-tailed hawk)…
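
    The first-tier screen described above reduces to a simple comparison rule. A minimal sketch (chemical names and concentrations are made-up illustrations, not values from the report):

        def screen_copcs(measured, benchmarks):
            # Retain a chemical as a contaminant of potential concern (COPC)
            # when its measured media concentration exceeds the NOAEL-based
            # benchmark; below-benchmark chemicals drop out of further review.
            return [chem for chem, conc in measured.items()
                    if chem in benchmarks and conc > benchmarks[chem]]

        # Example (units must match, e.g., mg/kg in soil):
        copcs = screen_copcs({"cadmium": 2.0, "zinc": 40.0},
                             {"cadmium": 1.5, "zinc": 120.0})
        # -> ["cadmium"]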

  5. Benchmarking expert system tools

    NASA Technical Reports Server (NTRS)

    Riley, Gary

    1988-01-01

    As part of its evaluation of new technologies, the Artificial Intelligence Section of the Mission Planning and Analysis Div. at NASA-Johnson has made timing tests of several expert system building tools. Among the production systems tested were the Automated Reasoning Tool, several versions of OPS5, and CLIPS (C Language Integrated Production System), an expert system builder developed by the AI section. Also included in the test were a Zetalisp version of the benchmark along with four versions of the benchmark written in Knowledge Engineering Environment, an object-oriented, frame-based expert system tool. The benchmarks used for testing are studied.

  7. From traditional cognitive-behavioural therapy to acceptance and commitment therapy for chronic pain: a mixed-methods study of staff experiences of change.

    PubMed

    Barker, Estelle; McCracken, Lance M

    2014-08-01

    Health care organizations, both large and small, frequently undergo processes of change. In fact, if health care organizations are to improve over time, they must change; this includes pain services. The purpose of the present study was to examine a process of change in treatment model within a specialty interdisciplinary pain service in the UK. This change entailed a switch from traditional cognitive-behavioural therapy to a form of cognitive-behavioural therapy called acceptance and commitment therapy. An anonymous online survey, including qualitative and quantitative components, was carried out approximately 15 months after the initial introduction of the new treatment model and methods. Fourteen out of 16 current clinical staff responded to the survey. Three themes emerged in qualitative analyses: positive engagement in change; uncertainty and discomfort; and group cohesion versus discord. Quantitative results from closed questions showed a pattern of uncertainty about the superiority of one model over the other, combined with more positive views on progress reflected, and the experience of personal benefits, from adopting the new model. The psychological flexibility model, the model behind acceptance and commitment therapy, may clarify both processes in patient behaviour and processes of staff experience and skilful treatment delivery. This integration of processes on both sides of treatment delivery may be a strength of acceptance and commitment therapy. PMID:26516541

  8. The Lasting Influences of Early Food-Related Variety Experience: A Longitudinal Study of Vegetable Acceptance from 5 Months to 6 Years in Two Populations.

    PubMed

    Maier-Nöth, Andrea; Schaal, Benoist; Leathwood, Peter; Issanchou, Sylvie

    2016-01-01

    Children's vegetable consumption falls below current recommendations, highlighting the need to identify strategies that can successfully promote better acceptance of vegetables. Recently, experimental studies have reported promising interventions that increase acceptance of vegetables. The first, offering infants a high variety of vegetables at weaning, increased acceptance of new foods, including vegetables. The second, offering an initially disliked vegetable at 8 subsequent meals markedly increased acceptance for that vegetable. So far, these effects have been shown to persist for at least several weeks. We now present follow-up data at 15 months, 3 and 6 years obtained through questionnaire (15 mo, 3y) and experimental (6y) approaches. At 15 months, participants who had been breast-fed were reported as eating and liking more vegetables than those who had been formula-fed. The initially disliked vegetable that became accepted after repeated exposure was still liked and eaten by 79% of the children. At 3 years, the initially disliked vegetable was still liked and eaten by 73% of the children. At 6 years, observations in an experimental setting showed that children who had been breast-fed and children who had experienced high vegetable variety at the start of weaning ate more of new vegetables and liked them more. They were also more willing to taste vegetables than formula-fed children or the no or low variety groups. The initially disliked vegetable was still liked by 57% of children. This follow-up study suggests that experience with chemosensory variety in the context of breastfeeding or at the onset of complementary feeding can influence chemosensory preferences for vegetables into childhood. PMID:26968029

  10. Clinical trials in cancer: the role of surrogate patients in defining what constitutes an ethically acceptable clinical experiment.

    PubMed Central

    Mackillop, W. J.; Palmer, M. J.; O'Sullivan, B.; Ward, G. K.; Steele, R.; Dotsikas, G.

    1989-01-01

    Doctors who treat lung cancer in Ontario were previously asked how they would wish to be managed if they developed non-small cell lung cancer and whether they would consent to participate in six clinical trials for which they might be eligible. The proportion of these expert surrogate patients who would consent to each clinical trial ranged from 11 to 64%. The results of this study were transmitted to the same group of doctors who were asked to comment on the ethical acceptability of each trial in the light of this information. The majority of physicians said that those trials to which less than 50% of expert surrogates consented should not have been opened to patients. Sixty-nine per cent of doctors thought that new trials should be evaluated in this way. We also present the results of a survey of 400 lay people in Ontario who were asked to imagine that they had lung cancer and whether they would consent to participate in two of these same clinical trials. Fifty per cent of lay people consented to a randomised trial of lobectomy versus segmentectomy in early, operable disease (LCSC-821) compared to 64% of expert surrogates, and 48% of lay people consented to a randomised trial of five different forms of chemotherapy in metastatic disease (SWOG-8241) compared to 19% of doctors. It was concluded that the lay people were unable to discern differences in the acceptability of clinical trials which were clear to experts in the field. Subsequently, respondents were told about the decisions which doctors would make in the same circumstances and asked if this information would modify their previous decisions. There was no net change in the proportion of patients consenting to the surgery trial but the proportion of people consenting to the chemotherapy trial decreased by 40%. The majority of lay people said that they would wish to have access to this type of information before consenting to participate in a clinical trial. PMID:2930704

  11. XWeB: The XML Warehouse Benchmark

    NASA Astrophysics Data System (ADS)

    Mahboubi, Hadj; Darmont, Jérôme

    With the emergence of XML as a standard for representing business data, new decision support applications are being developed. These XML data warehouses aim at supporting On-Line Analytical Processing (OLAP) operations that manipulate irregular XML data. To ensure feasibility of these new tools, important performance issues must be addressed. Performance is customarily assessed with the help of benchmarks. However, decision support benchmarks do not currently support XML features. In this paper, we introduce the XML Warehouse Benchmark (XWeB), which aims at filling this gap. XWeB derives from the relational decision support benchmark TPC-H. It is mainly composed of a test data warehouse that is based on a unified reference model for XML warehouses and that features XML-specific structures, and its associated XQuery decision support workload. XWeB's usage is illustrated by experiments on several XML database management systems.
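
    As a rough illustration of the kind of OLAP-style aggregation an XQuery decision-support workload poses over XML data, the sketch below groups and sums sales stored as XML elements; the document structure is invented for the example and is not XWeB's reference model:

```python
# Toy illustration of an OLAP-style aggregation over XML, of the kind XWeB's
# XQuery decision-support workload exercises. The document structure is
# invented for this sketch and is not XWeB's reference model.
import xml.etree.ElementTree as ET
from collections import defaultdict

doc = ET.fromstring("""
<warehouse>
  <sale region="EU" amount="120.0"/>
  <sale region="EU" amount="80.0"/>
  <sale region="US" amount="200.0"/>
</warehouse>
""")

# Equivalent in spirit to an XQuery FLWOR expression grouping sales by region.
totals = defaultdict(float)
for sale in doc.iter("sale"):
    totals[sale.get("region")] += float(sale.get("amount"))
print(dict(totals))   # {'EU': 200.0, 'US': 200.0}
```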

  12. BENCHMARKING SUSTAINABILITY ENGINEERING EDUCATION

    EPA Science Inventory

    The goals of this project are to develop and apply a methodology for benchmarking curricula in sustainability engineering and to identify individuals active in sustainability engineering education.

  13. Diagnostic Algorithm Benchmarking

    NASA Technical Reports Server (NTRS)

    Poll, Scott

    2011-01-01

    A poster for the NASA Aviation Safety Program Annual Technical Meeting. It describes empirical benchmarking on diagnostic algorithms using data from the ADAPT Electrical Power System testbed and a diagnostic software framework.

  14. Benchmarking ENDF/B-VII.0

    NASA Astrophysics Data System (ADS)

    van der Marck, Steven C.

    2006-12-01

    The new major release VII.0 of the ENDF/B nuclear data library has been tested extensively using benchmark calculations. These were based upon MCNP-4C3 continuous-energy Monte Carlo neutronics simulations, together with nuclear data processed using the code NJOY. Three types of benchmarks were used, viz., criticality safety benchmarks, (fusion) shielding benchmarks, and reference systems for which the effective delayed neutron fraction is reported. For criticality safety, more than 700 benchmarks from the International Handbook of Criticality Safety Benchmark Experiments were used. Benchmarks from all categories were used, ranging from low-enriched uranium, compound fuel, thermal spectrum ones (LEU-COMP-THERM), to mixed uranium-plutonium, metallic fuel, fast spectrum ones (MIX-MET-FAST). For fusion shielding many benchmarks were based on IAEA specifications for the Oktavian experiments (for Al, Co, Cr, Cu, LiF, Mn, Mo, Si, Ti, W, Zr), Fusion Neutronics Source in Japan (for Be, C, N, O, Fe, Pb), and Pulsed Sphere experiments at Lawrence Livermore National Laboratory (for 6Li, 7Li, Be, C, N, O, Mg, Al, Ti, Fe, Pb, D2O, H2O, concrete, polyethylene and Teflon). For testing delayed neutron data more than thirty measurements in widely varying systems were used. Among these were measurements in the Tank Critical Assembly (TCA in Japan) and IPEN/MB-01 (Brazil), both with a thermal spectrum, and two cores in Masurca (France) and three cores in the Fast Critical Assembly (FCA, Japan), all with fast spectra. In criticality safety, many benchmarks were chosen from the category with a thermal spectrum, low-enriched uranium, compound fuel (LEU-COMP-THERM), because this is typical of most current-day reactors, and because these benchmarks were previously underpredicted by as much as 0.5% by most nuclear data libraries (such as ENDF/B-VI.8, JEFF-3.0). The calculated results presented here show that this underprediction is no longer there for ENDF/B-VII.0. The average over 257

  15. The KMAT: Benchmarking Knowledge Management.

    ERIC Educational Resources Information Center

    de Jager, Martha

    Provides an overview of knowledge management and benchmarking, including the benefits and methods of benchmarking (e.g., competitive, cooperative, collaborative, and internal benchmarking). Arthur Andersen's KMAT (Knowledge Management Assessment Tool) is described. The KMAT is a collaborative benchmarking tool, designed to help organizations make…

  16. Shielding Integral Benchmark Archive and Database (SINBAD)

    SciTech Connect

    Kirk, Bernadette Lugue; Grove, Robert E; Kodeli, I.; Sartori, Enrico; Gulliford, J.

    2011-01-01

    The Shielding Integral Benchmark Archive and Database (SINBAD) collection of benchmarks was initiated in the early 1990s. SINBAD is an international collaboration between the Organization for Economic Cooperation and Development's Nuclear Energy Agency Data Bank (OECD/NEADB) and the Radiation Safety Information Computational Center (RSICC) at Oak Ridge National Laboratory (ORNL). SINBAD is a major attempt to compile experiments and corresponding computational models with the goal of preserving institutional knowledge and expertise that need to be handed down to future scientists. SINBAD is also a learning tool for university students and scientists who need to design experiments or gain expertise in modeling and simulation. The SINBAD database is currently divided into three categories: fission, fusion, and accelerator benchmarks. Where possible, each experiment is described and analyzed using deterministic or probabilistic (Monte Carlo) radiation transport software.

  17. From Being Bullied to Being Accepted: The Lived Experiences of a Student with Asperger's Enrolled in a Christian University

    ERIC Educational Resources Information Center

    Reid, Denise P.

    2015-01-01

    Thirteen participants from two private universities located in the western region of the United States shared their lived experiences of being a college student who does not request accommodations. In one's educational pursuit, bullying is often experienced. While the rates of bullying have increased, students with disabilities are more likely to…

  18. Workshops and problems for benchmarking eddy current codes

    SciTech Connect

    Turner, L.R.; Davey, K.; Ida, N.; Rodger, D.; Kameari, A.; Bossavit, A.; Emson, C.R.I.

    1988-08-01

    A series of six workshops was held in 1986 and 1987 to compare eddy current codes, using six benchmark problems. The problems included transient and steady-state ac magnetic fields, close and far boundary conditions, magnetic and non-magnetic materials. All the problems were based either on experiments or on geometries that can be solved analytically. The workshops and solutions to the problems are described. Results show that many different methods and formulations give satisfactory solutions, and that in many cases reduced dimensionality or coarse discretization can give acceptable results while reducing the computer time required. A second two-year series of TEAM (Testing Electromagnetic Analysis Methods) workshops, using six more problems, is underway. 12 refs., 15 figs., 4 tabs.

  19. Development of a HEX-Z Partially Homogenized Benchmark Model for the FFTF Isothermal Physics Measurements

    SciTech Connect

    John D. Bess

    2012-05-01

    A series of isothermal physics measurements were performed as part of an acceptance testing program for the Fast Flux Test Facility (FFTF). A HEX-Z partially-homogenized benchmark model of the FFTF fully-loaded core configuration was developed for evaluation of these measurements. Evaluated measurements include the critical eigenvalue of the fully-loaded core, two neutron spectra, 32 reactivity effects measurements, an isothermal temperature coefficient, and low-energy gamma and electron spectra. Dominant uncertainties in the critical configuration include the placement of radial shielding around the core, reactor core assembly pitch, composition of the stainless steel components, plutonium content in the fuel pellets, and boron content in the absorber pellets. Calculations of criticality, reactivity effects measurements, and the isothermal temperature coefficient using MCNP5 and ENDF/B-VII.0 cross sections with the benchmark model are in good agreement with the benchmark experiment measurements. There is only some correlation between calculated and measured spectral measurements; homogenization of many of the core components may have impacted computational assessment of these measurements. This benchmark evaluation has been added to the IRPhEP Handbook.

  20. Benchmarking infrastructure for mutation text mining

    PubMed Central

    2014-01-01

    Background: Experimental research on the automatic extraction of information about mutations from texts is greatly hindered by the lack of consensus evaluation infrastructure for the testing and benchmarking of mutation text mining systems. Results: We propose a community-oriented annotation and benchmarking infrastructure to support development, testing, benchmarking, and comparison of mutation text mining systems. The design is based on semantic standards, where RDF is used to represent annotations, an OWL ontology provides an extensible schema for the data and SPARQL is used to compute various performance metrics, so that in many cases no programming is needed to analyze results from a text mining system. While large benchmark corpora for biological entity and relation extraction are focused mostly on genes, proteins, diseases, and species, our benchmarking infrastructure fills the gap for mutation information. The core infrastructure comprises (1) an ontology for modelling annotations, (2) SPARQL queries for computing performance metrics, and (3) a sizeable collection of manually curated documents that can support mutation grounding and mutation impact extraction experiments. Conclusion: We have developed the principal infrastructure for the benchmarking of mutation text mining tasks. The use of RDF and OWL as the representation for corpora ensures extensibility. The infrastructure is suitable for out-of-the-box use in several important scenarios and is ready, in its current state, for initial community adoption. PMID:24568600
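
    A notable design choice above is that evaluation metrics are computed with SPARQL queries over RDF annotations rather than with bespoke scoring scripts. The toy sketch below illustrates that idea with rdflib; the ex: vocabulary and the tiny gold/predicted annotation set are invented for the example and are not the project's actual OWL schema:

```python
# Toy illustration: score a mutation-mention extractor with SPARQL over RDF,
# in the spirit of the benchmarking infrastructure described above.
# The ex: vocabulary is invented for this sketch, not the project's ontology.
from rdflib import Graph

data = """
@prefix ex: <http://example.org/> .
ex:doc1 ex:goldMutation "E545K" , "H1047R" .
ex:doc1 ex:predictedMutation "E545K" , "V600E" .
"""
g = Graph().parse(data=data, format="turtle")

# True positives: mutations that appear in both the gold and predicted sets.
tp = len(list(g.query("""
    PREFIX ex: <http://example.org/>
    SELECT ?m WHERE { ?d ex:goldMutation ?m ; ex:predictedMutation ?m . }
""")))
pred = len(list(g.query(
    "PREFIX ex: <http://example.org/> SELECT ?m WHERE { ?d ex:predictedMutation ?m . }")))
gold = len(list(g.query(
    "PREFIX ex: <http://example.org/> SELECT ?m WHERE { ?d ex:goldMutation ?m . }")))
print(f"precision={tp/pred:.2f} recall={tp/gold:.2f}")  # precision=0.50 recall=0.50
```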

  1. Inventory of Safety-related Codes and Standards for Energy Storage Systems with some Experiences related to Approval and Acceptance

    SciTech Connect

    Conover, David R.

    2014-09-11

    The purpose of this document is to identify laws, rules, model codes, codes, standards, regulations, and specifications (CSR) related to safety that could apply to stationary energy storage systems (ESS), together with experiences to date in securing approval of ESS in relation to CSR. This information is intended to assist in securing approval of ESS under current CSR and in identifying new CSR, revisions to existing CSR, and the supporting research and documentation needed to foster the deployment of safe ESS.

  2. PNNL Information Technology Benchmarking

    SciTech Connect

    DD Hostetler

    1999-09-08

    Benchmarking is a methodology for searching out industry best practices that lead to superior performance. It is exchanging information, not just with any organization, but with organizations known to be the best within PNNL, in industry, or in dissimilar industries with equivalent functions. It is used as a continuous improvement tool for business and technical processes, products, and services. Information technology--comprising all computer and electronic communication products and services--underpins the development and/or delivery of many PNNL products and services. This document describes the Pacific Northwest National Laboratory's (PNNL's) approach to information technology (IT) benchmarking. The purpose is to engage other organizations in the collaborative process of benchmarking in order to improve the value of IT services provided to customers. The document's intended audience consists of other US Department of Energy (DOE) national laboratories and their IT staff. Although the individual participants must define the scope of collaborative benchmarking, an outline of IT service areas for possible benchmarking is described.

  3. Effectiveness and acceptability of parental financial incentives and quasi-mandatory schemes for increasing uptake of vaccinations in preschool children: systematic review, qualitative study and discrete choice experiment.

    PubMed Central

    Adams, Jean; Bateman, Belinda; Becker, Frauke; Cresswell, Tricia; Flynn, Darren; McNaughton, Rebekah; Oluboyede, Yemi; Robalino, Shannon; Ternent, Laura; Sood, Benjamin Gardner; Michie, Susan; Shucksmith, Janet; Sniehotta, Falko F; Wigham, Sarah

    2015-01-01

    BACKGROUND: Uptake of preschool vaccinations is less than optimal. Financial incentives and quasi-mandatory policies (restricting access to child care or educational settings to fully vaccinated children) have been used to increase uptake internationally, but not in the UK. OBJECTIVE: To provide evidence on the effectiveness, acceptability and economic costs and consequences of parental financial incentives and quasi-mandatory schemes for increasing the uptake of preschool vaccinations. DESIGN: Systematic review, qualitative study and discrete choice experiment (DCE) with questionnaire. SETTING: Community, health and education settings in England. PARTICIPANTS: Qualitative study - parents and carers of preschool children, health and educational professionals. DCE - parents and carers of preschool children identified as 'at high risk' and 'not at high risk' of incompletely vaccinating their children. DATA SOURCES: Qualitative study - focus groups and individual interviews. DCE - online questionnaire. REVIEW METHODS: The review included studies exploring the effectiveness, acceptability or economic costs and consequences of interventions that offered contingent rewards or penalties with real material value for preschool vaccinations, or quasi-mandatory schemes that restricted access to 'universal' services, compared with usual care or no intervention. Electronic database, reference and citation searches were conducted. RESULTS: Systematic review - there was insufficient evidence to conclude that the interventions considered are effective. There was some evidence that the quasi-mandatory interventions were acceptable. There was insufficient evidence to draw conclusions on economic costs and consequences. Qualitative study - there was little appetite for parental financial incentives. Quasi-mandatory schemes were more acceptable. Optimising current services was consistently preferred to the interventions proposed. DCE and questionnaire - universal parental financial incentives

  4. The FTIO Benchmark

    NASA Technical Reports Server (NTRS)

    Fagerstrom, Frederick C.; Kuszmaul, Christopher L.; Woo, Alex C. (Technical Monitor)

    1999-01-01

    We introduce a new benchmark for measuring the performance of parallel input/output. This benchmark has flexible initialization, size, and scaling properties that allow it to satisfy seven criteria for practical parallel I/O benchmarks. We obtained performance results while running on an SGI Origin2000 computer with various numbers of processors: with 4 processors, the performance was 68.9 Mflop/s with 0.52 of the time spent on I/O; with 8 processors, the performance was 139.3 Mflop/s with 0.50 of the time spent on I/O; with 16 processors, the performance was 173.6 Mflop/s with 0.43 of the time spent on I/O; and with 32 processors, the performance was 259.1 Mflop/s with 0.47 of the time spent on I/O.

  5. Benchmarking. It's the future.

    PubMed

    Fazzi, Robert A; Agoglia, Robert V; Harlow, Lynn

    2002-11-01

    You can't go to a state conference, read a home care publication or log on to an Internet listserv ... without hearing or reading someone ... talk about benchmarking. What are your average case mix weights? How many visits are your nurses averaging per day? What is your average caseload for full time nurses in the field? What is your profit or loss per episode? The benchmark systems now available in home care potentially can serve as an early warning and partial protection for agencies. Agencies can collect data, analyze the outcomes, and through comparative benchmarking, determine where they are competitive and where they need to improve. These systems clearly provide agencies with the opportunity to be more proactive. PMID:12436898

  6. Accelerated randomized benchmarking

    NASA Astrophysics Data System (ADS)

    Granade, Christopher; Ferrie, Christopher; Cory, D. G.

    2015-01-01

    Quantum information processing offers promising advances for a wide range of fields and applications, provided that we can efficiently assess the performance of the control applied in candidate systems. That is, we must be able to determine whether we have implemented a desired gate, and refine accordingly. Randomized benchmarking reduces the difficulty of this task by exploiting symmetries in quantum operations. Here, we bound the resources required for benchmarking and show that, with prior information, we can achieve several orders of magnitude better accuracy than in traditional approaches to benchmarking. Moreover, by building on state-of-the-art classical algorithms, we reach these accuracies with near-optimal resources. Our approach requires an order of magnitude less data to achieve the same accuracies and to provide online estimates of the errors in the reported fidelities. We also show that our approach is useful for physical devices by comparing to simulations.
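
    For context, traditional randomized benchmarking estimates average gate error by fitting the zeroth-order decay model F(m) = A * p^m + B to survival probabilities at several sequence lengths m; the accelerated approach above improves on exactly this estimation step by using prior information. A minimal sketch of the traditional fit on synthetic data (parameter values are hypothetical, and this is the baseline method, not the authors'):

```python
# Minimal sketch of traditional randomized-benchmarking analysis: fit the
# standard zeroth-order decay model F(m) = A * p**m + B to synthetic survival
# data. The paper above improves on this estimation step with prior
# information; this sketch is the baseline approach, not their method.
import numpy as np
from scipy.optimize import curve_fit

rng = np.random.default_rng(0)
m = np.arange(1, 200, 10)                            # sequence lengths
A, B, p_true = 0.49, 0.50, 0.995                     # hypothetical parameters
f = A * p_true**m + B + rng.normal(0, 0.01, m.size)  # noisy survival probs

(A_fit, B_fit, p_fit), _ = curve_fit(
    lambda m, A, B, p: A * p**m + B, m, f, p0=(0.5, 0.5, 0.99)
)
r = (1 - p_fit) / 2   # average gate error for a single qubit (d = 2)
print(f"estimated p = {p_fit:.4f}, average error rate r = {r:.2e}")
```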

  7. COG validation: SINBAD Benchmark Problems

    SciTech Connect

    Lent, E M; Sale, K E; Buck, R M; Descalle, M

    2004-02-23

    We validated COG, a 3D Monte Carlo radiation transport code, against experimental data and MCNP4C simulations from the Shielding Integral Benchmark Archive Database (SINBAD) compiled by RSICC. We modeled three experiments: the Osaka Nickel and Aluminum sphere experiments conducted at the OKTAVIAN facility, and the liquid oxygen experiment conducted at the FNS facility. COG results are in good agreement with experimental data and generally within a few % of MCNP results. There are several possible sources of discrepancy between MCNP and COG results: (1) the cross-section database versions are different, MCNP uses ENDF/B-VI.1 while COG uses ENDF/B-VI.R7, (2) the code implementations are different, and (3) the models may differ slightly. We also limited the use of variance reduction methods when running the COG version of the problems.

  8. Benchmarking image fusion algorithm performance

    NASA Astrophysics Data System (ADS)

    Howell, Christopher L.

    2012-06-01

    Registering two images produced by two separate imaging sensors having different detector sizes and fields of view requires one of the images to undergo transformation operations that may cause its overall quality to degrade with regards to visual task performance. This possible change in image quality could add to an already existing difference in measured task performance. Ideally, a fusion algorithm would take as input unaltered outputs from each respective sensor used in the process. Therefore, quantifying how well an image fusion algorithm performs should be baselined against whether the fusion algorithm retained the performance benefit achievable by each independent spectral band being fused. This study investigates an identification perception experiment using a simple and intuitive process for discriminating between image fusion algorithm performances. The results from a classification experiment using information-theory-based image metrics are presented and compared to perception test results. The results show an effective performance benchmark for image fusion algorithms can be established using human perception test data. Additionally, image metrics have been identified that either agree with or surpass the performance benchmark established.
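
    The abstract does not name the specific information-theoretic metrics used; a common choice in this setting is the mutual information between a fused image and one of its source bands. The sketch below estimates it from a joint histogram and is offered as an illustration of the metric class, not as the study's actual metric suite:

```python
# Hedged sketch of one information-theoretic image metric of the kind the
# study uses for benchmarking fusion algorithms: mutual information between
# a fused image and a source band, estimated from a joint histogram.
import numpy as np

def mutual_information(img_a, img_b, bins=32):
    joint, _, _ = np.histogram2d(img_a.ravel(), img_b.ravel(), bins=bins)
    pxy = joint / joint.sum()               # joint intensity distribution
    px = pxy.sum(axis=1, keepdims=True)     # marginal of img_a
    py = pxy.sum(axis=0, keepdims=True)     # marginal of img_b
    nz = pxy > 0                            # avoid log(0) on empty cells
    return float((pxy[nz] * np.log2(pxy[nz] / (px @ py)[nz])).sum())

rng = np.random.default_rng(1)
band = rng.random((64, 64))
fused = 0.7 * band + 0.3 * rng.random((64, 64))   # toy "fused" image
print(f"MI(fused, band) = {mutual_information(fused, band):.3f} bits")
```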

  9. NAS Parallel Benchmarks Results

    NASA Technical Reports Server (NTRS)

    Saini, Subhash; Bailey, David H.; Lasinski, T. A. (Technical Monitor)

    1995-01-01

    The NAS Parallel Benchmarks (NPB) were developed in 1991 at NASA Ames Research Center to study the performance of parallel supercomputers. The eight benchmark problems are specified in a pencil-and-paper fashion, i.e., the complete details of the problem to be solved are given in a technical document, and except for a few restrictions, benchmarkers are free to select the language constructs and implementation techniques best suited for a particular system. In this paper, we present new NPB performance results for the following systems: (a) Parallel-Vector Processors: Cray C90, Cray T90 and Fujitsu VPP500; (b) Highly Parallel Processors: Cray T3D, IBM SP2 and IBM SP-TN2 (Thin Nodes 2); (c) Symmetric Multiprocessing Processors: Convex Exemplar SPP1000, Cray J90, DEC Alpha Server 8400 5/300, and SGI Power Challenge XL. We also present sustained performance per dollar for the Class B LU, SP and BT benchmarks, and outline NAS's future plans for the NPB.

  10. Benchmarks: WICHE Region 2012

    ERIC Educational Resources Information Center

    Western Interstate Commission for Higher Education, 2013

    2013-01-01

    Benchmarks: WICHE Region 2012 presents information on the West's progress in improving access to, success in, and financing of higher education. The information is updated annually to monitor change over time and encourage its use as a tool for informed discussion in policy and education communities. To establish a general context for the…

  11. Python/Lua Benchmarks

    SciTech Connect

    Busby, L.

    2014-08-01

    This is an adaptation of the pre-existing Scimark benchmark code to a variety of Python and Lua implementations. It also measures performance of the Fparser expression parser and C and C++ code on a variety of simple scientific expressions.

  12. Changes in Benchmarked Training.

    ERIC Educational Resources Information Center

    Bassi, Laurie J.; Cheney, Scott

    1996-01-01

    Comparisons of the training practices of large companies confirm that the delivery and financing of training are changing rapidly. Companies in the American Society for Training and Development Benchmarking Forum are delivering less training with permanent staff and more with strategic use of technology, contract staff, and external providers,…

  13. Monte Carlo Benchmark

    2010-10-20

    The "Monte Carlo Benchmark" (MCB) is intended to model the computatiional performance of Monte Carlo algorithms on parallel architectures. It models the solution of a simple heuristic transport equation using a Monte Carlo technique. The MCB employs typical features of Monte Carlo algorithms such as particle creation, particle tracking, tallying particle information, and particle destruction. Particles are also traded among processors using MPI calls.

  14. Benchmarks Momentum on Increase

    ERIC Educational Resources Information Center

    McNeil, Michele

    2008-01-01

    No longer content with the patchwork quilt of assessments used to measure states' K-12 performance, top policy groups are pushing states toward international benchmarking as a way to better prepare students for a competitive global economy. The National Governors Association, the Council of Chief State School Officers, and the standards-advocacy…

  15. Benchmarking the World's Best

    ERIC Educational Resources Information Center

    Tucker, Marc S.

    2012-01-01

    A century ago, the United States was a world leader in industrial benchmarking. However, after World War II, once no one could compete with the U.S., it became complacent. Many industrialized countries now have higher student achievement and more equitable and efficient education systems. A higher proportion of young people in their workforces…

  16. Air Traffic Management Technology Demonstration-1 (ATD-1) Interval Management for Near-Term Operations Validation of Acceptability (IM-NOVA) Experiment

    NASA Technical Reports Server (NTRS)

    Kibler, Jennifer L.; Wilson, Sara R.; Hubbs, Clay E.; Smail, James W.

    2015-01-01

    The Interval Management for Near-term Operations Validation of Acceptability (IM-NOVA) experiment was conducted at the National Aeronautics and Space Administration (NASA) Langley Research Center (LaRC) in support of the NASA Airspace Systems Program's Air Traffic Management Technology Demonstration-1 (ATD-1). ATD-1 is intended to showcase an integrated set of technologies that provide an efficient arrival solution for managing aircraft using Next Generation Air Transportation System (NextGen) surveillance, navigation, procedures, and automation for both airborne and ground-based systems. The goal of the IM-NOVA experiment was to assess whether procedures outlined by the ATD-1 Concept of Operations were acceptable to and feasible for use by flight crews in a voice communications environment when used with a minimum set of Flight Deck-based Interval Management (FIM) equipment and a prototype crew interface. To investigate an integrated arrival solution using ground-based air traffic control tools and aircraft Automatic Dependent Surveillance-Broadcast (ADS-B) tools, the LaRC FIM system and the Traffic Management Advisor with Terminal Metering and Controller Managed Spacing tools developed at the NASA Ames Research Center (ARC) were integrated into LaRC's Air Traffic Operations Laboratory (ATOL). Data were collected from 10 crews of current 757/767 pilots asked to fly a high-fidelity, fixed-base simulator during scenarios conducted within an airspace environment modeled on the Dallas-Fort Worth (DFW) Terminal Radar Approach Control area. The aircraft simulator was equipped with the Airborne Spacing for Terminal Area Routes (ASTAR) algorithm and a FIM crew interface consisting of electronic flight bags and ADS-B guidance displays. Researchers used "pseudo-pilot" stations to control 24 simulated aircraft that provided multiple air traffic flows into the DFW International Airport, and recently retired DFW air traffic controllers served as confederate Center, Feeder, Final

  17. Storage-Intensive Supercomputing Benchmark Study

    SciTech Connect

    Cohen, J; Dossa, D; Gokhale, M; Hysom, D; May, J; Pearce, R; Yoo, A

    2007-10-30

    DBE Xeon Dual Socket Blackford Server Motherboard; 2 Intel Xeon Dual-Core 2.66 GHz processors; 1 GB DDR2 PC2-5300 RAM (2 x 512); 80GB Hard Drive (Seagate SATA II Barracuda). The Fusion board is presently capable of 4X in a PCIe slot. The image resampling benchmark was run on a dual Xeon workstation with NVIDIA graphics card (see Chapter 5 for full specification). An XtremeData Opteron+FPGA was used for the language classification application. We observed that these benchmarks are not uniformly I/O intensive. The only benchmark that showed greater than 50% of the time in I/O was the graph algorithm when it accessed data files over NFS. When local disk was used, the graph benchmark spent at most 40% of its time in I/O. The other benchmarks were CPU-dominated. The image resampling benchmark and language classification showed order of magnitude speedup over software by using co-processor technology to offload the CPU-intensive kernels. Our experiments to date suggest that emerging hardware technologies offer significant benefit to boosting the performance of data-intensive algorithms. Using GPU and FPGA co-processors, we were able to improve performance by more than an order of magnitude on the benchmark algorithms, eliminating the processor bottleneck of CPU-bound tasks. Experiments with a prototype solid-state nonvolatile memory available today show 10X better throughput on random reads than disk, with a 2X speedup on a graph processing benchmark when compared to the use of local SATA disk.

  18. Experience of domestic violence and acceptance of intimate partner violence among out-of-school adolescent girls in Iwaya Community, Lagos State.

    PubMed

    Kunnuji, Michael O N

    2015-02-01

    Gender-based domestic violence (DV) comes at great costs to the victims and society at large. Yet, many women hold the view that intimate partner violence (IPV) against women is appropriate behavior. This study aimed at exploring the nexus of experience of different forms of DV and acceptance of IPV as appropriate behavior. Using data from a survey of 480 out-of-school adolescent girls, the researcher shows that psychological abuse is a significant predictor of approval of DV resulting from the wife's failure to make food available for her husband with victims of abuse approving of violence against women. Conversely, victims of sexual abuse, more than nonvictims, disapproved of wife beating resulting from the wife going out without informing the husband. The implications of the findings are discussed and the study recommends deconstructing women's negative beliefs upon which DV rests. PMID:24919993

  19. Early experiences on the feasibility, acceptability, and use of malaria rapid diagnostic tests at peripheral health centres in Uganda-insights into some barriers and facilitators

    PubMed Central

    2012-01-01

    Background While feasibility of new health technologies in well-resourced healthcare settings is extensively documented, it is largely unknown in low-resourced settings. Uganda's decision to deploy and scale up malaria rapid diagnostic tests (mRDTs) in public health facilities and at the community level provides a useful entry point for documenting field experience, acceptance, and predictive variables for technology acceptance and use. These findings are important in informing implementation of new health technologies, plans, and budgets in low-resourced national disease control programmes. Methods A cross-sectional qualitative descriptive study at 21 health centres in Uganda was undertaken in 2007 to elucidate the barriers and facilitators in the introduction of mRDTs as a new diagnostic technology at lower-level health facilities. Pre-tested interview questionnaires were administered through pre-structured patient exit interviews and semi-structured health worker interviews to gain an understanding of the response to this implementation. A conceptual framework on technology acceptance and use was adapted for this study and used to prepare the questionnaires. Thematic analysis was used to generate themes from the data. Results A total of 52 of 57 health workers (92%) reported a belief that a positive mRDT result was true, although only 41 of 57 (64%) believed that treatment with anti-malarials was justified for every positive mRDT case. Of the same health workers, only 49% believed that a negative mRDT result was truly negative. Factors linked to these findings were related to mRDT acceptance and use, including the design and characteristics of the device, availability and quality of mRDT ancillary supplies, health worker capacity to investigate febrile cases testing negative with the device and provide appropriate treatment, availability of effective malaria treatments, reliability of the health commodity supply chain, existing national policy recommendations

  20. Benchmark testing of {sup 233}U evaluations

    SciTech Connect

    Wright, R.Q.; Leal, L.C.

    1997-07-01

    In this paper we investigate the adequacy of available {sup 233}U cross-section data (ENDF/B-VI and JENDL-3) for calculation of critical experiments. An ad hoc revised {sup 233}U evaluation is also tested and appears to give results which are improved relative to those obtained with either ENDF/B-VI or JENDL-3 cross sections. Calculations of k{sub eff} were performed for ten fast benchmarks and six thermal benchmarks using the three cross-section sets. Central reaction-rate-ratio calculations were also performed.

  1. Benchmark Airport Charges

    NASA Technical Reports Server (NTRS)

    de Wit, A.; Cohn, N.

    1999-01-01

    The Netherlands Directorate General of Civil Aviation (DGCA) commissioned Hague Consulting Group (HCG) to complete a benchmark study of airport charges at twenty eight airports in Europe and around the world, based on 1996 charges. This study followed previous DGCA research on the topic but included more airports in much more detail. The main purpose of this new benchmark study was to provide insight into the levels and types of airport charges worldwide and into recent changes in airport charge policy and structure. This paper describes the 1996 analysis. It is intended that this work be repeated every year in order to follow developing trends and provide the most up-to-date information possible.

  3. Radiography benchmark 2014

    NASA Astrophysics Data System (ADS)

    Jaenisch, G.-R.; Deresch, A.; Bellon, C.; Schumm, A.; Lucet-Sanchez, F.; Guerin, P.

    2015-03-01

    The purpose of the 2014 WFNDEC RT benchmark study was to compare predictions of various models of radiographic techniques, in particular those that predict the contribution of scattered radiation. All calculations were carried out for homogeneous materials and a mono-energetic X-ray point source in the energy range between 100 keV and 10 MeV. The calculations were to include the best physics approach available considering electron binding effects. Secondary effects like X-ray fluorescence and bremsstrahlung production were to be taken into account if possible. The problem to be considered had two parts. Part I examined the spectrum and the spatial distribution of radiation behind a single iron plate. Part II considered two equally sized plates, made of iron and aluminum respectively, only evaluating the spatial distribution. Here we present the results of the above benchmark study, comparing them to MCNP as the assumed reference model. The possible origins of the observed deviations are discussed.

  5. Sequoia Messaging Rate Benchmark

    SciTech Connect

    Friedley, Andrew

    2008-01-22

    The purpose of this benchmark is to measure the maximal message rate of a single compute node. The first num_cores ranks are expected to reside on the 'core' compute node for which message rate is being tested. After that, the next num_nbors ranks are neighbors for the first core rank, the next set of num_nbors ranks are neighbors for the second core rank, and so on. For example, testing an 8-core node (num_cores = 8) with 4 neighbors (num_nbors = 4) requires 8 + 8 * 4 = 40 ranks. The first 8 of those 40 ranks are expected to be on the 'core' node being benchmarked, while the rest of the ranks are on separate nodes.
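
    The layout described above generalizes to total_ranks = num_cores * (1 + num_nbors), which reproduces the 40-rank example. A short sketch of the rank bookkeeping (the helper names are ours, not the benchmark's):

```python
# Sketch of the rank layout described above: ranks 0..num_cores-1 live on the
# node under test, and each core rank i owns a contiguous block of neighbor
# ranks on other nodes. Reproduces the 8-core / 4-neighbor example (40 ranks).
num_cores, num_nbors = 8, 4
total_ranks = num_cores + num_cores * num_nbors
assert total_ranks == 40

def neighbors(core_rank):
    start = num_cores + core_rank * num_nbors
    return list(range(start, start + num_nbors))

for i in range(num_cores):
    print(f"core rank {i} <-> neighbor ranks {neighbors(i)}")
```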

  7. MPI Multicore Linktest Benchmark

    2008-01-25

    The MPI Multicore Linktest (LinkTest) measures the aggregate bandwidth from/to a multicore node in a parallel system. It allows the user to specify a variety of different node layout and communication routine variations and reports the maximal observed bandwidth across all specified options. In particular, this benchmark is able to vary the number of tasks on the root node and thereby allows users to study the impact of multicore architectures on MPI communication performance.

  8. Benchmarking the billing office.

    PubMed

    Woodcock, Elizabeth W; Williams, A Scott; Browne, Robert C; King, Gerald

    2002-09-01

    Benchmarking data related to human and financial resources in the billing process allows an organization to allocate its resources more effectively. Analyzing human resources used in the billing process helps determine cost-effective staffing. The deployment of human resources in a billing office affects timeliness of payment and ability to maximize revenue potential. Analyzing financial resources helps an organization allocate those resources more effectively. PMID:12235973

  9. Algebraic Multigrid Benchmark

    SciTech Connect

    2013-05-06

    AMG2013 is a parallel algebraic multigrid solver for linear systems arising from problems on unstructured grids. It has been derived directly from the Boomer AMG solver in the hypre library, a large linear solvers library that is being developed in the Center for Applied Scientific Computing (CASC) at LLNL. The driver provided in the benchmark can build various test problems. The default problem is a Laplace type problem on an unstructured domain with various jumps and an anisotropy in one part.

  10. The NAS Parallel Benchmarks

    SciTech Connect

    Bailey, David H.

    2009-11-15

    The NAS Parallel Benchmarks (NPB) are a suite of parallel computer performance benchmarks. They were originally developed at the NASA Ames Research Center in 1991 to assess high-end parallel supercomputers. Although they are no longer used as widely as they once were for comparing high-end system performance, they continue to be studied and analyzed a great deal in the high-performance computing community. The acronym 'NAS' originally stood for the Numerical Aeronautical Simulation Program at NASA Ames. The name of this organization was subsequently changed to the Numerical Aerospace Simulation Program, and more recently to the NASA Advanced Supercomputing Center, although the acronym remains 'NAS.' The developers of the original NPB suite were David H. Bailey, Eric Barszcz, John Barton, David Browning, Russell Carter, Leo Dagum, Rod Fatoohi, Samuel Fineberg, Paul Frederickson, Thomas Lasinski, Rob Schreiber, Horst Simon, V. Venkatakrishnan and Sisira Weeratunga. The original NAS Parallel Benchmarks consisted of eight individual benchmark problems, each of which focused on some aspect of scientific computing. The principal focus was in computational aerophysics, although most of these benchmarks have much broader relevance, since in a much larger sense they are typical of many real-world scientific computing applications. The NPB suite grew out of the need for a more rational procedure to select new supercomputers for acquisition by NASA. The emergence of commercially available highly parallel computer systems in the late 1980s offered an attractive alternative to parallel vector supercomputers that had been the mainstay of high-end scientific computing. However, the introduction of highly parallel systems was accompanied by a regrettable level of hype, not only on the part of the commercial vendors but even, in some cases, by scientists using the systems. As a result, it was difficult to discern whether the new systems offered any fundamental performance advantage

  11. Acceptance speech.

    PubMed

    Carpenter, M

    1994-01-01

    In Bangladesh, the assistant administrator of USAID gave an acceptance speech at an awards ceremony on the occasion of the 25th anniversary of oral rehydration solution (ORS). The ceremony celebrated the key role of the International Centre for Diarrhoeal Disease Research, Bangladesh (ICDDR,B) in the discovery of ORS. Its research activities over the last 25 years have brought ORS to every village in the world, preventing more than a million deaths each year. ORS is the most important medical advance of the 20th century. It is affordable and client-oriented, a true appropriate technology. USAID has provided more than US$ 40 million to ICDDR,B for diarrheal disease and measles research, urban and rural applied family planning and maternal and child health research, and vaccine development. ICDDR,B began as the relatively small Cholera Research Laboratory and has grown into an acclaimed international center for health, family planning, and population research. It leads the world in diarrheal disease research. ICDDR,B is the leading center for applied health research in South Asia. It trains public health specialists from around the world. The government of Bangladesh and the international donor community have actively joined in support of ICDDR,B. The government applies the results of ICDDR,B research to its programs to improve the health and well-being of Bangladeshis. ICDDR,B now also studies acute respiratory diseases and measles. Population and health comprise 1 of USAID's 4 strategic priorities, the others being economic growth, environment, and democracy. USAID promotes people's participation in these 4 areas and in the design and implementation of development projects. USAID is committed to the use and improvement of ORS and to complementary strategies that further reduce diarrhea-related deaths. Continued collaboration with a strong user perspective and integrated services will lead to sustainable development. PMID:12345470

  12. Acceptance speech.

    PubMed

    Yusuf, C K

    1994-01-01

    I am proud and honored to accept this award on behalf of the Government of Bangladesh, and the millions of Bangladeshi children saved by oral rehydration solution. The Government of Bangladesh is grateful for this recognition of its commitment to international health and population research and cost-effective health care for all. The Government of Bangladesh has already made remarkable strides forward in the health and population sector, and this was recognized in UNICEF's 1993 "State of the World's Children". The national contraceptive prevalence rate, at 40%, is higher than that of many developed countries. It is appropriate that Bangladesh, where ORS was discovered, has the largest ORS production capacity in the world. It was remarkable that after the devastating cyclone in 1991, the country was able to produce enough ORS to meet the needs and remain self-sufficient. Similarly, Bangladesh has one of the most effective, flexible, and efficient diarrheal disease control and epidemic response programs in the world. Throughout the country, doctors have been trained in diarrheal disease management, and stores of ORS are maintained ready for any outbreak. Despite grim predictions after the 1991 cyclone and the 1993 floods, relatively few people died from diarrheal disease. This is indicative of the strength of the national program. I want to take this opportunity to acknowledge the contribution of ICDDR,B and the important role it plays in supporting the Government's efforts in the health and population sector. The partnership between the Government of Bangladesh and ICDDR,B has already borne great fruit, and I hope and believe that it will continue to do so for many years in the future. Thank you. PMID:12345479

  13. RECENT ADDITIONS OF CRITICALITY SAFETY RELATED INTEGRAL BENCHMARK DATA TO THE ICSBEP AND IRPHEP HANDBOOKS

    SciTech Connect

    J. Blair Briggs; Lori Scott; Yolanda Rugama; Enrico Sartori

    2009-09-01

    High-quality integral benchmark experiments have always been a priority for criticality safety. However, interest in integral benchmark data is increasing as efforts to quantify and reduce calculational uncertainties accelerate to meet the demands of future criticality safety needs to support next generation reactor and advanced fuel cycle concepts. The importance of drawing upon existing benchmark data is becoming more apparent because of dwindling availability of critical facilities worldwide and the high cost of performing new experiments. Integral benchmark data from the International Handbook of Evaluated Criticality Safety Benchmark Experiments and the International Handbook of Reactor Physics Benchmark Experiments are widely used. Benchmark data have been added to these two handbooks since the last Nuclear Criticality Safety Division Topical Meeting in Knoxville, Tennessee (September 2005). This paper highlights these additions.

  14. Self-benchmarking Guide for Data Centers: Metrics, Benchmarks, Actions

    SciTech Connect

    Mathew, Paul; Ganguly, Srirupa; Greenberg, Steve; Sartor, Dale

    2009-07-13

    This guide describes energy efficiency metrics and benchmarks that can be used to track the performance of and identify potential opportunities to reduce energy use in data centers. This guide is primarily intended for personnel who have responsibility for managing energy use in existing data centers - including facilities managers, energy managers, and their engineering consultants. Additionally, data center designers may also use the metrics and benchmarks described in this guide for goal-setting in new construction or major renovation. This guide provides the following information: (1) A step-by-step outline of the benchmarking process. (2) A set of performance metrics for the whole building as well as individual systems. For each metric, the guide provides a definition, performance benchmarks, and potential actions that can be inferred from evaluating this metric. (3) A list and descriptions of the data required for computing the metrics. This guide is complemented by spreadsheet templates for data collection and for computing the benchmarking metrics. This guide builds on prior data center benchmarking studies supported by the California Energy Commission. Much of the benchmarking data are drawn from the LBNL data center benchmarking database that was developed from these studies. Additional benchmark data were obtained from engineering experts including facility designers and energy managers. This guide also builds on recent research supported by the U.S. Department of Energy's Save Energy Now program.
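
    The abstract does not enumerate the guide's metrics; one representative whole-building metric used in data-center benchmarking is Power Usage Effectiveness (PUE), total facility energy divided by IT equipment energy. A worked sketch with made-up energy figures:

```python
# Illustrative whole-building data-center metric of the kind such guides
# catalog: Power Usage Effectiveness (PUE). The energy figures are made up.
annual_kwh = {
    "it_equipment": 4_200_000,        # servers, storage, network
    "cooling": 2_100_000,
    "power_distribution": 450_000,    # UPS and transformer losses
    "lighting": 50_000,
}
total_facility = sum(annual_kwh.values())
pue = total_facility / annual_kwh["it_equipment"]
print(f"PUE = {pue:.2f}")   # 1.0 is ideal; lower PUE means less overhead
```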

  15. Benchmark on outdoor scenes

    NASA Astrophysics Data System (ADS)

    Zhang, Hairong; Wang, Cheng; Chen, Yiping; Jia, Fukai; Li, Jonathan

    2016-03-01

    Depth super-resolution is becoming popular in computer vision, and most test data are based on indoor data sets with ground-truth measurements, such as Middlebury. However, indoor data sets are mainly acquired with structured-light techniques under ideal conditions, which cannot represent the real world under natural light. Unlike indoor scenes, the uncontrolled outdoor environment is much more complicated and is rich in both visual and depth texture. For that reason, we develop a more challenging and meaningful outdoor benchmark for depth super-resolution using a state-of-the-art active laser scanning system.

  17. Benchmarking emerging logic devices

    NASA Astrophysics Data System (ADS)

    Nikonov, Dmitri

    2014-03-01

    As complementary metal-oxide-semiconductor field-effect transistors (CMOS FETs) are scaled to ever smaller sizes by the semiconductor industry, demand is growing for emerging logic devices to supplement CMOS in various special functions. Research directions and concepts for such devices are overviewed, including tunneling, graphene-based, and spintronic devices. The methodology for estimating the future performance of emerging (beyond-CMOS) devices and of simple logic circuits based on them is explained. Benchmarking results are used to identify the more promising concepts and to map pathways for the improvement of beyond-CMOS computing.

  18. A new numerical benchmark of a freshwater lens

    NASA Astrophysics Data System (ADS)

    Stoeckl, L.; Walther, M.; Graf, T.

    2016-04-01

    A numerical benchmark for 2-D variable-density flow and solute transport in a freshwater lens is presented. The benchmark is based on results of laboratory experiments conducted by Stoeckl and Houben (2012) using a sand tank on the meter scale, and it describes the formation and degradation over time of a freshwater lens of the kind found beneath real-world islands. An error analysis gave an appropriate spatial and temporal discretization of 1 mm and 8.64 s, respectively. The calibrated parameter set was obtained using the parameter estimation tool PEST. Comparing density-coupled and density-uncoupled results showed that the freshwater-saltwater interface position depends strongly on density differences. A benchmark that adequately represents saltwater intrusion and includes realistic features of coastal aquifers or freshwater lenses had been lacking; this new benchmark fills that gap and is demonstrated to be suitable for testing variable-density groundwater models applied to saltwater intrusion investigations.

  19. Quantum benchmarks for pure single-mode Gaussian states.

    PubMed

    Chiribella, Giulio; Adesso, Gerardo

    2014-01-10

    Teleportation and storage of continuous variable states of light and atoms are essential building blocks for the realization of large-scale quantum networks. Rigorous validation of these implementations requires identifying, and surpassing, benchmarks set by the most effective strategies attainable without the use of quantum resources. Such benchmarks have been established for special families of input states, like coherent states and particular subclasses of squeezed states. Here we solve the longstanding problem of defining quantum benchmarks for general pure Gaussian single-mode states with arbitrary phase, displacement, and squeezing, randomly sampled according to a realistic prior distribution. As a special case, we show that the fidelity benchmark for teleporting squeezed states with totally random phase and squeezing degree is 1/2, equal to the corresponding benchmark for coherent states. We discuss the use of entangled resources to beat the benchmarks in experiments. PMID:24483875

  20. Benchmarking Using Basic DBMS Operations

    NASA Astrophysics Data System (ADS)

    Crolotte, Alain; Ghazal, Ahmad

    The TPC-H benchmark proved to be successful in the decision support area. Many commercial database vendors and their hardware partners used this benchmark to show the superiority and competitive edge of their products. Over time, however, TPC-H became less representative of industry trends as vendors kept tuning their databases to this benchmark-specific workload. In this paper, we present XMarq, a simple benchmark framework that can be used to compare various software/hardware combinations. Our benchmark model is currently composed of 25 queries that measure the performance of basic operations such as scans, aggregations, joins and index access. It is based on the TPC-H data model due to its maturity and well-understood data generation capability. We also propose metrics to evaluate single-system performance and to compare two systems. Finally, we illustrate the effectiveness of this model with experimental results comparing two systems under different conditions.
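
    As a hedged sketch of this style of measurement (not XMarq itself; the table, data, and queries are invented), basic operations such as scans and aggregations can be timed individually against a small database:

        # Hypothetical sketch of micro-benchmarking basic DBMS operations.
        import sqlite3, time

        con = sqlite3.connect(":memory:")
        con.execute("CREATE TABLE lineitem (qty INTEGER, price REAL)")
        con.executemany("INSERT INTO lineitem VALUES (?, ?)",
                        [(i % 50, i * 0.01) for i in range(100_000)])

        for name, sql in [("scan", "SELECT COUNT(*) FROM lineitem"),
                          ("aggregate", "SELECT qty, SUM(price) FROM lineitem GROUP BY qty")]:
            t0 = time.perf_counter()
            con.execute(sql).fetchall()
            print(f"{name}: {(time.perf_counter() - t0) * 1e3:.2f} ms")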

  1. Seismo-acoustic ray model benchmarking against experimental tank data.

    PubMed

    Camargo Rodríguez, Orlando; Collis, Jon M; Simpson, Harry J; Ey, Emanuel; Schneiderwind, Joseph; Felisberto, Paulo

    2012-08-01

    Acoustic predictions of the recently developed traceo ray model, which accounts for bottom shear properties, are benchmarked against tank data from the EPEE-1 and EPEE-2 (Elastic Parabolic Equation Experiment) experiments. Both experiments are representative of signal propagation in a Pekeris-like shallow-water waveguide over a non-flat isotropic elastic bottom, where significant interaction of the signal with the bottom can be expected. The benchmarks show, in particular, that the ray model can be as accurate as a parabolic approximation model benchmarked under similar conditions. The benchmarking results are important both as a preliminary experimental validation of the model and as a demonstration of the reliability of the ray approach for seismo-acoustic applications. PMID:22894193

  2. Aeroelasticity Benchmark Assessment: Subsonic Fixed Wing Program

    NASA Technical Reports Server (NTRS)

    Florance, Jennifer P.; Chwalowski, Pawel; Wieseman, Carol D.

    2010-01-01

    The fundamental technical challenge in computational aeroelasticity is the accurate prediction of unsteady aerodynamic phenomena and their effect on the aeroelastic response of a vehicle. Currently, a benchmarking standard for use in validating the accuracy of computational aeroelasticity codes does not exist. Many aeroelastic data sets have been obtained in wind-tunnel and flight testing throughout the world; however, none have been globally presented or accepted as an ideal data set. There are numerous reasons for this. One is that such data sets often focus on the aeroelastic phenomena alone (flutter, for example) and do not contain associated information such as unsteady pressures and time-correlated structural dynamic deflections. Other available data sets focus solely on the unsteady pressures and do not address the aeroelastic phenomena. Other discrepancies can include omission of relevant data, such as flutter frequency, and/or the acquisition of only qualitative deflection data. In addition to these content deficiencies, all of the available data sets present both experimental and computational technical challenges. Experimental issues include facility influences, nonlinearities beyond those being modeled, and data processing. From the computational perspective, technical challenges include modeling geometric complexities, coupling between the flow and the structure, grid issues, and boundary conditions. The Aeroelasticity Benchmark Assessment task seeks to examine the existing potential experimental data sets and ultimately choose the one viewed as most suitable for computational benchmarking. An initial computational evaluation of that configuration will then be performed using the Langley-developed computational fluid dynamics (CFD) software FUN3D as part of its code validation process. In addition to the benchmarking activity, this task also includes an examination of future research directions. Researchers within the

  3. Self-benchmarking Guide for Laboratory Buildings: Metrics, Benchmarks, Actions

    SciTech Connect

    Mathew, Paul; Greenberg, Steve; Sartor, Dale

    2009-07-13

    This guide describes energy efficiency metrics and benchmarks that can be used to track the performance of and identify potential opportunities to reduce energy use in laboratory buildings. This guide is primarily intended for personnel who have responsibility for managing energy use in existing laboratory facilities - including facilities managers, energy managers, and their engineering consultants. Additionally, laboratory planners and designers may also use the metrics and benchmarks described in this guide for goal-setting in new construction or major renovation. This guide provides the following information: (1) A step-by-step outline of the benchmarking process. (2) A set of performance metrics for the whole building as well as individual systems. For each metric, the guide provides a definition, performance benchmarks, and potential actions that can be inferred from evaluating this metric. (3) A list and descriptions of the data required for computing the metrics. This guide is complemented by spreadsheet templates for data collection and for computing the benchmarking metrics. This guide builds on prior research supported by the national Laboratories for the 21st Century (Labs21) program, supported by the U.S. Department of Energy and the U.S. Environmental Protection Agency. Much of the benchmarking data are drawn from the Labs21 benchmarking database and technical guides. Additional benchmark data were obtained from engineering experts including laboratory designers and energy managers.

  4. Self-benchmarking Guide for Cleanrooms: Metrics, Benchmarks, Actions

    SciTech Connect

    Mathew, Paul; Sartor, Dale; Tschudi, William

    2009-07-13

    This guide describes energy efficiency metrics and benchmarks that can be used to track the performance of and identify potential opportunities to reduce energy use in cleanrooms. This guide is primarily intended for personnel who have responsibility for managing energy use in existing cleanroom facilities - including facilities managers, energy managers, and their engineering consultants. Additionally, cleanroom planners and designers may also use the metrics and benchmarks described in this guide for goal-setting in new construction or major renovation. This guide provides the following information: (1) A step-by-step outline of the benchmarking process. (2) A set of performance metrics for the whole building as well as individual systems. For each metric, the guide provides a definition, performance benchmarks, and potential actions that can be inferred from evaluating this metric. (3) A list and descriptions of the data required for computing the metrics. This guide is complemented by spreadsheet templates for data collection and for computing the benchmarking metrics. This guide builds on prior research supported by the national Laboratories for the 21st Century (Labs21) program, supported by the U.S. Department of Energy and the U.S. Environmental Protection Agency. Much of the benchmarking data are drawn from the Labs21 benchmarking database and technical guides. Additional benchmark data were obtained from engineering experts including cleanroom designers and energy managers.

  5. International land Model Benchmarking (ILAMB) Package v002.00

    DOE Data Explorer

    Collier, Nathaniel [Oak Ridge National Laboratory; Hoffman, Forrest M. [Oak Ridge National Laboratory; Mu, Mingquan [University of California, Irvine; Randerson, James T. [University of California, Irvine; Riley, William J. [Lawrence Berkeley National Laboratory

    2016-05-09

    As a contribution to the International Land Model Benchmarking (ILAMB) Project, we are providing new analysis approaches, benchmarking tools, and science leadership. The goal of ILAMB is to assess and improve the performance of land models through international cooperation and to inform the design of new measurement campaigns and field studies to reduce uncertainties associated with key biogeochemical processes and feedbacks. ILAMB is expected to be a primary analysis tool for CMIP6 and future model-data intercomparison experiments. This team has developed initial prototype benchmarking systems for ILAMB, which will be improved and extended to include ocean model metrics and diagnostics.

  6. International land Model Benchmarking (ILAMB) Package v001.00

    DOE Data Explorer

    Mu, Mingquan [University of California, Irvine; Randerson, James T. [University of California, Irvine; Riley, William J. [Lawrence Berkeley National Laboratory; Hoffman, Forrest M. [Oak Ridge National Laboratory

    2016-05-02

    As a contribution to the International Land Model Benchmarking (ILAMB) Project, we are providing new analysis approaches, benchmarking tools, and science leadership. The goal of ILAMB is to assess and improve the performance of land models through international cooperation and to inform the design of new measurement campaigns and field studies to reduce uncertainties associated with key biogeochemical processes and feedbacks. ILAMB is expected to be a primary analysis tool for CMIP6 and future model-data intercomparison experiments. This team has developed initial prototype benchmarking systems for ILAMB, which will be improved and extended to include ocean model metrics and diagnostics.

  7. Benchmarking foreign electronics technologies

    SciTech Connect

    Bostian, C.W.; Hodges, D.A.; Leachman, R.C.; Sheridan, T.B.; Tsang, W.T.; White, R.M.

    1994-12-01

    This report has been drafted in response to a request from the Japanese Technology Evaluation Center's (JTEC) Panel on Benchmarking Select Technologies. Since April 1991, the Competitive Semiconductor Manufacturing (CSM) Program at the University of California at Berkeley has been engaged in a detailed study of quality, productivity, and competitiveness in semiconductor manufacturing worldwide. The program is a joint activity of the College of Engineering, the Haas School of Business, and the Berkeley Roundtable on the International Economy, under sponsorship of the Alfred P. Sloan Foundation, and with the cooperation of semiconductor producers from Asia, Europe and the United States. Professors David A. Hodges and Robert C. Leachman are the project's Co-Directors. The present report for JTEC is primarily based on data and analysis drawn from that continuing program. The CSM program is being conducted by faculty, graduate students and research staff from UC Berkeley's Schools of Engineering and Business, and Department of Economics. Many of the participating firms are represented on the program's Industry Advisory Board. The Board played an important role in defining the research agenda. A pilot study was conducted in 1991 with the cooperation of three semiconductor plants. The research plan and survey documents were thereby refined. The main phase of the CSM benchmarking study began in mid-1992 and will continue at least through 1997. Reports are presented on the manufacture of integrated circuits; data storage; wireless technology; human-machine interfaces; and optoelectronics. Selected papers are indexed separately for inclusion in the Energy Science and Technology Database.

  8. Internal Benchmarking for Institutional Effectiveness

    ERIC Educational Resources Information Center

    Ronco, Sharron L.

    2012-01-01

    Internal benchmarking is an established practice in business and industry for identifying best in-house practices and disseminating the knowledge about those practices to other groups in the organization. Internal benchmarking can be done with structures, processes, outcomes, or even individuals. In colleges or universities with multicampuses or a…

  9. Benchmarking. A Guide for Educators.

    ERIC Educational Resources Information Center

    Tucker, Sue

    This book offers strategies for enhancing a school's teaching and learning by using benchmarking, a team-research and data-driven process for increasing school effectiveness. Benchmarking enables professionals to study and know their systems and continually improve their practices. The book is designed to lead a team step by step through the…

  10. Validation of the BUGJEFF311.BOLIB, BUGENDF70.BOLIB and BUGLE-B7 broad-group libraries on the PCA-Replica (H2O/Fe) neutron shielding benchmark experiment

    NASA Astrophysics Data System (ADS)

    Pescarini, Massimo; Orsi, Roberto; Frisoni, Manuela

    2016-03-01

    The PCA-Replica 12/13 (H2O/Fe) neutron shielding benchmark experiment was analysed using the TORT-3.2 3D SN code. PCA-Replica reproduces a PWR ex-core radial geometry with alternate layers of water and steel including a pressure vessel simulator. Three broad-group coupled neutron/photon working cross section libraries in FIDO-ANISN format with the same energy group structure (47 n + 20 γ) and based on different nuclear data were alternatively used: the ENEA BUGJEFF311.BOLIB (JEFF-3.1.1) and BUGENDF70.BOLIB (ENDF/B-VII.0) libraries and the ORNL BUGLE-B7 (ENDF/B-VII.0) library. Dosimeter cross sections derived from the IAEA IRDF-2002 dosimetry file were employed. The calculated reaction rates for the Rh-103(n,n')Rh-103m, In-115(n,n')In-115m and S-32(n,p)P-32 threshold activation dosimeters and the calculated neutron spectra are compared with the corresponding experimental results.

  11. Three-Dimensional (X,Y,Z) Deterministic Analysis of the PCA-Replica Neutron Shielding Benchmark Experiment using the TORT-3.2 Code and Group Cross Section Libraries for LWR Shielding and Pressure Vessel Dosimetry

    NASA Astrophysics Data System (ADS)

    Pescarini, Massimo; Orsi, Roberto; Frisoni, Manuela

    2016-02-01

    The PCA-Replica 12/13 (H2O/Fe) neutron shielding benchmark experiment was analysed using the ORNL TORT-3.2 3D SN code. PCA-Replica, specifically conceived to test the accuracy of nuclear data and transport codes employed in LWR shielding and radiation damage calculations, reproduces a PWR ex-core radial geometry with alternate layers of water and steel including a PWR pressure vessel simulator. Three broad-group coupled neutron/photon working cross section libraries in FIDO-ANISN format with the same energy group structure (47 n + 20 γ) and based on different nuclear data were alternatively used: the ENEA BUGJEFF311.BOLIB (JEFF-3.1.1) and BUGENDF70.BOLIB (ENDF/B-VII.0) libraries and the ORNL BUGLE-96 (ENDF/B-VI.3) library. Dosimeter cross sections derived from the IAEA IRDF-2002 dosimetry file were employed. The calculated reaction rates for the Rh-103(n,n')Rh-103m, In-115(n,n')In-115m and S-32(n,p)P-32 threshold activation dosimeters and the calculated neutron spectra are compared with the corresponding experimental results.

  12. The impact and applicability of critical experiment evaluations

    SciTech Connect

    Brewer, R.

    1997-06-01

    This paper very briefly describes a project to evaluate previously performed critical experiments. The evaluation is intended for use by criticality safety engineers to verify calculations, and may also be used to identify data which need further investigation. The evaluation process is briefly outlined; the accepted benchmark critical experiments will be used as a standard for verification and validation. The end result of the project will be a comprehensive reference document.

  13. FireHose Streaming Benchmarks

    2015-01-27

    The FireHose Streaming Benchmarks are a suite of stream-processing benchmarks defined to enable comparison of streaming software and hardware, both quantitatively vis-a-vis the rate at which they can process data, and qualitatively by judging the effort involved to implement and run the benchmarks. Each benchmark has two parts. The first is a generator which produces and outputs datums at a high rate in a specific format. The second is an analytic which reads the stream of datums and is required to perform a well-defined calculation on the collection of datums, typically to find anomalous datums that have been created in the stream by the generator. The FireHose suite provides code for the generators, sample code for the analytics (which users are free to re-implement in their own custom frameworks), and a precise definition of each benchmark calculation.

  14. FireHose Streaming Benchmarks

    SciTech Connect

    Karl Anderson, Steve Plimpton

    2015-01-27

    The FireHose Streaming Benchmarks are a suite of stream-processing benchmarks defined to enable comparison of streaming software and hardware, both quantitatively vis-a-vis the rate at which they can process data, and qualitatively by judging the effort involved to implement and run the benchmarks. Each benchmark has two parts. The first is a generator which produces and outputs datums at a high rate in a specific format. The second is an analytic which reads the stream of datums and is required to perform a well-defined calculation on the collection of datums, typically to find anomalous datums that have been created in the stream by the generator. The FireHose suite provides code for the generators, sample code for the analytics (which users are free to re-implement in their own custom frameworks), and a precise definition of each benchmark calculation.
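
    The generator/analytic split is easy to picture in miniature. The sketch below is a hypothetical illustration only (the datum format and anomaly rule are invented, not the suite's definitions):

        # Hypothetical sketch of the two-part FireHose structure: a generator
        # streams datums, and an analytic flags the anomalous ones.
        import random

        def generator(n, anomaly_rate=0.001):
            """Yield (key, value) datums; rare datums carry an out-of-range value."""
            for i in range(n):
                anomalous = random.random() < anomaly_rate
                yield (i, random.uniform(100, 200) if anomalous else random.uniform(0, 1))

        def analytic(stream, threshold=10.0):
            """Return the keys of datums whose value exceeds the threshold."""
            return [key for key, value in stream if value > threshold]

        print(len(analytic(generator(1_000_000))))   # roughly 1000 anomalies expected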

  15. Benchmarking: applications to transfusion medicine.

    PubMed

    Apelseth, Torunn Oveland; Molnar, Laura; Arnold, Emmy; Heddle, Nancy M

    2012-10-01

    Benchmarking is a structured, continuous, collaborative process in which comparisons for selected indicators are used to identify factors that, when implemented, will improve transfusion practices. This study aimed to identify transfusion medicine studies reporting on benchmarking, summarize the benchmarking approaches used, and identify important considerations for moving the concept of benchmarking forward in the field of transfusion medicine. A systematic review of published literature was performed to identify transfusion medicine-related studies that compared at least 2 separate institutions or regions with the intention of benchmarking, focusing on 4 areas: blood utilization, safety, operational aspects, and blood donation. Forty-five studies were included: blood utilization (n = 35), safety (n = 5), operational aspects of transfusion medicine (n = 5), and blood donation (n = 0). Based on predefined criteria, 7 publications were classified as benchmarking, 2 as trending, and 36 as single-event studies. Three models of benchmarking are described: (1) a regional benchmarking program that collects and links relevant data from existing electronic sources, (2) a sentinel site model where data from a limited number of sites are collected, and (3) an institution-initiated model where a site identifies indicators of interest and approaches other institutions. Benchmarking approaches are needed in the field of transfusion medicine. Major challenges include defining best practices and developing cost-effective methods of data collection. For those interested in initiating a benchmarking program, the sentinel site model may be most effective and sustainable as a starting point, although the regional model would be the ideal goal. PMID:22237134

  16. Defining acceptable conditions in wilderness

    NASA Astrophysics Data System (ADS)

    Roggenbuck, J. W.; Williams, D. R.; Watson, A. E.

    1993-03-01

    The limits of acceptable change (LAC) planning framework recognizes that forest managers must decide what indicators of wilderness conditions best represent resource naturalness and high-quality visitor experiences, and how much change from the pristine is acceptable for each indicator. Visitor opinions on the aspects of the wilderness that most affect their experience can provide valuable input to the selection of indicators. Visitors to the Cohutta (Georgia), Caney Creek (Arkansas), Upland Island (Texas), and Rattlesnake (Montana) wildernesses agree strongly that littering and damage to trees in campsites, noise, and seeing wildlife are very important influences on wilderness experiences. Camping within sight or sound of other people influences experience quality more than do encounters on the trails. Visitors’ standards of acceptable conditions within wilderness vary considerably, suggesting a potential need to manage different zones within wilderness for different clientele groups and experiences. Standards across wildernesses, however, are remarkably similar.

  17. TsunaFLASH Benchmark and Its Verifications

    NASA Astrophysics Data System (ADS)

    Pranowo, Widodo; Behrens, Joern

    2010-05-01

    At the end of 2008, TsunAWI (the tsunami unstructured-mesh finite element model developed at the Alfred Wegener Institute by Behrens et al., 2006-2008) [Behrens, 2008] was launched as an operational model in the German-Indonesian Tsunami Early Warning System (GITEWS) framework. The model has been benchmarked and verified against the 2004 Sumatra-Andaman mega-tsunami event [Harig et al., 2008]. A newer development uses adaptive mesh refinement to improve computational efficiency and accuracy; this approach is called TsunaFLASH [Pranowo et al., 2008]. After the initial development and verification phase, with stabilization efforts and a study of refinement criteria, the code is now mature enough to be validated with data. This presentation will demonstrate results of TsunaFLASH for experiments with diverse mesh refinement criteria, and benchmarks, in particular problem set 1 of the IWLRM and field data of the Sumatra-Andaman 2004 event.

  18. Model Uncertainty and Bayesian Model Averaged Benchmark Dose Estimation for Continuous Data

    EPA Science Inventory

    The benchmark dose (BMD) approach has gained acceptance as a valuable risk assessment tool, but risk assessors still face significant challenges associated with selecting an appropriate BMD/BMDL estimate from the results of a set of acceptable dose-response models. Current approa...

  19. High acceptance recoil polarimeter

    SciTech Connect

    The HARP Collaboration

    1992-12-05

    In order to detect neutrons and protons in the 50 to 600 MeV energy range and measure their polarization, an efficient, low-noise, self-calibrating device is being designed. This detector, known as the High Acceptance Recoil Polarimeter (HARP), is based on the recoil principle of proton detection from np→n'p' or pp→p'p' scattering (detected particles underlined in the original), which intrinsically yields polarization information on the incoming particle. HARP will be commissioned to carry out experiments in 1994.

  20. Correlational effect size benchmarks.

    PubMed

    Bosco, Frank A; Aguinis, Herman; Singh, Kulraj; Field, James G; Pierce, Charles A

    2015-03-01

    Effect size information is essential for the scientific enterprise and plays an increasingly central role in the scientific process. We extracted 147,328 correlations and developed a hierarchical taxonomy of variables reported in Journal of Applied Psychology and Personnel Psychology from 1980 to 2010 to produce empirical effect size benchmarks at the omnibus level, for 20 common research domains, and for an even finer grained level of generality. Results indicate that the usual interpretation and classification of effect sizes as small, medium, and large bear almost no resemblance to findings in the field, because distributions of effect sizes exhibit tertile partitions at values approximately one-half to one-third those intuited by Cohen (1988). Our results offer information that can be used for research planning and design purposes, such as producing better informed non-nil hypotheses and estimating statistical power and planning sample size accordingly. We also offer information useful for understanding the relative importance of the effect sizes found in a particular study in relationship to others and which research domains have advanced more or less, given that larger effect sizes indicate a better understanding of a phenomenon. Also, our study offers information about research domains for which the investigation of moderating effects may be more fruitful and provide information that is likely to facilitate the implementation of Bayesian analysis. Finally, our study offers information that practitioners can use to evaluate the relative effectiveness of various types of interventions. PMID:25314367
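
    As a hedged illustration of the tertile idea (with randomly generated stand-in values, not the authors' corpus), empirical benchmarks fall out of simple percentile cuts on the distribution of observed correlations:

        # Hypothetical sketch: empirical small/medium/large cut points as
        # tertiles of a distribution of absolute correlations.
        import numpy as np

        observed_r = np.abs(np.random.default_rng(0).normal(0.15, 0.12, 5000))
        small, medium = np.percentile(observed_r, [33.3, 66.7])
        print(f"empirical cuts: small <= {small:.2f} < medium <= {medium:.2f} < large")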

  1. Virtual machine performance benchmarking.

    PubMed

    Langer, Steve G; French, Todd

    2011-10-01

    The attractions of virtual computing are many: reduced costs, reduced resources and simplified maintenance. Any one of these would be compelling for a medical imaging professional attempting to support a complex practice on limited resources in an era of ever-tightened reimbursement. In particular, the ability to run multiple operating systems optimized for different tasks (computational image processing on Linux versus office tasks on Microsoft operating systems) on a single physical machine is compelling. However, there are also potential drawbacks. High-performance workloads need careful consideration if they are to be executed in an environment where the running software must pass through multiple layers of device drivers before reaching the real disk or network interface. Our lab has attempted to gain insight into the impact of virtualization on performance by benchmarking the following metrics on both physical and virtual platforms: local memory and disk bandwidth, network bandwidth, and integer and floating point performance. The virtual performance metrics are compared to baseline performance on "bare metal." The results are complex, and indeed somewhat surprising. PMID:21207096
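
    As a loose, hypothetical illustration of the comparison (not the authors' harness), running an identical compute-bound script on bare metal and inside a VM yields one point of comparison for the floating point metric:

        # Hypothetical sketch: a floating point throughput probe to run on
        # both a physical host and a virtual machine.
        import time

        def fp_score(n=2_000_000):
            t0 = time.perf_counter()
            acc = 0.0
            for i in range(1, n):
                acc += 1.0 / (i * i)
            return n / (time.perf_counter() - t0)   # loop iterations per second

        print(f"{fp_score():.3e} iterations/s")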

  2. Issues in Benchmark Metric Selection

    NASA Astrophysics Data System (ADS)

    Crolotte, Alain

    It is true that a metric can influence a benchmark, but will esoteric metrics create more problems than they solve? We answer this question affirmatively by examining the case of the TPC-D metric, which used the much-debated geometric mean for the single-stream test. We show how this simple choice influenced the benchmark and its conduct and, to some extent, DBMS development. After examining other alternatives, our conclusion is that the “real” measure for a decision-support benchmark is the arithmetic mean.
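
    The contrast is easy to demonstrate with invented timings: a large improvement on only the shortest query barely moves the arithmetic mean but sharply shifts the geometric mean, as in this hypothetical sketch:

        # Hypothetical sketch: sensitivity of the two means to one fast query.
        from math import prod

        times_before = [100.0, 10.0, 1.0]     # seconds for three queries
        times_after  = [100.0, 10.0, 0.01]    # only the shortest query improved

        for label, t in [("before", times_before), ("after", times_after)]:
            arith = sum(t) / len(t)
            geo = prod(t) ** (1 / len(t))
            print(f"{label}: arithmetic={arith:.2f}s geometric={geo:.2f}s")
        # The arithmetic mean barely changes (37.00 -> 36.67), while the
        # geometric mean drops from 10.00 to about 2.15.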

  3. A Heterogeneous Medium Analytical Benchmark

    SciTech Connect

    Ganapol, B.D.

    1999-09-27

    A benchmark, called benchmark BLUE, has been developed for one-group neutral particle (neutron or photon) transport in a one-dimensional sub-critical heterogeneous plane-parallel medium with surface illumination. General anisotropic scattering is accommodated through the Green's Function Method (GFM). Numerical Fourier transform inversion is used to generate the required Green's functions, which are kernels of coupled integral equations that give the exiting angular fluxes. The interior scalar flux is then obtained through quadrature. A compound iterative procedure for quadrature order and slab surface source convergence provides highly accurate benchmark-quality results (4 to 5 places of accuracy).

  4. California commercial building energy benchmarking

    SciTech Connect

    Kinney, Satkartar; Piette, Mary Ann

    2003-07-01

    Building energy benchmarking is the comparison of whole-building energy use relative to a set of similar buildings. It provides a useful starting point for individual energy audits and for targeting buildings for energy-saving measures in multiple-site audits. Benchmarking is of interest and practical use to a number of groups. Energy service companies and performance contractors communicate energy savings potential with "typical" and "best-practice" benchmarks while control companies and utilities can provide direct tracking of energy use and combine data from multiple buildings. Benchmarking is also useful in the design stage of a new building or retrofit to determine if a design is relatively efficient. Energy managers and building owners have an ongoing interest in comparing energy performance to others. Large corporations, schools, and government agencies with numerous facilities also use benchmarking methods to compare their buildings to each other. The primary goal of Task 2.1.1 Web-based Benchmarking was the development of a web-based benchmarking tool, dubbed Cal-Arch, for benchmarking energy use in California commercial buildings. While there were several other benchmarking tools available to California consumers prior to the development of Cal-Arch, there were none that were based solely on California data. Most available benchmarking information, including the Energy Star performance rating, were developed using DOE's Commercial Building Energy Consumption Survey (CBECS), which does not provide state-level data. Each database and tool has advantages as well as limitations, such as the number of buildings and the coverage by type, climate regions and end uses. There is considerable commercial interest in benchmarking because it provides an inexpensive method of screening buildings for tune-ups and retrofits. However, private companies who collect and manage consumption data are concerned that the identities of building owners might be revealed and
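
    At its core, such benchmarking places one building's energy use intensity (EUI) within a peer distribution. A hedged sketch with invented peer figures:

        # Hypothetical sketch: locating a building's EUI among its peers.
        import numpy as np

        peer_eui = np.array([45, 52, 60, 63, 70, 74, 80, 88, 95, 110])  # kBtu/ft2-yr
        my_eui = 66.0

        percentile = (peer_eui < my_eui).mean() * 100
        print(f"building uses more energy than {percentile:.0f}% of its peers")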

  5. Issues in Benchmarking Human Reliability Analysis Methods: A Literature Review

    SciTech Connect

    Ronald L. Boring; Stacey M. L. Hendrickson; John A. Forester; Tuan Q. Tran; Erasmia Lois

    2010-06-01

    There is a diversity of human reliability analysis (HRA) methods available for use in assessing human performance within probabilistic risk assessments (PRA). Due to the significant differences in the methods, including the scope, approach, and underlying models, there is a need for an empirical comparison investigating the validity and reliability of the methods. To accomplish this empirical comparison, a benchmarking study comparing and evaluating HRA methods in assessing operator performance in simulator experiments is currently underway. In order to account for as many effects as possible in the construction of this benchmarking study, a literature review was conducted, reviewing past benchmarking studies in the areas of psychology and risk assessment. A number of lessons learned through these studies are presented in order to aid in the design of future HRA benchmarking endeavors.

  6. Shielding integral benchmark archive and database (SINBAD)

    SciTech Connect

    Kirk, B.L.; Grove, R.E.; Kodeli, I.; Gulliford, J.; Sartori, E.

    2011-07-01

    The shielding integral benchmark archive and database (SINBAD) collection of experiments descriptions was initiated in the early 1990s. SINBAD is an international collaboration between the Organization for Economic Cooperation and Development's Nuclear Energy Agency Data Bank (OECD/NEADB) and the Radiation Safety Information Computational Center (RSICC) at Oak Ridge National Laboratory (ORNL). SINBAD was designed to compile experiments and corresponding computational models with the goal of preserving institutional knowledge and expertise that need to be handed down to future scientists. SINBAD can serve as a learning tool for university students and scientists who need to design experiments or gain expertise in modeling and simulation. The SINBAD database is currently divided into three categories - fission, fusion, and accelerator experiments. Many experiments are described and analyzed using deterministic or stochastic (Monte Carlo) radiation transport software. The nuclear cross sections also play an important role as they are necessary in performing computational analysis. (authors)

  7. Benchmarking hypercube hardware and software

    NASA Technical Reports Server (NTRS)

    Grunwald, Dirk C.; Reed, Daniel A.

    1986-01-01

    It was long a truism in computer systems design that balanced systems achieve the best performance. Message passing parallel processors are no different. To quantify the balance of a hypercube design, an experimental methodology was developed and the associated suite of benchmarks was applied to several existing hypercubes. The benchmark suite includes tests of both processor speed in the absence of internode communication and message transmission speed as a function of communication patterns.

  8. Data-Intensive Benchmarking Suite

    2008-11-26

    The Data-Intensive Benchmark Suite is a set of programs written for the study of data- or storage-intensive science and engineering problems. The benchmark sets cover: general graph searching (basic and Hadoop Map/Reduce breadth-first search), genome sequence searching, HTTP request classification (basic and Hadoop Map/Reduce), low-level data communication, and storage device micro-benchmarking.
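
    As a hedged illustration of one kernel the suite covers (this sketch is not taken from the suite, whose implementations include a Hadoop Map/Reduce variant), a basic breadth-first search looks as follows:

        # Hypothetical sketch: basic breadth-first graph search.
        from collections import deque

        def bfs(adjacency, source):
            """Return the hop distance from source to every reachable vertex."""
            dist = {source: 0}
            queue = deque([source])
            while queue:
                u = queue.popleft()
                for v in adjacency.get(u, ()):
                    if v not in dist:
                        dist[v] = dist[u] + 1
                        queue.append(v)
            return dist

        graph = {0: [1, 2], 1: [3], 2: [3], 3: [4]}
        print(bfs(graph, 0))   # {0: 0, 1: 1, 2: 1, 3: 2, 4: 3}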

  9. Research on computer systems benchmarking

    NASA Technical Reports Server (NTRS)

    Smith, Alan Jay (Principal Investigator)

    1996-01-01

    This grant addresses the topic of research on computer systems benchmarking and is more generally concerned with performance issues in computer systems. This report reviews work in those areas during the period of NASA support under this grant. The bulk of the work concerned benchmarking and analysis of CPUs, compilers, caches, and benchmark programs. The first part of this work addressed benchmark performance prediction: a new approach to benchmarking and machine characterization was reported, using a machine characterizer that measures the performance of a given system in terms of a Fortran abstract machine. Another report focused on compiler performance, analyzing the impact of optimization within the same abstract-machine-based methodology for CPU performance characterization. Benchmark programs are analyzed in another paper: a machine-independent model of program execution was developed to characterize both machine performance and program execution, and by merging these machine and program characterizations, execution time can be estimated for arbitrary machine/program combinations. The work was continued into the domain of parallel and vector machines, including the issue of caches in vector processors and multiprocessors. All of the aforementioned accomplishments, as well as smaller efforts supported by this grant, are summarized more specifically in this report.

  10. Improving Mass Balance Modeling of Benchmark Glaciers

    NASA Astrophysics Data System (ADS)

    van Beusekom, A. E.; March, R. S.; O'Neel, S.

    2009-12-01

    The USGS monitors long-term glacier mass balance at three benchmark glaciers in different climate regimes. The coastal and continental regimes are represented by Wolverine and Gulkana Glaciers in Alaska, respectively. Field measurements began in 1966 and continue. We have reanalyzed the published balance time series with more modern methods and recomputed reference-surface and conventional balances. Addition of the most recent data shows a continuing trend of mass loss. We compare the updated balances to the previously accepted balances and discuss differences. Not all balance quantities can be determined from the field measurements. For surface processes, we model missing information with an improved degree-day model. Degree-day models predict ablation from the sum of positive daily mean temperatures and an empirical degree-day factor. We modernize the traditional degree-day model and derive new degree-day factors in an effort to match the balance time series more closely and thus better predict the future state of the benchmark glaciers. For subsurface processes, we model the refreezing of meltwater as internal accumulation. We examine the sensitivity of the balance time series to internal accumulation, with the goal of determining the best way to include it in balance estimates.
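
    The classic degree-day relation is compact enough to sketch. The factor and temperatures below are invented for illustration, not the study's values:

        # Hypothetical sketch: melt as the positive degree-day sum scaled by
        # an empirical degree-day factor (DDF).
        def degree_day_melt(daily_mean_temps_c, ddf_mm_per_degc_day):
            """Return melt (mm w.e.) from a record of daily mean temperatures."""
            pdd = sum(t for t in daily_mean_temps_c if t > 0)   # positive degree-days
            return ddf_mm_per_degc_day * pdd

        sample_week = [3.5, 5.0, -1.2, 8.1, 6.4]   # deg C
        print(f"{degree_day_melt(sample_week, 4.1):.1f} mm w.e.")   # about 94.3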

  11. Benchmarking numerical freeze/thaw models

    NASA Astrophysics Data System (ADS)

    Rühaak, Wolfram; Anbergen, Hauke; Molson, John; Grenier, Christophe; Sass, Ingo

    2015-04-01

    The modeling of freezing and thawing of water in porous media is of increasing interest and has very different application areas. For instance, the modeling of permafrost regression with respect to climate change is one area; others include geotechnical applications in tunneling and borehole heat exchangers which operate at temperatures below the freezing point. The modeling of these processes requires the solution of a coupled non-linear system of partial differential equations for flow and heat transport in space and time. Different code implementations have been developed in the past, and analytical solutions exist only for simple cases. Consequently, an interest has arisen in benchmarking different codes against analytical solutions, experiments and purely numerical results, similar to the long-standing DECOVALEX and the more recent "Geothermal Code Comparison" activities. The name of this freezing/thawing benchmark consortium is INTERFROST. In addition to the well-known Lunardini solution for a 1D case (case T1), two different 2D problems will be presented: one representing melting of a frozen inclusion (case TH2) and another representing the growth or thaw of permafrost around a talik (case TH3). These talik regions are important for controlling groundwater movement within mainly frozen ground. First results of the different benchmark cases will be shown and discussed.

  12. POTENTIAL BENCHMARKS FOR ACTINIDE PRODUCTION IN HANFORD REACTORS

    SciTech Connect

    PUIGH RJ; TOFFER H

    2011-10-19

    A significant experimental program was conducted in the early Hanford reactors to understand the reactor production of actinides. In some cases these experiments were conducted with sufficient rigor to provide information that can be used today to develop benchmark experiments for validating present computer codes for the production of these actinides in low-enriched uranium fuel.

  13. Benchmarking for the Learning and Skills Sector.

    ERIC Educational Resources Information Center

    Owen, Jane

    This document is designed to introduce practitioners in the United Kingdom's learning and skills sector to the principles and practice of benchmarking. The first section defines benchmarking and differentiates metric, diagnostic, and process benchmarking. The remainder of the booklet details the following steps of the benchmarking process: (1) get…

  14. NHT-1 I/O Benchmarks

    NASA Technical Reports Server (NTRS)

    Carter, Russell; Ciotti, Bob; Fineberg, Sam; Nitzbert, Bill

    1992-01-01

    The NHT-1 benchmarks are a set of three scalable I/O benchmarks suitable for evaluating the I/O subsystems of high-performance distributed-memory computer systems. The benchmarks test application I/O, maximum sustained disk I/O, and maximum sustained network I/O. Sample codes are available which implement the benchmarks.
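
    The sample codes are the authoritative implementations; purely as a hedged illustration of the sustained-disk-I/O flavor of measurement, a script might time buffered writes followed by a final fsync:

        # Hypothetical sketch: sustained disk write throughput.
        import os, time, tempfile

        def disk_write_mb_per_s(total_mb=64, block_kb=1024):
            block = os.urandom(block_kb * 1024)
            with tempfile.NamedTemporaryFile() as f:
                t0 = time.perf_counter()
                for _ in range(total_mb * 1024 // block_kb):
                    f.write(block)
                f.flush()
                os.fsync(f.fileno())
                elapsed = time.perf_counter() - t0
            return total_mb / elapsed

        print(f"{disk_write_mb_per_s():.1f} MB/s")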

  15. How to avoid 'death by benchmarking'.

    PubMed

    Wofford, Dave; Libby, Darin

    2015-08-01

    Hospitals and health systems should adopt four key principles and practices when applying benchmarks to determine physician compensation: Acknowledge that a lower percentile may be appropriate. Use the median as the all-in benchmark. Use peer benchmarks when available. Use alternative benchmarks. PMID:26548140

  16. NRC-BNL BENCHMARK PROGRAM ON EVALUATION OF METHODS FOR SEISMIC ANALYSIS OF COUPLED SYSTEMS.

    SciTech Connect

    XU,J.

    1999-08-15

    A NRC-BNL benchmark program for evaluation of state-of-the-art analysis methods and computer programs for seismic analysis of coupled structures with non-classical damping is described. The program includes a series of benchmark problems designed to investigate various complexities, applications, and limitations associated with methods for analysis of non-classically damped structures. Discussions are provided on the benchmarking process, benchmark structural models, and the evaluation approach, as well as benchmarking ground rules. It is expected that the findings and insights, as well as recommendations, from this program will be useful in developing new acceptance criteria and providing guidance for future regulatory activities involving licensing applications of these alternate methods to coupled systems.

  17. Integral Benchmark Data for Nuclear Data Testing Through the ICSBEP & IRPhEP

    SciTech Connect

    J. Blair Briggs; John D. Bess; Jim Gulliford; Ian Hill

    2013-10-01

    The status of the International Criticality Safety Benchmark Evaluation Project (ICSBEP) and International Reactor Physics Experiment Evaluation Project (IRPhEP) was last discussed directly with the nuclear data community at ND2007. Since ND2007, integral benchmark data that are available for nuclear data testing have increased significantly. The status of the ICSBEP and the IRPhEP is discussed and selected benchmark configurations that have been added to the ICSBEP and IRPhEP Handbooks since ND2007 are highlighted.

  18. 2008 ULTRASONIC BENCHMARK STUDIES OF INTERFACE CURVATURE--A SUMMARY

    SciTech Connect

    Schmerr, L. W.; Huang, R.; Raillon, R.; Mahaut, S.; Leymarie, N.; Lonne, S.; Spies, M.; Lupien, V.

    2009-03-03

    In the 2008 QNDE ultrasonic benchmark session, researchers from five institutions around the world examined the influence that the curvature of a cylindrical fluid-solid interface has on the measured NDE immersion pulse-echo response of a flat-bottom hole (FBH) reflector. This was a repeat of a study conducted in the 2007 benchmark, intended to determine the sources of the differences seen in 2007 between model-based predictions and experiments. Here, we summarize the results obtained in 2008 and analyze the model-based results and the experiments.

  19. Closed-Loop Neuromorphic Benchmarks

    PubMed Central

    Stewart, Terrence C.; DeWolf, Travis; Kleinhans, Ashley; Eliasmith, Chris

    2015-01-01

    Evaluating the effectiveness and performance of neuromorphic hardware is difficult. It is even more difficult when the task of interest is a closed-loop task; that is, a task where the output from the neuromorphic hardware affects some environment, which then in turn affects the hardware's future input. However, closed-loop situations are one of the primary potential uses of neuromorphic hardware. To address this, we present a methodology for generating closed-loop benchmarks that makes use of a hybrid of real physical embodiment and a type of “minimal” simulation. Minimal simulation has been shown to lead to robust real-world performance, while still maintaining the practical advantages of simulation, such as making it easy for the same benchmark to be used by many researchers. This method is flexible enough to allow researchers to explicitly modify the benchmarks to identify specific task domains where particular hardware excels. To demonstrate the method, we present a set of novel benchmarks that focus on motor control for an arbitrary system with unknown external forces. Using these benchmarks, we show that an error-driven learning rule can consistently improve motor control performance across a randomly generated family of closed-loop simulations, even when there are up to 15 interacting joints to be controlled. PMID:26696820

  20. Closed-Loop Neuromorphic Benchmarks.

    PubMed

    Stewart, Terrence C; DeWolf, Travis; Kleinhans, Ashley; Eliasmith, Chris

    2015-01-01

    Evaluating the effectiveness and performance of neuromorphic hardware is difficult. It is even more difficult when the task of interest is a closed-loop task; that is, a task where the output from the neuromorphic hardware affects some environment, which then in turn affects the hardware's future input. However, closed-loop situations are one of the primary potential uses of neuromorphic hardware. To address this, we present a methodology for generating closed-loop benchmarks that makes use of a hybrid of real physical embodiment and a type of "minimal" simulation. Minimal simulation has been shown to lead to robust real-world performance, while still maintaining the practical advantages of simulation, such as making it easy for the same benchmark to be used by many researchers. This method is flexible enough to allow researchers to explicitly modify the benchmarks to identify specific task domains where particular hardware excels. To demonstrate the method, we present a set of novel benchmarks that focus on motor control for an arbitrary system with unknown external forces. Using these benchmarks, we show that an error-driven learning rule can consistently improve motor control performance across a randomly generated family of closed-loop simulations, even when there are up to 15 interacting joints to be controlled. PMID:26696820
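
    The closed-loop idea can be pictured in miniature: the controller's output changes the plant state, which changes the next input, and an error-driven rule adapts the controller online. The sketch below is a hypothetical one-joint toy, not the authors' benchmark code:

        # Hypothetical sketch: a one-joint plant under a proportional
        # controller whose gain is adapted by an error-driven update.
        def run_trial(adapt_rate, steps=200, target=1.0):
            position, gain, total_error = 0.0, 0.1, 0.0
            for _ in range(steps):
                error = target - position
                position += 0.1 * (gain * error)   # crude plant dynamics
                gain += adapt_rate * abs(error)    # error-driven learning rule
                total_error += abs(error)
            return total_error

        print(run_trial(adapt_rate=0.0))    # fixed controller
        print(run_trial(adapt_rate=0.05))   # adaptive controller accumulates less error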

  1. FLOWTRAN-TF code benchmarking

    SciTech Connect

    Flach, G.P.

    1990-12-01

    FLOWTRAN-TF is a two-component (air-water), two-phase thermal-hydraulics code designed for performing accident analyses of SRS reactor fuel assemblies during the Emergency Cooling System (ECS) phase of a Double Ended Guillotine Break (DEGB) Loss Of Coolant Accident (LOCA). A description of the code is given by Flach et al. (1990). This report provides benchmarking results for the version of FLOWTRAN-TF used to compute the Recommended K-Reactor Restart ECS Power Limit (Smith et al., 1990a; 1990b). Individual constitutive relations are benchmarked in Sections 2 through 5 while in Sections 6 and 7 integral code benchmarking results are presented. An overall assessment of FLOWTRAN-TF for its intended use in computing the ECS power limit completes the document.

  2. Radiation Detection Computational Benchmark Scenarios

    SciTech Connect

    Shaver, Mark W.; Casella, Andrew M.; Wittman, Richard S.; McDonald, Ben S.

    2013-09-24

    Modeling forms an important component of radiation detection development, allowing for testing of new detector designs, evaluation of existing equipment against a wide variety of potential threat sources, and assessment of the operational performance of radiation detection systems. This can, however, result in large and complex scenarios which are time-consuming to model. A variety of approaches to radiation transport modeling exist with complementary strengths and weaknesses for different problems. This variety of approaches, and the development of promising new tools (such as ORNL’s ADVANTG) which combine the benefits of multiple approaches, illustrates the need for a means of evaluating or comparing different techniques for radiation detection problems. This report presents a set of 9 benchmark problems for comparing different types of radiation transport calculations, identifying appropriate tools for classes of problems, and testing and guiding the development of new methods. The benchmarks were drawn primarily from existing or previous calculations, with a preference for scenarios which include experimental data or otherwise have results with a high level of confidence, are non-sensitive, and represent problem sets of interest to NA-22. From a technical perspective, the benchmarks were chosen to span a range of difficulty, to include gamma transport, neutron transport, or both, and to represent different important physical processes and a range of sensitivity to angular or energy fidelity. Following benchmark identification, existing information about geometry, measurements, and previous calculations was assembled. Monte Carlo results (MCNP decks) were reviewed or created and re-run in order to attain accurate computational times and to verify agreement with experimental data, when present. Benchmark information was then conveyed to ORNL in order to guide testing and development of hybrid calculations. The results of those ADVANTG calculations were then sent to PNNL for

  3. Current Reactor Physics Benchmark Activities at the Idaho National Laboratory

    SciTech Connect

    John D. Bess; Margaret A. Marshall; Mackenzie L. Gorham; Joseph Christensen; James C. Turnbull; Kim Clark

    2011-11-01

    The International Reactor Physics Experiment Evaluation Project (IRPhEP) [1] and the International Criticality Safety Benchmark Evaluation Project (ICSBEP) [2] were established to preserve integral reactor physics and criticality experiment data for present and future research. These valuable assets provide the basis for recording, developing, and validating our integral nuclear data, and experimental and computational methods. These projects are managed through the Idaho National Laboratory (INL) and the Organisation for Economic Co-operation and Development Nuclear Energy Agency (OECD-NEA). Staff and students at the Department of Energy - Idaho (DOE-ID) and INL are engaged in the development of benchmarks to support ongoing research activities. These benchmarks include reactors or assemblies that support Next Generation Nuclear Plant (NGNP) research, space nuclear Fission Surface Power System (FSPS) design validation, and currently operational facilities in Southeastern Idaho.

  4. Simplified two and three dimensional HTTR benchmark problems

    SciTech Connect

    Zhan Zhang; Dingkang Zhang; Justin M. Pounders; Abderrafi M. Ougouag

    2011-05-01

    To assess the accuracy of diffusion or transport methods for reactor calculations, it is desirable to create heterogeneous benchmark problems that are typical of whole-core configurations. In this paper we have created two- and three-dimensional numerical benchmark problems typical of high-temperature gas-cooled prismatic cores. Additionally, single-cell and single-block benchmark problems are included. These problems were derived from the HTTR start-up experiment. Since the primary utility of the benchmark problems is in code-to-code verification, minor details regarding geometry and material specification of the original experiment have been simplified while retaining the heterogeneity and the major physics properties of the core from a neutronics viewpoint. A six-group material (macroscopic) cross section library has been generated for the benchmark problems using the lattice depletion code HELIOS. Using this library, Monte Carlo solutions are presented for three configurations (all-rods-in, partially-controlled and all-rods-out) for both the 2D and 3D problems. These solutions include the core eigenvalues, the block (assembly) averaged fission densities, local peaking factors, the absorption densities in the burnable poison and control rods, and the pin fission density distribution for selected blocks. Also included are the solutions for the single-cell and single-block problems.

  5. Developing Financial Benchmarks for Critical Access Hospitals

    PubMed Central

    Pink, George H.; Holmes, George M.; Slifkin, Rebecca T.; Thompson, Roger E.

    2009-01-01

    This study developed and applied benchmarks for five indicators included in the CAH Financial Indicators Report, an annual, hospital-specific report distributed to all critical access hospitals (CAHs). An online survey of Chief Executive Officers and Chief Financial Officers was used to establish benchmarks. Indicator values for 2004, 2005, and 2006 were calculated for 421 CAHs and hospital performance was compared to the benchmarks. Although many hospitals performed better than benchmark on one indicator in 1 year, very few performed better than benchmark on all five indicators in all 3 years. The probability of performing better than benchmark differed among peer groups. PMID:19544935
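
    As a hedged sketch of the comparison performed (indicator names, directions, and values are invented stand-ins for the report's actual indicators), checking a hospital against benchmarks reduces to a per-indicator test applied across years:

        # Hypothetical sketch: does a hospital beat benchmark on every indicator?
        benchmarks = {"total_margin": 0.02, "days_cash": 60, "salaries_ratio": 0.45}
        lower_is_better = {"salaries_ratio"}

        hospital = {2004: {"total_margin": 0.03, "days_cash": 75, "salaries_ratio": 0.41},
                    2005: {"total_margin": 0.01, "days_cash": 80, "salaries_ratio": 0.43}}

        def beats(indicator, value):
            if indicator in lower_is_better:
                return value <= benchmarks[indicator]
            return value >= benchmarks[indicator]

        for year, values in hospital.items():
            ok = all(beats(k, v) for k, v in values.items())
            print(year, "meets all benchmarks" if ok else "misses at least one")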

  6. Benchmarking of the FENDL-3 Neutron Cross-section Data Starter Library for Fusion Applications

    SciTech Connect

    Fischer, U.; Angelone, M.; Bohm, T.; Kondo, K.; Konno, C.; Sawan, M.; Villari, R.; Walker, B.

    2014-06-15

    This paper summarizes the benchmark analyses performed in a joint effort of ENEA (Italy), JAEA (Japan), KIT (Germany), and the University of Wisconsin (USA) on a computational ITER benchmark and a series of 14 MeV neutron benchmark experiments. The computational benchmark revealed a modest increase of the neutron flux levels in the deep penetration regions and a substantial increase of the gas production in steel components. The comparison to experimental results showed good agreement with no substantial differences between FENDL-3.0 and FENDL-2.1 for most of the responses. In general, FENDL-3 shows an improved performance for fusion neutronics applications.

  7. Criticality safety benchmark evaluation project: Recovering the past

    SciTech Connect

    Trumble, E.F.

    1997-06-01

    A very brief summary of the Criticality Safety Benchmark Evaluation Project of the Westinghouse Savannah River Company is provided in this paper. The purpose of the project is to provide a source of evaluated criticality safety experiments in an easily usable format. Another project goal is to search for any experiments that may have been lost or contain discrepancies, and to determine if they can be used. Results of evaluated experiments are being published as US DOE handbooks.

  8. Benchmarking neuromorphic vision: lessons learnt from computer vision

    PubMed Central

    Tan, Cheston; Lallee, Stephane; Orchard, Garrick

    2015-01-01

    Neuromorphic Vision sensors have improved greatly since the first silicon retina was presented almost three decades ago. They have recently matured to the point where they are commercially available and can be operated by laymen. However, despite improved availability of sensors, there remains a lack of good datasets, while algorithms for processing spike-based visual data are still in their infancy. On the other hand, frame-based computer vision algorithms are far more mature, thanks in part to widely accepted datasets which allow direct comparison between algorithms and encourage competition. We are presented with a unique opportunity to shape the development of Neuromorphic Vision benchmarks and challenges by leveraging what has been learnt from the use of datasets in frame-based computer vision. Taking advantage of this opportunity, in this paper we review the role that benchmarks and challenges have played in the advancement of frame-based computer vision, and suggest guidelines for the creation of Neuromorphic Vision benchmarks and challenges. We also discuss the unique challenges faced when benchmarking Neuromorphic Vision algorithms, particularly when attempting to provide direct comparison with frame-based computer vision. PMID:26528120

  9. PRISMATIC CORE COUPLED TRANSIENT BENCHMARK

    SciTech Connect

    J. Ortensi; M.A. Pope; G. Strydom; R.S. Sen; M.D. DeHart; H.D. Gougar; C. Ellis; A. Baxter; V. Seker; T.J. Downar; K. Vierow; K. Ivanov

    2011-06-01

    The Prismatic Modular Reactor (PMR) is one of the High Temperature Reactor (HTR) design concepts that have existed for some time. Several prismatic units have operated in the world (DRAGON, Fort St. Vrain, Peach Bottom) and one unit is still in operation (HTTR). The deterministic neutronics and thermal-fluids transient analysis tools and methods currently available for the design and analysis of PMRs have lagged behind the state of the art relative to LWR technologies. This has motivated the development of more accurate and efficient tools for the design and safety evaluations of the PMR. In addition to the work invested in new methods, it is essential to develop appropriate benchmarks to verify and validate the new methods in computer codes. The purpose of this benchmark is to establish a well-defined problem, based on a common given set of data, to compare methods and tools in core simulation and thermal-hydraulics analysis, with a specific focus on transient events. The benchmark working group is currently seeking OECD/NEA sponsorship. This benchmark is being pursued and is heavily based on the success of the PBMR-400 exercise.

  10. Austin Community College Benchmarking Update.

    ERIC Educational Resources Information Center

    Austin Community Coll., TX. Office of Institutional Effectiveness.

    Austin Community College contracted with MGT of America, Inc. in spring 1999 to develop a peer and benchmark (best) practices analysis on key indicators. These indicators were updated in spring 2002 using data from eight Texas community colleges and four non-Texas institutions that represent large, comprehensive, urban community colleges, similar…

  11. Processor Emulator with Benchmark Applications

    SciTech Connect

    Lloyd, G. Scott; Pearce, Roger; Gokhale, Maya

    2015-11-13

    A processor emulator and a suite of benchmark applications have been developed to assist in characterizing the performance of data-centric workloads on current and future computer architectures. Some of the applications have been collected from other open source projects. For more details on the emulator and an example of its usage, see reference [1].

  12. PyMPI Dynamic Benchmark

    2007-02-16

    Pynamic is a benchmark designed to test a system's ability to handle the dynamic linking and loading (DLL) requirements of Python-based scientific applications. The benchmark was developed to add a workload to our testing environment, one that represents a newly emerging class of DLL behaviors. Pynamic builds on pyMPI, an MPI extension to Python, adding C-extension dummy codes and a glue layer that facilitates linking and loading of the generated dynamic modules into the resulting pyMPI. Pynamic is configurable, enabling modeling of the static properties of a specific code, as described in section 5. It does not, however, model any significant computations of the target code and hence is not subject to the same level of control as the target code. In fact, HPC computer vendors and tool developers will be encouraged to add it to their testing suites once the code release is completed. The ability to build and run this benchmark is an effective test for validating the capability of a compiler and linker/loader as well as the OS kernel and other runtime systems of HPC computer vendors. In addition, the benchmark is designed as a test case for stressing code development tools. Though Python has recently gained popularity in the HPC community, its heavy DLL operations have hindered certain HPC code development tools, notably parallel debuggers, from performing optimally.
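
    The following is a simplified, pure-Python analogue of the loading behavior a Pynamic-style run stresses: generate many small modules and time their dynamic import. The real benchmark generates C-extension shared libraries and loads them into an MPI-enabled Python, so this sketch only illustrates the idea.

    ```python
    import importlib
    import os
    import sys
    import tempfile
    import time

    def make_dummy_modules(n, directory):
        """Write n trivial Python modules to stand in for generated DLLs."""
        for i in range(n):
            with open(os.path.join(directory, f"dummy_{i}.py"), "w") as f:
                f.write(f"def entry():\n    return {i}\n")

    with tempfile.TemporaryDirectory() as tmp:
        make_dummy_modules(500, tmp)
        sys.path.insert(0, tmp)
        start = time.perf_counter()
        for i in range(500):
            importlib.import_module(f"dummy_{i}")   # dynamic load, one by one
        elapsed = time.perf_counter() - start
        print(f"loaded 500 modules in {elapsed:.3f} s")
    ```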

  13. Real-Time Benchmark Suite

    1992-01-17

    This software provides a portable benchmark suite for real time kernels. It tests the performance of many of the system calls, as well as the interrupt response time and task response time to interrupts. These numbers provide a baseline for comparing various real-time kernels and hardware platforms.

  14. Benchmarking in Czech Higher Education: The Case of Schools of Economics

    ERIC Educational Resources Information Center

    Placek, Michal; Ochrana, František; Pucek, Milan

    2015-01-01

    This article describes the use of benchmarking in universities in the Czech Republic and academics' experiences with it. It is based on research conducted among academics from economics schools in Czech public and private universities. The results identified several issues regarding the utilisation and understanding of benchmarking in the Czech…

  15. Benchmarking Evaluation Results for Prototype Extravehicular Activity Gloves

    NASA Technical Reports Server (NTRS)

    Aitchison, Lindsay; McFarland, Shane

    2012-01-01

    The Space Suit Assembly (SSA) Development Team at NASA Johnson Space Center has invested heavily in the advancement of rear-entry planetary exploration suit design but largely deferred development of extravehicular activity (EVA) glove designs, and accepted the risk of using the current flight gloves, Phase VI, for unique mission scenarios outside the Space Shuttle and International Space Station (ISS) Program realm of experience. However, as design reference missions mature, the risks of using heritage hardware have highlighted the need for developing robust new glove technologies. To address the technology gap, the NASA Game-Changing Technology group provided start-up funding for the High Performance EVA Glove (HPEG) Project in the spring of 2012. The overarching goal of the HPEG Project is to develop a robust glove design that increases human performance during EVA and creates a pathway for future implementation of emergent technologies, with specific aims of increasing pressurized mobility to 60% of barehanded capability, increasing durability by 100%, and decreasing the potential of gloves to cause injury during use. The HPEG Project focused initial efforts on identifying potential new technologies and benchmarking the performance of current state-of-the-art gloves to identify trends in design and fit, leading to the establishment of standards and metrics against which emerging technologies can be assessed at both the component and assembly levels. The first of the benchmarking tests evaluated the quantitative mobility performance and subjective fit of four prototype gloves developed by Flagsuit LLC, Final Frontier Design, ILC Dover, and the David Clark Company as compared to the Phase VI. All of the companies were asked to design and fabricate gloves to the same set of NASA-provided hand measurements (which corresponded to a single size of Phase VI glove) and focus their efforts on improving mobility in the metacarpal phalangeal and carpometacarpal joints. Four test

  16. How do I know if my forecasts are better? Using benchmarks in hydrological ensemble prediction

    NASA Astrophysics Data System (ADS)

    Pappenberger, F.; Ramos, M. H.; Cloke, H. L.; Wetterhall, F.; Alfieri, L.; Bogner, K.; Mueller, A.; Salamon, P.

    2015-03-01

    The skill of a forecast can be assessed by comparing the relative proximity of both the forecast and a benchmark to the observations. Example benchmarks include climatology or a naïve forecast. Hydrological ensemble prediction systems (HEPS) are currently transforming the hydrological forecasting environment, but in this new field there is little information to guide researchers and operational forecasters on how benchmarks can best be used to evaluate their probabilistic forecasts. In this study, it is identified that the calculated forecast skill can vary depending on the benchmark selected and that the selection of a benchmark for determining forecasting system skill is sensitive to a number of hydrological and system factors. A benchmark intercomparison experiment is then undertaken using the continuous ranked probability score (CRPS), a reference forecasting system, and a suite of 23 different methods to derive benchmarks. The benchmarks are assessed within the operational set-up of the European Flood Awareness System (EFAS) to determine those that are 'toughest to beat' and so give the most robust discrimination of forecast skill, particularly for the spatial average fields that EFAS relies upon. Evaluating against an observed discharge proxy, the benchmark that has the most utility for EFAS and avoids the most naïve skill across different hydrological situations is found to be meteorological persistency. This benchmark uses the latest meteorological observations of precipitation and temperature to drive the hydrological model. Hydrological long-term average benchmarks, which are currently used in EFAS, are very easily beaten by the forecasting system, and their use produces much naïve skill. When decomposed into seasons, the advanced meteorological benchmarks, which make use of meteorological observations from the past 20 years at the same calendar date, have the most skill discrimination. They are also good at discriminating skill in low flows and for all
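
    The skill comparison described above is commonly expressed as a skill score relative to the chosen benchmark. The sketch below shows the computation with invented per-event CRPS values; it illustrates how the same forecast can look skilful against an easy benchmark (climatology) and nearly skill-free against a tough one (persistency).

    ```python
    import numpy as np

    def crpss(crps_forecast, crps_benchmark):
        """Continuous ranked probability skill score: 1 is perfect,
        0 matches the benchmark, negative is worse than the benchmark."""
        return 1.0 - np.mean(crps_forecast) / np.mean(crps_benchmark)

    crps_heps = np.array([0.42, 0.37, 0.55])   # forecasting system (invented)
    crps_clim = np.array([0.80, 0.75, 0.90])   # climatology benchmark
    crps_pers = np.array([0.45, 0.40, 0.50])   # meteorological persistency

    print(crpss(crps_heps, crps_clim))  # ~0.45: skilful vs an easy benchmark
    print(crpss(crps_heps, crps_pers))  # ~0.01: barely beats a tough one
    ```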

  17. The MAGNEX large acceptance spectrometer

    SciTech Connect

    Cavallaro, M.; Cappuzzello, F.; Cunsolo, A.; Carbone, D.; Foti, A.

    2010-03-01

    The main features of the MAGNEX large-acceptance magnetic spectrometer are described. It has a quadrupole + dipole layout and a hybrid detector located at the focal plane. The aberrations due to the large angular (50 msr) and momentum (±13%) acceptance are reduced by an accurate hardware design and then compensated by an innovative software ray-reconstruction technique. The resolutions obtained in energy, angle, and mass are presented in the paper. MAGNEX has been used up to now for different experiments in nuclear physics and astrophysics, confirming it to be a multipurpose device.

  18. Benchmarking short sequence mapping tools

    PubMed Central

    2013-01-01

    Background The development of next-generation sequencing instruments has led to the generation of millions of short sequences in a single run. The process of aligning these reads to a reference genome is time consuming and demands the development of fast and accurate alignment tools. However, the current proposed tools make different compromises between the accuracy and the speed of mapping. Moreover, many important aspects are overlooked while comparing the performance of a newly developed tool to the state of the art. Therefore, there is a need for an objective evaluation method that covers all the aspects. In this work, we introduce a benchmarking suite to extensively analyze sequencing tools with respect to various aspects and provide an objective comparison. Results We applied our benchmarking tests on 9 well known mapping tools, namely, Bowtie, Bowtie2, BWA, SOAP2, MAQ, RMAP, GSNAP, Novoalign, and mrsFAST (mrFAST) using synthetic data and real RNA-Seq data. MAQ and RMAP are based on building hash tables for the reads, whereas the remaining tools are based on indexing the reference genome. The benchmarking tests reveal the strengths and weaknesses of each tool. The results show that no single tool outperforms all others in all metrics. However, Bowtie maintained the best throughput for most of the tests while BWA performed better for longer read lengths. The benchmarking tests are not restricted to the mentioned tools and can be further applied to others. Conclusion The mapping process is still a hard problem that is affected by many factors. In this work, we provided a benchmarking suite that reveals and evaluates the different factors affecting the mapping process. Still, there is no tool that outperforms all of the others in all the tests. Therefore, the end user should clearly specify his needs in order to choose the tool that provides the best results. PMID:23758764
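
    One aspect of such a suite, scoring a mapper against the known true positions of simulated reads, can be sketched as below. The read IDs, positions, and tolerance are invented; a full suite would also measure throughput, memory use, and behavior on real RNA-Seq data.

    ```python
    def mapping_accuracy(reported, truth, tolerance=5):
        """reported/truth: dicts of read_id -> (chrom, pos).
        Returns (precision, recall) of the reported placements."""
        correct = sum(
            1 for rid, (chrom, pos) in reported.items()
            if rid in truth
            and truth[rid][0] == chrom
            and abs(truth[rid][1] - pos) <= tolerance
        )
        recall = correct / len(truth)        # fraction of reads placed right
        precision = correct / len(reported)  # fraction of placements right
        return precision, recall

    truth = {"r1": ("chr1", 100), "r2": ("chr2", 5000), "r3": ("chr1", 900)}
    reported = {"r1": ("chr1", 102), "r2": ("chr3", 10)}
    print(mapping_accuracy(reported, truth))   # (0.5, 0.333...)
    ```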

  19. Internal Quality Assurance Benchmarking. ENQA Workshop Report 20

    ERIC Educational Resources Information Center

    Blackstock, Douglas; Burquel, Nadine; Comet, Nuria; Kajaste, Matti; dos Santos, Sergio Machado; Marcos, Sandra; Moser, Marion; Ponds, Henri; Scheuthle, Harald; Sixto, Luis Carlos Velon

    2012-01-01

    The Internal Quality Assurance group of ENQA (IQA Group) has been organising a yearly seminar for its members since 2007. The main objective is to share experiences concerning the internal quality assurance of work processes in the participating agencies. The overarching theme of the 2011 seminar was how to use benchmarking as a tool for…

  20. Towards Systematic Benchmarking of Climate Model Performance

    NASA Astrophysics Data System (ADS)

    Gleckler, P. J.

    2014-12-01

    The process by which climate models are evaluated has evolved substantially over the past decade, with the Coupled Model Intercomparison Project (CMIP) serving as a centralizing activity for coordinating model experimentation and enabling research. Scientists with a broad spectrum of expertise have contributed to the CMIP model evaluation process, resulting in many hundreds of publications that have served as a key resource for the IPCC process. For several reasons, efforts are now underway to further systematize some aspects of the model evaluation process. First, some model evaluation can now be considered routine and should not require "re-inventing the wheel" or a journal publication simply to update results with newer models. Second, the benefit of CMIP research to model development has not been optimal because the publication of results generally takes several years and is usually not reproducible for benchmarking newer model versions. And third, there are now hundreds of model versions and many thousands of simulations, but there is no community-based mechanism for routinely monitoring model performance changes. An important change in the design of CMIP6 can help address these limitations. CMIP6 will include a small set of standardized experiments as an ongoing exercise (CMIP "DECK": ongoing Diagnostic, Evaluation and Characterization of Klima), so that modeling groups can submit them at any time and not be overly constrained by deadlines. In this presentation, efforts to establish routine benchmarking of existing and future CMIP simulations will be described. To date, some benchmarking tools have been made available to all CMIP modeling groups to enable them to readily compare with CMIP5 simulations during the model development process. A natural extension of this effort is to make results from all CMIP simulations widely available, including the results from newer models as soon as the simulations become available for research. Making the results from routine

  1. Strategy of DIN-PACS benchmark testing

    NASA Astrophysics Data System (ADS)

    Norton, Gary S.; Lyche, David K.; Richardson, Nancy E.; Thomas, Jerry A.; Romlein, John R.; Cawthon, Michael A.; Lawrence, David P.; Shelton, Philip D.; Parr, Laurence F.; Richardson, Ronald R., Jr.; Johnson, Steven L.

    1998-07-01

    The Digital Imaging Network -- Picture Archive and Communication System (DIN-PACS) procurement is the Department of Defense's (DoD) effort to bring military medical treatment facilities into the twenty-first century with nearly filmless digital radiology departments. The DIN-PACS procurement is unique among previous PACS acquisitions in that the Request for Proposals (RFP) required extensive benchmark testing prior to contract award. The strategy for benchmark testing was a reflection of the DoD's previous PACS and teleradiology experiences. The DIN-PACS Technical Evaluation Panel (TEP) consisted of DoD and civilian radiology professionals with unique clinical and technical PACS expertise. The TEP considered nine key functional requirements of the DIN-PACS acquisition: (1) DICOM Conformance, (2) System Storage and Archive, (3) Workstation Performance, (4) Network Performance, (5) Radiology Information System (RIS) functionality, (6) Hospital Information System (HIS)/RIS Interface, (7) Teleradiology, (8) Quality Control, and (9) System Reliability. The development of a benchmark test to properly evaluate these key requirements would require the TEP to make technical, operational, and functional decisions that had not been part of a previous PACS acquisition. Developing test procedures and scenarios that simulated inputs from radiology modalities and outputs to soft-copy workstations, film processors, and film printers would be a major undertaking. The goals of the TEP were to fairly assess each vendor's proposed system and to provide an accurate evaluation of each system's capabilities to the source selection authority, so the DoD could purchase a PACS that met the requirements in the RFP.

  2. Gatemon Benchmarking and Two-Qubit Operations

    NASA Astrophysics Data System (ADS)

    Casparis, L.; Larsen, T. W.; Olsen, M. S.; Kuemmeth, F.; Krogstrup, P.; Nygârd, J.; Petersson, K. D.; Marcus, C. M.

    2016-04-01

    Recent experiments have demonstrated superconducting transmon qubits with semiconductor nanowire Josephson junctions. These hybrid gatemon qubits utilize field effect tunability characteristic of semiconductors to allow complete qubit control using gate voltages, potentially a technological advantage over conventional flux-controlled transmons. Here, we present experiments with a two-qubit gatemon circuit. We characterize qubit coherence and stability and use randomized benchmarking to demonstrate single-qubit gate errors below 0.7% for all gates, including voltage-controlled Z rotations. We show coherent capacitive coupling between two gatemons and coherent swap operations. Finally, we perform a two-qubit controlled-phase gate with an estimated fidelity of 91%, demonstrating the potential of gatemon qubits for building scalable quantum processors.

  3. Gatemon Benchmarking and Two-Qubit Operation

    NASA Astrophysics Data System (ADS)

    Casparis, Lucas; Larsen, Thorvald; Olsen, Michael; Petersson, Karl; Kuemmeth, Ferdinand; Krogstrup, Peter; Nygard, Jesper; Marcus, Charles

    Recent experiments have demonstrated superconducting transmon qubits with semiconductor nanowire Josephson junctions. These hybrid gatemon qubits utilize field effect tunability singular to semiconductors to allow complete qubit control using gate voltages, potentially a technological advantage over conventional flux-controlled transmons. Here, we present experiments with a two-qubit gatemon circuit. We characterize qubit coherence and stability and use randomized benchmarking to demonstrate single-qubit gate errors of ~0.5 % for all gates, including voltage-controlled Z rotations. We show coherent capacitive coupling between two gatemons and coherent SWAP operations. Finally, we perform a two-qubit controlled-phase gate with an estimated fidelity of ~91 %, demonstrating the potential of gatemon qubits for building scalable quantum processors. We acknowledge financial support from Microsoft Project Q and the Danish National Research Foundation.

  4. Gatemon Benchmarking and Two-Qubit Operations.

    PubMed

    Casparis, L; Larsen, T W; Olsen, M S; Kuemmeth, F; Krogstrup, P; Nygård, J; Petersson, K D; Marcus, C M

    2016-04-15

    Recent experiments have demonstrated superconducting transmon qubits with semiconductor nanowire Josephson junctions. These hybrid gatemon qubits utilize field effect tunability characteristic of semiconductors to allow complete qubit control using gate voltages, potentially a technological advantage over conventional flux-controlled transmons. Here, we present experiments with a two-qubit gatemon circuit. We characterize qubit coherence and stability and use randomized benchmarking to demonstrate single-qubit gate errors below 0.7% for all gates, including voltage-controlled Z rotations. We show coherent capacitive coupling between two gatemons and coherent swap operations. Finally, we perform a two-qubit controlled-phase gate with an estimated fidelity of 91%, demonstrating the potential of gatemon qubits for building scalable quantum processors. PMID:27127949
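
    For readers unfamiliar with how a randomized-benchmarking error figure such as "below 0.7%" is extracted, the sketch below fits the standard exponential decay of sequence fidelity and converts the decay parameter to an average error per Clifford. The data are synthetic and this is the generic single-qubit analysis, not the authors' actual pipeline.

    ```python
    import numpy as np
    from scipy.optimize import curve_fit

    def decay(m, A, p, B):
        """Average sequence fidelity after m random Clifford gates."""
        return A * p ** m + B

    rng = np.random.default_rng(0)
    m = np.array([1, 5, 10, 20, 50, 100, 200])
    F = 0.5 * 0.99 ** m + 0.5 + rng.normal(0, 0.002, m.size)  # synthetic data

    (A, p, B), _ = curve_fit(decay, m, F, p0=(0.5, 0.98, 0.5))
    r = (1 - p) * (2 - 1) / 2      # error per Clifford, r = (1-p)(d-1)/d, d = 2
    print(f"estimated error per Clifford: {r:.4f}")           # ~0.005 here
    ```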

  5. Benchmarking of Neutron Production of Heavy-Ion Transport Codes

    SciTech Connect

    Remec, Igor; Ronningen, Reginald M.; Heilbronn, Lawrence

    2012-01-01

    Accurate prediction of radiation fields generated by heavy ion interactions is important in medical applications, space missions, and in design and operation of rare isotope research facilities. In recent years, several well-established computer codes in widespread use for particle and radiation transport calculations have been equipped with the capability to simulate heavy ion transport and interactions. To assess and validate these capabilities, we performed simulations of a series of benchmark-quality heavy ion experiments with the computer codes FLUKA, MARS15, MCNPX, and PHITS. We focus on the comparisons of secondary neutron production. Results are encouraging; however, further improvements in models and codes and additional benchmarking are required.

  6. Performance Benchmarking Tsunami Models for NTHMP's Inundation Mapping Activities

    NASA Astrophysics Data System (ADS)

    Horrillo, Juan; Grilli, Stéphan T.; Nicolsky, Dmitry; Roeber, Volker; Zhang, Joseph

    2015-03-01

    The coastal states and territories of the United States (US) are vulnerable to devastating tsunamis from near-field or far-field coseismic and underwater/subaerial landslide sources. Following the catastrophic 2004 Indian Ocean tsunami, the National Tsunami Hazard Mitigation Program (NTHMP) accelerated the development of public safety products for the mitigation of these hazards. In response to this initiative, US coastal states and territories sped up the process of developing, enhancing, or adopting tsunami models that can be used for developing inundation maps and evacuation plans. One of NTHMP's requirements is that all operational and inundation-based numerical (O&I) models used for such purposes be properly validated against established standards to ensure the reliability of tsunami inundation maps as well as to achieve a basic level of consistency between parallel efforts. The validation of several O&I models was considered during a workshop held in 2011 at Texas A&M University (Galveston). This validation was performed based on the existing standard (OAR-PMEL-135), which provides a list of benchmark problems (BPs) covering various tsunami processes that models must meet to be deemed acceptable. Here, we summarize key approaches followed, results, and conclusions of the workshop. Eight distinct tsunami models were validated and cross-compared by using a subset of the BPs listed in the OAR-PMEL-135 standard. Of the several BPs available, only two, based on laboratory experiments, are detailed here for the sake of brevity, as they are considered sufficiently comprehensive. Average relative errors associated with expected parameter values such as maximum surface amplitude/runup are estimated. The level of agreement with the reference data, reasons for discrepancies between model results, and some of the limitations are discussed. In general, dispersive models were found to perform better than nondispersive models, but differences were relatively small, in part
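
    A benchmark-problem comparison of this kind typically reduces to error statistics such as the one sketched below, the average relative error of modeled maximum runup against laboratory observations. The numbers are invented; the acceptance thresholds themselves are specified in the OAR-PMEL-135 standard.

    ```python
    import numpy as np

    observed = np.array([0.120, 0.085, 0.210])   # lab runup, m (invented)
    modeled  = np.array([0.112, 0.090, 0.195])   # model output, m (invented)

    avg_rel_err = np.mean(np.abs(modeled - observed) / observed)
    print(f"average relative error: {100 * avg_rel_err:.1f} %")   # ~6.6 %
    ```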

  7. The Growth of Benchmarking in Higher Education.

    ERIC Educational Resources Information Center

    Schofield, Allan

    2000-01-01

    Benchmarking is used in higher education to improve performance by comparison with other institutions. Types used include internal, external competitive, external collaborative, external transindustry, and implicit. Methods include ideal type (or gold) standard, activity-based benchmarking, vertical and horizontal benchmarking, and comparative…

  8. How Benchmarking and Higher Education Came Together

    ERIC Educational Resources Information Center

    Levy, Gary D.; Ronco, Sharron L.

    2012-01-01

    This chapter introduces the concept of benchmarking and how higher education institutions began to use benchmarking for a variety of purposes. Here, benchmarking is defined as a strategic and structured approach whereby an organization compares aspects of its processes and/or outcomes to those of another organization or set of organizations to…

  9. The Impact of Previous Schooling Experiences on a Quaker High School's Graduating Students' College Entrance Exam Scores, Parents' Expectations, and College Acceptance Outcomes

    ERIC Educational Resources Information Center

    Galusha, Debbie K.

    2010-01-01

    The purpose of the study is to determine the impact of previous private, public, home, or international schooling experiences on a Quaker high school's graduating students' college entrance composite exam scores, parents' expectations, and college attendance outcomes. The study's results suggest that regardless of previous private, public, home,…

  10. New NAS Parallel Benchmarks Results

    NASA Technical Reports Server (NTRS)

    Yarrow, Maurice; Saphir, William; VanderWijngaart, Rob; Woo, Alex; Kutler, Paul (Technical Monitor)

    1997-01-01

    NPB2 (NAS (NASA Advanced Supercomputing) Parallel Benchmarks 2) is an implementation, based on Fortran and the MPI (message passing interface) message passing standard, of the original NAS Parallel Benchmark specifications. NPB2 programs are run with little or no tuning, in contrast to NPB vendor implementations, which are highly optimized for specific architectures. NPB2 results complement, rather than replace, NPB results. Because they have not been optimized by vendors, NPB2 implementations approximate the performance a typical user can expect for a portable parallel program on distributed memory parallel computers. Together these results provide an insightful comparison of the real-world performance of high-performance computers. New NPB2 features: New implementation (CG), new workstation class problem sizes, new serial sample versions, more performance statistics.

  11. Benchmarking neuromorphic systems with Nengo

    PubMed Central

    Bekolay, Trevor; Stewart, Terrence C.; Eliasmith, Chris

    2015-01-01

    Nengo is a software package for designing and simulating large-scale neural models. Nengo is architected such that the same Nengo model can be simulated on any of several Nengo backends with few to no modifications. Backends translate a model to specific platforms, which include GPUs and neuromorphic hardware. Nengo also contains a large test suite that can be run with any backend and focuses primarily on functional performance. We propose that Nengo's large test suite can be used to benchmark neuromorphic hardware's functional performance and simulation speed in an efficient, unbiased, and future-proof manner. We implement four benchmark models and show that Nengo can collect metrics across five different backends that identify situations in which some backends perform more accurately or quickly. PMID:26539076
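
    A minimal benchmark in this spirit, assuming only the public Nengo API, times a simulation and scores its functional accuracy; swapping the `Simulator` class (for example, one provided by an OpenCL or neuromorphic backend) repeats the measurement on another backend. This is an illustrative sketch, not one of the paper's four benchmark models.

    ```python
    import time
    import numpy as np
    import nengo

    with nengo.Network() as model:
        stim = nengo.Node(lambda t: np.sin(2 * np.pi * t))
        ens = nengo.Ensemble(n_neurons=100, dimensions=1)
        nengo.Connection(stim, ens)
        probe = nengo.Probe(ens, synapse=0.01)

    start = time.perf_counter()
    with nengo.Simulator(model) as sim:   # swap Simulator class per backend
        sim.run(1.0)
    wall = time.perf_counter() - start

    # Functional metric: RMSE of the decoded output vs. the ideal signal.
    ideal = np.sin(2 * np.pi * sim.trange())
    rmse = np.sqrt(np.mean((sim.data[probe][:, 0] - ideal) ** 2))
    print(f"wall time {wall:.2f} s, RMSE {rmse:.3f}")
    ```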

  12. HS06 Benchmark for an ARM Server

    NASA Astrophysics Data System (ADS)

    Kluth, Stefan

    2014-06-01

    We benchmarked an ARM cortex-A9 based server system with a four-core CPU running at 1.1 GHz. The system used Ubuntu 12.04 as operating system and the HEPSPEC 2006 (HS06) benchmarking suite was compiled natively with gcc-4.4 on the system. The benchmark was run for various settings of the relevant gcc compiler options. We did not find significant influence from the compiler options on the benchmark result. The final HS06 benchmark result is 10.4.

  13. MPI Multicore Torus Communication Benchmark

    2008-02-05

    The MPI Multicore Torus Communications Benchmark (TorusTest) measures the aggregate bandwidth across all six links from/to any multicore node in a logical torus. It can run in two modes, using either a static or a random mapping of tasks to torus locations. The former can be used to achieve optimal mappings; the latter samples the aggregate bandwidths that can be achieved with varying node mappings.
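
    An illustrative mpi4py sketch of the measurement idea follows: each rank exchanges a buffer with its six logical-torus neighbors and reports an aggregate bandwidth. The torus construction and message size here are simplified placeholders, not TorusTest's actual implementation.

    ```python
    import numpy as np
    from mpi4py import MPI

    comm = MPI.COMM_WORLD
    dims = MPI.Compute_dims(comm.Get_size(), 3)      # logical 3-D torus shape
    cart = comm.Create_cart(dims, periods=[True] * 3)

    nbytes = 1 << 20
    sendbuf = np.ones(nbytes, dtype=np.uint8)
    recvbuf = np.empty_like(sendbuf)

    t0 = MPI.Wtime()
    for axis in range(3):                            # x, y, z
        for disp in (+1, -1):                        # both directions per axis
            src, dst = cart.Shift(axis, disp)
            cart.Sendrecv(sendbuf, dest=dst, recvbuf=recvbuf, source=src)
    elapsed = MPI.Wtime() - t0

    gb_per_s = 6 * nbytes / elapsed / 1e9            # all six links combined
    if cart.Get_rank() == 0:
        print(f"aggregate per-node bandwidth ~ {gb_per_s:.2f} GB/s")
    ```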

  14. Restaurant Energy Use Benchmarking Guideline

    SciTech Connect

    Hedrick, R.; Smith, V.; Field, K.

    2011-07-01

    A significant operational challenge for food service operators is defining energy use benchmark metrics to compare against the performance of individual stores. Without metrics, multiunit operators and managers have difficulty identifying which stores in their portfolios require extra attention to bring their energy performance in line with expectations. This report presents a method whereby multiunit operators may use their own utility data to create suitable metrics for evaluating their operations.
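
    The core of such a method is normalizing each store's utility data by a business driver and comparing against a portfolio statistic. The sketch below uses floor area and a median-based flag; the figures and the 25% review threshold are invented placeholders, not the guideline's values.

    ```python
    stores = {                      # store: (kWh/year, square feet), invented
        "store_01": (425_000, 2_400),
        "store_02": (610_000, 2_100),
        "store_03": (380_000, 2_500),
    }

    # Energy use intensity (EUI) per store, kWh per square foot per year.
    eui = {name: kwh / sqft for name, (kwh, sqft) in stores.items()}
    median_eui = sorted(eui.values())[len(eui) // 2]

    for name, value in sorted(eui.items()):
        flag = "REVIEW" if value > 1.25 * median_eui else "ok"
        print(f"{name}: {value:.0f} kWh/ft2-yr  {flag}")
    ```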

  15. Why Do Women Accept the Rape Myth?

    ERIC Educational Resources Information Center

    Tabone, Christopher; And Others

    The rape myth, defined as prejudicial, stereotyped, or false beliefs about rape, rape victims, and rapists, is accepted by individuals from varied walks of life, including women. It has been suggested that rape myth acceptance (RMA) among women serves a protective function by enabling women to dissociate themselves from a rape victim's experience.…

  16. An introduction to benchmarking in healthcare.

    PubMed

    Benson, H R

    1994-01-01

    Benchmarking--the process of establishing a standard of excellence and comparing a business function or activity, a product, or an enterprise as a whole with that standard--will be used increasingly by healthcare institutions to reduce expenses and simultaneously improve product and service quality. As a component of total quality management, benchmarking is a continuous process by which an organization can measure and compare its own processes with those of organizations that are leaders in a particular area. Benchmarking should be viewed as a part of quality management programs, not as a replacement. There are four kinds of benchmarking: internal, competitive, functional and generic. With internal benchmarking, functions within an organization are compared with each other. Competitive benchmarking partners do business in the same market and provide a direct comparison of products or services. Functional and generic benchmarking are performed with organizations which may have a specific similar function, such as payroll or purchasing, but which otherwise are in a different business. Benchmarking must be a team process because the outcome will involve changing current practices, with effects felt throughout the organization. The team should include members who have subject knowledge; communications and computer proficiency; skills as facilitators and outside contacts; and sponsorship of senior management. Benchmarking requires quantitative measurement of the subject. The process or activity that you are attempting to benchmark will determine the types of measurements used. Benchmarking metrics usually can be classified in one of four categories: productivity, quality, time and cost-related. PMID:10139084

  17. Data Acceptance Criteria for Standardized Human-Associated Fecal Source Identification Quantitative Real-Time PCR Methods.

    PubMed

    Shanks, Orin C; Kelty, Catherine A; Oshiro, Robin; Haugland, Richard A; Madi, Tania; Brooks, Lauren; Field, Katharine G; Sivaganesan, Mano

    2016-05-01

    There is growing interest in the application of human-associated fecal source identification quantitative real-time PCR (qPCR) technologies for water quality management. The transition from a research tool to a standardized protocol requires a high degree of confidence in data quality across laboratories. Data quality is typically determined through a series of specifications that ensure good experimental practice and the absence of bias in the results due to DNA isolation and amplification interferences. However, there is currently a lack of consensus on how best to evaluate and interpret human fecal source identification qPCR experiments. This is, in part, due to the lack of standardized protocols and information on interlaboratory variability under conditions for data acceptance. The aim of this study is to provide users and reviewers with a complete series of conditions for data acceptance derived from a multiple laboratory data set using standardized procedures. To establish these benchmarks, data from HF183/BacR287 and HumM2 human-associated qPCR methods were generated across 14 laboratories. Each laboratory followed a standardized protocol utilizing the same lot of reference DNA materials, DNA isolation kits, amplification reagents, and test samples to generate comparable data. After removal of outliers, a nested analysis of variance (ANOVA) was used to establish proficiency metrics that include lab-to-lab, replicate testing within a lab, and random error for amplification inhibition and sample processing controls. Other data acceptance measurements included extraneous DNA contamination assessments (no-template and extraction blank controls) and calibration model performance (correlation coefficient, amplification efficiency, and lower limit of quantification). To demonstrate the implementation of the proposed standardized protocols and data acceptance criteria, comparable data from two additional laboratories were reviewed. The data acceptance criteria
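
    Two of the calibration-model acceptance measurements named above, amplification efficiency and the correlation coefficient, can be derived from a standard-curve fit as sketched below. The Cq values are invented, and the acceptance windows in the closing comment are typical rules of thumb, not the study's actual criteria.

    ```python
    import numpy as np

    log10_copies = np.log10([1e1, 1e2, 1e3, 1e4, 1e5])   # dilution series
    cq = np.array([33.1, 29.8, 26.4, 23.1, 19.8])        # measured Cq values

    slope, intercept = np.polyfit(log10_copies, cq, 1)
    r = np.corrcoef(log10_copies, cq)[0, 1]
    efficiency = 10 ** (-1.0 / slope) - 1.0              # E = 10^(-1/slope) - 1

    print(f"slope {slope:.2f}, R^2 {r**2:.4f}, efficiency {100*efficiency:.0f} %")
    # Rules of thumb are ~90-110 % efficiency and R^2 >= 0.98; the exact
    # acceptance criteria should be taken from the published protocol.
    ```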

  18. Hydrologic information server for benchmark precipitation dataset

    NASA Astrophysics Data System (ADS)

    McEnery, John A.; McKee, Paul W.; Shelton, Gregory P.; Ramsey, Ryan W.

    2013-01-01

    This paper will present the methodology and overall system development by which a benchmark dataset of precipitation information has been made available. Rainfall is the primary driver of the hydrologic cycle. High quality precipitation data is vital for hydrologic models, hydrometeorologic studies and climate analysis, and hydrologic time series observations are important to many water resources applications. Over the past two decades, with the advent of NEXRAD radar, science to measure and record rainfall has improved dramatically. However, much existing data has not been readily available for public access or transferable among the agricultural, engineering and scientific communities. This project takes advantage of the existing CUAHSI Hydrologic Information System ODM model and tools to bridge the gap between data storage and data access, providing an accepted standard interface for internet access to the largest time-series dataset of NEXRAD precipitation data ever assembled. This research effort has produced an operational data system to ingest, transform, load and then serve one of most important hydrologic variable sets.

  19. "I pray that they accept me without scolding:" experiences with disengagement and re-engagement in HIV care and treatment services in Tanzania.

    PubMed

    Layer, Erica H; Brahmbhatt, Heena; Beckham, Sarah W; Ntogwisangu, Jacob; Mwampashi, Ard; Davis, Wendy W; Kerrigan, Deanna L; Kennedy, Caitlin E

    2014-09-01

    HIV care and treatment programs in sub-Saharan Africa have been remarkably successful, but disengagement from care by people living with HIV (PLHIV) remains high. The goal of this study was to explore the experiences of PLHIV who disengaged from HIV care in Iringa, Tanzania. We conducted a series of three longitudinal, semi-structured interviews with 14 PLHIV who had disengaged from ART programs for a total of 37 interviews. Narrative analysis was used to identify key themes. Our findings indicate that an individual's decision to disengage from care often resulted from harsh and disrespectful treatment from providers following missed appointments. Once disengaged, participants reported a strong desire to re-engage in care but also reluctance to return due to fear of further mistreatment. Participants who successfully re-engaged in care during the course of this study leveraged social support networks to facilitate this process, but often felt guilt and shame for breaking clinic rules and believed themselves to be at fault for disengagement. Developing strategies to minimize disengagement and facilitate re-engagement through more flexible attendance policies, improved client-provider interactions, and outreach and support for disengaged clients could increase retention and re-engagement in HIV care and treatment programs. PMID:25093247

  20. ``Observation, Experiment, and the Future of Physics'' John G. King's acceptance speech for the 2000 Oersted Medal presented by the American Association of Physics Teachers, 18 January 2000

    NASA Astrophysics Data System (ADS)

    King, John G.

    2001-01-01

    Looking at our built world, most physicists see order where many others see magic. This view of order should be available to all, and physics would flourish better in an appreciative society. Despite the remarkable developments in the teaching of physics in the last half century, too many people, whether they've had physics courses or not, don't have an inkling of the power and value of our subject, whose importance ranges from the practical to the psychological. We need to supplement people's experiences in ways that are applicable to different groups, from physics majors to people without formal education. I will describe and explain an ambitious program to stimulate scientific, engineering, and technological interest and understanding through direct observation of a wide range of phenomena and experimentation with them. For the very young: toys, playgrounds, kits, projects. For older students: indoor showcases, projects, and courses taught in intensive form. For all ages: more instructive everyday surroundings with outdoor showcases and large demonstrations.

  1. The effectiveness of providing peer benchmarked feedback to hip replacement surgeons based on patient-reported outcome measures—results from the PROFILE (Patient-Reported Outcomes: Feedback Interpretation and Learning Experiment) trial: a cluster randomised controlled study

    PubMed Central

    Boyce, Maria B; Browne, John P

    2015-01-01

    Objective To test whether providing surgeons with peer benchmarked feedback about patient-reported outcomes is effective in improving patient outcomes. Design Cluster randomised controlled trial. Setting Secondary care—Ireland. Participants Surgeons were recruited through the Irish Institute of Trauma and Orthopaedic Surgery, and patients were recruited in hospitals prior to surgery. We randomly allocated 21 surgeons and 550 patients. Intervention Surgeons in the intervention group received peer benchmarked patient-reported outcome measures (PROMs) feedback and education. Main outcome variable Postoperative Oxford Hip Score (OHS). Results Primary outcome data were available for 11 intervention surgeons with responsibility for 230 patients and 10 control surgeons with responsibility for 228 patients. The mean postoperative OHS for the intervention group was 40.8 (95% CI 39.8 to 41.7) and for the control group was 41.9 (95% CI 41.1 to 42.7). The adjusted effect estimate was −1.1 (95% CI −2.4 to 0.2, p=0.09). Secondary outcomes were the Hip Osteoarthritis Outcome Score (HOOS), EQ-5D and the proportion of patients reporting a problem after surgery. The mean postoperative HOOS for the intervention group was 36.2 and for the control group was 37.1. The adjusted effect estimate was −1.1 (95% CI −2.4 to 0.3, p=0.1). The mean postoperative EQ-5D for the intervention group was 0.85 and for the control group was 0.87. The adjusted effect estimate was −0.02 (95% CI −0.05 to 0.008, p=0.2). 27% of intervention patients and 24% of control patients reported at least one complication after surgery (adjusted OR=1.2, 95% CI 0.6 to 2.3, p=0.6). Conclusions Outcomes for patients operated on by surgeons who had received peer benchmarked PROMs data were not statistically different from the outcomes of patients operated on by surgeons who did not receive feedback. PROMs information alone seems to be insufficient to identify opportunities for quality improvement. Trial

  2. Gaia FGK benchmark stars: Metallicity

    NASA Astrophysics Data System (ADS)

    Jofré, P.; Heiter, U.; Soubiran, C.; Blanco-Cuaresma, S.; Worley, C. C.; Pancino, E.; Cantat-Gaudin, T.; Magrini, L.; Bergemann, M.; González Hernández, J. I.; Hill, V.; Lardo, C.; de Laverny, P.; Lind, K.; Masseron, T.; Montes, D.; Mucciarelli, A.; Nordlander, T.; Recio Blanco, A.; Sobeck, J.; Sordo, R.; Sousa, S. G.; Tabernero, H.; Vallenari, A.; Van Eck, S.

    2014-04-01

    Context. To calibrate automatic pipelines that determine atmospheric parameters of stars, one needs a sample of stars, or "benchmark stars", with well-defined parameters to be used as a reference. Aims: We provide detailed documentation of the iron abundance determination of the 34 FGK-type benchmark stars that are selected to be the pillars for calibration of the one billion Gaia stars. They cover a wide range of temperatures, surface gravities, and metallicities. Methods: Up to seven different methods were used to analyze an observed spectral library of high resolution and high signal-to-noise ratio. The metallicity was determined by assuming a value of effective temperature and surface gravity obtained from fundamental relations; that is, these parameters were known a priori and independently from the spectra. Results: We present a set of metallicity values obtained in a homogeneous way for our sample of benchmark stars. In addition to this value, we provide detailed documentation of the associated uncertainties. Finally, we report a value of the metallicity of the cool giant ψ Phe for the first time. Based on NARVAL and HARPS data obtained within the Gaia DPAC (Data Processing and Analysis Consortium) and coordinated by the GBOG (Ground-Based Observations for Gaia) working group and on data retrieved from the ESO-ADP database. Tables 6-76 are only available at the CDS via anonymous ftp to http://cdsarc.u-strasbg.fr (ftp://130.79.128.5) or via http://cdsarc.u-strasbg.fr/viz-bin/qcat?J/A+A/564/A133

  3. NASA Software Engineering Benchmarking Effort

    NASA Technical Reports Server (NTRS)

    Godfrey, Sally; Rarick, Heather

    2012-01-01

    Benchmarking was very interesting and provided a wealth of information. (1) We saw potential solutions to some of our "top 10" issues. (2) We gained an assessment of where NASA stands in relation to other aerospace/defense groups. We formed new contacts and potential collaborations. (1) Several organizations sent us examples of their templates and processes. (2) Many of the organizations were interested in future collaboration: sharing of training, metrics, Capability Maturity Model Integration (CMMI) appraisers, instructors, etc. We received feedback from some of our contractors/partners. (1) They expressed a desire to participate in our training and to provide feedback on procedures. (2) They welcomed the opportunity to provide feedback on working with NASA.

  4. INTEGRAL BENCHMARK DATA FOR NUCLEAR DATA TESTING THROUGH THE ICSBEP AND THE NEWLY ORGANIZED IRPHEP

    SciTech Connect

    J. Blair Briggs; Lori Scott; Yolanda Rugama; Enrico Satori

    2007-04-01

    The status of the International Criticality Safety Benchmark Evaluation Project (ICSBEP) was last reported at the International Conference on Nuclear Data for Science and Technology (ND-2004) in Santa Fe, New Mexico. Since that time, the number and type of integral benchmarks have increased significantly. Included in the ICSBEP Handbook are criticality-alarm/shielding and fundamental physics benchmarks in addition to the traditional critical/subcritical benchmark data. Since ND-2004, a reactor physics counterpart to the ICSBEP, the International Reactor Physics Experiment Evaluation Project (IRPhEP), has been initiated. The IRPhEP is patterned after the ICSBEP but focuses on other integral measurements, such as buckling, spectral characteristics, reactivity effects, reactivity coefficients, kinetics measurements, reaction-rate and power distributions, nuclide compositions, and other miscellaneous-type measurements, in addition to the critical configuration. The status of these two projects is discussed and selected benchmarks are highlighted in this paper.

  5. Benchmark Evaluation of Fuel Effect and Material Worth Measurements for a Beryllium-Reflected Space Reactor Mockup

    SciTech Connect

    Marshall, Margaret A.; Bess, John D.

    2015-02-01

    The critical configurations of the small, compact critical assembly (SCCA) experiments performed at the Oak Ridge Critical Experiments Facility (ORCEF) in 1962-1965 have been evaluated as acceptable benchmark experiments for inclusion in the International Handbook of Evaluated Criticality Safety Benchmark Experiments. The initial intent of these experiments was to support the design of the Medium Power Reactor Experiment (MPRE) program, whose purpose was to study "power plants for the production of electrical power in space vehicles." The third configuration in this series of experiments was a beryllium-reflected assembly of stainless-steel-clad, highly enriched uranium (HEU)-O2 fuel, a mockup of a potassium-cooled space power reactor. Reactivity measurements, cadmium-ratio spectral measurements, and fission-rate measurements were performed through the core and top reflector. Fuel-effect worths and the worths of neutron-moderating and neutron-absorbing materials were also measured in the assembly fuel region. The cadmium ratios, fission rates, and worth measurements were evaluated for inclusion in the International Handbook of Evaluated Criticality Safety Benchmark Experiments. The fuel-tube effect and the neutron-moderating and neutron-absorbing material worth measurements are the focus of this paper. Additionally, a measurement of the worth of potassium filling the core region was performed but has not yet been evaluated. Pellets of 93.15 wt.% enriched uranium dioxide (UO2) were stacked in 30.48-cm-tall stainless steel fuel tubes (0.3-cm-tall end caps). Each fuel tube held 26 pellets, with a total mass of 295.8 g of UO2 per tube. A total of 253 tubes were arranged in a 1.506-cm triangular lattice. An additional 7-tube-cluster critical configuration was also measured but not used for any physics measurements. The core was surrounded on all sides by a beryllium reflector. The fuel-effect worths were measured by removing fuel tubes at various radii. An accident scenario

  6. Evaluation of Plutonium Hemisphere Critical Experiments Partially Reflected by Steel and Oil

    SciTech Connect

    John D. Bess

    2012-01-01

    A series of 15 critical experiments performed at the Rocky Flats Critical Mass Laboratory in the late 1960s were evaluated and then determined to represent acceptable benchmark experiments for the validation of calculational methods. This series of experiments was part of a larger set of experiments performed to evaluate operational safety margins at the Rocky Flats Plant. The experiments consisted of bare plutonium metal hemishells reflected by steel hemishells of increasing thickness and motor oil. The hemishell assembly was suspended within dual aluminum tanks. Criticality was achieved by pumping oil into the tanks such that effectively infinite reflection was achieved in all directions except directly above the assembly; then the critical oil height was recorded. The results of these experiments had been initially ignored because early computational methods had been inadequate to analyze partially-reflected configurations. The dominant uncertainties include the uncertainty in the average plutonium density and the composition of materials in the gaps between the plutonium hemishells. Simple and detailed benchmark models were developed. Eigenvalue calculations using MCNP5 and ENDF/B-VII.0 were within 2σ of the benchmark values. This benchmark evaluation has been added to the ICSBEP Handbook.

  7. Acceptance, values, and probability.

    PubMed

    Steel, Daniel

    2015-10-01

    This essay makes a case for regarding personal probabilities used in Bayesian analyses of confirmation as objects of acceptance and rejection. That in turn entails that personal probabilities are subject to the argument from inductive risk, which aims to show non-epistemic values can legitimately influence scientific decisions about which hypotheses to accept. In a Bayesian context, the argument from inductive risk suggests that value judgments can influence decisions about which probability models to accept for likelihoods and priors. As a consequence, if the argument from inductive risk is sound, then non-epistemic values can affect not only the level of evidence deemed necessary to accept a hypothesis but also degrees of confirmation themselves. PMID:26386533

  8. Newbery Medal Acceptance.

    ERIC Educational Resources Information Center

    Freedman, Russell

    1988-01-01

    Presents the Newbery Medal acceptance speech of Russell Freedman, writer of children's nonfiction. Discusses the place of nonfiction in the world of children's literature, the evolution of children's biographies, and the author's work on "Lincoln." (ARH)

  9. Newbery Medal Acceptance.

    ERIC Educational Resources Information Center

    Cleary, Beverly

    1984-01-01

    Reprints the text of Ms. Cleary's Newbery medal acceptance speech in which she gives personal history concerning her development as a writer and her response to the letters she receives from children. (CRH)

  10. Caldecott Medal Acceptance.

    ERIC Educational Resources Information Center

    Provensen, Alice; Provensen, Martin

    1984-01-01

    Reprints the text of the Provensens' Caldecott medal acceptance speech in which they describe their early interest in libraries and literature, the collaborative aspect of their work, and their current interest in aviation. (CRH)

  11. Computational Chemistry Comparison and Benchmark Database

    National Institute of Standards and Technology Data Gateway

    SRD 101 NIST Computational Chemistry Comparison and Benchmark Database (Web, free access)   The NIST Computational Chemistry Comparison and Benchmark Database is a collection of experimental and ab initio thermochemical properties for a selected set of molecules. The goals are to provide a benchmark set of molecules for the evaluation of ab initio computational methods and allow the comparison between different ab initio computational methods for the prediction of thermochemical properties.

  12. Benchmarking for Excellence and the Nursing Process

    NASA Technical Reports Server (NTRS)

    Sleboda, Claire

    1999-01-01

    Nursing is a service profession. The services provided are essential to life and welfare. Therefore, setting the benchmark for high quality care is fundamental. Exploring the definition of a benchmark value will help to determine a best practice approach. A benchmark is the descriptive statement of a desired level of performance against which quality can be judged. It must be sufficiently well understood by managers and personnel in order that it may serve as a standard against which to measure value.

  13. Nuclear Data Performance Testing Using Sensitive, but Less Frequently Used ICSBEP Benchmarks

    SciTech Connect

    J. Blair Briggs; John D. Bess

    2011-08-01

    The International Criticality Safety Benchmark Evaluation Project (ICSBEP) has published the International Handbook of Evaluated Criticality Safety Benchmark Experiments annually since 1995. The Handbook now spans over 51,000 pages with benchmark specifications for 4,283 critical, near critical, or subcritical configurations; 24 criticality alarm placement/shielding configurations with multiple dose points for each; and 200 configurations that have been categorized as fundamental physics measurements relevant to criticality safety applications. Benchmark data in the ICSBEP Handbook were originally intended for validation of criticality safety methods and data; however, the benchmark specifications are now used extensively for nuclear data testing. There are several, less frequently used benchmarks within the Handbook that are very sensitive to thorium and certain key structural and moderating materials. Calculated results for many of those benchmarks using modern nuclear data libraries suggest there is still room for improvement. These and other highly sensitive, but rarely quoted benchmarks are highlighted and data testing results provided using the Monte Carlo N-Particle Version 5 (MCNP5) code and continuous energy ENDF/B-V, VI.8, and VII.0, JEFF-3.1, and JENDL-3.3 nuclear data libraries.

  14. Pynamic: the Python Dynamic Benchmark

    SciTech Connect

    Lee, G L; Ahn, D H; de Supinksi, B R; Gyllenhaal, J C; Miller, P J

    2007-07-10

    Python is widely used in scientific computing to facilitate application development and to support features such as computational steering. Making full use of some of Python's popular features, which improve programmer productivity, leads to applications that access extremely high numbers of dynamically linked libraries (DLLs). As a result, some important Python-based applications severely stress a system's dynamic linking and loading capabilities and also cause significant difficulties for most development environment tools, such as debuggers. Furthermore, using the Python paradigm for large scale MPI-based applications can create significant file IO and further stress tools and operating systems. In this paper, we present Pynamic, the first benchmark program to support configurable emulation of a wide-range of the DLL usage of Python-based applications for large scale systems. Pynamic has already accurately reproduced system software and tool issues encountered by important large Python-based scientific applications on our supercomputers. Pynamic provided insight for our system software and tool vendors, and our application developers, into the impact of several design decisions. As we describe the Pynamic benchmark, we will highlight some of the issues discovered in our large scale system software and tools using Pynamic.

  15. TRENDS: Compendium of Benchmark Objects

    NASA Astrophysics Data System (ADS)

    Gonzales, Erica J.; Crepp, Justin R.; Bechter, Eric; Johnson, John A.; Montet, Benjamin T.; Howard, Andrew; Marcy, Geoffrey W.; Isaacson, Howard T.

    2016-01-01

    The physical properties of faint stellar and substellar objects are highly uncertain. For example, the masses of brown dwarfs are usually inferred using theoretical models, which are age dependent and have yet to be properly tested. With the goal of identifying new benchmark objects through observations with NIRC2 at Keck, we have carried out a comprehensive adaptive-optics survey as part of the TRENDS (TaRgetting bENchmark-objects with Doppler Spectroscopy) high-contrast imaging program. TRENDS targets nearby (d < 100 pc), Sun-like stars showing long-term radial velocity accelerations. We present the discovery of 28 confirmed, co-moving companions as well as 19 strong candidate companions to F-, G-, and K-stars with well-determined parallaxes and metallicities. Benchmark objects of this nature lend themselves to a three-dimensional orbit determination that will ultimately yield a precise dynamical mass. Unambiguous mass measurements of very low mass companions, which straddle the hydrogen-burning boundary, will allow our compendium of objects to serve as excellent testbeds to substantiate theoretical evolutionary and atmospheric models in regimes where they currently break down (low temperature, low mass, and old age).

  16. Characterizing universal gate sets via dihedral benchmarking

    NASA Astrophysics Data System (ADS)

    Carignan-Dugas, Arnaud; Wallman, Joel J.; Emerson, Joseph

    2015-12-01

    We describe a practical experimental protocol for robustly characterizing the error rates of non-Clifford gates associated with dihedral groups, including small single-qubit rotations. Our dihedral benchmarking protocol is a generalization of randomized benchmarking that relaxes the usual unitary 2-design condition. Combining this protocol with existing randomized benchmarking schemes enables practical universal gate sets for quantum information processing to be characterized in a way that is robust against state-preparation and measurement errors. In particular, our protocol enables direct benchmarking of the π/8 gate even under the gate-dependent error model that is expected in leading approaches to fault-tolerant quantum computation.

  17. Benchmarks for acute stroke care delivery

    PubMed Central

    Hall, Ruth E.; Khan, Ferhana; Bayley, Mark T.; Asllani, Eriola; Lindsay, Patrice; Hill, Michael D.; O'Callaghan, Christina; Silver, Frank L.; Kapral, Moira K.

    2013-01-01

    Objective Despite widespread interest in many jurisdictions in monitoring and improving the quality of stroke care delivery, benchmarks for most stroke performance indicators have not been established. The objective of this study was to develop data-derived benchmarks for acute stroke quality indicators. Design Nine key acute stroke quality indicators were selected from the Canadian Stroke Best Practice Performance Measures Manual. Participants A population-based retrospective sample of patients discharged from 142 hospitals in Ontario, Canada, between 1 April 2008 and 31 March 2009 (N = 3191) was used to calculate hospital rates of performance and benchmarks. Intervention The Achievable Benchmark of Care (ABC™) methodology was used to create benchmarks based on the performance of the upper 15% of patients in the top-performing hospitals. Main Outcome Measures Benchmarks were calculated for rates of neuroimaging, carotid imaging, stroke unit admission, dysphagia screening and administration of stroke-related medications. Results The following benchmarks were derived: neuroimaging within 24 h, 98%; admission to a stroke unit, 77%; thrombolysis among patients arriving within 2.5 h, 59%; carotid imaging, 93%; dysphagia screening, 88%; antithrombotic therapy, 98%; anticoagulation for atrial fibrillation, 94%; antihypertensive therapy, 92%; and lipid-lowering therapy, 77%. ABC™ acute stroke care benchmarks achieve or exceed the consensus-based targets required by Accreditation Canada, with the exception of dysphagia screening. Conclusions Benchmarks for nine hospital-based acute stroke care quality indicators have been established. These can be used in the development of standards for quality improvement initiatives. PMID:24141011
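
    One illustrative reading of the ABC™ construction described above: rank hospitals by their indicator rate and pool the best performers until they cover the stated fraction of patients. This is a sketch of the idea only, not the published algorithm (which, among other refinements, adjusts for hospitals with small denominators):

        def achievable_benchmark(hospitals, fraction=0.15):
            """Pooled rate of the top-ranked hospitals that together
            cover `fraction` of all patients. `hospitals` is a list of
            (numerator, denominator) pairs, one per hospital."""
            total = sum(d for _, d in hospitals)
            ranked = sorted(hospitals, key=lambda nd: nd[0] / nd[1],
                            reverse=True)
            num = den = 0
            for n, d in ranked:
                num, den = num + n, den + d
                if den >= fraction * total:
                    break
            return num / den

        # Hypothetical (passed, treated) counts for three hospitals:
        print(achievable_benchmark([(95, 100), (160, 200), (300, 500)]))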

  18. Method and system for benchmarking computers

    DOEpatents

    Gustafson, John L.

    1993-09-14

    A testing system and method for benchmarking computer systems. The system includes a store containing a scalable set of tasks to be performed to produce a solution in ever-increasing degrees of resolution as a larger number of the tasks are performed. A timing and control module allots to each computer a fixed benchmarking interval in which to perform the stored tasks. Means are provided for determining, after completion of the benchmarking interval, the degree of progress through the scalable set of tasks and for producing a benchmarking rating relating to the degree of progress for each computer.
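
    A minimal sketch of the patented scheme: a fixed time budget, a workload that scales to ever-finer resolution, and a rating based on the progress achieved. The task and names here are illustrative assumptions, not the patent's embodiment:

        import time

        def fixed_time_rating(task, budget_s=2.0):
            """Run `task(resolution)` at ever-finer resolution until the
            time budget expires; the rating is the finest resolution
            completed (sketch of the fixed-interval idea)."""
            deadline = time.monotonic() + budget_s
            resolution, rating = 1, 0
            while time.monotonic() < deadline:
                task(resolution)        # one pass of the scalable workload
                rating = resolution
                resolution *= 2
            return rating

        def midpoint_pi(n):
            """Example task: approximate pi by an n-interval midpoint rule."""
            h = 1.0 / n
            return sum(4.0 / (1.0 + ((i + 0.5) * h) ** 2)
                       for i in range(n)) * h

        print("rating:", fixed_time_rating(midpoint_pi))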

  19. The Zoo, Benchmarks & You: How To Reach the Oregon State Benchmarks with Zoo Resources.

    ERIC Educational Resources Information Center

    2002

    This document aligns Oregon state educational benchmarks and standards with Oregon Zoo resources. Benchmark areas examined include English, mathematics, science, social studies, and career and life roles. Brief descriptions of the programs offered by the zoo are presented. (SOE)

  20. Present Status and Extensions of the Monte Carlo Performance Benchmark

    NASA Astrophysics Data System (ADS)

    Hoogenboom, J. Eduard; Petrovic, Bojan; Martin, William R.

    2014-06-01

    The NEA Monte Carlo Performance benchmark was started in 2011 with the aim of monitoring, over the years, the ability to perform a full-size Monte Carlo reactor core calculation with detailed power production for each fuel pin, including its axial distribution. This paper gives an overview of the results contributed thus far. It shows that reaching a statistical accuracy of 1% for most of the small fuel zones requires about 100 billion neutron histories. The efficiency of parallel execution of Monte Carlo codes on large numbers of processor cores shows clear limitations on computer clusters with common compute nodes. On true supercomputers, however, the speedup of parallel calculations continues to increase up to large numbers of processor cores. More experience with calculations on true supercomputers using large numbers of processors is needed in order to predict whether the requested calculations can be done in a short time. As the specification of the reactor geometry for this benchmark is well suited for further investigations of full-core Monte Carlo calculations, and a need is felt to test issues other than computational performance, proposals are presented for extending the benchmark to a suite of problems: evaluating fission-source convergence for a system with a high dominance ratio, coupling with thermal-hydraulics calculations to evaluate the use of different temperatures and coolant densities, and studying the correctness and effectiveness of burnup calculations. Moreover, other contemporary proposals for a full-core calculation with realistic geometry and material composition are discussed.
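
    The 100-billion-history figure is consistent with 1/sqrt(N) counting statistics: a small tally zone that collects only a fraction f of all scores needs roughly N = 1/(f·σ²) total histories to reach a relative error σ. A back-of-envelope check, with the zone fraction assumed purely for illustration:

        def histories_needed(rel_err, zone_fraction):
            """Total histories N such that a zone scoring `zone_fraction`
            of all histories reaches relative error `rel_err`, under
            simple 1/sqrt(N) statistics (ignores the variance of the
            scores themselves; a sketch, not the benchmark's analysis)."""
            return 1.0 / (zone_fraction * rel_err ** 2)

        # Assume ~10 million small fuel zones sharing scores evenly:
        print(f"{histories_needed(0.01, 1e-7):.1e}")  # -> 1.0e+11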

  1. Sieve of Eratosthenes benchmarks for the Z8 FORTH microcontroller

    SciTech Connect

    Edwards, R.

    1989-02-01

    This report presents benchmarks for the Z8 FORTH microcontroller system that ORNL uses extensively in proving concepts and developing prototype test equipment for the Smart House Project. The results are based on the sieve of Eratosthenes algorithm, a calculation used extensively to rate computer systems and programming languages. Three benchmark refinements are presented, each showing how the execution speed of a FORTH program can be improved by use of a particular optimization technique. The last version of the FORTH benchmark shows that optimization is worth the effort: it executes 20 times faster than the Gilbreaths' widely published FORTH benchmark program. The National Association of Home Builders Smart House Project is a cooperative research and development effort being undertaken by American home builders and a number of major corporations serving the home building industry. The major goal of the project is to help the participating organizations incorporate advanced technology in communications, energy distribution, and appliance control products for American homes. This information is provided to help project participants use the Z8 FORTH prototyping microcontroller in developing Smart House concepts and equipment. The discussion is technical in nature and assumes some experience with microcontroller devices and the techniques used to develop software for them. 7 refs., 5 tabs.
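
    For reference, the algorithm behind these ratings is only a few lines; a Python transcription (the report's versions are, of course, in FORTH):

        def sieve(limit):
            """Sieve of Eratosthenes: return all primes up to `limit`."""
            flags = bytearray([1]) * (limit + 1)
            flags[0:2] = b"\x00\x00"            # 0 and 1 are not prime
            p = 2
            while p * p <= limit:
                if flags[p]:                    # p is prime; cross off
                    flags[p * p :: p] = bytearray(len(flags[p * p :: p]))
                p += 1
            return [i for i, alive in enumerate(flags) if alive]

        print(len(sieve(8190)), "primes")   # 8190 is the classic array size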

  2. Benchmarking Outcomes in the Critically Injured Burn Patient

    PubMed Central

    Klein, Matthew B.; Goverman, Jeremy; Hayden, Douglas L.; Fagan, Shawn P.; McDonald-Smith, Grace P.; Alexander, Andrew K.; Gamelli, Richard L.; Gibran, Nicole S.; Finnerty, Celeste C.; Jeschke, Marc G.; Arnoldo, Brett; Wispelwey, Bram; Mindrinos, Michael N.; Xiao, Wenzhong; Honari, Shari E.; Mason, Philip H.; Schoenfeld, David A.; Herndon, David N.; Tompkins, Ronald G.

    2014-01-01

    Objective To determine and compare outcomes with accepted benchmarks in burn care at six academic burn centers. Background Since the 1960s, U.S. morbidity and mortality rates have declined tremendously for burn patients, likely related to improvements in surgical and critical care treatment. We describe the baseline patient characteristics and well-defined outcomes for major burn injuries. Methods We followed 300 adults and 241 children from 2003–2009 through hospitalization using standard operating procedures developed at study onset. We created an extensive database on patient and injury characteristics, anatomic and physiological derangement, clinical treatment, and outcomes. These data were compared with existing benchmarks in burn care. Results Study patients were critically injured, as demonstrated by mean %TBSA (41.2±18.3 for adults and 57.8±18.2 for children) and presence of inhalation injury in 38% of the adults and 54.8% of the children. Mortality in adults was 14.1% for those less than 55 years old and 38.5% for those aged ≥55 years. Mortality in patients less than 17 years old was 7.9%. Overall, the multiple organ failure rate was 27%. When controlling for age and %TBSA, presence of inhalation injury was not significant. Conclusions This study provides the current benchmark for major burn patients. Mortality rates, notwithstanding significant %TBSA and presence of inhalation injury, have declined significantly compared with previous benchmarks. Modern-day surgical and medically intensive management has markedly improved to the point where we can expect patients less than 55 years old with severe burn injuries and inhalation injury to survive these devastating conditions. PMID:24722222

  3. HyspIRI Low Latency Concept and Benchmarks

    NASA Technical Reports Server (NTRS)

    Mandl, Dan

    2010-01-01

    Topics include HyspIRI low latency data ops concept, HyspIRI data flow, ongoing efforts, experiment with Web Coverage Processing Service (WCPS) approach to injecting new algorithms into SensorWeb, low fidelity HyspIRI IPM testbed, compute cloud testbed, open cloud testbed environment, Global Lambda Integrated Facility (GLIF) and OCC collaboration with Starlight, delay tolerant network (DTN) protocol benchmarking, and EO-1 configuration for preliminary DTN prototype.

  4. HPC Analytics Support. Requirements for Uncertainty Quantification Benchmarks

    SciTech Connect

    Paulson, Patrick R.; Purohit, Sumit; Rodriguez, Luke R.

    2015-05-01

    This report outlines techniques for extending benchmark generation products so they support uncertainty quantification by benchmarked systems. We describe how uncertainty quantification requirements can be presented to candidate analytical tools supporting SPARQL. We describe benchmark data sets for evaluating uncertainty quantification, as well as an approach for using our benchmark generator to produce such data sets.

  5. A Privacy-Preserving Platform for User-Centric Quantitative Benchmarking

    NASA Astrophysics Data System (ADS)

    Herrmann, Dominik; Scheuer, Florian; Feustel, Philipp; Nowey, Thomas; Federrath, Hannes

    We propose a centralised platform for quantitative benchmarking of key performance indicators (KPI) among mutually distrustful organisations. Our platform offers users the opportunity to request an ad-hoc benchmarking for a specific KPI within a peer group of their choice. Architecture and protocol are designed to provide anonymity to its users and to hide the sensitive KPI values from other clients and the central server. To this end, we integrate user-centric peer group formation, exchangeable secure multi-party computation protocols, short-lived ephemeral key pairs as pseudonyms, and attribute certificates. We show by empirical evaluation of a prototype that the performance is acceptable for reasonably sized peer groups.
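
    The core trick that hides individual KPI values — secure multi-party computation — can be illustrated with additive secret sharing: each organisation splits its KPI into random shares that sum to the true value, so only the group aggregate is ever reconstructed. A generic sketch, not the paper's protocol (which also covers peer-group formation, ephemeral pseudonyms, and attribute certificates):

        import random

        MODULUS = 2 ** 61 - 1  # arithmetic is done modulo a large prime

        def make_shares(value, n_parties):
            """Split `value` into n additive shares summing to it mod
            MODULUS; any n-1 shares alone look uniformly random."""
            shares = [random.randrange(MODULUS) for _ in range(n_parties - 1)]
            shares.append((value - sum(shares)) % MODULUS)
            return shares

        kpis = [120, 340, 95]                 # three private KPI values
        all_shares = [make_shares(v, 3) for v in kpis]
        # Party j sums the j-th share of every organisation ...
        partial = [sum(s[j] for s in all_shares) % MODULUS for j in range(3)]
        # ... so only the group aggregate is ever reconstructed:
        print(sum(partial) % MODULUS)         # -> 555, the KPI total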

  6. Benchmarking ICRF simulations for ITER

    SciTech Connect

    R. V. Budny, L. Berry, R. Bilato, P. Bonoli, M. Brambilla, R.J. Dumont, A. Fukuyama, R. Harvey, E.F. Jaeger, E. Lerche, C.K. Phillips, V. Vdovin, J. Wright, and members of the ITPA-IOS

    2010-09-28

    Benchmarking of full-wave solvers for ICRF simulations is performed using plasma profiles and equilibria obtained from integrated self-consistent modeling predictions of four ITER plasmas. One is for a high-performance baseline (5.3 T, 15 MA) DT H-mode plasma. The others are for half-field, half-current plasmas of interest for the pre-activation phase, with the bulk plasma ion species being either hydrogen or He4. The predicted profiles are used by seven groups to predict the ICRF electromagnetic fields and heating profiles. Approximate agreement is achieved for the predicted heating power partitions for the DT and He4 cases. Profiles of the heating powers and electromagnetic fields are compared.

  7. Benchmarking Multipacting Simulations in VORPAL

    SciTech Connect

    C. Nieter, C. Roark, P. Stoltz, K. Tian

    2009-05-01

    We will present the results of benchmarking simulations run to test the ability of VORPAL to model multipacting processes in superconducting radio frequency structures. VORPAL is an electromagnetic (FDTD) particle-in-cell simulation code originally developed for applications in plasma and beam physics. The addition of conformal boundaries and algorithms for secondary electron emission allows VORPAL to be applied to multipacting processes. We start with simulations of multipacting between parallel plates, where there are well-understood theoretical predictions for the frequency bands in which multipacting is expected to occur. We reproduce the predicted multipacting bands and demonstrate departures from the theoretical predictions when a more sophisticated model of secondary emission is used. Simulations of existing cavity structures developed at Jefferson Lab will also be presented, where we compare results from VORPAL to experimental data.

  8. The He + H2+ → HeH+ + H reaction: Ab initio studies of the potential energy surface, benchmark time-independent quantum dynamics in an extended energy range and comparison with experiments

    NASA Astrophysics Data System (ADS)

    De Fazio, Dario; de Castro-Vitores, Miguel; Aguado, Alfredo; Aquilanti, Vincenzo; Cavalli, Simonetta

    2012-12-01

    In this work we critically revise several aspects of previous ab initio quantum chemistry studies [P. Palmieri et al., Mol. Phys. 98, 1835 (2000), doi:10.1080/00268970009483387; C. N. Ramachandran et al., Chem. Phys. Lett. 469, 26 (2009), doi:10.1016/j.cplett.2008.12.035] of the HeH2+ system. New diatomic curves for the H2+ and HeH+ molecular ions, which provide vibrational frequencies at a near-spectroscopic level of accuracy, have been generated to test the quality of the diatomic terms employed in the previous analytical fittings. The reliability of the global potential energy surfaces has also been tested by performing benchmark quantum scattering calculations within the time-independent approach over an extended interval of energies. In particular, the total integral cross sections have been calculated in the total collision energy range 0.955-2.400 eV for the scattering of the He atom by the ortho- and para-hydrogen molecular ion. The energy profiles of the total integral cross sections for selected vibro-rotational states of H2+ (v = 0, ..., 5 and j = 1, ..., 7) show a strong rotational enhancement for the lower vibrational states, which becomes weaker as the vibrational quantum number increases. Comparison with several available experimental data is presented and discussed.

  9. Beyond Benchmarking: Value-Adding Metrics

    ERIC Educational Resources Information Center

    Fitz-enz, Jac

    2007-01-01

    HR metrics has grown up a bit over the past two decades, moving away from simple benchmarking practices and toward a more inclusive approach to measuring institutional performance and progress. In this article, the acknowledged "father" of human capital performance benchmarking provides an overview of several aspects of today's HR metrics…

  10. Sequenced Benchmarks for Geography and History

    ERIC Educational Resources Information Center

    Kendall, John S.; Richardson, Amy T.; Ryan, Susan E.

    2005-01-01

    This report is one in a series of reference documents designed to assist those who are directly involved in the revision and improvement of content standards, as well as teachers who use standards and benchmarks to guide everyday instruction. Reports in the series provide information about how benchmarks might best appear in a sequence of…

  11. Benchmarking Learning and Teaching: Developing a Method

    ERIC Educational Resources Information Center

    Henderson-Smart, Cheryl; Winning, Tracey; Gerzina, Tania; King, Shalinie; Hyde, Sarah

    2006-01-01

    Purpose: To develop a method for benchmarking teaching and learning in response to an institutional need to validate a new program in Dentistry at the University of Sydney, Australia. Design/methodology/approach: After a collaborative partner, University of Adelaide, was identified, the areas of teaching and learning to be benchmarked, PBL…

  12. Towards Humanlike Social Touch for Prosthetics and Sociable Robotics: Handshake Experiments and Finger Phalange Indentations

    NASA Astrophysics Data System (ADS)

    Cabibihan, John-John; Pradipta, Raditya; Chew, Yun Zhi; Ge, Shuzhi Sam

    The handshake has become the most acceptable gesture of greeting in many cultures. Replicating the softness of the human hand can aid the emotional healing of people who have lost a hand, by enabling prosthetic hand usage to be concealed during handshake interactions. Likewise, sociable robots of the future will exchange greetings with humans, and soft, humanlike hands would address the safety and acceptance issues of robotic hands during handshakes. This paper investigates the areas of contact during handshake interactions. Once the areas of high contact were known, indentation experiments were conducted to obtain benchmark data for duplication with synthetic skins.

  13. A performance benchmark test for geodynamo simulations

    NASA Astrophysics Data System (ADS)

    Matsui, H.; Heien, E. M.

    2013-12-01

    In the last ten years, a number of numerical dynamo models have successfully represented basic characteristics of the geomagnetic field. As new models and numerical methods continue to be developed, it is important to update and extend benchmarks for testing these models. The first dynamo benchmark of Christensen et al. (2001) was applied to models based on spherical harmonic expansion methods. However, only a few groups have reported results of the dynamo benchmark using local methods (Harder and Hansen, 2005; Matsui and Okuda, 2005; Chan et al., 2007) because of the difficulty of treating magnetic boundary conditions with local methods. On the other hand, spherical harmonic expansion methods perform poorly on massively parallel computers because global data communication is required by the spherical harmonic expansions to evaluate nonlinear terms. We perform benchmark tests to assess various numerical methods for the next generation of geodynamo simulations. The purpose of this benchmark test is to assess numerical geodynamo models on a massively parallel computational platform. To compare as many numerical methods as possible, we consider the model with the insulating magnetic boundary of Christensen et al. (2001) as well as a pseudo-vacuum magnetic boundary, because pseudo-vacuum boundaries are easier to implement with local methods than insulating magnetic boundaries. In the present study, we consider two kinds of benchmarks, a so-called accuracy benchmark and a performance benchmark. In the accuracy benchmark, we compare the dynamo models using the modest Ekman and Rayleigh numbers proposed by Christensen et al. (2001). We investigate the spatial resolution required for each dynamo code to obtain less than 1% difference from the suggested solution of the benchmark test using the two magnetic boundary conditions. In the performance benchmark, we investigate computational performance under the same computational environment. We perform these

  14. The IAEA Coordinated Research Program on HTGR Reactor Physics, Thermal-hydraulics and Depletion Uncertainty Analysis: Description of the Benchmark Test Cases and Phases

    SciTech Connect

    Frederik Reitsma; Gerhard Strydom; Bismark Tyobeka; Kostadin Ivanov

    2012-10-01

    The continued development of High Temperature Gas Cooled Reactors (HTGRs) requires verification of design and safety features with reliable, high-fidelity physics models and robust, efficient, and accurate codes. The uncertainties in the HTGR analysis tools are today typically assessed with sensitivity analysis, in which a few important input uncertainties (typically based on a PIRT process) are varied to find the spread in the parameter of importance. However, one wishes to apply a more fundamental approach to determine the predictive capability and accuracy of the coupled neutronics/thermal-hydraulics and depletion simulations used for reactor design and safety assessment. Today there is broader acceptance of the use of uncertainty analysis even in safety studies, and in some cases regulators have accepted it in place of the traditional conservative analysis. Finally, there is also a renewed focus on supplying reliable covariance data (nuclear data uncertainties) that can then be used in uncertainty methods. Uncertainty and sensitivity studies are therefore becoming an essential component of any significant effort in data and simulation improvement. In order to address uncertainty in analysis and methods in the HTGR community, the IAEA launched a Coordinated Research Project (CRP) on HTGR Uncertainty Analysis in Modelling early in 2012. The project builds on the experience of the OECD/NEA Light Water Reactor (LWR) Uncertainty Analysis in Best-Estimate Modelling (UAM) benchmark activity, but focuses specifically on the peculiarities of HTGR designs and their simulation requirements. Two benchmark problems were defined: the prismatic type is represented by the MHTGR-350 design from General Atomics (GA), while a 250 MW modular pebble-bed design, similar to the INET (China) and indirect-cycle PBMR (South Africa) designs, is also included. In the paper, more detail is given on the benchmark cases, the different specific phases and tasks, and the latest

  15. Two-dimensional benchmark calculations for PNL-30 through PNL-35

    SciTech Connect

    Mosteller, R.D.

    1997-12-01

    Interest in critical experiments with lattices of mixed-oxide (MOX) fuel pins has been revived by the possibility that light water reactors will be used for the disposition of weapons-grade plutonium. A series of six experiments with MOX lattices, designated PNL-30 through PNL-35, was performed at Pacific Northwest Laboratories in 1975 and 1976, and a set of benchmark specifications for these experiments subsequently was adopted by the Cross Section Evaluation Working Group (CSEWG). Although there appear to be some problems with these experiments, they remain the only CSEWG benchmarks for MOX lattices.

  16. Accept or divert?

    PubMed

    Angelucci, P A

    1999-09-01

    Stretching scarce resources is more than a managerial issue. Should you accept the patient to an understaffed ICU or divert him to another facility? The intense "medical utility" controversy focuses on a situation that critical care nurses now face every day. PMID:10614370

  17. Approaches to acceptable risk

    SciTech Connect

    Whipple, C.

    1997-04-30

    Several alternative approaches to addressing the question "How safe is safe enough?" are reviewed, and an attempt is made to apply the reasoning behind these approaches to the issue of the acceptability of radiation exposures received in space. The approaches to the issue of the acceptability of technological risk described here are primarily analytical, and are drawn from examples in the management of environmental health risks. These include risk-based approaches, in which specific quantitative risk targets determine the acceptability of an activity, and cost-benefit and decision analysis, which generally focus on the estimation and evaluation of risks, benefits and costs, in a framework that balances these factors against each other. These analytical methods tend by their quantitative nature to emphasize the magnitude of risks, costs and alternatives, and to downplay other factors, especially those that are not easily expressed in quantitative terms, that affect acceptance or rejection of risk. Such other factors include the issues of risk perceptions and how and by whom risk decisions are made.

  18. 1984 Newbery Acceptance Speech.

    ERIC Educational Resources Information Center

    Cleary, Beverly

    1984-01-01

    This acceptance speech for an award honoring "Dear Mr. Henshaw," a book about feelings of a lonely child of divorce intended for eight-, nine-, and ten-year-olds, highlights children's letters to author. Changes in society that affect children, the inception of "Dear Mr. Henshaw," and children's reactions to books are highlighted. (EJS)

  19. Why was Relativity Accepted?

    NASA Astrophysics Data System (ADS)

    Brush, S. G.

    Historians of science have published many studies of the reception of Einstein's special and general theories of relativity. Based on a review of these studies, and my own research on the role of the light-bending prediction in the reception of general relativity, I discuss the role of three kinds of reasons for accepting relativity: (1) empirical predictions and explanations; (2) social-psychological factors; and (3) aesthetic-mathematical factors. According to the historical studies, acceptance was a three-stage process. First, a few leading scientists adopted the special theory for aesthetic-mathematical reasons. In the second stage, their enthusiastic advocacy persuaded other scientists to work on the theory and apply it to problems currently of interest in atomic physics. The special theory was accepted by many German physicists by 1910 and had begun to attract some interest in other countries. In the third stage, the confirmation of Einstein's light-bending prediction attracted much public attention and forced all physicists to take the general theory of relativity seriously. In addition to light-bending, the explanation of the advance of Mercury's perihelion was considered strong evidence by theoretical physicists. The American astronomers who conducted successful tests of general relativity became defenders of the theory. There is little evidence that relativity was 'socially constructed', but its initial acceptance was facilitated by the prestige and resources of its advocates.

  20. UGV acceptance testing

    NASA Astrophysics Data System (ADS)

    Kramer, Jeffrey A.; Murphy, Robin R.

    2006-05-01

    With over 100 models of unmanned vehicles now available for military and civilian safety, security, or rescue applications, it is important for agencies to establish acceptance testing. However, there appear to be no general guidelines for what constitutes a reasonable acceptance test. This paper describes i) a preliminary method for acceptance testing by a customer of the mechanical and electrical components of an unmanned ground vehicle system, ii) how it has been applied to a man-packable micro-robot, and iii) the value of testing both to ensure that the customer has a workable system and to improve design. The test method automated the operation of the robot to repeatedly exercise all aspects and combinations of components on the robot for 6 hours. The acceptance testing process uncovered many failures consistent with those shown to occur in the field, showing that testing by the user does predict failures. The process also demonstrated that testing by the manufacturer can provide important design data that can be used to identify, diagnose, and prevent long-term problems. The structured testing environment also showed that sensor systems can be used to predict errors and changes in performance, as well as to uncover unmodeled behavior in subsystems.

  1. Benchmarking--Measuring and Comparing for Continuous Improvement.

    ERIC Educational Resources Information Center

    Henczel, Sue

    2002-01-01

    Discussion of benchmarking focuses on the use of internal and external benchmarking by special librarians. Highlights include defining types of benchmarking; historical development; benefits, including efficiency, improved performance, increased competitiveness, and better decision making; problems, including inappropriate adaptation; developing a…

  2. ACT in Context: An Exploration of Experiential Acceptance

    ERIC Educational Resources Information Center

    Block-Lerner, Jennifer; Wulfert, Edelgard; Moses, Erica

    2009-01-01

    Experiential acceptance, which involves "having," or "allowing" private experiences, has recently gained much attention in the cognitive-behavioral literature. Acceptance, however, may be considered a common factor among psychotherapeutic traditions. The purposes of this paper are to examine the historical roots of acceptance and to discuss the…

  3. Social Acceptance of Wind: A Brief Overview (Presentation)

    SciTech Connect

    Lantz, E.

    2015-01-01

    This presentation discusses concepts and trends in social acceptance of wind energy, profiles recent research findings, and discusses mitigation strategies intended to resolve wind power social acceptance challenges, as informed by published research and the experiences of individuals participating in the International Energy Agency's Working Group on Social Acceptance of Wind Energy.

  4. Parallelization of NAS Benchmarks for Shared Memory Multiprocessors

    NASA Technical Reports Server (NTRS)

    Waheed, Abdul; Yan, Jerry C.; Saini, Subhash (Technical Monitor)

    1998-01-01

    This paper presents our experiences of parallelizing the sequential implementation of the NAS benchmarks using compiler directives on the SGI Origin2000 distributed shared memory (DSM) system. Porting existing applications to new high-performance parallel and distributed computing platforms is a challenging task. Ideally, a user develops a sequential version of the application, leaving the task of porting to new generations of high-performance computing systems to parallelization tools and compilers. Due to the simplicity of programming shared-memory multiprocessors, compiler developers have provided various facilities to allow users to exploit parallelism. Native compilers on the SGI Origin2000 support multiprocessing directives that allow users to exploit loop-level parallelism in their programs. Additionally, supporting tools can accomplish this process automatically and present the results of parallelization to the users. We experimented with these compiler directives and supporting tools by parallelizing the sequential implementation of the NAS benchmarks. Results reported in this paper indicate that, with minimal effort, the resulting performance is comparable with that of the hand-parallelized, carefully optimized, message-passing implementations of the same benchmarks.

  5. Toxicological Benchmarks for Screening Potential Contaminants of Concern for Effects on Terrestrial Plants

    SciTech Connect

    Suter, G.W. II

    1993-01-01

    One of the initial stages in ecological risk assessment for hazardous waste sites is screening contaminants to determine which of them are worthy of further consideration as contaminants of potential concern. This process is termed contaminant screening. It is performed by comparing measured ambient concentrations of chemicals to benchmark concentrations. Currently, no standard benchmark concentrations exist for assessing contaminants in soil with respect to their toxicity to plants. This report presents a standard method for deriving benchmarks for this purpose (phytotoxicity benchmarks), a set of data concerning effects of chemicals in soil or soil solution on plants, and a set of phytotoxicity benchmarks for 38 chemicals potentially associated with United States Department of Energy (DOE) sites. In addition, background information on the phytotoxicity and occurrence of the chemicals in soils is presented, and literature describing the experiments from which data were drawn for benchmark derivation is reviewed. Chemicals that are found in soil at concentrations exceeding both the phytotoxicity benchmark and the background concentration for the soil type should be considered contaminants of potential concern.
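
    The screening rule in the final sentence is a simple conjunction, which a one-line function makes explicit; the concentrations below are hypothetical:

        def is_copc(measured, benchmark, background):
            """Flag a chemical as a contaminant of potential concern only
            if the measured soil concentration exceeds BOTH the
            phytotoxicity benchmark and the background concentration."""
            return measured > benchmark and measured > background

        # Hypothetical concentrations, all in mg/kg:
        print(is_copc(measured=50.0, benchmark=30.0, background=10.0))  # True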

  6. Toxicological benchmarks for screening potential contaminants of concern for effects on terrestrial plants: 1994 revision

    SciTech Connect

    Will, M.E.; Suter, G.W. II

    1994-09-01

    One of the initial stages in ecological risk assessment for hazardous waste sites is screening contaminants to determine which of them are worthy of further consideration as contaminants of potential concern. This process is termed contaminant screening. It is performed by comparing measured ambient concentrations of chemicals to benchmark concentrations. Currently, no standard benchmark concentrations exist for assessing contaminants in soil with respect to their toxicity to plants. This report presents a standard method for deriving benchmarks for this purpose (phytotoxicity benchmarks), a set of data concerning effects of chemicals in soil or soil solution on plants, and a set of phytotoxicity benchmarks for 38 chemicals potentially associated with United States Department of Energy (DOE) sites. In addition, background information on the phytotoxicity and occurrence of the chemicals in soils is presented, and literature describing the experiments from which data were drawn for benchmark derivation is reviewed. Chemicals that are found in soil at concentrations exceeding both the phytotoxicity benchmark and the background concentration for the soil type should be considered contaminants of potential concern.

  7. Effective File I/O Bandwidth Benchmark

    SciTech Connect

    Rabenseifner, R.; Koniges, A.E.

    2000-02-15

    The effective I/O bandwidth benchmark (b_eff_io) has two goals: (1) to obtain a characteristic average number for the I/O bandwidth achievable with parallel MPI-I/O applications, and (2) to get detailed information about several access patterns and buffer lengths. The benchmark examines "first write", "rewrite" and "read" access, strided (individual and shared pointers) and segmented collective patterns on one file per application, and non-collective access to one file per process. The number of parallel accessing processes is also varied, and well-formed I/O is compared with non-well-formed. On systems meeting the rule that the total memory can be written to disk in 10 minutes, the benchmark should need no more than 15 minutes for a first pass of all patterns. The benchmark is designed analogously to the effective bandwidth benchmark for message passing (b_eff) that characterizes the message-passing capabilities of a system in a few minutes. First results of the b_eff_io benchmark are given for IBM SP and Cray T3E systems and compared with existing benchmarks based on parallel POSIX I/O.
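
    A drastically simplified, serial analogue of the "first write / rewrite / read" phases — no MPI-I/O, strided patterns, or collective access, and note that the operating system's page cache can inflate the read figure. File name and sizes are arbitrary choices:

        import os, time

        def bandwidth(op, nbytes):
            """Run `op` once and return the achieved bandwidth in MB/s."""
            start = time.perf_counter()
            op()
            return nbytes / (time.perf_counter() - start) / 1e6

        PATH, CHUNK, REPS = "beff_io_test.bin", b"x" * (4 << 20), 8

        def write_file():          # used for "first write" and "rewrite"
            with open(PATH, "wb") as f:
                for _ in range(REPS):
                    f.write(CHUNK)
                f.flush()
                os.fsync(f.fileno())        # force data to disk

        def read_file():
            with open(PATH, "rb") as f:
                while f.read(len(CHUNK)):
                    pass

        total = REPS * len(CHUNK)
        for name, op in [("first write", write_file), ("rewrite", write_file),
                         ("read", read_file)]:
            print(f"{name:11s} {bandwidth(op, total):9.1f} MB/s")
        os.remove(PATH)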

  8. Benchmarking Nonlinear Turbulence Simulations on Alcator C-Mod

    SciTech Connect

    M.H. Redi; C.L. Fiore; W. Dorland; M.J. Greenwald; G.W. Hammett; K. Hill; D. McCune; D.R. Mikkelsen; G. Rewoldt; J.E. Rice

    2004-06-22

    Linear simulations of plasma microturbulence are used with recent radial profiles of toroidal velocity from similar plasmas to consider nonlinear microturbulence simulations and observed transport analysis on Alcator C-Mod. We focus on internal transport barrier (ITB) formation in fully equilibrated H-mode plasmas with nearly flat velocity profiles. Velocity profile data, transport analysis and linear growth rates are combined to integrate data and simulation, and to explore the effects of toroidal velocity on benchmarking simulations. Areas of interest for future nonlinear simulations are identified. A good gyrokinetic benchmark is found in the plasma core, without extensive nonlinear simulations. RF-heated C-Mod H-mode experiments, which exhibit an ITB, have been studied with the massively parallel code GS2 towards validation of gyrokinetic microturbulence models. New linear gyrokinetic calculations are reported and discussed in connection with transport analysis near the ITB trigger time of shot No. 1001220016.

  9. MODEL PREDICTION RESULTS FOR 2008 ULTRASONIC BENCHMARK PROBLEMS

    SciTech Connect

    Kim, Hak-Joon; Song, Sung-Jin

    2009-03-03

    The World Federation of NDE Centers (WFNDEC) addressed two types of problems for the 2008 ultrasonic benchmark: the effects of surface curvature on the ultrasonic responses of flat-bottomed holes, and the prediction of side-drilled hole responses at various depths in a steel block. To solve this year's ultrasonic benchmark problems, multi-Gaussian beam models were adopted to calculate the insonifying fields on the flat-bottomed and side-drilled holes, and the Kirchhoff approximation and the separation-of-variables method were applied to calculate the far-field scattering amplitudes of the flat-bottomed holes and side-drilled holes, respectively. In this paper, we present a comparison of the model predictions with experiments for side-drilled holes and discuss the effect of interface curvature on ultrasonic responses by comparing the peak-to-peak amplitudes of the flat-bottomed hole responses for different interface curvatures.

  10. Trinity Acceptance Tests Performance Summary.

    SciTech Connect

    Rajan, Mahesh

    2015-12-01

    Ensuring that real applications perform well on Trinity is key to success. Four components are used: ASC applications, Sustained System Performance (SSP), extra-large mini-application problems, and micro-benchmarks.

  11. Clinically meaningful performance benchmarks in MS

    PubMed Central

    Motl, Robert W.; Scagnelli, John; Pula, John H.; Sosnoff, Jacob J.; Cadavid, Diego

    2013-01-01

    Objective: Identify and validate clinically meaningful Timed 25-Foot Walk (T25FW) performance benchmarks in individuals living with multiple sclerosis (MS). Methods: Cross-sectional study of 159 MS patients first identified candidate T25FW benchmarks. To characterize the clinical meaningfulness of T25FW benchmarks, we ascertained their relationships to real-life anchors, functional independence, and physiologic measurements of gait and disease progression. Candidate T25FW benchmarks were then prospectively validated in 95 subjects using 13 measures of ambulation and cognition, patient-reported outcomes, and optical coherence tomography. Results: T25FW of 6 to 7.99 seconds was associated with a change in occupation due to MS, occupational disability, walking with a cane, and needing “some help” with instrumental activities of daily living; T25FW ≥8 seconds was associated with collecting Supplemental Security Income and government health care, walking with a walker, and inability to do instrumental activities of daily living. During prospective benchmark validation, we trichotomized data by T25FW benchmarks (<6 seconds, 6–7.99 seconds, and ≥8 seconds) and found group main effects on 12 of 13 objective and subjective measures (p < 0.05). Conclusions: Using a cross-sectional design, we identified 2 clinically meaningful T25FW benchmarks of ≥6 seconds (6–7.99) and ≥8 seconds. Longitudinal and larger studies are needed to confirm the clinical utility and relevance of these proposed T25FW benchmarks and to parse out whether there are additional benchmarks in the lower (<6 seconds) and higher (>10 seconds) ranges of performance. PMID:24174581
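
    Applying the proposed benchmarks amounts to trichotomising walk times at the two cut points reported above; a minimal sketch (the labels paraphrase anchors listed in the abstract):

        def t25fw_category(seconds):
            """Classify a Timed 25-Foot Walk result by the proposed
            benchmarks (<6 s, 6-7.99 s, >=8 s)."""
            if seconds < 6:
                return "<6 s"
            if seconds < 8:
                return "6-7.99 s (e.g., cane use, occupational disability)"
            return ">=8 s (e.g., walker use, help with IADLs)"

        print([t25fw_category(t) for t in (4.2, 6.9, 9.5)])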

  12. Developing integrated benchmarks for DOE performance measurement

    SciTech Connect

    Barancik, J.I.; Kramer, C.F.; Thode, Jr. H.C.

    1992-09-30

    The objectives of this task were to describe and evaluate selected existing sources of information on occupational safety and health, with emphasis on hazard and exposure assessment, abatement, training, reporting, and control, identifying exposure and outcome measures in preparation for developing DOE performance benchmarks. Existing resources and methodologies were assessed for their potential use as practical performance benchmarks. Strengths and limitations of current data resources were identified. Guidelines were outlined for developing new or improved performance factors, which could then become the basis for selecting performance benchmarks. Databases for non-DOE comparison populations were identified so that DOE performance could be assessed relative to non-DOE occupational and industrial groups. Systems approaches were described which can be used to link hazard and exposure, event occurrence, and adverse outcome factors, as needed to generate valid, reliable, and predictive performance benchmarks. Databases were identified which contain information relevant to one or more performance assessment categories. A list of 72 potential performance benchmarks was prepared to illustrate the kinds of information that can be produced through a benchmark development program. Current information resources which may be used to develop potential performance benchmarks are limited. There is a need to develop an occupational safety and health information and data system in DOE which is capable of incorporating demonstrated and documented performance benchmarks prior to, or concurrent with, the development of hardware and software. A key to the success of this systems approach is rigorous development and demonstration of performance-benchmark equivalents to users of such data before system hardware and software commitments are institutionalized.

  13. ASIS healthcare security benchmarking study.

    PubMed

    2001-01-01

    Effective security has become integral to the everyday operations of a healthcare organization. This is evident in every regional market segment, regardless of size, location, provider clinical expertise, or organizational growth. This research addresses key security issues from acute care providers to freestanding facilities, and from rural and community hospitals to large urban teaching hospitals. Security issues and concerns are identified and addressed daily by senior and middle management. As provider campuses become larger and more diverse, the hospitals surveyed have identified critical changes and improvements that are proposed or pending. Mitigating liabilities and improving patient, visitor, and/or employee safety are consequential to the performance and viability of all healthcare providers. Healthcare organizations have identified the requirement to compete for patient volume and revenue. The facility that can deliver high-quality healthcare in a comfortable, safe, secure, and efficient atmosphere will have a significant competitive advantage over a facility where patient or visitor security and safety is deficient. Continuing changes in healthcare organizations' operating structures and geographic layouts mean changes in leadership and direction. These changes have led to higher levels of corporate responsibility. As a result, each organization participating in this benchmark study has added value and will derive value for the overall benefit of healthcare providers throughout the nation. This study provides a better understanding of how the fundamental security needs of healthcare organizations are being addressed and how solutions are identified and implemented. PMID:11602980

  14. Benchmark problems in computational aeroacoustics

    NASA Technical Reports Server (NTRS)

    Porter-Locklear, Freda

    1994-01-01

    A recent directive at NASA Langley is aimed at numerically predicting principal noise sources. During my summer stay, I worked with a high-order ENO code, developed by Dr. Harold Atkins, for solving the unsteady compressible Navier-Stokes equations as it applies to computational aeroacoustics (CAA). A CAA workshop, composed of six categories of benchmark problems, has been organized to test various numerical properties of the code. My task was to determine the robustness of Atkins' code on these test problems. In one category, we tested the nonlinear wave propagation of the code for the one-dimensional Euler equations, with initial pressure, density, and velocity conditions. Using freestream boundary conditions, our results were plausible. In another category, we solved the linearized two-dimensional Euler equations to test the effectiveness of radiation boundary conditions. Here we utilized MAPLE to compute eigenvalues and eigenvectors of the Jacobian, given the variable and flux vectors. We experienced a minor problem with the inflow and outflow boundary conditions. Next, we solved the quasi-one-dimensional unsteady flow equations with an incoming acoustic wave of amplitude 10^-6. The small-amplitude sound wave was incident on a convergent-divergent nozzle. After finding a steady-state solution and then marching forward, our solution indicated that after 30 periods the acoustic wave had dissipated (a period is the time required for the sound wave to travel from one end of the nozzle to the other).

  15. Benchmarking for Bayesian Reinforcement Learning

    PubMed Central

    Ernst, Damien; Couëtoux, Adrien

    2016-01-01

    In the Bayesian Reinforcement Learning (BRL) setting, agents try to maximise the rewards collected while interacting with their environment, making use of prior knowledge available beforehand. Many BRL algorithms have already been proposed, but the benchmarks used to compare them are only relevant for specific cases. This paper addresses the problem and provides a new BRL comparison methodology along with a corresponding open-source library. In this methodology, a comparison criterion is defined that measures the performance of algorithms on large sets of Markov Decision Processes (MDPs) drawn from given probability distributions. In order to enable the comparison of non-anytime algorithms, the methodology also includes a detailed analysis of the computation time requirements of each algorithm. The library is released with full source code and documentation: it includes three test problems, each with two different prior distributions, and seven state-of-the-art RL algorithms. Finally, the library is illustrated by comparing all the available algorithms, and the results are discussed. PMID:27304891
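
    The comparison criterion reduces to nested Monte Carlo: draw an environment from the prior, run the agent, and average the returns. A toy sketch with a two-armed bandit standing in for a full MDP distribution; everything here is hypothetical and far simpler than the released library:

        import random

        def sample_env():
            """Draw a two-armed bandit from the 'prior': each arm's mean
            reward is uniform on [0, 1] (a stand-in for a distribution
            over full MDPs)."""
            return [random.random(), random.random()]

        def greedy_agent(arms, steps=100):
            """Pull each arm once, then exploit the better estimate."""
            est = [random.gauss(m, 0.1) for m in arms]
            best = est.index(max(est))
            return sum(random.gauss(arms[best], 0.1) for _ in range(steps))

        def score(agent, n_envs=1000):
            """Comparison criterion: mean return over environments
            drawn from the prior distribution."""
            return sum(agent(sample_env()) for _ in range(n_envs)) / n_envs

        print(f"greedy agent score: {score(greedy_agent):.1f}")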

  16. Metrics and Benchmarks for Visualization

    NASA Technical Reports Server (NTRS)

    Uselton, Samuel P.; Lasinski, T. A. (Technical Monitor)

    1995-01-01

    What is a "good" visualization? How can the quality of a visualization be measured? How can one tell whether one visualization is "better" than another? I claim that the true quality of a visualization can only be measured in the context of a particular purpose. The same image generated from the same data may be excellent for one purpose and abysmal for another. A good measure of visualization quality will correspond to the performance of users in accomplishing the intended purpose, so the "gold standard" is user testing. As a user of visualization software (or at least a consultant to such users) I don't expect visualization software to have been tested in this way for every possible use. In fact, scientific visualization (as distinct from more "production oriented" uses of visualization) will continually encounter new data, new questions and new purposes; user testing can never keep up. User need software they can trust, and advice on appropriate visualizations of particular purposes. Considering the following four processes, and their impact on visualization trustworthiness, reveals important work needed to create worthwhile metrics and benchmarks for visualization. These four processes are (1) complete system testing (user-in-loop), (2) software testing, (3) software design and (4) information dissemination. Additional information is contained in the original extended abstract.

  17. Acceptability of human risk.

    PubMed Central

    Kasperson, R E

    1983-01-01

    This paper has three objectives: to explore the nature of the problem implicit in the term "risk acceptability," to examine the possible contributions of scientific information to risk standard-setting, and to argue that societal response is best guided by considerations of process rather than formal methods of analysis. Most technological risks are not accepted but are imposed. There is also little reason to expect consensus among individuals on their tolerance of risk. Moreover, debates about risk levels are often at base debates over the adequacy of the institutions which manage the risks. Scientific information can contribute three broad types of analyses to risk-setting deliberations: contextual analysis, equity assessment, and public preference analysis. More effective risk-setting decisions will involve attention to the process used, particularly in regard to the requirements of procedural justice and democratic responsibility. PMID:6418541

  18. Geant4 Computing Performance Benchmarking and Monitoring

    NASA Astrophysics Data System (ADS)

    Dotti, Andrea; Elvira, V. Daniel; Folger, Gunter; Genser, Krzysztof; Jun, Soon Yung; Kowalkowski, James B.; Paterno, Marc

    2015-12-01

    Performance evaluation and analysis of large scale computing applications is essential for optimal use of resources. As detector simulation is one of the most compute intensive tasks and Geant4 is the simulation toolkit most widely used in contemporary high energy physics (HEP) experiments, it is important to monitor Geant4 through its development cycle for changes in computing performance and to identify problems and opportunities for code improvements. All Geant4 development and public releases are being profiled with a set of applications that utilize different input event samples, physics parameters, and detector configurations. Results from multiple benchmarking runs are compared to previous public and development reference releases to monitor CPU and memory usage. Observed changes are evaluated and correlated with code modifications. Besides the full summary of call stack and memory footprint, a detailed call graph analysis is available to Geant4 developers for further analysis. The set of software tools used in the performance evaluation procedure, both in sequential and multi-threaded modes, include FAST, IgProf and Open|Speedshop. The scalability of the CPU time and memory performance in multi-threaded application is evaluated by measuring event throughput and memory gain as a function of the number of threads for selected event samples.

  19. Geant4 Computing Performance Benchmarking and Monitoring

    SciTech Connect

    Dotti, Andrea; Elvira, V. Daniel; Folger, Gunter; Genser, Krzysztof; Jun, Soon Yung; Kowalkowski, James B.; Paterno, Marc

    2015-12-23

    Performance evaluation and analysis of large scale computing applications is essential for optimal use of resources. As detector simulation is one of the most compute intensive tasks and Geant4 is the simulation toolkit most widely used in contemporary high energy physics (HEP) experiments, it is important to monitor Geant4 through its development cycle for changes in computing performance and to identify problems and opportunities for code improvements. All Geant4 development and public releases are being profiled with a set of applications that utilize different input event samples, physics parameters, and detector configurations. Results from multiple benchmarking runs are compared to previous public and development reference releases to monitor CPU and memory usage. Observed changes are evaluated and correlated with code modifications. Besides the full summary of call stack and memory footprint, a detailed call graph analysis is available to Geant4 developers for further analysis. The set of software tools used in the performance evaluation procedure, both in sequential and multi-threaded modes, include FAST, IgProf and Open|Speedshop. In conclusion, the scalability of the CPU time and memory performance in multi-threaded application is evaluated by measuring event throughput and memory gain as a function of the number of threads for selected event samples.

  20. Geant4 Computing Performance Benchmarking and Monitoring

    DOE PAGES Beta

    Dotti, Andrea; Elvira, V. Daniel; Folger, Gunter; Genser, Krzysztof; Jun, Soon Yung; Kowalkowski, James B.; Paterno, Marc

    2015-12-23

    Performance evaluation and analysis of large scale computing applications is essential for optimal use of resources. As detector simulation is one of the most compute intensive tasks and Geant4 is the simulation toolkit most widely used in contemporary high energy physics (HEP) experiments, it is important to monitor Geant4 through its development cycle for changes in computing performance and to identify problems and opportunities for code improvements. All Geant4 development and public releases are being profiled with a set of applications that utilize different input event samples, physics parameters, and detector configurations. Results from multiple benchmarking runs are compared to previous public and development reference releases to monitor CPU and memory usage. Observed changes are evaluated and correlated with code modifications. Besides the full summary of call stack and memory footprint, a detailed call graph analysis is available to Geant4 developers for further analysis. The set of software tools used in the performance evaluation procedure, both in sequential and multi-threaded modes, include FAST, IgProf and Open|Speedshop. In conclusion, the scalability of the CPU time and memory performance in multi-threaded application is evaluated by measuring event throughput and memory gain as a function of the number of threads for selected event samples.

  1. Age and Acceptance of Euthanasia.

    ERIC Educational Resources Information Center

    Ward, Russell A.

    1980-01-01

    Study explores relationship between age (and sex and race) and acceptance of euthanasia. Women and non-Whites were less accepting because of religiosity. Among older people less acceptance was attributable to their lesser education and greater religiosity. Results suggest that quality of life in old age affects acceptability of euthanasia. (Author)

  2. Updates to the integrated protein-protein interaction benchmarks: Docking benchmark version 5 and affinity benchmark version 2

    PubMed Central

    Vreven, Thom; Moal, Iain H.; Vangone, Anna; Pierce, Brian G.; Kastritis, Panagiotis L.; Torchala, Mieczyslaw; Chaleil, Raphael; Jiménez-García, Brian; Bates, Paul A.; Fernandez-Recio, Juan; Bonvin, Alexandre M.J.J.; Weng, Zhiping

    2015-01-01

    We present an updated and integrated version of our widely used protein-protein docking and binding affinity benchmarks. The benchmarks consist of non-redundant, high-quality structures of protein-protein complexes along with the unbound structures of their components. Fifty-five new complexes were added to the docking benchmark, 35 of which have experimentally measured binding affinities. These updated docking and affinity benchmarks now contain 230 and 179 entries, respectively. In particular, the number of antibody-antigen complexes has increased significantly, by 67% and 74% in the docking and affinity benchmarks, respectively. We tested previously developed docking and affinity prediction algorithms on the new cases. Considering only the top ten docking predictions per benchmark case, a prediction accuracy of 38% is achieved on all 55 cases, and up to 50% for the 32 rigid-body cases only. Predicted affinity scores are found to correlate with experimental binding energies up to r = 0.52 overall, and r = 0.72 for the rigid complexes. PMID:26231283

  3. Thermo-hydro-mechanical-chemical processes in fractured-porous media: Benchmarks and examples

    NASA Astrophysics Data System (ADS)

    Kolditz, O.; Shao, H.; Görke, U.; Kalbacher, T.; Bauer, S.; McDermott, C. I.; Wang, W.

    2012-12-01

    The book comprises an assembly of benchmarks and examples for porous media mechanics collected over the last twenty years. Analysis of thermo-hydro-mechanical-chemical (THMC) processes is essential to many applications in environmental engineering, such as geological waste deposition, geothermal energy utilisation, carbon capture and storage, water resources management, hydrology, and even climate change. In order to assess the feasibility as well as the safety of geotechnical applications, process-based modelling is the only tool for putting numbers to, i.e. quantifying, future scenarios. This places a huge responsibility on the reliability of computational tools. Benchmarking is an appropriate methodology to verify the quality of modelling tools based on best practices. Moreover, benchmarking and code comparison foster community efforts. The benchmark book is part of the OpenGeoSys initiative - an open source project to share knowledge and experience in environmental analysis and scientific computation.

  4. Benchmarking ENDF/B-VII.1, JENDL-4.0 and JEFF-3.1

    SciTech Connect

    Van Der Marck, S. C.

    2012-07-01

    Three nuclear data libraries have been tested extensively using criticality safety benchmark calculations. The three libraries are the new release of the US library ENDF/B-VII.1 (2011), the new release of the Japanese library JENDL-4.0 (2011), and the OECD/NEA library JEFF-3.1 (2006). All calculations were performed with the continuous-energy Monte Carlo code MCNP (version 4C3, as well as version 6-beta1). Around 2000 benchmark cases from the International Handbook of Criticality Safety Benchmark Experiments (ICSBEP) were used. The results were analyzed per ICSBEP category, and per element. Overall, the three libraries show similar performance on most criticality safety benchmarks. The largest differences are probably caused by elements such as Be, C, Fe, Zr, W. (authors)
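
    The per-category analysis described here reduces to calculated-to-expected (C/E) ratios of k-eff; a minimal sketch with invented values (not results from this study):

    ```python
    import statistics

    # Hypothetical (calculated, benchmark) k_eff pairs, keyed by ICSBEP category
    results = {
        "LEU-COMP-THERM": [(1.0012, 1.0000), (0.9987, 0.9998)],
        "MIX-MET-FAST":   [(1.0043, 1.0000), (1.0021, 1.0002)],
    }

    for category, pairs in results.items():
        ce = [calc / bench for calc, bench in pairs]
        print(f"{category}: mean C/E = {statistics.mean(ce):.4f}")
    ```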

  5. Benchmarking of Graphite Reflected Critical Assemblies of UO2

    SciTech Connect

    Margaret A. Marshall; John D. Bess

    2011-11-01

    A series of experiments was carried out in 1963 at the Oak Ridge National Laboratory Critical Experiments Facility (ORCEF) for use in space reactor research programs. A core containing 93.2% enriched UO2 fuel rods was used in these experiments. The first part of the experimental series consisted of 253 tightly-packed fuel rods (1.27 cm triangular pitch) with graphite reflectors [1], the second part used 253 graphite-reflected fuel rods organized in a 1.506 cm triangular pitch [2], and the final part of the experimental series consisted of 253 beryllium-reflected fuel rods with a 1.506 cm triangular pitch [3]. Fission rate distribution and cadmium ratio measurements were taken for all three parts of the experimental series. Reactivity coefficient measurements were taken for various materials placed in the beryllium-reflected core. The first part of this experimental series has been evaluated for inclusion in the International Reactor Physics Experiment Evaluation Project (IRPhEP) [4] and the International Criticality Safety Benchmark Evaluation Project (ICSBEP) Handbooks [5], and is discussed below. These experiments are of interest as benchmarks because they support the validation of compact reactor designs with similar characteristics to the design parameters for space nuclear fission surface power systems [6].

  6. NAS Grid Benchmarks. 1.0

    NASA Technical Reports Server (NTRS)

    VanderWijngaart, Rob; Frumkin, Michael; Biegel, Bryan A. (Technical Monitor)

    2002-01-01

    We provide a paper-and-pencil specification of a benchmark suite for computational grids. It is based on the NAS (NASA Advanced Supercomputing) Parallel Benchmarks (NPB) and is called the NAS Grid Benchmarks (NGB). NGB problems are presented as data flow graphs encapsulating an instance of a slightly modified NPB task in each graph node, which communicates with other nodes by sending/receiving initialization data. Like NPB, NGB specifies several different classes (problem sizes). In this report we describe classes S, W, and A, and provide verification values for each. The implementor has the freedom to choose any language, grid environment, security model, fault tolerance/error correction mechanism, etc., as long as the resulting implementation passes the verification test and reports the turnaround time of the benchmark.
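
    Since NGB problems are data flow graphs whose nodes exchange data only along graph edges, an implementation needs little more than a dependency-order traversal; a toy sketch (the node names and graph are hypothetical, not the actual NGB classes):

    ```python
    from graphlib import TopologicalSorter  # Python 3.9+

    # Toy NGB-style data-flow graph: node -> set of predecessor nodes
    graph = {
        "BT.0": set(),
        "SP.0": {"BT.0"},
        "LU.0": {"BT.0"},
        "BT.1": {"SP.0", "LU.0"},
    }

    for node in TopologicalSorter(graph).static_order():
        # Each node would run one slightly modified NPB task, consuming its
        # predecessors' outputs as initialization data.
        print("run", node)
    ```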

  7. Benchmark calculations from summarized data: an example

    SciTech Connect

    Crump, K. S.; Teeguarden, Justin G.

    2009-03-01

    Benchmark calculations often are made from data extracted from publications. Such data may not be in a form most appropriate for benchmark analysis, and, as a result, suboptimal and/or non-standard benchmark analyses are often applied. This problem can be mitigated in some cases using Monte Carlo computational methods that allow the likelihood of the published data to be calculated while still using an appropriate benchmark dose (BMD) definition. Such an approach is illustrated herein using data from a study of workers exposed to styrene, in which a hybrid BMD calculation is implemented from dose-response data reported only as means and standard deviations of ratios of scores on neuropsychological tests from exposed subjects to corresponding scores from matched controls. The likelihood of the data is computed using a combination of analytic and Monte Carlo integration methods.
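
    The Monte Carlo idea, computing the likelihood of published summary statistics by simulating replicate studies, can be sketched as follows; the linear model and all numbers are invented for illustration and are not the authors' styrene analysis:

    ```python
    import numpy as np

    rng = np.random.default_rng(42)

    def mc_likelihood(mean_obs, sd_obs, n, dose, b0, b1, sigma,
                      n_mc=50_000, tol=0.02):
        """Hit-rate Monte Carlo estimate of the likelihood of a reported
        group mean/SD under a linear dose-response model."""
        sims = rng.normal(b0 + b1 * dose, sigma, size=(n_mc, n))
        hit = (np.abs(sims.mean(axis=1) - mean_obs) < tol) \
            & (np.abs(sims.std(axis=1, ddof=1) - sd_obs) < tol)
        return hit.mean()

    # One hypothetical dose group of 20 subjects reporting mean ratio 0.95, SD 0.10
    print(mc_likelihood(0.95, 0.10, n=20, dose=50.0, b0=1.0, b1=-0.001, sigma=0.10))
    ```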

  8. DOE Commercial Building Benchmark Models: Preprint

    SciTech Connect

    Torcelini, P.; Deru, M.; Griffith, B.; Benne, K.; Halverson, M.; Winiarski, D.; Crawley, D. B.

    2008-07-01

    To provide a consistent baseline of comparison and save time conducting such simulations, the U.S. Department of Energy (DOE) has developed a set of standard benchmark building models. This paper will provide an executive summary overview of these benchmark buildings, and how they can save building analysts valuable time. Fully documented and implemented to use with the EnergyPlus energy simulation program, the benchmark models are publicly available and new versions will be created to maintain compatibility with new releases of EnergyPlus. The benchmark buildings will form the basis for research on specific building technologies, energy code development, appliance standards, and measurement of progress toward DOE energy goals. Having a common starting point allows us to better share and compare research results and move forward to make more energy efficient buildings.

  9. Toward Scalable Benchmarks for Mass Storage Systems

    NASA Technical Reports Server (NTRS)

    Miller, Ethan L.

    1996-01-01

    This paper presents guidelines for the design of a mass storage system benchmark suite, along with preliminary suggestions for programs to be included. The benchmarks will measure both peak and sustained performance of the system as well as predicting both short- and long-term behavior. These benchmarks should be both portable and scalable so they may be used on storage systems from tens of gigabytes to petabytes or more. By developing a standard set of benchmarks that reflect real user workload, we hope to encourage system designers and users to publish performance figures that can be compared with those of other systems. This will allow users to choose the system that best meets their needs and give designers a tool with which they can measure the performance effects of improvements to their systems.

  10. Neutronic Benchmarks for the Utilization of Mixed-Oxide Fuel: Joint U.S./Russian Progress Report for Fiscal Year 1997 - Volume 4, Part 2--Saxton Plutonium Program Critical Experiments

    SciTech Connect

    Abdurrahman, NM

    2000-10-12

    Critical experiments with water-moderated, single-region PuO{sub 2}-UO{sub 2} or UO{sub 2}, and multiple-region PuO{sub 2}-UO{sub 2}- and UO{sub 2}-fueled cores were performed at the CRX reactor critical facility at the Westinghouse Reactor Evaluation Center (WREC) at Waltz Mill, Pennsylvania in 1965 [1]. These critical experiments were part of the Saxton Plutonium Program. The mixed oxide (MOX) fuel used in these critical experiments and then loaded in the Saxton reactor contained 6.6 wt% PuO{sub 2} in a mixture of PuO{sub 2} and natural UO{sub 2}. The Pu metal had the following isotopic mass percentages: 90.50% {sup 239}Pu; 8.57% {sup 240}Pu; 0.89% {sup 241}Pu; and 0.04% {sup 242}Pu. The purpose of these critical experiments was to verify the nuclear design of Saxton partial plutonium cores while obtaining parameters of fundamental significance such as buckling, control rod worth, soluble poison worth, flux, power peaking, relative pin power, and power sharing factors of MOX and UO{sub 2} lattices. For comparison purposes, the core was also loaded with uranium dioxide fuel rods only. This series is covered by experiments beginning with the designation SX.

  11. Benchmarking of optical dimerizer systems.

    PubMed

    Pathak, Gopal P; Strickland, Devin; Vrana, Justin D; Tucker, Chandra L

    2014-11-21

    Optical dimerizers are a powerful new class of optogenetic tools that allow light-inducible control of protein-protein interactions. Such tools have been useful for regulating cellular pathways and processes with high spatiotemporal resolution in live cells, and a growing number of dimerizer systems are available. As these systems have been characterized by different groups using different methods, it has been difficult for users to compare their properties. Here, we set about to systematically benchmark the properties of four optical dimerizer systems, CRY2/CIB1, TULIPs, phyB/PIF3, and phyB/PIF6. Using a yeast transcriptional assay, we find significant differences in light sensitivity and fold-activation levels between the red light regulated systems but similar responses between the CRY2/CIB and TULIP systems. Further comparison of the ability of the CRY2/CIB1 and TULIP systems to regulate a yeast MAPK signaling pathway also showed similar responses, with slightly less background activity in the dark observed with CRY2/CIB. In the process of developing this work, we also generated an improved blue-light-regulated transcriptional system using CRY2/CIB in yeast. In addition, we demonstrate successful application of the CRY2/CIB dimerizers using a membrane-tethered CRY2, which may allow for better local control of protein interactions. Taken together, this work allows for a better understanding of the capacities of these different dimerization systems and demonstrates new uses of these dimerizers to control signaling and transcription in yeast. PMID:25350266

  12. Implementation of NAS Parallel Benchmarks in Java

    NASA Technical Reports Server (NTRS)

    Frumkin, Michael; Schultz, Matthew; Jin, Hao-Qiang; Yan, Jerry

    2000-01-01

    A number of features make Java an attractive but debatable choice for High Performance Computing (HPC). In order to gauge the applicability of Java to Computational Fluid Dynamics (CFD), we have implemented the NAS Parallel Benchmarks in Java. The performance and scalability of the benchmarks point out the areas where improvement in Java compiler technology and in Java thread implementation would move Java closer to Fortran in the competition for CFD applications.

  13. Benchmarking in healthcare organizations: an introduction.

    PubMed

    Anderson-Miles, E

    1994-09-01

    Business survival is increasingly difficult in the contemporary world. In order to survive, organizations need a commitment to excellence and a means of measuring that commitment and its results. Benchmarking provides one method for doing this. As the author describes, benchmarking is a performance improvement method that has been used for centuries. Recently, it has begun to be used in the healthcare industry where it has the potential to improve significantly the efficiency, cost-effectiveness, and quality of healthcare services. PMID:10146064

  14. Simple Benchmark Specifications for Space Radiation Protection

    NASA Technical Reports Server (NTRS)

    Singleterry, Robert C. Jr.; Aghara, Sukesh K.

    2013-01-01

    This report defines space radiation benchmark specifications. The specification starts with simple, monoenergetic, mono-directional particles on slabs and progresses to human models in spacecraft. This report specifies the models and sources to be used, and what the team performing the benchmark needs to produce in a report. Also included are brief descriptions of how OLTARIS, the NASA Langley website for space radiation analysis, performs its analysis.

  15. Machine characterization and benchmark performance prediction

    NASA Technical Reports Server (NTRS)

    Saavedra-Barrera, Rafael H.

    1988-01-01

    From runs of standard benchmarks or benchmark suites, it is not possible to characterize the machine or to predict the run time of other benchmarks which have not been run. A new approach to benchmarking and machine characterization is reported. The creation and use of a machine analyzer is described, which measures the performance of a given machine on FORTRAN source language constructs. The machine analyzer yields a set of parameters which characterize the machine and spotlight its strong and weak points. Also described is a program analyzer, which analyzes FORTRAN programs and determines the frequency of execution of each of the same set of source language operations. It is then shown that by combining a machine characterization and a program characterization, we are able to predict with good accuracy the run time of a given benchmark on a given machine. Characterizations are provided for the Cray-X-MP/48, Cyber 205, IBM 3090/200, Amdahl 5840, Convex C-1, VAX 8600, VAX 11/785, VAX 11/780, SUN 3/50, and IBM RT-PC/125, and for the following benchmark programs or suites: Los Alamos (BMK8A1), Baskett, Linpack, Livermore Loops, Mandelbrot Set, NAS Kernels, Shell Sort, Smith, Whetstone and Sieve of Eratosthenes.
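
    The prediction step amounts to a dot product of the machine characterization (time per operation) and the program characterization (operation counts); a sketch with hypothetical numbers:

    ```python
    # Hypothetical machine analyzer output: seconds per source-language operation
    machine = {"fadd": 2.0e-8, "fmul": 3.5e-8, "load": 1.5e-8, "branch": 1.0e-8}

    # Hypothetical program analyzer output: execution counts of the same operations
    program = {"fadd": 4.2e9, "fmul": 3.1e9, "load": 8.8e9, "branch": 1.9e9}

    predicted_seconds = sum(program[op] * machine[op] for op in program)
    print(f"predicted run time: {predicted_seconds:.1f} s")
    ```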

  16. Action-Oriented Benchmarking: Concepts and Tools

    SciTech Connect

    California Energy Commission; Mathew, Paul; Mills, Evan; Mathew, Paul; Piette, Mary Ann; Bourassa, Norman; Brook, Martha

    2008-02-13

    Most energy benchmarking tools provide static feedback on how one building compares to a larger set of loosely similar buildings, without providing information at the end-use level or on what can be done to reduce consumption, cost, or emissions. In this article--Part 1 of a two-part series--we describe an 'action-oriented benchmarking' approach, which extends whole-building energy benchmarking to include analysis of system and component energy use metrics and features. Action-oriented benchmarking thereby allows users to generate more meaningful metrics and to identify, screen and prioritize potential efficiency improvements. This opportunity assessment process can then be used to inform and optimize a full-scale audit or commissioning process. We introduce a new web-based action-oriented benchmarking system and associated software tool-EnergyIQ. The benchmarking methods, visualizations, and user interface design are informed by an end-user needs assessment survey and best-practice guidelines from ASHRAE.

  17. Benchmarking for Cost Improvement. Final report

    SciTech Connect

    Not Available

    1993-09-01

    The US Department of Energy's (DOE) Office of Environmental Restoration and Waste Management (EM) conducted the Benchmarking for Cost Improvement initiative with three objectives: Pilot test benchmarking as an EM cost improvement tool; identify areas for cost improvement and recommend actions to address these areas; provide a framework for future cost improvement. The benchmarking initiative featured the use of four principal methods (program classification, nationwide cost improvement survey, paired cost comparison and component benchmarking). Interested parties contributed during both the design and execution phases. The benchmarking initiative was conducted on an accelerated basis. Of necessity, it considered only a limited set of data that may not be fully representative of the diverse and complex conditions found at the many DOE installations. The initiative generated preliminary data about cost differences and it found a high degree of convergence on several issues. Based on this convergence, the report recommends cost improvement strategies and actions. This report describes the steps taken as part of the benchmarking initiative and discusses the findings and recommended actions for achieving cost improvement. The results and summary recommendations, reported below, are organized by the study objectives.

  18. Baby-Crying Acceptance

    NASA Astrophysics Data System (ADS)

    Martins, Tiago; de Magalhães, Sérgio Tenreiro

    A baby's crying is his most important means of communication. The crying monitoring performed by existing devices does not by itself ensure the complete safety of the child. These technological resources need to be joined with means of communicating the results to the person responsible, which involves digital processing of the information available in the crying. The survey carried out made it possible to gauge the level of adoption, in the continental territory of Portugal, of a technology able to perform such digital processing. The Technology Acceptance Model (TAM) was used as the theoretical framework. The statistical analysis showed that there is a good probability of acceptance of such a system.

  19. Criticality Benchmark Analysis of the HTTR Annular Startup Core Configurations

    SciTech Connect

    John D. Bess

    2009-11-01

    One of the high priority benchmarking activities for corroborating the Next Generation Nuclear Plant (NGNP) Project and Very High Temperature Reactor (VHTR) Program is evaluation of Japan's existing High Temperature Engineering Test Reactor (HTTR). The HTTR is a 30 MWt engineering test reactor utilizing graphite moderation, helium coolant, and prismatic TRISO fuel. A large amount of critical reactor physics data is available for validation efforts of High Temperature Gas-cooled Reactors (HTGRs). Previous international reactor physics benchmarking activities provided a collation of mixed results that inaccurately predicted actual experimental performance [1]. Reevaluations were performed by the Japanese to reduce the discrepancy between actual and computationally-determined critical configurations [2-3]. Current efforts at the Idaho National Laboratory (INL) involve development of reactor physics benchmark models in conjunction with the International Reactor Physics Experiment Evaluation Project (IRPhEP) for use with verification and validation methods in the VHTR Program. Annular cores demonstrate inherent safety characteristics that are of interest in developing future HTGRs.

  20. Benchmarking Gas Path Diagnostic Methods: A Public Approach

    NASA Technical Reports Server (NTRS)

    Simon, Donald L.; Bird, Jeff; Davison, Craig; Volponi, Al; Iverson, R. Eugene

    2008-01-01

    Recent technology reviews have identified the need for objective assessments of engine health management (EHM) technology. The need is two-fold: technology developers require relevant data and problems to design and validate new algorithms and techniques while engine system integrators and operators need practical tools to direct development and then evaluate the effectiveness of proposed solutions. This paper presents a publicly available gas path diagnostic benchmark problem that has been developed by the Propulsion and Power Systems Panel of The Technical Cooperation Program (TTCP) to help address these needs. The problem is coded in MATLAB (The MathWorks, Inc.) and coupled with a non-linear turbofan engine simulation to produce "snap-shot" measurements, with relevant noise levels, as if collected from a fleet of engines over their lifetime of use. Each engine within the fleet will experience unique operating and deterioration profiles, and may encounter randomly occurring relevant gas path faults including sensor, actuator and component faults. The challenge to the EHM community is to develop gas path diagnostic algorithms to reliably perform fault detection and isolation. An example solution to the benchmark problem is provided along with associated evaluation metrics. A plan is presented to disseminate this benchmark problem to the engine health management technical community and invite technology solutions.

  1. Performance Evaluation and Benchmarking of Intelligent Systems

    SciTech Connect

    del Pobil, Angel; Madhavan, Raj; Bonsignorio, Fabio

    2009-10-01

    Performance Evaluation and Benchmarking of Intelligent Systems presents research dedicated to the subject of performance evaluation and benchmarking of intelligent systems by drawing from the experiences and insights of leading experts gained both through theoretical development and practical implementation of intelligent systems in a variety of diverse application domains. This contributed volume offers a detailed and coherent picture of state-of-the-art, recent developments, and further research areas in intelligent systems. The chapters cover a broad range of applications, such as assistive robotics, planetary surveying, urban search and rescue, and line tracking for automotive assembly. Subsystems or components described in this book include human-robot interaction, multi-robot coordination, communications, perception, and mapping. Chapters are also devoted to simulation support and open source software for cognitive platforms, providing examples of the type of enabling underlying technologies that can help intelligent systems to propagate and increase in capabilities. Performance Evaluation and Benchmarking of Intelligent Systems serves as a professional reference for researchers and practitioners in the field. This book is also applicable to advanced courses for graduate level students and robotics professionals in a wide range of engineering and related disciplines including computer science, automotive, healthcare, manufacturing, and service robotics.

  2. DOSE RESPONSE ASSESSMENT FOR DEVELOPMENTAL TOXICITY: II. COMPARISON OF GENERIC BENCHMARK DOSE ESTIMATES WITH NO OBSERVED ADVERSE EFFECT LEVELS

    EPA Science Inventory

    The benchmark dose (BMD) has been proposed as an alternative basis for reference value calculations. A large database of 246 developmental toxicity experiments was compiled for use in comparing alternative approaches to developmental toxicity risk assessment. BMD estimates derived w...

  3. 42 CFR 440.330 - Benchmark health benefits coverage.

    Code of Federal Regulations, 2011 CFR

    2011-10-01

    ... 42 Public Health 4 2011-10-01 2011-10-01 false Benchmark health benefits coverage. 440.330 Section 440.330 Public Health CENTERS FOR MEDICARE & MEDICAID SERVICES, DEPARTMENT OF HEALTH AND HUMAN... Benchmark-Equivalent Coverage § 440.330 Benchmark health benefits coverage. Benchmark coverage is...

  4. 42 CFR 440.330 - Benchmark health benefits coverage.

    Code of Federal Regulations, 2010 CFR

    2010-10-01

    ... 42 Public Health 4 2010-10-01 2010-10-01 false Benchmark health benefits coverage. 440.330 Section 440.330 Public Health CENTERS FOR MEDICARE & MEDICAID SERVICES, DEPARTMENT OF HEALTH AND HUMAN... Benchmark-Equivalent Coverage § 440.330 Benchmark health benefits coverage. Benchmark coverage is...

  5. 42 CFR 440.330 - Benchmark health benefits coverage.

    Code of Federal Regulations, 2012 CFR

    2012-10-01

    ... 42 Public Health 4 2012-10-01 2012-10-01 false Benchmark health benefits coverage. 440.330 Section 440.330 Public Health CENTERS FOR MEDICARE & MEDICAID SERVICES, DEPARTMENT OF HEALTH AND HUMAN... Benchmark-Equivalent Coverage § 440.330 Benchmark health benefits coverage. Benchmark coverage is...

  6. Computer acceptance of older adults.

    PubMed

    Nägle, Sibylle; Schmidt, Ludger

    2012-01-01

    Even though computers play a massive role in the everyday life of modern societies, older adults, and especially older women, are less likely to use a computer, and they perform fewer activities on it than younger adults. To get a better understanding of the factors affecting older adults' intention towards and usage of computers, the Unified Theory of Acceptance and Usage of Technology (UTAUT) was applied as part of a more extensive study with 52 users and non-users of computers, ranging in age from 50 to 90 years. The model covers various aspects of computer usage in old age via four key constructs, namely performance expectancy, effort expectancy, social influences, and facilitating conditions, as well as the variables gender, age, experience, and voluntariness of use. Interestingly, next to performance expectancy, facilitating conditions showed the strongest correlation with use as well as with intention. Effort expectancy showed no significant correlation with the intention of older adults to use a computer. PMID:22317258

  7. DICE: Database for the International Criticality Safety Benchmark Evaluation Program Handbook

    SciTech Connect

    Nouri, Ali; Nagel, Pierre; Briggs, J. Blair; Ivanova, Tatiana

    2003-09-15

    The 2002 edition of the 'International Handbook of Evaluated Criticality Safety Benchmark Experiments' (ICSBEP Handbook) spans more than 26 000 pages and contains 330 evaluations with benchmark specifications for 2881 critical or near-critical configurations. With such a large content, it became evident that the users needed more than a broad and qualitative classification of experiments to make efficient use of the ICSBEP Handbook. This paper describes the features of the Database for the International Handbook of Evaluated Criticality Safety Benchmark Experiments (DICE). The DICE program contains a relational database loaded with selected information from each configuration and a user interface that enables one to query the database and to extract specific parameters. Summary descriptions of each experimental configuration can also be obtained. In addition, plotting capabilities provide the means of comparing neutron spectra and sensitivity coefficients for a set of configurations.

  8. Benchmarking ENDF/B-VII.1, JENDL-4.0 and JEFF-3.1.1 with MCNP6

    NASA Astrophysics Data System (ADS)

    van der Marck, Steven C.

    2012-12-01

    Recent releases of three major world nuclear reaction data libraries, ENDF/B-VII.1, JENDL-4.0, and JEFF-3.1.1, have been tested extensively using benchmark calculations. The calculations were performed with the latest release of the continuous energy Monte Carlo neutronics code MCNP, i.e. MCNP6. Three types of benchmarks were used, viz. criticality safety benchmarks, (fusion) shielding benchmarks, and reference systems for which the effective delayed neutron fraction is reported. For criticality safety, more than 2000 benchmarks from the International Handbook of Criticality Safety Benchmark Experiments were used. Benchmarks from all categories were used, ranging from low-enriched uranium, compound fuel, thermal spectrum ones (LEU-COMP-THERM), to mixed uranium-plutonium, metallic fuel, fast spectrum ones (MIX-MET-FAST). For fusion shielding many benchmarks were based on IAEA specifications for the Oktavian experiments (for Al, Co, Cr, Cu, LiF, Mn, Mo, Si, Ti, W, Zr), Fusion Neutronics Source in Japan (for Be, C, N, O, Fe, Pb), and Pulsed Sphere experiments at Lawrence Livermore National Laboratory (for 6Li, 7Li, Be, C, N, O, Mg, Al, Ti, Fe, Pb, D2O, H2O, concrete, polyethylene and teflon). The new functionality in MCNP6 to calculate the effective delayed neutron fraction was tested by comparison with more than thirty measurements in widely varying systems. Among these were measurements in the Tank Critical Assembly (TCA in Japan) and IPEN/MB-01 (Brazil), both with a thermal spectrum, two cores in Masurca (France) and three cores in the Fast Critical Assembly (FCA, Japan), all with fast spectra. The performance of the three libraries, in combination with MCNP6, is shown to be good. The results for the LEU-COMP-THERM category are on average very close to the benchmark value. Also for most other categories the results are satisfactory. Deviations from the benchmark values do occur in certain benchmark series, or in isolated cases within benchmark series.

  9. Benchmarking ENDF/B-VII.1, JENDL-4.0 and JEFF-3.1.1 with MCNP6

    SciTech Connect

    Marck, Steven C. van der

    2012-12-15

    Recent releases of three major world nuclear reaction data libraries, ENDF/B-VII.1, JENDL-4.0, and JEFF-3.1.1, have been tested extensively using benchmark calculations. The calculations were performed with the latest release of the continuous energy Monte Carlo neutronics code MCNP, i.e. MCNP6. Three types of benchmarks were used, viz. criticality safety benchmarks, (fusion) shielding benchmarks, and reference systems for which the effective delayed neutron fraction is reported. For criticality safety, more than 2000 benchmarks from the International Handbook of Criticality Safety Benchmark Experiments were used. Benchmarks from all categories were used, ranging from low-enriched uranium, compound fuel, thermal spectrum ones (LEU-COMP-THERM), to mixed uranium-plutonium, metallic fuel, fast spectrum ones (MIX-MET-FAST). For fusion shielding many benchmarks were based on IAEA specifications for the Oktavian experiments (for Al, Co, Cr, Cu, LiF, Mn, Mo, Si, Ti, W, Zr), Fusion Neutronics Source in Japan (for Be, C, N, O, Fe, Pb), and Pulsed Sphere experiments at Lawrence Livermore National Laboratory (for {sup 6}Li, {sup 7}Li, Be, C, N, O, Mg, Al, Ti, Fe, Pb, D2O, H2O, concrete, polyethylene and teflon). The new functionality in MCNP6 to calculate the effective delayed neutron fraction was tested by comparison with more than thirty measurements in widely varying systems. Among these were measurements in the Tank Critical Assembly (TCA in Japan) and IPEN/MB-01 (Brazil), both with a thermal spectrum, two cores in Masurca (France) and three cores in the Fast Critical Assembly (FCA, Japan), all with fast spectra. The performance of the three libraries, in combination with MCNP6, is shown to be good. The results for the LEU-COMP-THERM category are on average very close to the benchmark value. Also for most other categories the results are satisfactory. Deviations from the benchmark values do occur in certain benchmark series, or in isolated cases within benchmark series.

  10. MCNP calculations for Russian criticality-safety benchmarks

    SciTech Connect

    Capell, B.M.; Mosteller, R.D.; Pelowitz, D.B.

    1996-12-31

    The current edition of the International Handbook of Evaluated Criticality Safety Benchmark Experiments contains evaluations of 20 critical experiments performed and evaluated by the Institute for Experimental Physics of the Russian Federal Nuclear Center (VNIIEF) at Arzamas-16 and 16 critical experiments performed and evaluated by the Institute for Technical Physics of the Russian Federal Nuclear Center (VNIITF) at Chelyabinsk-70. These fast-spectrum experiments are of particular interest for data testing of ENDF/B-VI because they contain uranium metal systems of intermediate enrichment as well as uranium and plutonium metal systems with reflectors such as graphite, stainless steel, polyethylene, beryllium, and beryllium oxide. This paper presents the first published results for such systems using cross-section libraries based on ENDF/B-VI.

  11. Experts discuss how benchmarking improves the healthcare industry. Roundtable discussion.

    PubMed

    Capozzalo, G L; Hlywak, J W; Kenny, B; Krivenko, C A

    1994-09-01

    Healthcare Financial Management engaged four benchmarking experts in a discussion about benchmarking and its role in the healthcare industry. The experts agree that benchmarking by itself does not create change unless it is part of a larger continuous quality improvement program; that benchmarking works best when senior management supports it enthusiastically and when the "appropriate" people are involved; and that benchmarking, when implemented correctly, is one of the best tools available to help healthcare organizations improve their internal processes. PMID:10146069

  12. Benchmark 2 - Springback of a draw / re-draw panel: Part C: Benchmark analysis

    NASA Astrophysics Data System (ADS)

    Carsley, John E.; Xia, Cedric; Yang, Lianxiang; Stoughton, Thomas B.; Xu, Siguang; Hartfield-Wünsch, Susan E.; Li, Jingjing

    2013-12-01

    Benchmark analysis is summarized for DP600 and AA 5182-O. Nine simulation results submitted for this benchmark study are compared to the physical measurement results. The details on the codes, friction parameters, mesh technology, CPU, and material models are also summarized at the end of this report with the participant information details.

  13. Benchmarking local healthcare-associated infections: available benchmarks and interpretation challenges.

    PubMed

    El-Saed, Aiman; Balkhy, Hanan H; Weber, David J

    2013-10-01

    Growing numbers of healthcare facilities are routinely collecting standardized data on healthcare-associated infection (HAI), which can be used not only to track internal performance but also to compare local data to national and international benchmarks. Benchmarking overall (crude) HAI surveillance metrics without accounting for or adjusting for potential confounders can result in misleading conclusions. Methods commonly used to provide risk-adjusted metrics include multivariate logistic regression analysis, stratification, indirect standardization, and restrictions. The characteristics of recognized benchmarks worldwide, including their advantages and limitations, are described. The choice of the right benchmark for the data from the Gulf Cooperation Council (GCC) states is challenging. The chosen benchmark should have similar data collection and presentation methods. Additionally, differences in surveillance environments including regulations should be taken into consideration when considering such a benchmark. The GCC center for infection control took some steps to unify HAI surveillance systems in the region. GCC hospitals still need to overcome legislative and logistic difficulties in sharing data to create their own benchmark. The availability of a regional GCC benchmark may better enable health care workers and researchers to obtain more accurate and realistic comparisons. PMID:23999329
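
    Of the adjustment methods listed, indirect standardization is often summarized as a standardized infection ratio (SIR), observed over expected counts; a minimal sketch with invented counts:

    ```python
    # Hypothetical local counts and benchmark-model expected counts
    observed = {"CLABSI": 12, "CAUTI": 7}
    expected = {"CLABSI": 9.6, "CAUTI": 8.3}

    for infection, obs in observed.items():
        sir = obs / expected[infection]
        flag = "above" if sir > 1.0 else "at or below"
        print(f"{infection}: SIR = {sir:.2f} ({flag} benchmark)")
    ```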

  14. Test Nationally, Benchmark Locally: Using Local DIBELS Benchmarks to Predict Performance on the Pssa

    ERIC Educational Resources Information Center

    Ferchalk, Matthew R.

    2013-01-01

    The Dynamic Indicators of Basic Early Literacy Skills (DIBELS) benchmarks are frequently used to make important decisions regarding student performance. More information, however, is needed to understand whether the nationally-derived benchmarks created by the DIBELS system provide the most accurate criterion for evaluating reading proficiency. The…

  15. Preliminary Benchmark Evaluation of Japan’s High Temperature Engineering Test Reactor

    SciTech Connect

    John Darrell Bess

    2009-05-01

    A benchmark model of the initial fully-loaded start-up core critical of Japan’s High Temperature Engineering Test Reactor (HTTR) was developed to provide data in support of ongoing validation efforts of the Very High Temperature Reactor Program using publicly available resources. The HTTR is a 30 MWt test reactor utilizing graphite moderation, helium coolant, and prismatic TRISO fuel. The benchmark was modeled using MCNP5 with various neutron cross-section libraries. An uncertainty evaluation was performed by perturbing the benchmark model and comparing the resultant eigenvalues. The calculated eigenvalues are approximately 2-3% greater than expected with an uncertainty of ±0.70%. The primary sources of uncertainty are the impurities in the core and reflector graphite. The release of additional HTTR data could effectively reduce the benchmark model uncertainties and bias. Sensitivity of the results to the graphite impurity content might imply that further evaluation of the graphite content could significantly improve calculated results. Proper characterization of graphite for future Next Generation Nuclear Power reactor designs will improve computational modeling capabilities. Current benchmarking activities include evaluation of the annular HTTR cores and assessment of the remaining start-up core physics experiments, including reactivity effects, reactivity coefficient, and reaction-rate distribution measurements. Long term benchmarking goals might include analyses of the hot zero-power critical, rise-to-power tests, and other irradiation, safety, and technical evaluations performed with the HTTR.

  16. MCNP (trademark) ENDF/B-VI iron benchmark calculations

    NASA Astrophysics Data System (ADS)

    Court, J. D.; Hendricks, J. S.

    Four iron shielding benchmarks have been calculated, we believe for the first time, with MCNP4A and its new ENDF/B-VI library. These calculations are part of the Hiroshima/Nagasaki dose re-evaluation for the National Academy of Sciences and the Defense Nuclear Agency. We believe these calculations are significant because they validate MCNP and the new ENDF/B-VI libraries. These calculations are compared to ENDF/B-V, experiment, and in some cases the recommended MCNP data library (a T-2 evaluation) and ENDF/B-IV.

  17. Environmental radiation: risk benchmarks or benchmarking risk assessment.

    PubMed

    Bates, Matthew E; Valverde, L James; Vogel, John T; Linkov, Igor

    2011-07-01

    In the wake of the compound March 2011 nuclear disaster at the Fukushima I nuclear power plant in Japan, international public dialogue has repeatedly turned to questions of the accuracy of current risk assessment processes to assess nuclear risks and the adequacy of existing regulatory risk thresholds to protect us from nuclear harm. We confront these issues with an emphasis on learning from the incident in Japan for future US policy discussions. Without delving into a broader philosophical discussion of the general social acceptance of the risk, the relative adequacy of existing US Nuclear Regulatory Commission (NRC) risk thresholds is assessed in comparison with the risk thresholds of federal agencies not currently under heightened public scrutiny. Existing NRC thresholds are found to be among the most conservative in the comparison, suggesting that the agency's current regulatory framework is consistent with larger societal ideals. In turning to risk assessment methodologies, the disaster in Japan does indicate room for growth. Emerging lessons seem to indicate an opportunity to enhance resilience through systemic levels of risk aggregation. Specifically, we believe bringing systemic reasoning to the risk management process requires a framework that (i) is able to represent risk-based knowledge and information about a panoply of threats; (ii) provides a systemic understanding (and representation) of the natural and built environments of interest and their dependencies; and (iii) allows for the rational and coherent valuation of a range of outcome variables of interest, both tangible and intangible. Rather than revisiting the thresholds themselves, we see the goal of future nuclear risk management in adopting and implementing risk assessment techniques that systemically evaluate large-scale socio-technical systems with a view toward enhancing resilience and minimizing the potential for surprise. PMID:21608107

  18. A Benchmark for Cloud Tracking Wind Measurements

    NASA Astrophysics Data System (ADS)

    Sayanagi, K. M.; Mitchell, J.; Ingersoll, A. P.; Ewald, S. P.; Marcus, P. S.; de Pater, I.; Wong, M. H.; Choi, D. S.; Sussman, M.; Ogohara, K.; Imamura, T.; Kouyama, T.; Takagi, M.; Satoh, N.; Del Genio, A. D.; Barbara, J.; Sanchez-Lavega, A.; Hueso, R.; García-Melendo, E.; Simon-Miller, A. A.

    2010-12-01

    Cloud tracking has been the primary method of measuring wind speeds in planetary atmospheres through Earth- and space- based remote sensing. Latest developments of automated feature tracking software are able to harvest thousands of wind vectors out of a sequence of high-resolution images acquired with an appropriate temporal separation. However, unlike satellite-based cloud-tracking measurements of Earth, these planetary measurements cannot easily be validated against in-situ data, which makes the interpretation difficult when different cloud-tracking schemes do not agree on their results. To address the issue of data validation, we run multiple automated cloud-tracking software independently developed at multiple institutions on synthetic wind data generated using a General Circulation Model. Our simulations calculate the advection of tracer distributions to represent cloud motions as done by Sayanagi and Showman (2007, Icarus 187, p520-539). The motions of tracers are measured using cloud-tracking software to derive wind vector fields, which will be compared against the model "truth." We test the performance of cloud-tracking software for different wind scenarios. Our first test wind field contains a simple zonal jet. The second test scenario is a large vortex like Jupiter’s Great Red Spot. The third test case has waves propagating alongside a zonal jet. We compare the results returned from different cloud-tracking schemes and discuss what approaches work better at measuring winds. In addition to verifying the wind vector field measurements, we also address the accuracy and validity of eddy momentum flux measurements by tracking clouds. The difficulties of such measurements are discussed by Salyk et al. (2006, Icarus 185, p430-442), and we re-examine the issue using our synthetic wind data. From our experiments, we aim to establish a standard benchmark of cloud tracking measurements for planetary mission applications.

  19. Action-Oriented Benchmarking: Using the CEUS Database to Benchmark Commercial Buildings in California

    SciTech Connect

    Mathew, Paul; Mills, Evan; Bourassa, Norman; Brook, Martha

    2008-02-01

    The 2006 Commercial End Use Survey (CEUS) database developed by the California Energy Commission is a far richer source of energy end-use data for non-residential buildings than has previously been available and opens the possibility of creating new and more powerful energy benchmarking processes and tools. In this article--Part 2 of a two-part series--we describe the methodology and selected results from an action-oriented benchmarking approach using the new CEUS database. This approach goes beyond whole-building energy benchmarking to more advanced end-use and component-level benchmarking that enables users to identify and prioritize specific energy efficiency opportunities - an improvement on benchmarking tools typically in use today.

  20. Toxicological benchmarks for wildlife: 1994 Revision

    SciTech Connect

    Opresko, D.M.; Sample, B.E.; Suter, G.W. II

    1994-09-01

    The process by which ecological risks of environmental contaminants are evaluated is two-tiered. The first tier is a screening assessment where concentrations of contaminants in the environment are compared to toxicological benchmarks which represent concentrations of chemicals in environmental media (water, sediment, soil, food, etc.) that are presumed to be nonhazardous to the surrounding biota. The second tier is a baseline ecological risk assessment where toxicological benchmarks are one of several lines of evidence used to support or refute the presence of ecological effects. The report presents toxicological benchmarks for assessment of effects of 76 chemicals on 8 representative mammalian wildlife species and 31 chemicals on 9 avian wildlife species. The chemicals are some of those that occur at United States Department of Energy waste sites; the wildlife species were chosen because they are widely distributed and provide a representative range of body sizes and diets. Further descriptions of the chosen wildlife species and chemicals are provided in the report. The benchmarks presented in this report represent values believed to be nonhazardous for the listed wildlife species. These benchmarks consider only contaminant exposure through oral ingestion of contaminated media; exposure through inhalation or direct dermal contact is not considered in this report.
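
    The first-tier screening comparison is essentially a hazard quotient (measured concentration over benchmark) per chemical; a sketch with invented values, not those in the report:

    ```python
    # Hypothetical benchmark and measured concentrations (mg/kg in food)
    benchmarks = {"cadmium": 0.25, "mercury": 0.05, "lead": 1.00}
    measured   = {"cadmium": 0.31, "mercury": 0.02, "lead": 1.40}

    for chem, conc in measured.items():
        hq = conc / benchmarks[chem]   # hazard quotient
        if hq > 1.0:
            print(f"{chem}: HQ = {hq:.2f} -> retain for baseline risk assessment")
    ```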

  1. Benchmarking computational fluid dynamics models for lava flow simulation

    NASA Astrophysics Data System (ADS)

    Dietterich, Hannah; Lev, Einat; Chen, Jiangzhi

    2016-04-01

    Numerical simulations of lava flow emplacement are valuable for assessing lava flow hazards, forecasting active flows, interpreting past eruptions, and understanding the controls on lava flow behavior. Existing lava flow models vary in simplifying assumptions, physics, dimensionality, and the degree to which they have been validated against analytical solutions, experiments, and natural observations. In order to assess existing models and guide the development of new codes, we conduct a benchmarking study of computational fluid dynamics models for lava flow emplacement, including VolcFlow, OpenFOAM, FLOW-3D, and COMSOL. Using the new benchmark scenarios defined in Cordonnier et al. (Geol Soc SP, 2015) as a guide, we model viscous, cooling, and solidifying flows over horizontal and sloping surfaces, topographic obstacles, and digital elevation models of natural topography. We compare model results to analytical theory, analogue and molten basalt experiments, and measurements from natural lava flows. Overall, the models accurately simulate viscous flow with some variability in flow thickness where flows intersect obstacles. OpenFOAM, COMSOL, and FLOW-3D can each reproduce experimental measurements of cooling viscous flows, and FLOW-3D simulations with temperature-dependent rheology match results from molten basalt experiments. We can apply these models to reconstruct past lava flows in Hawai'i and Saudi Arabia using parameters assembled from morphology, textural analysis, and eruption observations as natural test cases. Our study highlights the strengths and weaknesses of each code, including accuracy and computational costs, and provides insights regarding code selection.

  2. Benchmarking: with whom? How often?

    PubMed

    Pressler, Jana L; Kenner, Carole A

    2013-01-01

    Many new nursing leaders assuming deanships or assistant or interim deanships have limited education, experience, or background to prepare them for the job. To assist new deans and those aspiring to be deans, the authors of this department offer survival tips based on their personal experiences and insights. They address common issues, challenges, and opportunities that face academic executive teams, such as negotiating an executive contract, obtaining faculty lines, building effective work teams, managing difficult employees, and creating nimble organizational structure to respond to changing consumer, healthcare delivery, and community needs. The authors welcome counterpoint discussions with readers. PMID:23608898

  3. Two-dimensional benchmark calculations for PNL-30 through PNL-35

    SciTech Connect

    Mosteller, R.D.

    1997-09-01

    Interest in critical experiments with lattices of mixed-oxide (MOX) fuel pins has been revived by the possibility that light water reactors will be used for disposition of weapons-grade plutonium. A series of six experiments with MOX lattices, designated PNL-30 through PNL-35, was performed at Pacific Northwest Laboratories in 1975 and 1976, and a set of benchmark specifications for these experiments subsequently was adopted by the Cross Section Evaluation Working Group (CSEWG). Although there appear to be some problems with these experiments, they remain the only CSEWG benchmarks for MOX lattices. The number of fuel pins in these experiments is relatively low, corresponding to fewer than 4 typical pressurized-water-reactor fuel assemblies. Accordingly, they are more appropriate as benchmarks for lattice-physics codes than for reactor-core simulator codes. Unfortunately, the CSEWG specifications retain the full three-dimensional (3D) detail of the experiments, while lattice-physics codes almost universally are limited to two dimensions (2D). This paper proposes an extension of the benchmark specifications to include a 2D model, and it justifies that extension by comparing results from the MCNP Monte Carlo code for the 2D and 3D specifications.

  4. Benchmark field study of deep neutron penetration

    NASA Astrophysics Data System (ADS)

    Morgan, J. F.; Sale, K.; Gold, R.; Roberts, J. H.; Preston, C. C.

    1991-06-01

    A unique benchmark neutron field has been established at the Lawrence Livermore National Laboratory (LLNL) to study deep penetration neutron transport. At LLNL, a tandem accelerator is used to generate a monoenergetic neutron source that permits investigation of deep neutron penetration under conditions that are virtually ideal to model, namely the transport of mono-energetic neutrons through a single material in a simple geometry. General features of the Lawrence Tandem (LATAN) benchmark field are described with emphasis on neutron source characteristics and room return background. The single material chosen for the first benchmark, LATAN-1, is a steel representative of Light Water Reactor (LWR) Pressure Vessels (PV). Also included is a brief description of the Little Boy replica, a critical reactor assembly designed to mimic the radiation doses from the atomic bomb dropped on Hiroshima, and its use in neutron spectrometry.

  5. Benchmark field study of deep neutron penetration

    SciTech Connect

    Morgan, J.F.; Sale, K. ); Gold, R.; Roberts, J.H.; Preston, C.C. )

    1991-06-10

    A unique benchmark neutron field has been established at the Lawrence Livermore National Laboratory (LLNL) to study deep penetration neutron transport. At LLNL, a tandem accelerator is used to generate a monoenergetic neutron source that permits investigation of deep neutron penetration under conditions that are virtually ideal to model, namely the transport of mono-energetic neutrons through a single material in a simple geometry. General features of the Lawrence Tandem (LATAN) benchmark field are described with emphasis on neutron source characteristics and room return background. The single material chosen for the first benchmark, LATAN-1, is a steel representative of Light Water Reactor (LWR) Pressure Vessels (PV). Also included is a brief description of the Little Boy replica, a critical reactor assembly designed to mimic the radiation doses from the atomic bomb dropped on Hiroshima, and its use in neutron spectrometry. 18 refs.

  6. NAS Parallel Benchmarks, Multi-Zone Versions

    NASA Technical Reports Server (NTRS)

    vanderWijngaart, Rob F.; Jin, Haoqiang

    2003-01-01

    We describe an extension of the NAS Parallel Benchmarks (NPB) suite that involves solving the application benchmarks LU, BT and SP on collections of loosely coupled discretization meshes. The solutions on the meshes are updated independently, but after each time step they exchange boundary value information. This strategy, which is common among structured-mesh production flow solver codes in use at NASA Ames and elsewhere, provides relatively easily exploitable coarse-grain parallelism between meshes. Since the individual application benchmarks also allow fine-grain parallelism themselves, this NPB extension, named NPB Multi-Zone (NPB-MZ), is a good candidate for testing hybrid and multi-level parallelization tools and strategies.
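
    The multi-zone cycle, advance each zone independently and then exchange boundary values, can be illustrated with a toy 1-D diffusion stand-in (not the actual LU/BT/SP solvers):

    ```python
    import numpy as np

    class Zone:
        """Toy 1-D zone with one ghost cell at each end."""
        def __init__(self, n):
            self.u = np.zeros(n + 2)

        def step(self, dt=0.1):
            u = self.u
            u[1:-1] += dt * (u[:-2] - 2.0 * u[1:-1] + u[2:])  # explicit diffusion

    zones = [Zone(16) for _ in range(3)]
    zones[0].u[8] = 1.0                      # initial disturbance in the first zone
    for _ in range(10):
        for z in zones:                      # zones advance independently
            z.step()
        for a, b in zip(zones, zones[1:]):   # then exchange boundary values
            a.u[-1] = b.u[1]
            b.u[0] = a.u[-2]
    ```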

  7. Analysis of ANS LWR physics benchmark problems.

    SciTech Connect

    Taiwo, T. A.

    1998-07-29

    Various Monte Carlo and deterministic solutions to the three PWR Lattice Benchmark Problems recently defined by the ANS Ad Hoc Committee on Reactor Physics Benchmarks are presented. These solutions were obtained using the VIM continuous-energy Monte Carlo code and the DIF3D/WIMS-D4M code package implemented at the Argonne National Laboratory. The code results for the K{sub eff} and relative pin power distribution are compared to measured values. Additionally, code results for the three benchmark-prescribed infinite lattice configurations are also intercompared. The results demonstrate that the codes produce very good estimates of both the K{sub eff} and power distribution for the critical core and the lattice parameters of the infinite lattice configuration.

  8. Energy benchmarking of South Australian WWTPs.

    PubMed

    Krampe, J

    2013-01-01

    Optimising the energy consumption and energy generation of wastewater treatment plants (WWTPs) is a topic with increasing importance for water utilities in times of rising energy costs and pressures to reduce greenhouse gas (GHG) emissions. Assessing the energy efficiency and energy optimisation of a WWTP are difficult tasks as most plants vary greatly in size, process layout and other influencing factors. To overcome these limits it is necessary to compare energy efficiency with a statistically relevant base to identify shortfalls and optimisation potential. Such energy benchmarks have been successfully developed and used in central Europe over the last two decades. This paper demonstrates how the latest available energy benchmarks from Germany have been applied to 24 WWTPs in South Australia. It shows how energy benchmarking can be used to identify shortfalls in current performance, prioritise detailed energy assessments and help inform decisions on capital investment. PMID:23656950
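
    Benchmarks of this kind are usually expressed as specific energy consumption per population equivalent (PE) and year; a sketch with invented plant data and guide value (not the German benchmark figures):

    ```python
    # Hypothetical plants: (name, annual consumption in kWh, population equivalents)
    plants = [("Plant A", 2.1e6, 60_000), ("Plant B", 9.5e5, 18_000)]
    GUIDE_KWH_PER_PE = 35.0   # hypothetical guide value, kWh per PE per year

    for name, kwh, pe in plants:
        specific = kwh / pe
        flag = "review" if specific > GUIDE_KWH_PER_PE else "ok"
        print(f"{name}: {specific:.1f} kWh/PE/a [{flag}]")
    ```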

  9. New Test Set for Video Quality Benchmarking

    NASA Astrophysics Data System (ADS)

    Raventos, Joaquin

    A new test set design and benchmarking approach (US Patent pending) allows a "standard observer" to assess the end-to-end image quality characteristics of video imaging systems operating in day time or low-light conditions. It uses randomized targets based on extensive application of Photometry, Geometrical Optics, and Digital Media. The benchmarking takes into account the target's contrast sensitivity, its color characteristics, and several aspects of human vision such as visual acuity and dynamic response. The standard observer is part of the "extended video imaging system" (EVIS). The new test set allows image quality benchmarking by a panel of standard observers at the same time. The new approach shows that an unbiased assessment can be guaranteed. Manufacturers, system integrators, and end users will assess end-to-end performance by simulating a choice of different colors, luminance levels, and dynamic conditions in the laboratory or in permanent video systems installations.

  10. Standardized benchmarking in the quest for orthologs.

    PubMed

    Altenhoff, Adrian M; Boeckmann, Brigitte; Capella-Gutierrez, Salvador; Dalquen, Daniel A; DeLuca, Todd; Forslund, Kristoffer; Huerta-Cepas, Jaime; Linard, Benjamin; Pereira, Cécile; Pryszcz, Leszek P; Schreiber, Fabian; da Silva, Alan Sousa; Szklarczyk, Damian; Train, Clément-Marie; Bork, Peer; Lecompte, Odile; von Mering, Christian; Xenarios, Ioannis; Sjölander, Kimmen; Jensen, Lars Juhl; Martin, Maria J; Muffato, Matthieu; Gabaldón, Toni; Lewis, Suzanna E; Thomas, Paul D; Sonnhammer, Erik; Dessimoz, Christophe

    2016-05-01

    Achieving high accuracy in orthology inference is essential for many comparative, evolutionary and functional genomic analyses, yet the true evolutionary history of genes is generally unknown and orthologs are used for very different applications across phyla, requiring different precision-recall trade-offs. As a result, it is difficult to assess the performance of orthology inference methods. Here, we present a community effort to establish standards and an automated web-based service to facilitate orthology benchmarking. Using this service, we characterize 15 well-established inference methods and resources on a battery of 20 different benchmarks. Standardized benchmarking provides a way for users to identify the most effective methods for the problem at hand, sets a minimum requirement for new tools and resources, and guides the development of more accurate orthology inference methods. PMID:27043882

  11. Experimental power density distribution benchmark in the TRIGA Mark II reactor

    SciTech Connect

    Snoj, L.; Stancar, Z.; Radulovic, V.; Podvratnik, M.; Zerovnik, G.; Trkov, A.; Barbot, L.; Domergue, C.; Destouches, C.

    2012-07-01

    In order to improve the power calibration process and to benchmark the existing computational model of the TRIGA Mark II reactor at the Jožef Stefan Institute (JSI), a bilateral project was started as part of the agreement between the French Commissariat à l'énergie atomique et aux énergies alternatives (CEA) and the Ministry of higher education, science and technology of Slovenia. One of the objectives of the project was to analyze and improve the power calibration process of the JSI TRIGA reactor (procedural improvement and uncertainty reduction) by using absolutely calibrated CEA fission chambers (FCs). This is one of the few available power density distribution benchmarks for testing not only the fission rate distribution but also the absolute values of the fission rates. Our preliminary calculations indicate that the total experimental uncertainty of the measured reaction rate is sufficiently low that the experiments could be considered as benchmark experiments. (authors)
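
    With absolutely calibrated fission chambers, the absolute core power follows from the measured fission rate and the recoverable energy per fission; schematically (all numbers hypothetical):

    ```python
    E_FISSION_J = 200e6 * 1.602e-19    # ~200 MeV recoverable energy per fission, in J

    # Hypothetical core-integrated fission rate inferred from the calibrated FCs
    total_fission_rate = 7.8e15        # fissions per second

    power_kw = total_fission_rate * E_FISSION_J / 1e3
    print(f"inferred core power: {power_kw:.0f} kW")
    ```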

  12. Object-Oriented Implementation of the NAS Parallel Benchmarks using Charm++

    NASA Technical Reports Server (NTRS)

    Krishnan, Sanjeev; Bhandarkar, Milind; Kale, Laxmikant V.

    1996-01-01

    This report describes experiences with implementing the NAS Computational Fluid Dynamics benchmarks using a parallel object-oriented language, Charm++. Our main objective in implementing the NAS CFD kernel benchmarks was to develop a code that could be used to easily experiment with different domain decomposition strategies and dynamic load balancing. We also wished to leverage the object-orientation provided by the Charm++ parallel object-oriented language, to develop reusable abstractions that would simplify the process of developing parallel applications. We first describe the Charm++ parallel programming model and the parallel object array abstraction, then go into detail about each of the Scalar Pentadiagonal (SP) and Lower/Upper Triangular (LU) benchmarks, along with performance results. Finally we conclude with an evaluation of the methodology used.

  13. Sonic boom acceptability studies

    NASA Astrophysics Data System (ADS)

    Shepherd, Kevin P.; Sullivan, Brenda M.; Leatherwood, Jack D.; McCurdy, David A.

    1992-04-01

    The determination of the magnitude of sonic boom exposure which would be acceptable to the general population requires, as a starting point, a method to assess and compare individual sonic booms. There is no consensus within the scientific and regulatory communities regarding an appropriate sonic boom assessment metric. Loudness, being a fundamental and well-understood attribute of human hearing, was chosen as a means of comparing sonic booms of differing shapes and amplitudes. The figure illustrates the basic steps which yield a calculated value of loudness. Based upon the aircraft configuration and its operating conditions, the sonic boom pressure signature which reaches the ground is calculated. This pressure-time history is transformed to the frequency domain and converted into a one-third octave band spectrum. The essence of the loudness method is to account for the frequency response and integration characteristics of the auditory system. The result of the calculation procedure is a numerical description (perceived level, dB) which represents the loudness of the sonic boom waveform.

  14. Sonic boom acceptability studies

    NASA Technical Reports Server (NTRS)

    Shepherd, Kevin P.; Sullivan, Brenda M.; Leatherwood, Jack D.; Mccurdy, David A.

    1992-01-01

    The determination of the magnitude of sonic boom exposure which would be acceptable to the general population requires, as a starting point, a method to assess and compare individual sonic booms. There is no consensus within the scientific and regulatory communities regarding an appropriate sonic boom assessment metric. Loudness, being a fundamental and well-understood attribute of human hearing, was chosen as a means of comparing sonic booms of differing shapes and amplitudes. The figure illustrates the basic steps which yield a calculated value of loudness. Based upon the aircraft configuration and its operating conditions, the sonic boom pressure signature which reaches the ground is calculated. This pressure-time history is transformed to the frequency domain and converted into a one-third octave band spectrum. The essence of the loudness method is to account for the frequency response and integration characteristics of the auditory system. The result of the calculation procedure is a numerical description (perceived level, dB) which represents the loudness of the sonic boom waveform.
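
    To make the first steps of this chain concrete, here is a minimal sketch (ours, not the authors' code) that carries an idealized N-wave from a pressure-time history to a one-third-octave band spectrum. The sampling rate, N-wave amplitude and duration, and band range are illustrative assumptions, and the final loudness-weighting stage is omitted.

    ```python
    # Hedged sketch: one-third-octave band spectrum of an idealized N-wave,
    # the intermediate step of the loudness calculation described above.
    # fs, the N-wave parameters, and the band range are assumptions.
    import numpy as np

    fs = 8000.0                               # sampling rate, Hz (assumed)
    t = np.arange(0.0, 0.5, 1.0 / fs)         # 0.5 s record
    dur, amp = 0.3, 50.0                      # duration (s) and overpressure (Pa), assumed
    p = np.where(t < dur, amp * (1.0 - 2.0 * t / dur), 0.0)   # idealized N-wave

    P = np.fft.rfft(p) / fs                   # approximate Fourier transform
    f = np.fft.rfftfreq(len(p), 1.0 / fs)
    E = np.abs(P) ** 2                        # energy spectral density (up to a constant)

    p_ref = 20e-6                             # reference pressure, Pa
    centers = 1000.0 * 2.0 ** (np.arange(-15, 5) / 3.0)      # ~31.5 Hz .. ~2.5 kHz
    for fc in centers:
        lo, hi = fc / 2 ** (1 / 6), fc * 2 ** (1 / 6)        # band edges
        band = E[(f >= lo) & (f < hi)].sum()
        level = 10.0 * np.log10(band / p_ref**2 + 1e-300)    # relative band level, dB
        print(f"{fc:7.1f} Hz : {level:6.1f} dB")
    ```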

  15. Benchmarks for the point kinetics equations

    SciTech Connect

    Ganapol, B.; Picca, P.; Previti, A.; Mostacci, D.

    2013-07-01

    A new numerical algorithm is presented for the solution of the point kinetics equations (PKEs), whose accurate solution has been sought for over 60 years. The method couples the simplest of finite difference methods, backward Euler, with Richardson extrapolation, also called acceleration. From this coupling, a series of benchmarks has emerged. These include cases from the literature as well as several new ones. The novelty of this presentation lies in the breadth of reactivity insertions considered, covering both prescribed and feedback reactivities, and the extreme 8- to 9-digit accuracy achievable. The benchmarks presented are to provide guidance to those who wish to develop further numerical improvements. (authors)
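
    The coupling described is simple to prototype. The sketch below is a minimal illustration, not the authors' benchmark-grade algorithm: backward Euler applied to the one-delayed-group PKEs, with a single Richardson extrapolation that cancels the method's leading O(h) error. All kinetics parameters and the step reactivity are assumed values.

    ```python
    # Minimal sketch: backward Euler + one Richardson extrapolation for the
    # one-delayed-group point kinetics equations. Parameters are illustrative.
    import numpy as np

    beta, lam, Lam = 0.0065, 0.08, 1e-4   # delayed fraction, decay const (1/s), generation time (s)
    rho = 0.003                           # step reactivity insertion (assumed)

    A = np.array([[(rho - beta) / Lam, lam],
                  [beta / Lam,        -lam]])   # d/dt [n, c] = A [n, c]

    def backward_euler(x0, h, steps):
        """Advance x' = A x with the implicit Euler scheme: (I - h A) x_new = x_old."""
        M = np.linalg.inv(np.eye(2) - h * A)
        x = x0.copy()
        for _ in range(steps):
            x = M @ x
        return x

    x0 = np.array([1.0, beta / (lam * Lam)])   # equilibrium precursor level for n = 1
    T, h = 0.1, 1e-4
    coarse = backward_euler(x0, h, int(T / h))
    fine = backward_euler(x0, h / 2, int(T / (h / 2)))
    richardson = 2.0 * fine - coarse           # cancels the O(h) error of backward Euler
    print("n(T) coarse, fine, extrapolated:", coarse[0], fine[0], richardson[0])
    ```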

  16. Benchmarking: implementing the process in practice.

    PubMed

    Stark, Sheila; MacHale, Anita; Lennon, Eileen; Shaw, Lynne

    Government guidance and policy promote the use of benchmarks as standards against which practice and care can be measured. This provides the motivation for practitioners to make changes to improve patient care. By adopting a systematic approach, practitioners can implement changes in practice quickly. The process requires motivation and communication between professionals of all disciplines. It provides a forum for sharing good practice and developing a support network. In this article the authors outline the initial steps taken by three primary care groups (PCGs) in implementing the benchmarking process as they move towards primary care trust status. PMID:12212335

  17. Los Alamos National Laboratory computer benchmarking 1982

    SciTech Connect

    Martin, J.L.

    1983-06-01

    Evaluating the performance of computing machinery is a continual effort of the Computer Research and Applications Group of the Los Alamos National Laboratory. This report summarizes the results of the group's benchmarking activities performed between October 1981 and September 1982, presenting compilation and execution times as well as megaflop rates for a set of benchmark codes. Tests were performed on the following computers: Cray Research, Inc. (CRI) Cray-1S; Control Data Corporation (CDC) 7600, 6600, Cyber 73, Cyber 825, Cyber 835, Cyber 855, and Cyber 205; Digital Equipment Corporation (DEC) VAX 11/780 and VAX 11/782; and Apollo Computer, Inc., Apollo.

  18. Toxicological benchmarks for wildlife: 1996 Revision

    SciTech Connect

    Sample, B.E.; Opresko, D.M.; Suter, G.W., II

    1996-06-01

    The purpose of this report is to present toxicological benchmarks for assessment of effects of certain chemicals on mammalian and avian wildlife species. Publication of this document meets a milestone for the Environmental Restoration (ER) Risk Assessment Program. This document provides the ER Program with toxicological benchmarks that may be used as comparative tools in screening assessments as well as lines of evidence to support or refute the presence of ecological effects in ecological risk assessments. The chemicals considered in this report are some that occur at US DOE waste sites, and the wildlife species evaluated herein were chosen because they represent a range of body sizes and diets.

  19. Benchmark 4 - Wrinkling during cup drawing

    NASA Astrophysics Data System (ADS)

    Dick, Robert; Cardoso, Rui; Paulino, Mariana; Yoon, Jeong Whan

    2013-12-01

    Benchmark-4 is designed to predict wrinkling during cup drawing. Two different punch geometries have been selected in order to investigate changes in wrinkling amplitude and wavelength. To study the effect of material on wrinkling, two distinct materials, AA 5042 aluminum and AKDQ steel, are also considered in the benchmark. The problem description, material properties, and simulation reports with experimental data are summarized. At the request of the author and the Proceedings Editor, a corrected and updated version of this paper was published on January 2, 2014. The Corrigendum attached to the updated article PDF contains a list of the changes made to the original published version.

  20. Experience With the SCALE Criticality Safety Cross Section Libraries

    SciTech Connect

    Bowman, S.M.

    2000-08-21

    This report provides detailed information on the SCALE criticality safety cross-section libraries. Areas covered include the origins of the libraries, the data on which they are based, how they were generated, past experience and validations, and performance comparisons with measured critical experiments and numerical benchmarks. The performance of the SCALE criticality safety cross-section libraries on various types of fissile systems is examined in detail. Most of the performance areas are demonstrated by examining the performance of the libraries vs critical experiments to show general trends and weaknesses. In areas where directly applicable critical experiments do not exist, performance is examined based on the general knowledge of the strengths and weaknesses of the cross sections. In this case, the experience in the use of the cross sections and comparisons with the results of other libraries on the same systems are relied on for establishing acceptability of application of a particular SCALE library to a particular fissile system. This report should aid in establishing when a SCALE cross-section library would be expected to perform acceptably and where there are known or suspected deficiencies that would cause the calculations to be less reliable. To determine the acceptability of a library for a particular application, the calculational bias of the library should be established by directly applicable critical experiments.

  1. Death Acceptance through Ritual

    ERIC Educational Resources Information Center

    Reeves, Nancy C.

    2011-01-01

    This article summarizes the author's original research, which sought to discover the elements necessary for using death-related ritual as a psychotherapeutic technique for grieving people who experience their grief as "stuck," "unending," "maladaptive," and so on. A "death-related ritual" is defined as a ceremony, directly involving at least 1…

  2. Conditions for the acceptance of deontic conditionals.

    PubMed

    Over, D E; Manktelow, K I; Hadjichristidis, C

    2004-06-01

    Recent psychological research has investigated how people assess the probability of an indicative conditional. Most people give the conditional probability of q given p as the probability of if p then q. Asking about the probability of an indicative conditional, one is in effect asking about its acceptability. But on what basis are deontic conditionals judged to be acceptable or unacceptable? Using a decision theoretic analysis, we argue that a deontic conditional, of the form if p then must q or if p then may q, will be judged acceptable to the extent that the p & q possibility is preferred to the p & not-q possibility. Two experiments are reported in which this prediction was upheld. There was also evidence that the pragmatic suitability of permission rules is partly determined by evaluations of the not-p & q possibility. Implications of these results for theories of deontic reasoning are discussed. PMID:15285599

  3. Criticality benchmark guide for light-water-reactor fuel in transportation and storage packages

    SciTech Connect

    Lichtenwalter, J.J.; Bowman, S.M.; DeHart, M.D.; Hopper, C.M.

    1997-03-01

    This report is designed as a guide for performing criticality benchmark calculations for light-water-reactor (LWR) fuel applications. The guide provides documentation of 180 criticality experiments with geometries, materials, and neutron interaction characteristics representative of transportation packages containing LWR fuel or uranium oxide pellets or powder. These experiments should benefit the U.S. Nuclear Regulatory Commission (NRC) staff and licensees in validation of computational methods used in LWR fuel storage and transportation concerns. The experiments are classified by key parameters such as enrichment, water/fuel volume, hydrogen-to-fissile ratio (H/X), and lattice pitch. Groups of experiments with common features such as separator plates, shielding walls, and soluble boron are also identified. In addition, a sample validation using these experiments and a statistical analysis of the results are provided. Recommendations for selecting suitable experiments and determination of calculational bias and uncertainty are presented as part of this benchmark guide.
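
    As a toy illustration of the bias determination recommended above (the report's statistical treatment is more elaborate), the following sketch computes a calculational bias and 1-sigma uncertainty from calculated k-eff values for critical benchmarks; the k-eff values are invented placeholders, not data from the guide.

    ```python
    # Hedged sketch of bias/uncertainty determination, assuming every
    # benchmark configuration is exactly critical (expected k-eff = 1).
    import statistics

    k_calc = [0.9962, 0.9988, 1.0004, 0.9971, 0.9950, 0.9995]  # hypothetical results

    bias = statistics.mean(k_calc) - 1.0   # mean deviation from criticality
    sigma = statistics.stdev(k_calc)       # spread about the mean
    print(f"bias = {bias:+.4f}, uncertainty (1-sigma) = {sigma:.4f}")
    # An upper subcritical limit might then be set from 1.0 + bias minus
    # a multiple of sigma and an administrative margin.
    ```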

  4. Benchmark Evaluation of the Neutron Radiography (NRAD) Reactor Upgraded LEU-Fueled Core

    SciTech Connect

    John D. Bess

    2001-09-01

    Benchmark models were developed to evaluate the cold-critical start-up measurements performed during the fresh core reload of the Neutron Radiography (NRAD) reactor with Low Enriched Uranium (LEU) fuel. The final upgraded core configuration with 64 fuel elements has been completed. Evaluated benchmark measurement data include criticality, control-rod worth measurements, shutdown margin, and excess reactivity. Dominant uncertainties in keff include the manganese content and impurities contained within the stainless steel cladding of the fuel and the 236U and erbium poison content in the fuel matrix. Calculations with MCNP5 and ENDF/B-VII.0 nuclear data are approximately 1.4% greater than the benchmark model eigenvalue, supporting contemporary research regarding errors in the cross section data necessary to simulate TRIGA-type reactors. Uncertainties in reactivity effects measurements are estimated to be ~10%, with calculations in agreement with benchmark experiment values within 2σ. The completed benchmark evaluation details are available in the 2014 edition of the International Handbook of Evaluated Reactor Physics Experiments (IRPhEP Handbook). Evaluation of the NRAD LEU cores containing 56, 60, and 62 fuel elements has also been completed, including analysis of their respective reactivity effects measurements; these evaluations are also available in the IRPhEP Handbook but are not included in this summary paper.

  5. Acceptability of alternative treatments for deviant child behavior.

    PubMed Central

    Kazdin, A E

    1980-01-01

    The acceptability of alternative treatments for deviant child behavior was evaluated in two experiments. In each experiment, clinical cases were described to undergraduate students along with four different treatments in a Replicated Latin Square Design. The treatments included reinforcement of incompatible behavior, time out from reinforcement, drug therapy, and electric shock, and the treatments were described as they were applied to children with problem behaviors. Experiment 1 developed an assessment device to evaluate treatment acceptability and examined whether treatments were rated as differentially acceptable. Experiment 2 replicated the first experiment and examined whether the severity of the presenting clinical problem influenced ratings of acceptability. The results indicated that treatments were sharply distinguished in overall acceptability. Reinforcement of incompatible behavior was more acceptable than the other treatments, which followed, in order: time out from reinforcement, drug therapy, and electric shock. Case severity influenced the acceptability of alternative treatments, with all treatments rated as more acceptable for more severe cases. However, the effect of case severity was relatively small in relation to the treatment conditions themselves, which accounted for large portions of the variance. PMID:7380752

  6. Overview of TPC Benchmark E: The Next Generation of OLTP Benchmarks

    NASA Astrophysics Data System (ADS)

    Hogan, Trish

    Set to replace the aging TPC-C, the TPC Benchmark E is the next generation OLTP benchmark, which more accurately models client database usage. TPC-E addresses the shortcomings of TPC-C. It has a much more complex workload, requires the use of RAID-protected storage, generates much less I/O, and is much cheaper and easier to set up, run, and audit. After a period of overlap, it is expected that TPC-E will become the de facto OLTP benchmark.

  7. Toxicological Benchmarks for Screening Potential Contaminants of Concern for Effects on Soil and Litter Invertebrates and Heterotrophic Process

    SciTech Connect

    Will, M.E.

    1994-01-01

    This report presents a standard method for deriving benchmarks for the purpose of "contaminant screening," performed by comparing measured ambient concentrations of chemicals with the derived benchmark values. The work was performed under Work Breakdown Structure 1.4.12.2.3.04.07.02 (Activity Data Sheet 8304). In addition, this report presents sets of data concerning the effects of chemicals in soil on invertebrates and soil microbial processes, benchmarks for chemicals potentially associated with United States Department of Energy sites, and literature describing the experiments from which data were drawn for benchmark derivation.

  8. Physiologic correlates to background noise acceptance

    NASA Astrophysics Data System (ADS)

    Tampas, Joanna; Harkrider, Ashley; Nabelek, Anna

    2001-05-01

    Acceptance of background noise can be evaluated by having listeners indicate the highest background noise level (BNL) they are willing to accept while following the words of a story presented at their most comfortable listening level (MCL). The difference between the selected MCL and BNL is termed the acceptable noise level (ANL). One of the consistent findings in previous studies of ANL is large intersubject variability in acceptance of background noise. This variability is not related to age, gender, hearing sensitivity, personality, type of background noise, or speech perception in noise performance. The purpose of the current experiment was to determine if individual differences in physiological activity measured from the peripheral and central auditory systems of young female adults with normal hearing can account for the variability observed in ANL. Correlations between ANL and various physiological responses, including spontaneous, click-evoked, and distortion-product otoacoustic emissions, auditory brainstem and middle latency evoked potentials, and electroencephalography will be presented. Results may increase understanding of the regions of the auditory system that contribute to individual noise acceptance.

  9. Benchmarking the operational search accuracy of a national identification system

    NASA Astrophysics Data System (ADS)

    Suman, Ambika; Whitaker, Geoff

    2005-03-01

    This paper reports on some of the challenges associated with setting up and conducting a full operational benchmark of a palm and fingerprint identification system, based on PITO's own recent experience in this field. The tests described were undertaken as part of the overall evaluation of suppliers tendering for a multi-million-pound contract to deliver a new national automated fingerprint service for the UK (known as IDENT1), as a successor to the existing systems, both in England and Wales, and in Scotland. The emphasis throughout was on 'operationally' representative testing, and it was this that determined the design and scale of the tests, which PITO believes are the largest such tests of a national AFIS ever undertaken. The knowledge gained from performing these benchmark tests has provided PITO with extremely valuable experience in both the theoretical and practical issues surrounding the design and conduct of operational tests on large scale identification systems, and it is these issues that are discussed in this paper.

  10. A chemical EOR benchmark study of different reservoir simulators

    NASA Astrophysics Data System (ADS)

    Goudarzi, Ali; Delshad, Mojdeh; Sepehrnoori, Kamy

    2016-09-01

    Interest in chemical EOR processes has intensified in recent years due to advancements in chemical formulations and injection techniques. Injecting polymer (P), surfactant/polymer (SP), and alkaline/surfactant/polymer (ASP) are techniques for improving sweep and displacement efficiencies with the aim of improving oil production in both secondary and tertiary floods. There has recently been great interest in chemical flooding for a variety of challenging situations, including high-temperature reservoirs, formations with extreme salinity and hardness, naturally fractured carbonates, and sandstone reservoirs with heavy and viscous crude oils. More oil reservoirs are reaching maturity, where secondary polymer floods and tertiary surfactant methods have become increasingly important. This significance has added to the industry's interest in using reservoir simulators as tools for reservoir evaluation and management, to minimize costs and increase process efficiency. Reservoir simulators with special features are needed to represent the coupled chemical and physical processes present in chemical EOR. The simulators first need to be validated against well-controlled lab- and pilot-scale experiments to reliably predict full-field implementations. The available laboratory-scale data include (1) phase behavior and rheological data and (2) results of secondary and tertiary coreflood experiments for P, SP, and ASP floods under reservoir conditions, i.e., chemical retentions, pressure drop, and oil recovery. Data collected from corefloods are used as benchmark tests for comparing numerical reservoir simulators with chemical EOR modeling capabilities, such as STARS of CMG, ECLIPSE-100 of Schlumberger, and REVEAL of Petroleum Experts. The research UTCHEM simulator from The University of Texas at Austin is also included, since it has been the benchmark for chemical flooding simulation for over 25 years. The results of this benchmark comparison will be utilized to improve the chemical EOR modeling capabilities of these simulators.

  11. NAS Parallel Benchmarks Results 3-95

    NASA Technical Reports Server (NTRS)

    Saini, Subhash; Bailey, David H.; Walter, Howard (Technical Monitor)

    1995-01-01

    The NAS Parallel Benchmarks (NPB) were developed in 1991 at NASA Ames Research Center to study the performance of parallel supercomputers. The eight benchmark problems are specified in a "pencil and paper" fashion, i.e., the complete details of the problem are given in a NAS technical document. Except for a few restrictions, benchmark implementors are free to select the language constructs and implementation techniques best suited for a particular system. In this paper, we present new NPB performance results for the following systems: (a) Parallel-Vector Processors: CRAY C90, CRAY T90, and Fujitsu VPP500; (b) Highly Parallel Processors: CRAY T3D, IBM SP2-WN (Wide Nodes), and IBM SP2-TN2 (Thin Nodes 2); and (c) Symmetric Multiprocessors: Convex Exemplar SPP1000, CRAY J90, DEC Alpha Server 8400 5/300, and SGI Power Challenge XL (75 MHz). We also present sustained performance per dollar for the Class B LU, SP, and BT benchmarks, and we mention future NAS plans for the NPB.

  12. Benchmarking 2011: Trends in Education Philanthropy

    ERIC Educational Resources Information Center

    Grantmakers for Education, 2011

    2011-01-01

    The analysis in "Benchmarking 2011" is based on data from an unduplicated sample of 184 education grantmaking organizations--approximately two-thirds of Grantmakers for Education's (GFE's) network of grantmakers--who responded to an online survey consisting of fixed-choice and open-ended questions. Because a different subset of funders elects to…

  13. Benchmarking 2010: Trends in Education Philanthropy

    ERIC Educational Resources Information Center

    Bearman, Jessica

    2010-01-01

    "Benchmarking 2010" offers insights into the current priorities, practices and concerns of education grantmakers. The report is divided into five sections: (1) Mapping the Education Grantmaking Landscape; (2) 2010 Funding Priorities; (3) Strategies for Leveraging Greater Impact; (4) Identifying Significant Trends in Education Funding; and (5)…

  14. Benchmarking Year Five Students' Reading Abilities

    ERIC Educational Resources Information Center

    Lim, Chang Kuan; Eng, Lin Siew; Mohamed, Abdul Rashid

    2014-01-01

    Reading and understanding a written text is one of the most important skills in English learning. This study attempts to benchmark Year Five students' reading abilities in fifteen rural schools in a district in Malaysia. The objectives of this study are to develop a set of standardised written reading comprehension and a set of indicators to inform…

  15. Benchmark Generation using Domain Specific Modeling

    SciTech Connect

    Bui, Ngoc B.; Zhu, Liming; Gorton, Ian; Liu, Yan

    2007-08-01

    Performance benchmarks are domain specific applications that are specialized to a certain set of technologies and platforms. The development of a benchmark application requires mapping the performance-specific domain concepts to an implementation and producing complex technology- and platform-specific code. Domain Specific Modeling (DSM) promises to bridge the gap between application domains and implementations by allowing designers to specify solutions in domain-specific abstractions and semantics through Domain Specific Languages (DSLs). This allows a final implementation to be generated automatically from high-level models. The modeling and task-automation benefits obtained from this approach usually justify the upfront cost involved. This paper employs a DSM-based approach to create a new DSL, DSLBench, for benchmark generation. DSLBench and its associated code generation facilities allow the design and generation of a completely deployable benchmark application for performance testing from a high-level model. DSLBench is implemented using the Microsoft Domain-Specific Language toolkit and is integrated with Visual Studio 2005 Team Suite as a plug-in to provide extra modeling capabilities for performance testing. We illustrate the approach using a case study based on .NET and C#.

  16. Benchmarking: A New Approach to Space Planning.

    ERIC Educational Resources Information Center

    Fink, Ira

    1999-01-01

    Questions some fundamental assumptions of historical methods of space guidelines in college facility planning, and offers an alternative approach to space projections based on a new benchmarking method. The method, currently in use at several institutions, uses space per faculty member as the basis for prediction of need and space allocation. (MSE)

  17. Issues in Benchmarking and Assessing Institutional Engagement

    ERIC Educational Resources Information Center

    Furco, Andrew; Miller, William

    2009-01-01

    The process of assessing and benchmarking community engagement can take many forms. To date, more than two dozen assessment tools for measuring community engagement institutionalization have been published. These tools vary substantially in purpose, level of complexity, scope, process, structure, and focus. While some instruments are designed to…

  18. Benchmarking Peer Production Mechanisms, Processes & Practices

    ERIC Educational Resources Information Center

    Fischer, Thomas; Kretschmer, Thomas

    2008-01-01

    This deliverable identifies key approaches for quality management in peer production by benchmarking peer production practices and processes in other areas. (Contains 29 footnotes, 13 figures and 2 tables.)[This report has been authored with contributions of: Kaisa Honkonen-Ratinen, Matti Auvinen, David Riley, Jose Pinzon, Thomas Fischer, Thomas…

  19. Sequenced Benchmarks for K-8 Science.

    ERIC Educational Resources Information Center

    Kendall, John S.; DeFrees, Keri L.; Richardson, Amy

    This document describes science benchmarks for grades K-8 in Earth and Space Science, Life Science, and Physical Science. Each subject area is divided into topics followed by a short content description and grade level information. Source documents for this paper included science content guides from California, Ohio, South Carolina, and South…

  20. Cleanroom Energy Efficiency: Metrics and Benchmarks

    SciTech Connect

    International SEMATECH Manufacturing Initiative; Mathew, Paul A.; Tschudi, William; Sartor, Dale; Beasley, James

    2010-07-07

    Cleanrooms are among the most energy-intensive types of facilities. This is primarily due to the cleanliness requirements that result in high airflow rates and system static pressures, as well as process requirements that result in high cooling loads. Various studies have shown that there is a wide range of cleanroom energy efficiencies and that facility managers may not be aware of how energy efficient their cleanroom facility can be relative to other cleanroom facilities with the same cleanliness requirements. Metrics and benchmarks are an effective way to compare one facility to another and to track the performance of a given facility over time. This article presents the key metrics and benchmarks that facility managers can use to assess, track, and manage their cleanroom energy efficiency or to set energy efficiency targets for new construction. These include system-level metrics such as air change rates, air handling W/cfm, and filter pressure drops. Operational data are presented from over 20 different cleanrooms that were benchmarked with these metrics and that are part of the cleanroom benchmark dataset maintained by Lawrence Berkeley National Laboratory (LBNL). Overall production efficiency metrics for cleanrooms in 28 semiconductor manufacturing facilities in the United States and recorded in the Fabs21 database are also presented.

  1. Closed benchmarks for network community structure characterization

    NASA Astrophysics Data System (ADS)

    Aldecoa, Rodrigo; Marín, Ignacio

    2012-02-01

    Characterizing the community structure of complex networks is a key challenge in many scientific fields. Very diverse algorithms and methods have been proposed to this end, many working reasonably well in specific situations. However, no consensus has emerged on which of these methods is the best to use in practice. In part, this is due to the fact that testing their performance requires the generation of a comprehensive, standard set of synthetic benchmarks, a goal not yet fully achieved. Here, we present a type of benchmark that we call “closed,” in which an initial network of known community structure is progressively converted into a second network whose communities are also known. This approach differs from all previously published ones, in which networks evolve toward randomness. The use of this type of benchmark allows us to monitor the transformation of the community structure of a network. Moreover, we can predict the optimal behavior of the variation of information, a measure of the quality of the partitions obtained, at any moment of the process. This enables us in many cases to determine the best partition among those suggested by different algorithms. Also, since any network can be used as a starting point, extensive studies and comparisons can be performed using a heterogeneous set of structures, including random ones. These properties make our benchmarks a general standard for comparing community detection algorithms.
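
    For readers who wish to reproduce the partition-quality measure used here, the sketch below (our illustration, not the authors' code) computes the variation of information, VI = H(X) + H(Y) - 2I(X;Y), directly from two community-label lists.

    ```python
    # Variation of information between two partitions, given as per-node labels.
    import math
    from collections import Counter

    def variation_of_information(labels_a, labels_b):
        n = len(labels_a)
        pa = Counter(labels_a)                     # community sizes in partition A
        pb = Counter(labels_b)                     # community sizes in partition B
        joint = Counter(zip(labels_a, labels_b))   # overlap sizes
        vi = 0.0
        for (a, b), n_ab in joint.items():
            p_ab = n_ab / n
            vi -= p_ab * (math.log(p_ab / (pa[a] / n)) + math.log(p_ab / (pb[b] / n)))
        return vi    # 0 for identical partitions; larger means more disagreement

    # Example: two partitions of six nodes
    print(variation_of_information([0, 0, 0, 1, 1, 1], [0, 0, 1, 1, 1, 1]))
    ```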

  2. Benchmark graphs for testing community detection algorithms

    NASA Astrophysics Data System (ADS)

    Lancichinetti, Andrea; Fortunato, Santo; Radicchi, Filippo

    2008-10-01

    Community structure is one of the most important features of real networks and reveals the internal organization of the nodes. Many algorithms have been proposed but the crucial issue of testing, i.e., the question of how good an algorithm is, with respect to others, is still open. Standard tests include the analysis of simple artificial graphs with a built-in community structure, that the algorithm has to recover. However, the special graphs adopted in actual tests have a structure that does not reflect the real properties of nodes and communities found in real networks. Here we introduce a class of benchmark graphs, that account for the heterogeneity in the distributions of node degrees and of community sizes. We use this benchmark to test two popular methods of community detection, modularity optimization, and Potts model clustering. The results show that the benchmark poses a much more severe test to algorithms than standard benchmarks, revealing limits that may not be apparent at a first analysis.
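
    This benchmark family is now widely implemented; networkx, for example, ships a generator. A brief usage sketch follows, with illustrative parameter values that are not prescribed by the paper:

    ```python
    # Generate an LFR benchmark graph with planted community structure.
    import networkx as nx

    G = nx.LFR_benchmark_graph(
        n=250,            # nodes
        tau1=3.0,         # degree distribution exponent
        tau2=1.5,         # community size distribution exponent
        mu=0.1,           # mixing parameter: fraction of inter-community edges
        average_degree=5,
        min_community=20,
        seed=10,
    )
    # Ground-truth communities are attached to the nodes as sets
    communities = {frozenset(G.nodes[v]["community"]) for v in G}
    print(len(G), "nodes,", len(communities), "planted communities")
    ```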

  3. A MULTIMODEL APPROACH FOR CALCULATING BENCHMARK DOSE

    EPA Science Inventory


    A Multimodel Approach for Calculating Benchmark Dose
    Ramon I. Garcia and R. Woodrow Setzer

    In the assessment of dose response, a number of plausible dose-response models may give fits that are consistent with the data. If no dose response formulation had been speci...

  4. MHEC Survey Establishes Midwest Property Insurance Benchmarks.

    ERIC Educational Resources Information Center

    Midwestern Higher Education Commission Risk Management Institute Research Bulletin, 1994

    1994-01-01

    This publication presents the results of a survey of over 200 midwestern colleges and universities on their property insurance programs and establishes benchmarks to help these institutions evaluate their insurance programs. Findings included the following: (1) 51 percent of respondents currently purchase their property insurance as part of a…

  5. What Is the Impact of Subject Benchmarking?

    ERIC Educational Resources Information Center

    Pidcock, Steve

    2006-01-01

    The introduction of subject benchmarking led to fears of increased external intervention in the activities of universities and a more restrictive view of institutional autonomy, accompanied by an undermining of the academic profession, particularly through the perceived threat of the introduction of a national curriculum for higher education. For…

  6. Benchmark Problems for Spacecraft Formation Flying Missions

    NASA Technical Reports Server (NTRS)

    Carpenter, J. Russell; Leitner, Jesse A.; Burns, Richard D.; Folta, David C.

    2003-01-01

    To provide high-level focus to distributed space system flight dynamics and control research, several benchmark problems are suggested. These problems are not specific to any current or proposed mission, but instead are intended to capture high-level features that would be generic to many similar missions.

  7. Finding a benchmark for monitoring hospital cleanliness.

    PubMed

    Mulvey, D; Redding, P; Robertson, C; Woodall, C; Kingsmore, P; Bedwell, D; Dancer, S J

    2011-01-01

    This study evaluated three methods for monitoring hospital cleanliness. The aim was to find a benchmark that could indicate risk to patients from a contaminated environment. We performed visual monitoring, ATP bioluminescence and microbiological screening of five clinical surfaces before and after detergent-based cleaning on two wards over a four-week period. Five additional sites that were not featured in the routine domestic specification were also sampled. Measurements from all three methods were integrated and compared in order to choose appropriate levels for routine monitoring. We found that visual assessment reflected neither ATP values nor environmental contamination with microbial flora, including Staphylococcus aureus and meticillin-resistant S. aureus (MRSA). There was a relationship between microbial growth categories and the proportion of ATP values exceeding a chosen benchmark, but neither reliably predicted the presence of S. aureus or MRSA. ATP values were occasionally diverse. Detergent-based cleaning reduced levels of organic soil by 32% (95% confidence interval: 16-44%; P<0.001) but did not necessarily eliminate indicator staphylococci, some of which survived the cleaning process. An ATP benchmark value of 100 relative light units offered the closest correlation with microbial growth levels <2.5 cfu/cm² (receiver operating characteristic (ROC) curve sensitivity: 57%; specificity: 57%). In conclusion, microbiological and ATP monitoring confirmed environmental contamination and the persistence of hospital pathogens, and measured the effect of current cleaning practices on the environment. This study has provided provisional benchmarks to assist with future assessment of hospital cleanliness. Further work is required to refine the practical sampling strategy and choice of benchmarks. PMID:21129820
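
    To make the benchmark evaluation concrete, the sketch below shows how a sensitivity/specificity pair like the one reported (57%/57% at 100 RLU against growth <2.5 cfu/cm²) can be computed for a candidate ATP cut-off; the paired ATP and growth values are invented placeholders, not study data.

    ```python
    # Sensitivity/specificity of an ATP benchmark against a microbial growth cutoff.
    atp_rlu = [40, 250, 90, 500, 120, 60, 310, 80]        # relative light units (hypothetical)
    growth  = [1.0, 4.0, 3.1, 5.2, 2.0, 0.5, 2.9, 3.6]    # cfu/cm^2 (hypothetical)

    BENCHMARK_RLU, GROWTH_CUTOFF = 100, 2.5
    tp = sum(a >= BENCHMARK_RLU and g >= GROWTH_CUTOFF for a, g in zip(atp_rlu, growth))
    fn = sum(a <  BENCHMARK_RLU and g >= GROWTH_CUTOFF for a, g in zip(atp_rlu, growth))
    tn = sum(a <  BENCHMARK_RLU and g <  GROWTH_CUTOFF for a, g in zip(atp_rlu, growth))
    fp = sum(a >= BENCHMARK_RLU and g <  GROWTH_CUTOFF for a, g in zip(atp_rlu, growth))
    print("sensitivity:", tp / (tp + fn), "specificity:", tn / (tn + fp))
    ```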

  8. The OLYMPUS Experiment Simulation

    NASA Astrophysics Data System (ADS)

    Schmidt, Axel

    2013-04-01

    The OLYMPUS Experiment aims to measure the ratio of electron-proton to positron-proton elastic scattering cross-sections to better than 1% systematic uncertainty. Achieving this goal requires a precise understanding of a wide range of systematic effects, such as the radiative corrections internal to the reaction, the varying acceptance of the detector apparatus, and the efficiency of the tracking algorithms. A detailed Geant4 simulation of the OLYMPUS experiment has been developed to study these effects and, using the Monte Carlo method, properly account for their convolution. Radiative corrections are applied by the event generator, whose events are propagated through the simulation. Simulated detector signals are produced in a format identical to the raw OLYMPUS data, so that simulated data can be processed using the same analysis software. The simulation therefore serves as a benchmark for comparison with the final OLYMPUS results. A discussion of the radiative corrections procedure and an overview of the simulation will be presented. This work is supported by DOE Grant DE-FG02-94ER40818.

  9. International E-Benchmarking: Flexible Peer Development of Authentic Learning Principles in Higher Education

    ERIC Educational Resources Information Center

    Leppisaari, Irja; Vainio, Leena; Herrington, Jan; Im, Yeonwook

    2011-01-01

    More and more, social technologies and virtual work methods are facilitating new ways of crossing boundaries in professional development and international collaborations. This paper examines the peer development of higher education teachers through the experiences of the IVBM project (International Virtual Benchmarking, 2009-2010). The…

  10. Winning Strategy: Set Benchmarks of Early Success to Build Momentum for the Long Term

    ERIC Educational Resources Information Center

    Spiro, Jody

    2012-01-01

    Change is a highly personal experience. Everyone participating in the effort has different reactions to change, different concerns, and different motivations for being involved. The smart change leader sets benchmarks along the way so there are guideposts and pause points instead of an endless change process. "Early wins"--a term used to describe…

  11. Prospective Elementary Teacher Understandings of Pest-Related Science and Agricultural Education Benchmarks.

    ERIC Educational Resources Information Center

    Trexler, Cary J.; Heinze, Kirk L.

    2001-01-01

    Clinical interviews with eight preservice elementary teachers elicited their understanding of pest-related benchmarks. Those with out-of-school experience were better able to articulate their understanding. Many were unable to make connections between scientific, societal and technological concepts. (Contains 39 references.) (SK)

  12. Benchmarking and Threshold Standards in Higher Education. Staff and Educational Development Series.

    ERIC Educational Resources Information Center

    Smith, Helen, Ed.; Armstrong, Michael, Ed.; Brown, Sally, Ed.

    This book explores the issues involved in developing standards in higher education, examining the practical issues involved in benchmarking and offering a critical analysis of the problems associated with this developmental tool. The book focuses primarily on experience in the United Kingdom (UK), but looks also at international activity in this…

  13. Employee Acceptance of BOS and BES Performance Appraisals.

    ERIC Educational Resources Information Center

    Dossett, Dennis L.; Gier, Joseph A.

    Previous research on performance evaluation systems has failed to take into account user acceptance. Employee acceptance of a behaviorally-based performance appraisal system was assessed in a field experiment contrasting user preference for Behavioral Expectations Scales (BES) versus Behavioral Observation Scales (BOS). Non-union sales associates…

  14. Using Benchmarking To Influence Tuition and Fee Decisions.

    ERIC Educational Resources Information Center

    Hubbell, Loren W. Loomis; Massa, Robert J.; Lapovsky, Lucie

    2002-01-01

    Discusses the use of benchmarking in managing enrollment. Using a case study, illustrates how benchmarking can help administrators develop strategies for planning and implementing admissions and pricing practices. (EV)

  15. Results Oriented Benchmarking: The Evolution of Benchmarking at NASA from Competitive Comparisons to World Class Space Partnerships

    NASA Technical Reports Server (NTRS)

    Bell, Michael A.

    1999-01-01

    Informal benchmarking using personal or professional networks has taken place for many years at the Kennedy Space Center (KSC). The National Aeronautics and Space Administration (NASA) recognized early on the need to formalize the benchmarking process for better utilization of resources and improved benchmarking performance. The need to compete in a faster, better, cheaper environment has been the catalyst for formalizing these efforts. A pioneering benchmarking consortium was chartered at KSC in January 1994. The consortium, known as the Kennedy Benchmarking Clearinghouse (KBC), is a collaborative effort of NASA and all major KSC contractors. The charter of this consortium is to facilitate effective benchmarking and to leverage the resulting quality improvements across KSC. The KBC acts as a resource with experienced facilitators and a proven process. One of the initial actions of the KBC was to develop a holistic methodology for Center-wide benchmarking. This approach integrates the best features of proven benchmarking models (i.e., Camp, Spendolini, Watson, and Balm). This cost-effective alternative to conventional benchmarking approaches has provided a foundation for consistent benchmarking at KSC through the development of common terminology, tools, and techniques. Through these efforts, a foundation and infrastructure have been built which allow short-duration benchmarking studies yielding results gleaned from world-class partners that can be readily implemented. The KBC has been recognized with the Silver Medal Award (in the applied research category) from the International Benchmarking Clearinghouse.

  16. Middleware Evaluation and Benchmarking for Use in Mission Operations Centers

    NASA Technical Reports Server (NTRS)

    Antonucci, Rob; Waktola, Waka

    2005-01-01

    Middleware technologies have been promoted as timesaving, cost-cutting alternatives to the point-to-point communication used in traditional mission operations systems. However, missions have been slow to adopt the new technology. The lack of existing middleware-based missions has given rise to uncertainty about middleware's ability to perform in an operational setting. Most mission architects are also unfamiliar with the technology and do not know the benefits and detriments to architectural choices - or even what choices are available. We will present the findings of a study that evaluated several middleware options specifically for use in a mission operations system. We will address some common misconceptions regarding the applicability of middleware-based architectures, and we will identify the design decisions and tradeoffs that must be made when choosing a middleware solution. The Middleware Comparison and Benchmark Study was conducted at NASA Goddard Space Flight Center to comprehensively evaluate candidate middleware products, compare and contrast the performance of middleware solutions with the traditional point-to-point socket approach, and assess data delivery and reliability strategies. The study focused on requirements of the Global Precipitation Measurement (GPM) mission, validating the potential use of middleware in the GPM mission ground system. The study was jointly funded by GPM and the Goddard Mission Services Evolution Center (GMSEC), a virtual organization for providing mission enabling solutions and promoting the use of appropriate new technologies for mission support. The study was broken into two phases. To perform the generic middleware benchmarking and performance analysis, a network was created with data producers and consumers passing data between themselves. The benchmark monitored the delay, throughput, and reliability of the data as the characteristics were changed. Measurements were taken under a variety of topologies, data demands

  17. TRANSX/DANT benchmark studies using a ENDF/B-V based MATXS library

    SciTech Connect

    Johns, R.C.; Mosteller, R.D.; Perry, R.T.

    1995-07-01

    A series of 20 benchmark critical experiments were studied using the DANT code with cross section libraries prepared by TRANSX from ENDF/B-V based MATXS libraries. The benchmarks were selected to cover both fast and thermal systems utilizing either uranium or plutonium as the primary fissile isotope. An effort was made to cover the range of isotopes prevalent in nuclear systems, though no heterogeneous thermal plutonium cases were included. The results indicate that the code package and library give satisfactory results for the majority of cases, though the results are somewhat poorer for thermal plutonium cases.

  18. Reactor Physics and Criticality Benchmark Evaluations for Advanced Nuclear Fuel - Final Technical Report

    SciTech Connect

    William Anderson; James Tulenko; Bradley Rearden; Gary Harms

    2008-09-11

    The nuclear industry interest in advanced fuel and reactor design often drives towards fuel with uranium enrichments greater than 5 wt% 235U. Unfortunately, little data exists, in the form of reactor physics and criticality benchmarks, for uranium enrichments ranging between 5 and 10 wt% 235U. The primary purpose of this project is to provide benchmarks for fuel similar to what may be required for advanced light water reactors (LWRs). These experiments will ultimately provide additional information for application to the criticality-safety bases for commercial fuel facilities handling greater than 5 wt% 235U fuel.

  19. From detailed analysis of IO pattern of the HEP applications to benchmark of new storage solutions

    NASA Astrophysics Data System (ADS)

    Horký, Jiří; Santinelli, Roberto

    2011-12-01

    The problem of file access at the site level has been one of the main issues since the start of the WLCG project. Many studies have already been performed using both industry-standard benchmarks and actual physicist jobs. However, such studies are typically bound to one particular LHC experiment supported at a given site and/or one type of job. In this paper, we present an application suitable for detailed study of applications' file access behavior. We have also developed an application for exact replay of the IO requests of previously run applications. This makes it possible to benchmark the performance of storage solutions without installing the whole working environment.
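
    The replay idea is straightforward to sketch. The fragment below is a minimal illustration under an assumed trace format of (inter-request delay, offset, read size) tuples; the tool described in the paper captures and replays far richer IO metadata.

    ```python
    # Replay a recorded sequence of reads, preserving inter-request timing.
    import tempfile
    import time

    trace = [(0.00, 0, 4096), (0.01, 1048576, 65536), (0.002, 8192, 4096)]  # hypothetical

    def replay(path, records):
        """Re-issue recorded reads against `path` with the original request spacing."""
        with open(path, "rb") as f:
            start = time.perf_counter()
            for delay, offset, size in records:
                time.sleep(delay)      # reproduce the original inter-request delay
                f.seek(offset)
                f.read(size)           # a read may be short near EOF; fine for a sketch
            return time.perf_counter() - start

    # Self-contained demo against a scratch file of zeros
    with tempfile.NamedTemporaryFile(delete=False) as tmp:
        tmp.write(b"\0" * (2 * 1024 * 1024))
    print(f"replay took {replay(tmp.name, trace):.4f} s")
    ```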

  20. 42 CFR 414.1255 - Benchmarks for cost measures.

    Code of Federal Regulations, 2013 CFR

    2013-10-01

    ... 42 Public Health 3 2013-10-01 2013-10-01 false Benchmarks for cost measures. 414.1255 Section 414... Payment Modifier Under the Physician Fee Schedule § 414.1255 Benchmarks for cost measures. The benchmark for each cost measure is the national mean of the performance rates calculated among all groups...

  1. 42 CFR 414.1255 - Benchmarks for cost measures.

    Code of Federal Regulations, 2014 CFR

    2014-10-01

    ... 42 Public Health 3 2014-10-01 2014-10-01 false Benchmarks for cost measures. 414.1255 Section 414... Payment Modifier Under the Physician Fee Schedule § 414.1255 Benchmarks for cost measures. (a) For the CY 2015 payment adjustment period, the benchmark for each cost measure is the national mean of...

  2. Taking Stock of Corporate Benchmarking Practices: Panacea or Pandora's Box?

    ERIC Educational Resources Information Center

    Fleisher, Craig S.; Burton, Sara

    1995-01-01

    Discusses why corporate communications/public relations (cc/pr) should be benchmarked (an approach used by cc/pr managers to demonstrate the value of their activities to skeptical organizational executives). Discusses myths about cc/pr benchmarking; types, targets, and focus of cc/pr benchmarking; a process model; and critical decisions about…

  3. Discovering and Implementing Best Practices to Strengthen SEAs: Collaborative Benchmarking

    ERIC Educational Resources Information Center

    Building State Capacity and Productivity Center, 2013

    2013-01-01

    This paper is written for state educational agency (SEA) leaders who are considering the benefits of collaborative benchmarking, and it addresses the following questions: (1) What does benchmarking of best practices entail?; (2) How does "collaborative benchmarking" enhance the process?; (3) How do SEAs control the process so that "their" needs…

  4. 42 CFR 422.258 - Calculation of benchmarks.

    Code of Federal Regulations, 2010 CFR

    2010-10-01

    ... 42 Public Health 3 2010-10-01 2010-10-01 false Calculation of benchmarks. 422.258 Section 422.258... and Plan Approval § 422.258 Calculation of benchmarks. (a) The term “MA area-specific non-drug monthly... the plan bids. (c) Calculation of MA regional non-drug benchmark amount. CMS calculates the...

  5. Unstructured Adaptive (UA) NAS Parallel Benchmark. Version 1.0

    NASA Technical Reports Server (NTRS)

    Feng, Huiyu; VanderWijngaart, Rob; Biswas, Rupak; Mavriplis, Catherine

    2004-01-01

    We present a complete specification of a new benchmark for measuring the performance of modern computer systems when solving scientific problems featuring irregular, dynamic memory accesses. It complements the existing NAS Parallel Benchmark suite. The benchmark involves the solution of a stylized heat transfer problem in a cubic domain, discretized on an adaptively refined, unstructured mesh.

  6. Accepters and Rejecters of Counseling.

    ERIC Educational Resources Information Center

    Rose, Harriett A.; Elton, Charles F.

    Personality differences between students who accept or reject proffered counseling assistance were investigated by comparing personality traits of 116 male students at the University of Kentucky who accepted or rejected letters of invitation to group counseling. Factor analysis of Omnibus Personality Inventory (OPI) scores to two groups of 60 and…

  7. Cone penetrometer acceptance test report

    SciTech Connect

    Boechler, G.N.

    1996-09-19

    This Acceptance Test Report (ATR) documents the results of acceptance test procedure WHC-SD-WM-ATR-151. Included in this report are a summary of the tests, the results and issues, the signature and sign-off ATP pages, and a table mapping each specification requirement to the ATP section that satisfied it.

  8. The OECD/NEA/NSC PBMR coupled neutronics/thermal hydraulics transient benchmark: The PBMR-400 core design

    SciTech Connect

    Reitsma, F.; Ivanov, K.; Downar, T.; De Haas, H.; Gougar, H. D.

    2006-07-01

    The Pebble Bed Modular Reactor (PBMR) is a High-Temperature Gas-cooled Reactor (HTGR) concept to be built in South Africa. As part of the verification and validation program, the definition and execution of code-to-code benchmark exercises are important. The Nuclear Energy Agency (NEA) of the Organisation for Economic Cooperation and Development (OECD) has accepted, through the Nuclear Science Committee (NSC), the inclusion of the PBMR coupled neutronics/thermal hydraulics transient benchmark problem in its program. The OECD benchmark defines steady-state and transient cases, including reactivity insertion transients. It makes use of a common set of cross sections (to eliminate uncertainties between different codes) and includes specific simplifications to the design to limit the need for participants to introduce approximations in their models. In this paper the detailed specification is explained, including the test cases to be calculated and the results required from participants. (authors)

  9. MODEL BENCHMARK WITH EXPERIMENT AT THE SNS LINAC

    SciTech Connect

    Shishlo, Andrei P; Aleksandrov, Alexander V; Liu, Yun; Plum, Michael A

    2016-01-01

    The history of attempts to perform transverse matching in the Spallation Neutron Source (SNS) superconducting linac (SCL) is discussed. The SCL has 9 laser wire (LW) stations to perform non-destructive measurements of the transverse beam profiles. Any matching starts with the measurement of the initial Twiss parameters, which in the SNS case was done by using the first four LW stations at the beginning of the superconducting linac. For years, consistency between data from all LW stations could not be achieved. This problem was resolved only after significant improvements in the accuracy of the phase scans of the SCL cavities, more precise analysis of all available scan data, better optics planning, and initial longitudinal Twiss parameter measurements. This paper discusses these procedures in detail.
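
    In linear transport, the initial-Twiss measurement mentioned above reduces to a least-squares fit: each laser-wire station i measures an rms size satisfying sigma_i^2 = m11_i^2*s11 + 2*m11_i*m12_i*s12 + m12_i^2*s22, where (m11, m12) are transfer-matrix elements from the reconstruction point to the station. The sketch below illustrates the fit with invented matrix rows and beam sizes, not SNS data.

    ```python
    # Reconstruct the initial sigma-matrix (and Twiss parameters) from rms
    # beam sizes measured at several profile stations. All numbers are invented.
    import numpy as np

    # (m11, m12) of the transfer matrix to each laser-wire station (hypothetical)
    M = np.array([[1.0, 2.0], [0.5, 3.5], [-0.4, 2.8], [-1.1, 1.2]])
    sizes = np.array([2.00e-3, 3.12e-3, 3.27e-3, 2.99e-3])   # measured rms sizes (m)

    A = np.column_stack([M[:, 0]**2, 2 * M[:, 0] * M[:, 1], M[:, 1]**2])
    s11, s12, s22 = np.linalg.lstsq(A, sizes**2, rcond=None)[0]

    eps = np.sqrt(s11 * s22 - s12**2)     # rms emittance
    beta, alpha = s11 / eps, -s12 / eps   # initial Twiss parameters
    print(f"emittance={eps:.3e} m rad, beta={beta:.2f} m, alpha={alpha:.2f}")
    ```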

  10. Developing and Using Benchmarks for Eddy Current Simulation Codes Validation to Address Industrial Issues

    NASA Astrophysics Data System (ADS)

    Mayos, M.; Buvat, F.; Costan, V.; Moreau, O.; Gilles-Pascaud, C.; Reboud, C.; Foucher, F.

    2011-06-01

    To achieve performance demonstration, which is a legal requirement for the qualification of NDE processes applied on French nuclear power plants, modeling tools provide valuable support, provided that the employed models have been previously validated. For eddy current modeling in particular, this requires a validation methodology based on specific benchmarks close to the actual industrial issue. Nonetheless, given the high variability in code origin and complexity, feedback from actual cases has shown that it is critical to also define simpler generic and public benchmarks in order to perform a preliminary selection. A specific Working Group has been launched in the frame of COFREND, the French Association for NDE, resulting in the definition of several benchmark problems. This action is now ready for mutualization with similar international approaches.

  11. A Cross-Benchmarking and Validation Initiative for Tokamak 3D Equilibrium Calculations

    NASA Astrophysics Data System (ADS)

    Reiman, A.; Turnbull, A.; Evans, T.; Ferraro, N.; Lazarus, E.; Breslau, J.; Cerfon, A.; Chang, C. S.; Hager, R.; King, J.; Lanctot, M.; Lazerson, S.; Liu, Y.; McFadden, G.; Monticello, D.; Nazikian, R.; Park, J. K.; Sovinec, C.; Suzuki, Y.; Zhu, P.

    2014-10-01

    We are pursuing a cross-benchmarking and validation initiative for tokamak 3D equilibrium calculations, with 11 codes participating: the linearized tokamak equilibrium codes IPEC and MARS-F, the time-dependent extended MHD codes M3D-C1, M3D, and NIMROD, the gyrokinetic code XGC, as well as the stellarator codes VMEC, NSTAB, PIES, HINT and SPEC. Dedicated experiments for the purpose of generating data for validation have been done on the DIII-D tokamak. The data will allow us to do validation simultaneously with cross-benchmarking. Initial cross-benchmarking calculations are finding a disagreement between stellarator and tokamak 3D equilibrium codes. Work supported in part by U.S. DOE under Contracts DE-AC02-09CH11466, DE-FC02-04ER54698, DE-FG02-95ER54309 and DE-AC05-00OR22725.

  12. Benchmarking a Visual-Basic based multi-component one-dimensional reactive transport modeling tool

    NASA Astrophysics Data System (ADS)

    Torlapati, Jagadish; Prabhakar Clement, T.

    2013-01-01

    We present the details of a comprehensive numerical modeling tool, RT1D, which can be used for simulating biochemical and geochemical reactive transport problems. The code can be run within the standard Microsoft EXCEL Visual Basic platform, and it does not require any additional software tools. The code can be easily adapted by others for simulating different types of laboratory-scale reactive transport experiments. We illustrate the capabilities of the tool by solving five benchmark problems with varying levels of reaction complexity. These literature-derived benchmarks are used to highlight the versatility of the code for solving a variety of practical reactive transport problems. The benchmarks are described in detail to provide a comprehensive database, which can be used by model developers to test other numerical codes. The VBA code presented in the study is a practical tool that can be used by laboratory researchers for analyzing both batch and column datasets within an EXCEL platform.
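
    To give a flavor of the class of problems such a tool solves, here is a generic illustration in Python (not RT1D itself, which is VBA): a 1D advection-dispersion equation with first-order decay, solved on an explicit upwind grid with assumed parameters.

    ```python
    # Explicit upwind solver for 1D advection-dispersion with first-order decay.
    # All parameters are assumed values for demonstration.
    import numpy as np

    L, nx = 1.0, 101                 # column length (m), grid points
    v, D, k = 1e-4, 1e-6, 1e-5       # velocity (m/s), dispersion (m2/s), decay (1/s)
    dx = L / (nx - 1)
    dt = 0.4 * min(dx / v, dx * dx / (2 * D))   # stability-limited time step

    c = np.zeros(nx)
    c[0] = 1.0                        # constant-concentration inlet boundary
    for _ in range(20000):
        adv = -v * (c[1:-1] - c[:-2]) / dx                  # upwind advection
        disp = D * (c[2:] - 2 * c[1:-1] + c[:-2]) / dx**2   # dispersion
        c[1:-1] += dt * (adv + disp - k * c[1:-1])          # first-order decay reaction
        c[-1] = c[-2]                 # zero-gradient outlet
        c[0] = 1.0
    print("concentration at mid-column:", c[nx // 2])
    ```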

  13. Energy benchmarking of commercial buildings: a low-cost pathway toward urban sustainability

    NASA Astrophysics Data System (ADS)

    Cox, Matt; Brown, Marilyn A.; Sun, Xiaojing

    2013-09-01

    US cities are beginning to experiment with a regulatory approach to address information failures in the real estate market by mandating the energy benchmarking of commercial buildings. Understanding how a commercial building uses energy has many benefits; for example, it helps building owners and tenants identify poor-performing buildings and subsystems and it enables high-performing buildings to achieve greater occupancy rates, rents, and property values. This paper estimates the possible impacts of a national energy benchmarking mandate through analysis chiefly utilizing the Georgia Tech version of the National Energy Modeling System (GT-NEMS). Correcting input discount rates results in a 4.0% reduction in projected energy consumption for seven major classes of equipment relative to the reference case forecast in 2020, rising to 8.7% in 2035. Thus, the official US energy forecasts appear to overestimate future energy consumption by underestimating investments in energy-efficient equipment. Further discount rate reductions spurred by benchmarking policies yield another 1.3-1.4% in energy savings in 2020, increasing to 2.2-2.4% in 2035. Benchmarking would increase the purchase of energy-efficient equipment, reducing energy bills, CO2 emissions, and conventional air pollution. Achieving comparable CO2 savings would require more than tripling existing US solar capacity. Our analysis suggests that nearly 90% of the energy saved by a national benchmarking policy would benefit metropolitan areas, and the policy’s benefits would outweigh its costs, both to the private sector and society broadly.
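
    The mechanism behind these savings is worth making concrete: the discount rate applied to an efficiency investment determines how much of its future bill savings count today, so correcting an inflated implicit rate makes more investments pencil out. A minimal sketch with hypothetical numbers (not GT-NEMS inputs):

        # Present value of a stream of annual energy-bill savings
        def npv(annual_saving, years, rate):
            return sum(annual_saving / (1 + rate)**t for t in range(1, years + 1))

        saving, life = 100.0, 15            # $/yr saved, equipment life in years
        print(npv(saving, life, 0.20))      # ~ $468 at a high implicit discount rate
        print(npv(saving, life, 0.07))      # ~ $911 at a corrected, market-like rate

    An efficiency upgrade costing $600 fails the first screen and passes the second, which is the sense in which benchmarking-driven discount rate reductions raise projected investment.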

  14. CFD validation in OECD/NEA t-junction benchmark.

    SciTech Connect

    Obabko, A. V.; Fischer, P. F.; Tautges, T. J.; Karabasov, S.; Goloviznin, V. M.; Zaytsev, M. A.; Chudanov, V. V.; Pervichko, V. A.; Aksenova, A. E.

    2011-08-23

    When streams of rapidly moving flow merge in a T-junction, the potential arises for large oscillations at the scale of the diameter, D, with a period scaling as O(D/U), where U is the characteristic flow velocity. If the streams are of different temperatures, the oscillations produce temperature fluctuations (thermal striping) at the pipe wall in the outlet branch that can accelerate thermal-mechanical fatigue and ultimately cause pipe failure. The importance of this phenomenon has prompted the nuclear energy modeling and simulation community to establish a benchmark to test the ability of computational fluid dynamics (CFD) codes to predict thermal striping. The benchmark is based on thermal and velocity data measured in an experiment designed specifically for this purpose. Thermal striping is intrinsically unsteady and hence not accessible to steady-state simulation approaches such as steady-state Reynolds-averaged Navier-Stokes (RANS) models. Consequently, one must consider either unsteady RANS or large eddy simulation (LES). This report compares the results for three LES codes: Nek5000, developed at Argonne National Laboratory (USA), and Cabaret and Conv3D, developed at the Moscow Institute of Nuclear Energy Safety (IBRAE) in Russia. Nek5000 is based on the spectral element method (SEM), which is a high-order weighted residual technique that combines the geometric flexibility of the finite element method (FEM) with the tensor-product efficiencies of spectral methods. Cabaret is a 'compact accurately boundary-adjusting high-resolution technique' for fluid dynamics simulation. The method is second-order accurate on nonuniform grids in space and time, and has a small dispersion error and a computational stencil defined within one space-time cell. The scheme is equipped with a conservative nonlinear correction procedure based on the maximum principle. CONV3D is based on the immersed boundary method and is validated on a wide set of the experimental and
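
    As a back-of-envelope illustration of the O(D/U) scaling, the dominant striping time scale follows directly from the pipe diameter and flow velocity; the values below are hypothetical, not from the benchmark specification:

        # Characteristic thermal-striping period: T ~ D / U
        D = 0.1   # pipe diameter [m] (illustrative)
        U = 2.0   # characteristic flow velocity [m/s] (illustrative)
        T = D / U
        print(f"period ~ {T:.3f} s, frequency ~ {1/T:.0f} Hz")   # ~0.05 s, ~20 Hz

    Fluctuations in this frequency range are precisely what steady-state RANS averages away, which is why the benchmark targets unsteady methods.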

  15. Benchmarking and audit of breast units improves quality of care

    PubMed Central

    van Dam, P.A.; Verkinderen, L.; Hauspy, J.; Vermeulen, P.; Dirix, L.; Huizing, M.; Altintas, S.; Papadimitriou, K.; Peeters, M.; Tjalma, W.

    2013-01-01

    Quality Indicators (QIs) are measures of health care quality that make use of readily available hospital inpatient administrative data. Assessment of quality of care can be performed on different levels: national, regional, on a hospital basis, or on an individual basis. It can be a mandatory or voluntary system. In all cases, development of an adequate database for data extraction and feedback of the findings are of paramount importance. In the present paper we performed a Medline search on “QIs and breast cancer” and “benchmarking and breast cancer care”, and we have added some data from personal experience. The current data clearly show that the use of QIs for breast cancer care, regular internal and external audit of the performance of breast units, and benchmarking are effective in improving quality of care. Adherence to guidelines improves markedly (particularly regarding adjuvant treatment), and there are emerging data showing that this results in better outcomes. As quality assurance benefits patients, it will be a challenge for the medical and hospital community to develop affordable quality control systems that do not lead to excessive workload. PMID:24753926

  16. Benchmarking and audit of breast units improves quality of care.

    PubMed

    van Dam, P A; Verkinderen, L; Hauspy, J; Vermeulen, P; Dirix, L; Huizing, M; Altintas, S; Papadimitriou, K; Peeters, M; Tjalma, W

    2013-01-01

    Quality Indicators (QIs) are measures of health care quality that make use of readily available hospital inpatient administrative data. Assessment of quality of care can be performed on different levels: national, regional, on a hospital basis, or on an individual basis. It can be a mandatory or voluntary system. In all cases, development of an adequate database for data extraction and feedback of the findings are of paramount importance. In the present paper we performed a Medline search on "QIs and breast cancer" and "benchmarking and breast cancer care", and we have added some data from personal experience. The current data clearly show that the use of QIs for breast cancer care, regular internal and external audit of the performance of breast units, and benchmarking are effective in improving quality of care. Adherence to guidelines improves markedly (particularly regarding adjuvant treatment), and there are emerging data showing that this results in better outcomes. As quality assurance benefits patients, it will be a challenge for the medical and hospital community to develop affordable quality control systems that do not lead to excessive workload. PMID:24753926

  17. National Practice Benchmark: 2010 Report on 2009 Data

    PubMed Central

    Towle, Elaine L.; Barr, Thomas R.

    2010-01-01

    Purpose: Oncology practices continue to experience economic pressures as costs rise, numbers of patients increase, and reimbursements from payers remain flat or decrease. Many practices have responded to these challenges by examining business processes and making changes to improve efficiency and decrease costs. The National Practice Benchmark is a national survey of community oncology practices that provides data for practices to use in managing today's challenging practice environment. Methods: Oncology practices were invited to participate in an online benchmarking survey. One hundred eighty-nine practices from 44 states responded to the survey, and demographic, operational, and financial data were collected for calendar year 2009 or the most recently completed fiscal year. Results: Data from 2009 were compiled and compared with previously collected 2007 and 2008 data. The data reveal that total revenue has increased by approximately 6% per year over this 3-year period. During the same period, however, cost of drugs increased dramatically: 13.5% increase from 2007 to 2008 and 16% from 2008 to 2009. Total practice expense increased at virtually the same level as drug costs in 2008 and was flat for 2009. Conclusion: Survey results indicate an overall lowering of practice expenses even as cost of drugs continues to rise, and are consistent with the slight increase in the number of new patients per full-time equivalent hematology/oncology physician. These measures indicate an overall increase in service delivery efficiency and adaptation by many practices to the changing practice environment. PMID:21197184

  18. Benchmark Evaluation of Plutonium Hemispheres Reflected by Steel and Oil

    SciTech Connect

    John Darrell Bess

    2008-06-01

    During the period from June 1967 through September 1969 a series of critical experiments was performed at the Rocky Flats Critical Mass Laboratory with spherical and hemispherical plutonium assemblies, built as nested hemishells, as part of a Nuclear Safety Facility Experimental Program to evaluate operational safety margins for the Rocky Flats Plant. These assemblies were both bare and fully or partially oil-reflected. Many of these experiments were subcritical, with an extrapolation to the critical configuration, or critical at a particular oil height. Existing records reveal that 167 experiments were performed over the course of 28 months. Unfortunately, much of the data was not recorded. A reevaluation of the experiments was summarized in a report for future experimental and computational analyses. This report examines only fifteen partially oil-reflected hemispherical assemblies. Fourteen of these assemblies also had close-fitting stainless-steel hemishell reflectors, used to determine the effective critical reflector height of oil with varying steel-reflector thickness. The experiments and the uncertainties in their keff values were evaluated to determine their potential as valid plutonium criticality benchmark experiments.
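
    The standard technique for extrapolating such subcritical configurations to critical is the inverse-multiplication (1/M) method: as the assembly approaches critical, the neutron multiplication M diverges, so 1/M tends to zero, and a fit of 1/M against the varied parameter (here, oil height) locates the critical point. A minimal sketch with hypothetical count data (not the 1967-1969 measurements):

        import numpy as np

        # Hypothetical detector counts vs. oil reflector height
        height = np.array([5.0, 10.0, 15.0, 20.0])       # cm
        counts = np.array([400.0, 600.0, 1100.0, 2900.0])
        counts0 = 300.0                                   # source-only baseline

        inv_m = counts0 / counts                          # 1/M = C0 / C
        slope, intercept = np.polyfit(height, inv_m, 1)   # linear fit
        h_crit = -intercept / slope                       # height where 1/M -> 0
        print(f"extrapolated critical oil height ~ {h_crit:.1f} cm")

    In practice 1/M is not exactly linear, so the fit is repeated as data accumulate closer to critical.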

  19. Benchmarking of Monte Carlo based shutdown dose rate calculations for applications to JET.

    PubMed

    Petrizzi, L; Batistoni, P; Fischer, U; Loughlin, M; Pereslavtsev, P; Villari, R

    2005-01-01

    The calculation of dose rates after shutdown is an important issue for operating nuclear reactors. A validated computational tool is needed for reliable dose rate calculations. In fusion devices, neutrons induce high levels of radioactivity and correspondingly high doses. The complex geometries of the devices require sophisticated geometry modelling and computational tools for transport calculations; simple rule-of-thumb laws do not always apply well. Two computational procedures have been developed recently and applied to fusion machines. Comparisons between the two methods showed some inherent discrepancies when applied to calculations for ITER, while good agreement was found for a 14 MeV point-source neutron benchmark experiment. Further benchmarks were considered necessary to investigate in more detail the reasons for the different results in different cases. In this context, application to the Joint European Torus (JET) machine was considered a useful benchmark exercise. In a first calculational benchmark with a representative D-T irradiation history of JET, the two methods differed by no more than 25%. In another, more realistic benchmark exercise, which is the subject of this paper, the real irradiation history of the D-T and D-D campaigns conducted at JET in 1997-98 was used to calculate the shutdown doses at different locations, irradiation times, and decay times. Experimental dose data recorded at JET for the same conditions offer the possibility to check the prediction capability of the calculations and thus show the applicability (and the constraints) of the procedures and data for the rather complex shutdown dose rate analysis of real fusion devices. Calculation results obtained by the two methods are reported below; comparison with experimental results gives discrepancies ranging between factors of 2 and 10. These can be ascribed to the high uncertainty of the experimental data and the unsatisfactory JET model used in the calculation. A new
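
    For a single activation product, the physics both procedures must capture reduces to buildup during irradiation followed by decay during cooling. A toy sketch of that kernel (hypothetical nuclide data; production rate R = N_target * sigma * phi):

        import math

        R = 1.0e9                        # production rate [atoms/s] (illustrative)
        half_life = 15.0 * 3600.0        # hypothetical ~15 h half-life [s]
        lam = math.log(2.0) / half_life

        def activity(t_irr, t_cool):
            """Activity [Bq] after t_irr seconds of irradiation and t_cool of cooling."""
            return R * (1.0 - math.exp(-lam * t_irr)) * math.exp(-lam * t_cool)

        # Saturation during a month-long campaign, then decay over a week of shutdown
        for days in (0, 1, 3, 7):
            print(f"{days} d cooling: {activity(30*24*3600, days*24*3600):.3e} Bq")

    A real shutdown dose calculation sums such terms over thousands of nuclides and full decay chains, for the actual irradiation history, and then transports the decay photons to the detector location; the two procedures differ mainly in how they organize that neutron-activation-photon coupling.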

  20. A Benchmarking Initiative for Reactive Transport Modeling Applied to Subsurface Environmental Applications

    NASA Astrophysics Data System (ADS)

    Steefel, C. I.

    2015-12-01

    Over the last 20 years, we have seen the evolution of multicomponent reactive transport modeling and the expanding range and increasing complexity of subsurface environmental applications it is being used to address. Reactive transport modeling is being asked to provide accurate assessments of engineering performance and risk for important issues with far-reaching consequences. As a result, the complexity and detail of subsurface processes, properties, and conditions that can be simulated have significantly expanded. Closed-form solutions are necessary and useful, but limited to situations that are far simpler than typical applications that combine many physical and chemical processes, in many cases in coupled form. In the absence of closed-form and yet realistic solutions for complex applications, numerical benchmark problems with an accepted set of results will be indispensable for qualifying codes for various environmental applications. The intent of this benchmarking exercise, now underway for more than five years, is to develop and publish a set of well-described benchmark problems that can be used to demonstrate simulator conformance with norms established by the subsurface science and engineering community. The objective is not to verify this or that specific code (the reactive transport codes play a supporting role in this regard), but rather to use the codes to verify that a common solution of the problem can be achieved. Thus, the objective of each of the manuscripts is to present an environmentally relevant benchmark problem that tests conceptual model capabilities, numerical implementation, process coupling, and accuracy. The benchmark problems developed to date include 1) microbially-mediated reactions, 2) isotopes, 3) multi-component diffusion, 4) uranium fate and transport, 5) metal mobility in mining affected systems, and 6) waste repositories and related aspects.

  1. OCTALIS benchmarking: comparison of four watermarking techniques

    NASA Astrophysics Data System (ADS)

    Piron, Laurent; Arnold, Michael; Kutter, Martin; Funk, Wolfgang; Boucqueau, Jean M.; Craven, Fiona

    1999-04-01

    In this paper, benchmarking results for watermarking techniques are presented. The benchmark includes evaluation of watermark robustness and of subjective visual image quality. Four different algorithms are compared and exhaustively tested. One goal of these tests is to evaluate the feasibility of a Common Functional Model (CFM) developed in the European Project OCTALIS and to determine parameters of this model, such as the length of one watermark. This model solves the problem of image trading over an insecure network, such as the Internet, and employs hybrid watermarking. Another goal is to evaluate the resistance of the watermarking techniques when subjected to a set of attacks. Results show that the tested techniques do not behave alike and that none of the tested methods has optimal characteristics. A final conclusion is that, as for the evaluation of compression techniques, clear guidelines are necessary to evaluate and compare watermarking techniques.
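
    Benchmarks of this kind typically pair robustness tests (attack, then re-run the detector) with an image-quality measure; PSNR is the usual objective companion to subjective scoring, so a sketch of it is included here as an assumed ingredient rather than the paper's exact protocol:

        import numpy as np

        def psnr(original, distorted, peak=255.0):
            """Peak signal-to-noise ratio in dB between two 8-bit images."""
            mse = np.mean((original.astype(float) - distorted.astype(float)) ** 2)
            return float("inf") if mse == 0 else 10.0 * np.log10(peak**2 / mse)

        img = np.random.randint(0, 256, (256, 256))
        marked = np.clip(img + np.random.normal(0.0, 2.0, img.shape), 0, 255)
        print(f"PSNR = {psnr(img, marked):.1f} dB")   # ~42 dB: nearly invisible mark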

  2. Specification for the VERA Depletion Benchmark Suite

    SciTech Connect

    Kim, Kang Seog

    2015-12-17

    The CASL neutronics simulator MPACT is under development for neutronics and thermal-hydraulics (T-H) coupled simulation of pressurized water reactors. MPACT includes the ORIGEN-API and an internal depletion module to perform depletion calculations based upon neutron-material reactions and radioactive decay. Validating the depletion capability is a challenge because of insufficient measured data. One indirect way to validate it is to perform a code-to-code comparison on benchmark problems. In this study a depletion benchmark suite has been developed, and a detailed guideline has been provided for obtaining meaningful computational outcomes that can be used in validating the MPACT depletion capability.
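
    In practice such a code-to-code depletion comparison reduces to running the codes on the same benchmark problem and reporting reactivity and isotopic differences versus burnup. A minimal sketch of the comparison step (hypothetical k-infinity histories, not MPACT output):

        import numpy as np

        burnup = np.array([0.0, 10.0, 20.0, 40.0])        # GWd/tHM
        kinf_a = np.array([1.180, 1.095, 1.020, 0.910])   # code A (illustrative)
        kinf_b = np.array([1.181, 1.093, 1.022, 0.908])   # code B (illustrative)

        # Reactivity difference in pcm: (1/k_A - 1/k_B) * 1e5
        diff_pcm = (1.0 / kinf_a - 1.0 / kinf_b) * 1e5
        for bu, d in zip(burnup, diff_pcm):
            print(f"{bu:5.1f} GWd/tHM: {d:+7.1f} pcm")
        print(f"max |difference| = {np.max(np.abs(diff_pcm)):.1f} pcm")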

  3. Benchmark West Texas Intermediate crude assayed

    SciTech Connect

    Rhodes, A.K.

    1994-08-15

    The paper gives an assay of West Texas Intermediate, one of the world's market crudes. The price of this crude, known as WTI, is followed by market analysts, investors, traders, and industry managers around the world. The WTI price is used as a benchmark for pricing all other US crude oils. The 41° API, 0.34 wt % sulfur crude is gathered in West Texas and moved to Cushing, Okla., for distribution. The WTI posted price is the price paid for the crude at the wellhead in West Texas and is the true benchmark on which other US crudes are priced. The spot price is the negotiated price for short-term trades of the crude. The New York Mercantile Exchange (Nymex) price is a futures price for barrels delivered at Cushing.

  4. Toxicological benchmarks for wildlife. Environmental Restoration Program

    SciTech Connect

    Opresko, D.M.; Sample, B.E.; Suter, G.W.

    1993-09-01

    This report presents toxicological benchmarks for assessment of the effects of 55 chemicals on six representative mammalian wildlife species (short-tailed shrew, white-footed mouse, cottontail rabbit, mink, red fox, and whitetail deer) and eight avian wildlife species (American robin, woodcock, wild turkey, belted kingfisher, great blue heron, barred owl, Cooper's hawk, and red-tailed hawk) (scientific names are presented in Appendix C). These species were chosen because they are widely distributed and provide a representative range of body sizes and diets. The chemicals are some of those that occur at United States Department of Energy (DOE) waste sites. The benchmarks presented in this report are values believed to be nonhazardous for the listed wildlife species.

  5. Benchmark On Sensitivity Calculation (Phase III)

    SciTech Connect

    Ivanova, Tatiana; Laville, Cedric; Dyrda, James; Mennerdahl, Dennis; Golovko, Yury; Raskach, Kirill; Tsiboulia, Anatoly; Lee, Gil Soo; Woo, Sweng-Woong; Bidaud, Adrien; Patel, Amrit; Bledsoe, Keith C; Rearden, Bradley T; Gulliford, J.

    2012-01-01

    The sensitivities of the keff eigenvalue to neutron cross sections have become commonly used in similarity studies and as part of the validation algorithm for criticality safety assessments. To test calculations of the sensitivity coefficients, a benchmark study (Phase III) has been established by the OECD-NEA/WPNCS/EG UACSA (Expert Group on Uncertainty Analysis for Criticality Safety Assessment). This paper presents sensitivity results generated by the benchmark participants using various computational tools based upon different computational methods: SCALE/TSUNAMI-3D and -1D, MONK, APOLLO2-MORET 5, DRAGON-SUSD3D, and MMKKENO. The study demonstrates the performance of the tools. It also illustrates how model simplifications impact the sensitivity results and demonstrates the importance of 'implicit' (self-shielding) sensitivities. This work has been a useful step towards verification of existing and newly developed sensitivity analysis methods.
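
    The quantity being benchmarked is the relative sensitivity coefficient S = (dk/k)/(dσ/σ). The adjoint-based tools listed compute it without re-running the transport solve, but the simplest cross-check is direct perturbation, as sketched here with hypothetical keff values:

        # Central-difference estimate of a relative sensitivity coefficient
        k_nom = 1.00250                       # nominal keff (illustrative)
        k_plus, k_minus = 1.00312, 1.00188    # keff with the cross section perturbed +/-1%
        rel_pert = 0.01

        S = ((k_plus - k_minus) / (2.0 * k_nom)) / rel_pert
        print(f"S = {S:+.4f}")                # ~ +0.062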

  6. Differences in HIV vaccine acceptability between genders.

    PubMed

    Kakinami, Lisa; Newman, Peter A; Lee, Sung-Jae; Duan, Naihua

    2008-05-01

    The development of safe and efficacious preventive HIV vaccines offers the best long-term hope of controlling the AIDS pandemic. Nevertheless, suboptimal uptake of safe and efficacious vaccines that already exist suggests that HIV vaccine acceptability cannot be assumed, particularly among communities most vulnerable to HIV. The present study aimed to identify barriers and motivators to future HIV vaccine acceptability among low socioeconomic, ethnically diverse men and women in Los Angeles County. Participants completed a cross-sectional survey assessing their attitudes and beliefs regarding future HIV vaccines. Hypothetical HIV vaccine scenarios were administered to determine HIV vaccine acceptability. Two-sided t-tests were performed, stratified by gender, to examine the association between vaccine acceptability and potential barriers and motivators. Barriers to HIV vaccine acceptability differed between men and women. For women, barriers to HIV vaccine acceptability were related to their intimate relationships (p<0.05), negative experiences with health care providers (p<0.05), and anticipated difficulties procuring insurance (p<0.01). Men were concerned that the vaccine would weaken the immune system (p<0.005) or would affect their HIV test results (p<0.05). Motivators for women included the ability to conceive a child without worrying about contracting HIV (p<0.10) and support from their spouse/significant other for being vaccinated (p<0.10). Motivators for men included feeling safer with sex partners (p<0.05) and social influence from friends to get vaccinated (p<0.005). Family support for HIV immunization was a motivator for both men and women (p<0.10). Gender-specific interventions may increase vaccine acceptability among men and women at elevated risk for HIV infection. Among women, interventions need to focus on addressing barriers due to gendered power dynamics in relationships and discrimination in health care. Among men, education that addresses fears

  7. EXPERIMENTAL BENCHMARKING OF THE MAGNETIZED FRICTION FORCE.

    SciTech Connect

    FEDOTOV, A.V.; GALNANDER, B.; LITVINENKO, V.N.; LOFNES, T.; SIDORIN, A.O.; SMIRNOV, A.V.; ZIEMANN, V.

    2005-09-18

    High-energy electron cooling, presently considered an essential tool for several applications in high-energy and nuclear physics, requires an accurate description of the friction force. A series of measurements was performed at CELSIUS with the goal of providing the accurate data needed for benchmarking theories and simulations. Some results of a detailed comparison of experimental data with friction force formulas are presented.
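
    For orientation, the expression most often benchmarked against such cooler data is Parkhomchuk's empirical formula for the magnetized friction force; the schematic form below is offered as an assumption for context (prefactor and Coulomb-logarithm conventions vary between references, and the paper does not specify which formulas were compared):

        \vec{F} = -\frac{4\, n_e Z^2 e^4}{m_e}\,\Lambda_P\,
                  \frac{\vec{V}}{\left(V^2 + V_{\mathrm{eff}}^2\right)^{3/2}},
        \qquad
        \Lambda_P = \ln\!\left(\frac{\rho_{\max} + \rho_{\min} + r_L}{\rho_{\min} + r_L}\right)

    Here n_e is the electron density, V the ion velocity in the beam frame, V_eff an effective velocity absorbing magnetic-field imperfections, and r_L the electron Larmor radius.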

  8. Reactor calculation benchmark PCA blind test results

    SciTech Connect

    Kam, F.B.K.; Stallmann, F.W.

    1980-01-01

    Further improvement in calculational procedures or a combination of calculations and measurements is necessary to attain 10 to 15% (1 sigma) accuracy for neutron exposure parameters (flux greater than 0.1 MeV, flux greater than 1.0 MeV, and dpa). The calculational modeling of power reactors should be benchmarked in an actual LWR plant to provide final uncertainty estimates for end-of-life predictions and limitations for plant operations. 26 references, 14 figures, 6 tables.

  9. Collection of Neutronic VVER Reactor Benchmarks.

    2002-01-30

    Version 00 A system of computational neutronic benchmarks has been developed. In this CD-ROM report, the data generated in the course of the project are reproduced in their entirety with minor corrections. The editing that was performed on the various documents comprising this report was primarily meant to facilitate the production of the CD-ROM and to enable electronic retrieval of the information. The files are electronically navigable.

  10. Experimental Benchmarking of the Magnetized Friction Force

    SciTech Connect

    Fedotov, A. V.; Litvinenko, V. N.; Galnander, B.; Lofnes, T.; Ziemann, V.; Sidorin, A. O.; Smirnov, A. V.

    2006-03-20

    High-energy electron cooling, presently considered an essential tool for several applications in high-energy and nuclear physics, requires an accurate description of the friction force. A series of measurements was performed at CELSIUS with the goal of providing the accurate data needed for benchmarking theories and simulations. Some results of a detailed comparison of experimental data with friction force formulas are presented.

  11. Benchmarking and accounting for the (private) cloud

    NASA Astrophysics Data System (ADS)

    Belleman, J.; Schwickerath, U.

    2015-12-01

    During the past two years large parts of the CERN batch farm have been moved to virtual machines running on the CERN internal cloud. During this process a large fraction of the resources, which had previously been used as physical batch worker nodes, were converted into hypervisors. Due to the large spread of per-core performance in the farm, caused by its heterogeneous nature, it is necessary to have a good knowledge of the performance of the virtual machines. This information is used both for scheduling in the batch system and for accounting. While in the previous setup worker nodes were classified and benchmarked based on the purchase order number, for virtual batch worker nodes this is no longer possible; the information is now either hidden or hard to retrieve. Therefore we developed a new scheme to classify worker nodes according to their performance. The new scheme is flexible enough to be usable both for virtual and physical machines in the batch farm. With the new classification it is possible to estimate the performance of worker nodes even in a very dynamic farm, with worker nodes coming and going at a high rate, without the need to benchmark each new node again. An extension to public cloud resources is possible if all conditions under which the benchmark numbers were obtained are fulfilled.
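
    A scheme of this kind can be as simple as keying each node's benchmark score to the hardware attributes still visible from inside a virtual machine, then reusing the stored score for any new node with the same key. A hedged sketch (hypothetical attribute names and scores, not CERN's actual implementation):

        # Classify worker nodes by visible hardware traits and reuse stored
        # benchmark scores so each hardware class is benchmarked only once.
        reference_scores = {}   # (cpu_model, cores, mem_gb) -> score

        def classify(node):
            return (node["cpu_model"], node["cores"], node["mem_gb"])

        def score(node, benchmark=None):
            key = classify(node)
            if key not in reference_scores and benchmark is not None:
                reference_scores[key] = benchmark(node)   # measure once per class
            return reference_scores.get(key)

        vm = {"cpu_model": "Xeon E5-2630 v3", "cores": 8, "mem_gb": 16}
        print(score(vm, benchmark=lambda n: 11.2))   # first of its class: measured
        print(score(dict(vm)))                       # later arrivals: looked up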

  12. Introduction to the HPC Challenge Benchmark Suite

    SciTech Connect

    Luszczek, Piotr; Dongarra, Jack J.; Koester, David; Rabenseifner,Rolf; Lucas, Bob; Kepner, Jeremy; McCalpin, John; Bailey, David; Takahashi, Daisuke

    2005-04-25

    The HPC Challenge benchmark suite has been released by the DARPA HPCS program to help define the performance boundaries of future Petascale computing systems. HPC Challenge is a suite of tests that examine the performance of HPC architectures using kernels with memory access patterns more challenging than those of the High Performance Linpack (HPL) benchmark used in the Top500 list. Thus, the suite is designed to augment the Top500 list, providing benchmarks that bound the performance of many real applications as a function of memory access characteristics, e.g., spatial and temporal locality, and providing a framework for including additional tests. In particular, the suite is composed of several well-known computational kernels, namely STREAM, HPL, matrix multiply (DGEMM), parallel matrix transpose (PTRANS), FFT, RandomAccess, and bandwidth/latency tests (b_eff), that attempt to span high and low spatial and temporal locality space. By design, the HPC Challenge tests are scalable, with the size of data sets being a function of the largest HPL matrix for the tested system.
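
    Of the kernels named, STREAM is the simplest to illustrate: it measures sustainable memory bandwidth with long vector operations such as the triad a = b + s*c. A rough NumPy rendition of the idea (the official benchmark is C/Fortran with strict run rules; this sketch only shows what is being timed):

        import time
        import numpy as np

        n = 20_000_000                      # long enough to defeat caches
        b, c, s = np.random.rand(n), np.random.rand(n), 3.0

        t0 = time.perf_counter()
        a = b + s * c                       # the STREAM "triad" kernel
        dt = time.perf_counter() - t0

        bytes_moved = 3 * n * 8             # read b, read c, write a (8-byte doubles)
        print(f"triad bandwidth ~ {bytes_moved / dt / 1e9:.1f} GB/s")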

  13. Accepted scientific research works (abstracts).

    PubMed

    2014-01-01

    These are the 39 accepted abstracts for IAYT's Symposium on Yoga Research (SYR) September 24-24, 2014 at the Kripalu Center for Yoga & Health and published in the Final Program Guide and Abstracts. PMID:25645134

  14. L-286 Acceptance Test Record

    SciTech Connect

    HARMON, B.C.

    2000-01-14

    This document provides a detailed account of how the acceptance testing was conducted for Project L-286, ''200E Area Sanitary Water Plant Effluent Stream Reduction''. The testing of the L-286 instrumentation system was conducted under the direct supervision

  15. Indoor Modelling Benchmark for 3D Geometry Extraction

    NASA Astrophysics Data System (ADS)

    Thomson, C.; Boehm, J.

    2014-06-01

    A combination of faster, cheaper, and more accurate hardware, more sophisticated software, and greater industry acceptance has laid the foundations for an increased demand for accurate 3D parametric models of buildings. Pointclouds are currently the data source of choice, with static terrestrial laser scanning the predominant tool for large, dense volume measurement. The current importance of pointclouds as the primary source of real-world representation is endorsed by CAD software vendors' acquisitions of pointcloud engines in 2011. Both the capture and modelling of indoor environments require considerable operator time (and therefore cost). Automation is seen as a way to aid this by reducing the workload of the user, and some commercial packages have appeared that provide some degree of automation. In the data capture phase, advances in indoor mobile mapping systems are speeding up the process, albeit currently with a reduction in accuracy. This paper therefore presents freely accessible pointcloud datasets of two typical areas of a building, each captured with two different capture methods and each with an accurate, wholly manually created model. These datasets are provided as a benchmark for the research community to gauge the performance and improvements of various techniques for indoor geometry extraction. With this in mind, non-proprietary, interoperable formats are provided, such as E57 for the scans and IFC for the reference model. The datasets can be found at: http://indoor-bench.github.io/indoor-bench.

  16. Demonstration of robust quantum gate tomography via randomized benchmarking

    NASA Astrophysics Data System (ADS)

    Johnson, Blake R.; da Silva, Marcus P.; Ryan, Colm A.; Kimmel, Shelby; Chow, Jerry M.; Ohki, Thomas A.

    2015-11-01

    Typical quantum gate tomography protocols struggle with a self-consistency problem: the gate operation cannot be reconstructed without knowledge of the initial state and final measurement, but such knowledge cannot be obtained without well-characterized gates. A recently proposed technique, known as randomized benchmarking tomography (RBT), sidesteps this self-consistency problem by designing experiments to be insensitive to preparation and measurement imperfections. We implement this proposal in a superconducting qubit system, using a number of experimental improvements, including implementation of each element of the Clifford group as a single ‘atomic’ pulse and custom control hardware that enables high-overhead protocols. We show robust reconstruction of several single-qubit quantum gates, including a unitary outside the Clifford group. We demonstrate that RBT yields physical gate reconstructions that are consistent with fidelities obtained by conventional randomized benchmarking (RB).
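
    RBT builds on standard randomized benchmarking, in which the survival probability decays as A·p^m + B with Clifford sequence length m, and the decay constant p yields the average error per Clifford independently of preparation and measurement errors. A sketch of that underlying fit on synthetic data (not the paper's measurements):

        import numpy as np
        from scipy.optimize import curve_fit

        def rb_decay(m, A, p, B):
            return A * p**m + B

        # Synthetic survival probabilities vs. Clifford sequence length
        m = np.array([1, 2, 4, 8, 16, 32, 64, 128, 256], dtype=float)
        rng = np.random.default_rng(0)
        y = 0.5 * 0.995**m + 0.5 + rng.normal(0.0, 0.003, m.size)

        (A, p, B), _ = curve_fit(rb_decay, m, y, p0=(0.5, 0.99, 0.5))
        r = (1.0 - p) * (2 - 1) / 2        # average error per Clifford, d = 2
        print(f"p = {p:.5f}, error per Clifford r = {r:.2e}")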

  17. Benchmarking and performance analysis of the CM-2. [SIMD computer

    NASA Technical Reports Server (NTRS)

    Myers, David W.; Adams, George B., II

    1988-01-01

    A suite of benchmarking routines testing communication, basic arithmetic operations, and selected kernel algorithms written in LISP and PARIS was developed for the CM-2. Experiment runs are automated via a software framework that sequences individual tests, allowing for unattended overnight operation. Multiple measurements are made and treated statistically to generate well-characterized results from the noisy values given by cm:time. The results obtained provide a comparison with similar, but less extensive, testing done on a CM-1. Tests were chosen to aid the algorithmist in constructing fast, efficient, and correct code on the CM-2, as well as gain insight into what performance criteria are needed when evaluating parallel processing machines.
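
    The statistical treatment described, repeated runs of a noisy timer reduced to a well-characterized estimate, is easy to sketch generically; this stand-in for the framework around cm:time is hypothetical and in Python rather than LISP:

        import statistics
        import time

        def measure(fn, repeats=30):
            """Run fn repeatedly; report median and spread of wall-clock times."""
            samples = []
            for _ in range(repeats):
                t0 = time.perf_counter()
                fn()
                samples.append(time.perf_counter() - t0)
            return statistics.median(samples), statistics.stdev(samples)

        med, sd = measure(lambda: sum(i * i for i in range(100_000)))
        print(f"median {med*1e3:.2f} ms, sd {sd*1e3:.2f} ms over 30 runs")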

  18. Evaluation of the HTR-10 Reactor as a Benchmark for Physics Code QA

    SciTech Connect

    William K. Terry; Soon Sam Kim; Leland M. Montierth; Joshua J. Cogliati; Abderrafi M. Ougouag

    2006-09-01

    The HTR-10 is a small (10 MWt) pebble-bed research reactor intended to develop pebble-bed reactor (PBR) technology in China. It will be used to test and develop fuel, verify PBR safety features, demonstrate combined electricity production and co-generation of heat, and provide experience in PBR design, operation, and construction. As the only currently operating PBR in the world, the HTR-10 can provide data of great interest to everyone involved in PBR technology. In particular, if it yields data of sufficient quality, it can be used as a benchmark for assessing the accuracy of computer codes proposed for use in PBR analysis. This paper summarizes the evaluation for the International Reactor Physics Experiment Evaluation Project (IRPhEP) of data obtained in measurements of the HTR-10’s initial criticality experiment for use as benchmarks for reactor physics codes.

  19. 2013 SYR Accepted Poster Abstracts.

    PubMed

    2013-01-01

    Promote Health and Well-being Among Middle School Educators. 20. A Systematic Review of Yoga-based Interventions for Objective and Subjective Balance Measures. 21. Disparities in Yoga Use: A Multivariate Analysis of 2007 National Health Interview Survey Data. 22. Implementing Yoga Therapy Adapted for Older Veterans Who Are Cancer Survivors. 23. Randomized, Controlled Trial of Yoga for Women With Major Depressive Disorder: Decreased Ruminations as Potential Mechanism for Effects on Depression? 24. Yoga Beyond the Metropolis: A Yoga Telehealth Program for Veterans. 25. Yoga Practice Frequency, Relationship Maintenance Behaviors, and the Potential Mediating Role of Relationally Interdependent Cognition. 26. Effects of Medical Yoga in Quality of Life, Blood Pressure, and Heart Rate in Patients With Paroxysmal Atrial Fibrillation. 27. Yoga During School May Promote Emotion Regulation Capacity in Adolescents: A Group Randomized, Controlled Study. 28. Integrated Yoga Therapy in a Single Session as a Stress Management Technique in Comparison With Other Techniques. 29. Effects of a Classroom-based Yoga Intervention on Stress and Attention in Second and Third Grade Students. 30. Improving Memory, Attention, and Executive Function in Older Adults with Yoga Therapy. 31. Reasons for Starting and Continuing Yoga. 32. Yoga and Stress Management May Buffer Against Sexual Risk-Taking Behavior Increases in College Freshmen. 33. Whole-systems Ayurveda and Yoga Therapy for Obesity: Outcomes of a Pilot Study. 34. Women's Phenomenological Experiences of Exercise, Breathing, and the Body During Yoga for Smoking Cessation Treatment. 35. Mindfulness as a Tool for Trauma Recovery: Examination of a Gender-responsive Trauma-informed Integrative Mindfulness Program for Female Inmates. 36. Yoga After Stroke Leads to Multiple Physical Improvements. 37. Tele-Yoga in Patients With Chronic Obstructive Pulmonary Disease and Heart Failure: A Mixed-methods Study of Feasibility, Acceptability, and Safety

  20. Benchmarking NSP Reactors with CORETRAN-01

    SciTech Connect

    Hines, Donald D.; Grow, Rodney L.; Agee, Lance J

    2004-10-15

    As part of an overall verification and validation effort, the Electric Power Research Institute's (EPRI's) CORETRAN-01 has been benchmarked against Northern States Power's Prairie Island and Monticello reactors through 12 cycles of operation. The two Prairie Island reactors are Westinghouse 2-loop units with 121 asymmetric 14 x 14 lattice assemblies utilizing up to 8 wt% gadolinium, while Monticello is a General Electric 484-bundle boiling water reactor. All reactor cases were executed in full core utilizing 24 axial nodes per assembly in the fuel, with 1 additional reflector node above, below, and around the perimeter of the core. Cross-section sets used in this benchmark effort were generated by EPRI's CPM-3 as well as Studsvik's CASMO-3 and CASMO-4 to allow for separation of the lattice calculation effect from the nodal simulation method. These cases exercised the depletion-shuffle-depletion sequence through four cycles for each unit, using plant data to follow actual operations. Flux map calculations were performed for comparison to corresponding measurement statepoints. Additionally, start-up physics testing cases were used to predict cycle physics parameters for comparison to existing plant methods and measurements. These benchmark results agreed well with both current analysis methods and plant measurements, indicating that CORETRAN-01 may be appropriate for steady-state physics calculations of both the Prairie Island and Monticello reactors. However, only the Prairie Island results are discussed in this paper, since the Monticello results were of similar quality and agreement. No attempt was made in this work to investigate CORETRAN-01's kinetics capability by analyzing plant transients, but these steady-state results form a good foundation for moving in that direction.