Science.gov

Sample records for acceptable benchmark experiment

  1. Benchmarking Asteroid-Deflection Experiment

    NASA Astrophysics Data System (ADS)

    Remington, Tane; Bruck Syal, Megan; Owen, John Michael; Miller, Paul L.

    2016-10-01

    An asteroid impacting Earth could have devastating consequences. In preparation to deflect or disrupt one before it reaches Earth, it is imperative to have modeling capabilities that adequately simulate the deflection actions. Code validation is key to ensuring full confidence in simulation results used in an asteroid-mitigation plan. We are benchmarking well-known impact experiments using Spheral, an adaptive smoothed-particle hydrodynamics code, to validate our modeling of asteroid deflection. We describe our simulation results, compare them with experimental data, and discuss what we have learned from our work. This work was performed under the auspices of the U.S. Department of Energy by Lawrence Livermore National Laboratory under Contract DE-AC52-07NA27344. LLNL-ABS-695540

  2. The skyshine benchmark experiment revisited.

    PubMed

    Terry, Ian R

    2005-01-01

    With the coming renaissance of nuclear power, heralded by new nuclear power plant construction in Finland, the issue of qualifying modern tools for calculation becomes prominent. Among the calculations required may be the determination of radiation levels outside the plant owing to skyshine. For example, knowledge of the degree of accuracy in the calculation of gamma skyshine through the turbine hall roof of a BWR plant is important. Modern survey programs which can calculate skyshine dose rates tend to be qualified only by verification with the results of Monte Carlo calculations. However, in the past, exacting experimental work has been performed in the field for gamma skyshine, notably the benchmark work in 1981 by Shultis and co-workers, which considered not just the open (unroofed) source case but also the effects of placing a concrete roof above the source enclosure. The latter case is a better reflection of reality, as safety considerations nearly always require the source to be shielded in some way, usually by substantial walls but only by a thinner roof. One of the tools developed since that time, which can both calculate skyshine radiation and accurately model the geometrical set-up of an experiment, is the code RANKERN, which is used by Framatome ANP and other organisations for general shielding design work. The following description concerns the use of this code to re-address the experimental results from 1981. This then provides a realistic gauge to validate, but also to set limits on, the program for future gamma skyshine applications within the applicable licensing procedures for all users of the code.

  3. Benchmarking criticality safety calculations with subcritical experiments

    SciTech Connect

    Mihalczo, J.T.

    1984-06-01

    Calculation of the neutron multiplication factor at delayed criticality may be necessary for benchmarking calculations, but it may not be sufficient. The use of subcritical experiments to benchmark criticality safety calculations could result in substantial savings in fuel material costs for experiments. In some cases subcritical configurations could be used to benchmark calculations where sufficient fuel to achieve delayed criticality is not available. By performing a variety of measurements with subcritical configurations, much detailed information can be obtained which can be compared directly with calculations. This paper discusses several measurements that can be performed with subcritical assemblies and presents examples that include comparisons between calculation and experiment where possible. Where this was not possible, examples from critical experiments are used, although the measurement methods could equally be applied to subcritical experiments.

  4. Direct Simulation of a Solidification Benchmark Experiment

    NASA Astrophysics Data System (ADS)

    Carozzani, Tommy; Gandin, Charles-André; Digonnet, Hugues; Bellet, Michel; Zaidat, Kader; Fautrelle, Yves

    2013-02-01

    A solidification benchmark experiment is simulated using a three-dimensional cellular automaton-finite element solidification model. The experiment consists of a rectangular cavity containing a Sn-3 wt pct Pb alloy. The alloy is first melted and then solidified in the cavity. A dense array of thermocouples permits monitoring of temperatures in the cavity and in the heat exchangers surrounding the cavity. After solidification, the grain structure is revealed by metallography. X-ray radiography and inductively coupled plasma spectrometry are also conducted to obtain a distribution map of Pb, i.e., a macrosegregation map. The solidification model solves the heat, solute mass, and momentum conservation equations using the finite element method. It is coupled with a description of the development of grain structure using the cellular automaton method. A careful and direct comparison with experimental results is possible thanks to boundary conditions deduced from the temperature measurements, as well as a careful choice of the values of the material properties for simulation. Results show that the temperature maps and the macrosegregation map can only be approached with a three-dimensional simulation that includes the description of the grain structure.
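
    As background, the grain-structure part of such coupled models rests on a cellular-automaton capture rule. The sketch below is a deliberately minimal two-dimensional illustration of that idea, not the three-dimensional model used in the work above; the grid size, undercooling threshold and neighbourhood are placeholder assumptions.

        import numpy as np

        def ca_capture_step(grain_id, undercooling, threshold=0.5):
            """One simplified cellular-automaton step: a liquid cell (grain_id == 0)
            that is undercooled beyond `threshold` and touches a solidified neighbour
            is captured by that neighbour's grain. Minimal 2-D illustration only."""
            new_id = grain_id.copy()
            nrows, ncols = grain_id.shape
            for i in range(nrows):
                for j in range(ncols):
                    if grain_id[i, j] != 0 or undercooling[i, j] < threshold:
                        continue  # already solid, or not undercooled enough
                    # scan the four nearest neighbours for a solidified cell
                    for di, dj in ((1, 0), (-1, 0), (0, 1), (0, -1)):
                        ni, nj = i + di, j + dj
                        if 0 <= ni < nrows and 0 <= nj < ncols and grain_id[ni, nj] != 0:
                            new_id[i, j] = grain_id[ni, nj]  # inherit the grain index
                            break
            return new_id

        # toy usage: one nucleated grain growing into a uniformly undercooled melt
        ids = np.zeros((5, 5), dtype=int)
        ids[2, 2] = 1                      # seed grain
        dT = np.full((5, 5), 1.0)          # undercooling field (placeholder values)
        ids = ca_capture_step(ids, dT)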

  5. Criticality safety benchmark experiments derived from ANL ZPR assemblies.

    SciTech Connect

    Schaefer, R. W.; Lell, R. M.; McKnight, R. D.

    2003-09-01

    Numerous criticality safety benchmarks have been, and continue to be, developed from experiments performed on Argonne National Laboratory's plate-type fast critical assemblies. The nature and scope of assemblies suitable for deriving these benchmarks are discussed. The benchmark derivation process, including full treatment of all significant uncertainties, is explained. Calculational results are presented that support the small uncertainty assigned to the key derivation step in which complex geometric detail is removed.

  6. Benchmark experiments at ASTRA facility on definition of space distribution of {sup 235}U fission reaction rate

    SciTech Connect

    Bobrov, A. A.; Boyarinov, V. F.; Glushkov, A. E.; Glushkov, E. S.; Kompaniets, G. V.; Moroz, N. P.; Nevinitsa, V. A.; Nosov, V. I.; Smirnov, O. N.; Fomichenko, P. A.; Zimin, A. A.

    2012-07-01

    Results of critical experiments performed at five ASTRA facility configurations modeling high-temperature helium-cooled graphite-moderated reactors are presented. Experiments on the spatial distribution of the {sup 235}U fission reaction rate, performed at four of these five configurations, are presented in more detail. Analysis of the available information showed that all criticality experiments at these five configurations are acceptable for use as critical benchmark experiments, and that all experiments on the spatial distribution of the {sup 235}U fission reaction rate are acceptable for use as physics benchmark experiments. (authors)

  7. Companies' opinions and acceptance of global food safety initiative benchmarks after implementation.

    PubMed

    Crandall, Phil; Van Loo, Ellen J; O'Bryan, Corliss A; Mauromoustakos, Andy; Yiannas, Frank; Dyenson, Natalie; Berdnik, Irina

    2012-09-01

    International attention has been focused on minimizing costs that may unnecessarily raise food prices. One important aspect to consider is the redundant and overlapping costs of food safety audits. The Global Food Safety Initiative (GFSI) has devised benchmarked schemes based on existing international food safety standards for use as a unifying standard accepted by many retailers. The present study was conducted to evaluate the impact of the decision made by Walmart Stores (Bentonville, AR) to require their suppliers to become GFSI compliant. An online survey of 174 retail suppliers was conducted to assess food suppliers' opinions of this requirement and the benefits suppliers realized when they transitioned from their previous food safety systems. The most common reason for becoming GFSI compliant was to meet customers' requirements; thus, supplier implementation of the GFSI standards was not entirely voluntary. Other reasons given for compliance were enhancing food safety and remaining competitive. About 54 % of food processing plants using GFSI benchmarked schemes followed the guidelines of Safe Quality Food 2000 and 37 % followed those of the British Retail Consortium. At the supplier level, 58 % followed Safe Quality Food 2000 and 31 % followed the British Retail Consortium. Respondents reported that the certification process took about 10 months. The most common reason for selecting a certain GFSI benchmarked scheme was because it was widely accepted by customers (retailers). Four other common reasons were (i) the standard has a good reputation in the industry, (ii) the standard was recommended by others, (iii) the standard is most often used in the industry, and (iv) the standard was required by one of their customers. Most suppliers agreed that increased safety of their products was required to comply with GFSI benchmarked schemes. They also agreed that the GFSI required a more carefully documented food safety management system, which often required

  8. Benchmarking NMR experiments: A relational database of protein pulse sequences

    NASA Astrophysics Data System (ADS)

    Senthamarai, Russell R. P.; Kuprov, Ilya; Pervushin, Konstantin

    2010-03-01

    Systematic benchmarking of multi-dimensional protein NMR experiments is a critical prerequisite for optimal allocation of NMR resources for structural analysis of challenging proteins, e.g. large proteins with limited solubility or proteins prone to aggregation. We propose a set of benchmarking parameters for essential protein NMR experiments organized into a lightweight (single XML file) relational database (RDB), which includes all the necessary auxiliaries (waveforms, decoupling sequences, calibration tables, setup algorithms and an RDB management system). The database is interfaced to the Spinach library ( http://spindynamics.org), which enables accurate simulation and benchmarking of NMR experiments on large spin systems. A key feature is the ability to use a single user-specified spin system to simulate the majority of deposited solution state NMR experiments, thus providing the (hitherto unavailable) unified framework for pulse sequence evaluation. This development enables predicting relative sensitivity of deposited implementations of NMR experiments, thus providing a basis for comparison, optimization and, eventually, automation of NMR analysis. The benchmarking is demonstrated with two proteins: the 170-amino-acid I domain of αXβ2 integrin and the 440-amino-acid NS3 helicase.

  9. Providing Nuclear Criticality Safety Analysis Education through Benchmark Experiment Evaluation

    SciTech Connect

    John D. Bess; J. Blair Briggs; David W. Nigg

    2009-11-01

    One of the challenges that today's new workforce of nuclear criticality safety engineers face is the opportunity to provide assessment of nuclear systems and establish safety guidelines without having received significant experience or hands-on training prior to graduation. Participation in the International Criticality Safety Benchmark Evaluation Project (ICSBEP) and/or the International Reactor Physics Experiment Evaluation Project (IRPhEP) provides students and young professionals the opportunity to gain experience and enhance critical engineering skills.

  10. Benchmark Evaluation of the Medium-Power Reactor Experiment Program Critical Configurations

    SciTech Connect

    Margaret A. Marshall; John D. Bess

    2013-02-01

    A series of small, compact critical assembly (SCCA) experiments was performed in 1962-1965 at the Oak Ridge National Laboratory Critical Experiments Facility (ORCEF) for the Medium-Power Reactor Experiment (MPRE) program. The MPRE was a stainless-steel clad, highly enriched uranium (HEU)-O2 fuelled, BeO-reflected reactor designed to provide electrical power to space vehicles. Cooling and heat transfer were to be achieved by boiling potassium in the reactor core and passing vapor directly through a turbine. Graphite- and beryllium-reflected assemblies were constructed at ORCEF to verify the critical mass, power distribution, and other reactor physics measurements needed to validate reactor calculations and reactor physics methods. The experimental series was broken into three parts, with the third portion of the experiments representing the beryllium-reflected measurements. The latter experiments are of interest for validating current reactor design efforts for a fission surface power reactor. The entire series has been evaluated as acceptable benchmark experiments and submitted for publication in the International Handbook of Evaluated Criticality Safety Benchmark Experiments and in the International Handbook of Evaluated Reactor Physics Benchmark Experiments.

  11. TRIGA Mark II Criticality Benchmark Experiment with Burned Fuel

    SciTech Connect

    Persic, Andreja; Ravnik, Matjaz; Zagar, Tomaz

    2000-12-15

    The experimental results of criticality benchmark experiments performed at the Jozef Stefan Institute TRIGA Mark II reactor are presented. The experiments were performed with partly burned fuel in two compact and uniform core configurations in the same arrangements as were used in the fresh fuel criticality benchmark experiment performed in 1991. In the experiments, both core configurations contained only 12 wt% U-ZrH fuel with 20% enriched uranium. The first experimental core contained 43 fuel elements with an average burnup of 1.22 MWd or 2.8% {sup 235}U burned. The second experimental core configuration was composed of 48 fuel elements with an average burnup of 1.15 MWd or 2.6% {sup 235}U burned. The experimental determination of k{sub eff} for both core configurations, one subcritical and one critical, is presented. Burnup for all fuel elements was calculated in two-dimensional four-group diffusion approximation using the TRIGLAV code. The burnup of several fuel elements was also measured by the reactivity method.
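
    As background (standard reactor physics, not stated in the record above), the pairing of a burnup in megawatt-days with a percentage of {sup 235}U burned follows from the energy released per fission, roughly 200 MeV; converting the result to a percentage requires the initial {sup 235}U inventory of the fuel, which is not restated here. In LaTeX form:

        % roughly 1 MWd corresponds to the fission of about 1.05 g of U-235;
        % parasitic neutron capture raises the total U-235 consumption somewhat
        \frac{m_{\mathrm{fissioned}}}{1\,\mathrm{MWd}}
          \approx \frac{8.64\times10^{10}\,\mathrm{J}}
                       {200\,\mathrm{MeV}\times 1.602\times10^{-13}\,\mathrm{J/MeV}}
          \times \frac{235\,\mathrm{g\,mol^{-1}}}{6.022\times10^{23}\,\mathrm{mol^{-1}}}
          \approx 1.05\,\mathrm{g}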

  12. Benchmark enclosure fire suppression experiments - phase 1 test report.

    SciTech Connect

    Figueroa, Victor G.; Nichols, Robert Thomas; Blanchat, Thomas K.

    2007-06-01

    A series of fire benchmark water suppression tests was performed that may provide guidance for dispersal systems for the protection of high value assets. The test results provide boundary and temporal data necessary for water spray suppression model development and validation. A review of fire suppression is presented for both gaseous suppression and water mist fire suppression. The experimental setup and procedure for gathering water suppression performance data are shown. Characteristics of the nozzles used in the testing are presented. Results of the experiments are discussed.

  13. SILENE Benchmark Critical Experiments for Criticality Accident Alarm Systems

    SciTech Connect

    Miller, Thomas Martin; Reynolds, Kevin H.

    2011-01-01

    In October 2010 a series of benchmark experiments was conducted at the Commissariat a Energie Atomique et aux Energies Alternatives (CEA) Valduc SILENE [1] facility. These experiments were a joint effort between the US Department of Energy (DOE) and the French CEA. The purpose of these experiments was to create three benchmarks for the verification and validation of radiation transport codes and evaluated nuclear data used in the analysis of criticality accident alarm systems (CAASs). This presentation will discuss the geometric configuration of these experiments and the quantities that were measured and will present some preliminary comparisons between the measured data and calculations. This series consisted of three single-pulsed experiments with the SILENE reactor. During the first experiment the reactor was bare (unshielded), but during the second and third experiments it was shielded by lead and polyethylene, respectively. During each experiment several neutron activation foils and thermoluminescent dosimeters (TLDs) were placed around the reactor, and some of these detectors were themselves shielded from the reactor by high-density magnetite and barite concrete, standard concrete, and/or BoroBond. All the concrete was provided by CEA Saclay, and the BoroBond was provided by Y-12 National Security Complex. Figure 1 is a picture of the SILENE reactor cell configured for pulse 1. Also included in these experiments were measurements of the neutron and photon spectra with two BICRON BC-501A liquid scintillators. These two detectors were provided and operated by CEA Valduc. They were set up just outside the SILENE reactor cell with additional lead shielding to prevent the detectors from being saturated. The final detectors involved in the experiments were two different types of CAAS detectors. The Babcock International Group provided three CIDAS CAAS detectors, which measured photon dose and dose rate with a Geiger-Mueller tube. CIDAS detectors are currently in

  14. MELCOR Verification, Benchmarking, and Applications experience at BNL

    SciTech Connect

    Madni, I.K.

    1992-01-01

    This paper presents a summary of MELCOR Verification, Benchmarking and Applications experience at Brookhaven National Laboratory (BNL), sponsored by the US Nuclear Regulatory Commission (NRC). Under MELCOR verification over the past several years, all released versions of the code were installed on BNL's computer system, verification exercises were performed, and defect investigation reports were sent to SNL. Benchmarking calculations of integral severe fuel damage tests performed at BNL have helped to identify areas of modeling strengths and weaknesses in MELCOR; the most appropriate choices for input parameters; selection of axial nodalization for core cells and heat structures; and workarounds that extend the capabilities of MELCOR. These insights are explored in greater detail in the paper, with the help of selected results and comparisons. Full plant applications calculations at BNL have helped to evaluate the ability of MELCOR to successfully simulate various accident sequences and calculate source terms to the environment for both BWRs and PWRs. A summary of results, including timing of key events, thermal-hydraulic response, and environmental releases of fission products are presented for selected calculations, along with comparisons with Source Term Code Package (STCP) calculations of the same sequences. Differences in results are explained on the basis of modeling differences between the two codes. The results of a sensitivity calculation are also shown. The paper concludes by highlighting some insights on bottomline issues, and the contribution of the BNL program to MELCOR development, assessment, and the identification of user needs for optimum use of the code.

  15. MELCOR Verification, Benchmarking, and Applications experience at BNL

    SciTech Connect

    Madni, I.K.

    1992-12-31

    This paper presents a summary of MELCOR Verification, Benchmarking and Applications experience at Brookhaven National Laboratory (BNL), sponsored by the US Nuclear Regulatory Commission (NRC). Under MELCOR verification over the past several years, all released versions of the code were installed on BNL's computer system, verification exercises were performed, and defect investigation reports were sent to SNL. Benchmarking calculations of integral severe fuel damage tests performed at BNL have helped to identify areas of modeling strengths and weaknesses in MELCOR; the most appropriate choices for input parameters; selection of axial nodalization for core cells and heat structures; and workarounds that extend the capabilities of MELCOR. These insights are explored in greater detail in the paper, with the help of selected results and comparisons. Full plant applications calculations at BNL have helped to evaluate the ability of MELCOR to successfully simulate various accident sequences and calculate source terms to the environment for both BWRs and PWRs. A summary of results, including timing of key events, thermal-hydraulic response, and environmental releases of fission products are presented for selected calculations, along with comparisons with Source Term Code Package (STCP) calculations of the same sequences. Differences in results are explained on the basis of modeling differences between the two codes. The results of a sensitivity calculation are also shown. The paper concludes by highlighting some insights on bottomline issues, and the contribution of the BNL program to MELCOR development, assessment, and the identification of user needs for optimum use of the code.

  16. Benchmark Data Through The International Reactor Physics Experiment Evaluation Project (IRPHEP)

    SciTech Connect

    J. Blair Briggs; Dr. Enrico Sartori

    2005-09-01

    The International Reactor Physics Experiments Evaluation Project (IRPhEP) was initiated by the Organization for Economic Cooperation and Development (OECD) Nuclear Energy Agency’s (NEA) Nuclear Science Committee (NSC) in June of 2002. The IRPhEP focus is on the derivation of internationally peer-reviewed benchmark models for several types of integral measurements, in addition to the critical configuration. While the benchmarks produced by the IRPhEP are of primary interest to the Reactor Physics Community, many of the benchmarks can be of significant value to the Criticality Safety and Nuclear Data Communities. Benchmarks that support the Next Generation Nuclear Plant (NGNP), for example, also support fuel manufacture, handling, transportation, and storage activities and could challenge current analytical methods. The IRPhEP is patterned after the International Criticality Safety Benchmark Evaluation Project (ICSBEP) and is closely coordinated with the ICSBEP. This paper highlights the benchmarks that are currently being prepared by the IRPhEP that are also of interest to the Criticality Safety Community. The different types of measurements and associated benchmarks that can be expected in the first publication and beyond are described. The protocol for inclusion of IRPhEP benchmarks as ICSBEP benchmarks and for inclusion of ICSBEP benchmarks as IRPhEP benchmarks is detailed. The format for IRPhEP benchmark evaluations is described as an extension of the ICSBEP format. Benchmarks produced by the IRPhEP add a new dimension to criticality safety benchmarking efforts and expand the collection of available integral benchmarks for nuclear data testing. The first publication of the "International Handbook of Evaluated Reactor Physics Benchmark Experiments" is scheduled for January of 2006.

  17. Analogue experiments as benchmarks for models of lava flow emplacement

    NASA Astrophysics Data System (ADS)

    Garel, F.; Kaminski, E. C.; Tait, S.; Limare, A.

    2013-12-01

    During an effusive volcanic eruption, crisis management is based mainly on predicting lava flow advance and its velocity. The spreading of a lava flow, seen as a gravity current, depends on its "effective rheology" and on the effusion rate. Fast-computing models have arisen in the past decade in order to predict lava flow paths and rates of advance in near real time. This type of model, crucial to mitigate volcanic hazards and organize potential evacuation, has been mainly compared a posteriori to real cases of emplaced lava flows. The input parameters of such simulations applied to natural eruptions, especially effusion rate and topography, are often not known precisely, and are difficult to evaluate after the eruption. It is therefore not straightforward to identify the causes of discrepancies between model outputs and observed lava emplacement, whereas the comparison of models with controlled laboratory experiments appears easier. The challenge for numerical simulations of lava flow emplacement is to model the simultaneous advance and thermal structure of viscous lava flows. To provide original constraints later to be used in benchmark numerical simulations, we have performed lab-scale experiments investigating the cooling of isoviscous gravity currents. The simplest experimental set-up is as follows: silicone oil, whose viscosity, around 5 Pa.s, varies by less than a factor of 2 in the temperature range studied, is injected from a point source onto a horizontal plate and spreads axisymmetrically. The oil is injected hot, and progressively cools down to ambient temperature away from the source. Once the flow is developed, it presents a stationary radial thermal structure whose characteristics depend on the input flow rate. In addition to the experimental observations, we developed a theoretical model (Garel et al., JGR, 2012) confirming the relationship between supply rate, flow advance and stationary surface thermal structure. We also provide
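
    For orientation, the isothermal limit of such experiments is described by the classical theory of axisymmetric viscous gravity currents (Huppert, J. Fluid Mech., 1982); the scaling below is quoted from that general theory as background, not from the abstract. For a current of kinematic viscosity nu fed at constant volume flux q under gravity g, the front radius grows approximately as

        r_{N}(t) \;\approx\; 0.715 \left( \frac{g\,q^{3}}{3\,\nu} \right)^{1/8} t^{1/2}

    Because the silicone oil remains nearly isoviscous, the spreading stays close to this law, while the measured surface thermal structure provides the additional constraint targeted by the benchmark.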

  18. INTEGRAL BENCHMARKS AVAILABLE THROUGH THE INTERNATIONAL REACTOR PHYSICS EXPERIMENT EVALUATION PROJECT AND THE INTERNATIONAL CRITICALITY SAFETY BENCHMARK EVALUATION PROJECT

    SciTech Connect

    J. Blair Briggs; Lori Scott; Enrico Sartori; Yolanda Rugama

    2008-09-01

    Interest in high-quality integral benchmark data is increasing as efforts to quantify and reduce calculational uncertainties accelerate to meet the demands of next generation reactor and advanced fuel cycle concepts. The International Reactor Physics Experiment Evaluation Project (IRPhEP) and the International Criticality Safety Benchmark Evaluation Project (ICSBEP) continue to expand their efforts and broaden their scope to identify, evaluate, and provide integral benchmark data for method and data validation. Benchmark model specifications provided by these two projects are used heavily by the international reactor physics, nuclear data, and criticality safety communities. Thus far, 14 countries have contributed to the IRPhEP, and 20 have contributed to the ICSBEP. The status of the IRPhEP and ICSBEP is discussed in this paper, and the future of the two projects is outlined and discussed. Selected benchmarks that have been added to the IRPhEP and ICSBEP handbooks since PHYSOR’06 are highlighted.

  19. Criticality experiments to provide benchmark data on neutron flux traps

    SciTech Connect

    Bierman, S.R.

    1988-06-01

    The experimental measurements covered by this report were designed to provide benchmark-type data on water-moderated LWR-type fuel arrays containing neutron flux traps. The experiments were performed at the US Department of Energy Hanford Critical Mass Laboratory, operated by Pacific Northwest Laboratory. The experimental assemblies consisted of 2 × 2 arrays of 4.31 wt% {sup 235}U-enriched UO{sub 2} fuel rods, uniformly arranged in water on a 1.891 cm square center-to-center spacing. Neutron flux traps were created between the fuel units using metal plates containing varying amounts of boron. Measurements were made to determine the effect that boron loading and distance between the fuel and flux trap had on the amount of fuel required for criticality. Also, measurements were made, using the pulsed neutron source technique, to determine the effect of boron loading on the effective neutron multiplication constant. On two assemblies, reaction rate measurements were made using solid state track recorders to determine absolute fission rates in {sup 235}U and {sup 238}U. 14 refs., 12 figs., 7 tabs.

  20. Prior Computer Experience and Technology Acceptance

    ERIC Educational Resources Information Center

    Varma, Sonali

    2010-01-01

    Prior computer experience with information technology has been identified as a key variable (Lee, Kozar, & Larsen, 2003) that can influence an individual's future use of newer computer technology. The lack of a theory driven approach to measuring prior experience has however led to conceptually different factors being used interchangeably in…

  1. Women's Acceptance of Rape Myths and their Sexual Experiences.

    ERIC Educational Resources Information Center

    Anderson, Wayne P.; Cummings, Kimberly

    1993-01-01

    Explored relationship among female college students' (n=112) acceptance of traditional feminine roles, rape myths, and experiences of being physically or psychologically pressured into sexual intercourse. Found significant relationship between acceptance of traditional feminine social roles and belief in rape myths. Size of correlation (0.41)…

  2. Development of an ICSBEP Benchmark Evaluation, Nearly 20 Years of Experience

    SciTech Connect

    J. Blair Briggs; John D. Bess

    2011-06-01

    The basic structure of all ICSBEP benchmark evaluations is essentially the same and includes (1) a detailed description of the experiment; (2) an evaluation of the experiment, including an exhaustive effort to quantify the effects of uncertainties on measured quantities; (3) a concise presentation of benchmark-model specifications; (4) sample calculation results; and (5) a summary of experimental references. Computer code input listings and other relevant information are generally preserved in appendixes. The details of an ICSBEP evaluation are presented.

  3. Benchmark study of TRIPOLI-4 through experiment and MCNP codes

    SciTech Connect

    Michel, M.; Coulon, R.; Normand, S.; Huot, N.; Petit, O.

    2011-07-01

    Reliability of simulation results is essential in nuclear physics. Although MCNP5 and MCNPX are the most widely used 3D Monte Carlo radiation transport codes worldwide, alternative Monte Carlo simulation tools exist to simulate the interactions of neutral and charged particles with matter. Benchmarks are therefore required in order to validate these simulation codes. For instance, TRIPOLI-4.7, developed at the French Alternative Energies and Atomic Energy Commission for neutron and photon transport, now also provides the user with a full-featured electron-photon electromagnetic shower. Whereas the reliability of TRIPOLI-4.7 for neutron and photon transport has already been validated, the new development regarding electron-photon interactions with matter needs additional validation benchmarks. We will thus demonstrate how accurately TRIPOLI-4's 'deposited spectrum' tally can simulate gamma spectrometry problems, compared to MCNP's 'F8' tally. The experimental setup is based on an HPGe detector measuring the decay spectrum of a {sup 152}Eu source. These results are then compared with those given by the MCNPX 2.6d and TRIPOLI-4 codes. This paper deals with both the experimental and simulation aspects. We demonstrate that TRIPOLI-4 is a potential alternative to both MCNPX and MCNP5 for gamma-electron interaction simulation. (authors)

  4. 2010 CRITICALITY ACCIDENT ALARM SYSTEM BENCHMARK EXPERIMENTS AT THE CEA VALDUC SILENE FACILITY

    SciTech Connect

    Miller, Thomas Martin; Dunn, Michael E; Wagner, John C; McMahan, Kimberly L; Authier, Nicolas; Jacquet, Xavier; Rousseau, Guillaume; Wolff, Herve; Piot, Jerome; Savanier, Laurence; Baclet, Nathalie; Lee, Yi-kang; Masse, Veronique; Trama, Jean-Christophe; Gagnier, Emmanuel; Naury, Sylvie; Lenain, Richard; Hunter, Richard; Kim, Soon; Dulik, George Michael; Reynolds, Kevin H.

    2011-01-01

    Several experiments were performed at the CEA Valduc SILENE reactor facility, which are intended to be published as evaluated benchmark experiments in the ICSBEP Handbook. These evaluated benchmarks will be useful for the verification and validation of radiation transport codes and evaluated nuclear data, particularly those that are used in the analysis of CAASs. During these experiments SILENE was operated in pulsed mode in order to be representative of a criticality accident, which is rare among shielding benchmarks. Measurements of the neutron flux were made with neutron activation foils and measurements of photon doses were made with TLDs. Also unique to these experiments was the presence of several detectors used in actual CAASs, which allowed for the observation of their behavior during an actual critical pulse. This paper presents the preliminary measurement data currently available from these experiments. Also presented are comparisons of preliminary computational results with Scale and TRIPOLI-4 to the preliminary measurement data.

  5. Benchmark experiments for cyclotron-based neutron source for BNCT.

    PubMed

    Yonai, S; Itoga, T; Baba, M; Nakamura, T; Yokobori, H; Tahara, Y

    2004-11-01

    In a previous study, using simulations with the MCNPX code we showed the feasibility of cyclotron-based BNCT using the Ta(p,n) neutrons emitted at 90 degrees from a target bombarded by 50 MeV protons, together with iron, AlF(3), Al and (6)LiF moderators. In order to validate the simulations and realize cyclotron-based BNCT, we measured the epithermal neutron energy spectrum passing through the moderators, using our new spectrometer consisting of a (3)He gas counter covered with a silicone rubber loaded with (nat)B and a polyethylene moderator, as well as the depth distribution of the (197)Au(n,gamma)(198)Au reaction rates in an acrylic phantom set behind the rear surface of the moderators. The measured results were compared with calculations using the MCNPX code. We obtained good agreement between the calculations and measurements, within approximately 10% for the neutron energy spectra and within approximately 20% for the depth distribution of the (197)Au(n,gamma)(198)Au reaction rates in the phantom. The comparison confirmed the good accuracy of the calculated neutron energy spectrum passing through the moderator and of the thermalization in the phantom. These experimental results provide good benchmark data for evaluating the accuracy of the calculation code.
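
    For context, the foil measurements rest on standard activation relations (generic expressions, not specific to this record): the per-atom (197)Au(n,gamma)(198)Au reaction rate R and the activity induced after an irradiation of duration t_irr are

        R \;=\; \int_{0}^{\infty} \sigma_{n\gamma}(E)\,\varphi(E)\,\mathrm{d}E ,
        \qquad
        A(t_{\mathrm{irr}}) \;=\; N\,R\,\left(1 - e^{-\lambda t_{\mathrm{irr}}}\right)

    where sigma_ngamma is the capture cross section, phi(E) the neutron flux spectrum, N the number of gold atoms in the foil and lambda the (198)Au decay constant.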

  6. Developing Evidence for Action on the Postgraduate Experience: An Effective Local Instrument to Move beyond Benchmarking

    ERIC Educational Resources Information Center

    Sampson, K. A.; Johnston, L.; Comer, K.; Brogt, E.

    2016-01-01

    Summative and benchmarking surveys to measure the postgraduate student research experience are well reported in the literature. While useful, we argue that local instruments that provide formative resources with an academic development focus are also required. If higher education institutions are to move beyond the identification of issues and…

  7. RANS Modeling of Benchmark Shockwave / Boundary Layer Interaction Experiments

    NASA Technical Reports Server (NTRS)

    Georgiadis, Nick; Vyas, Manan; Yoder, Dennis

    2010-01-01

    This presentation summarizes the computations of a set of shock wave / turbulent boundary layer interaction (SWTBLI) test cases using the Wind-US code, as part of the 2010 American Institute of Aeronautics and Astronautics (AIAA) shock / boundary layer interaction workshop. The experiments involve supersonic flows in wind tunnels with a shock generator that directs an oblique shock wave toward the boundary layer along one of the walls of the wind tunnel. The Wind-US calculations utilized structured grid computations performed in Reynolds-averaged Navier-Stokes mode. Three turbulence models were investigated: the Spalart-Allmaras one-equation model, the Menter Shear Stress Transport (SST) k-ω two-equation model, and an explicit algebraic stress k-ω formulation. Effects of grid resolution and upwinding scheme were also considered. The results from the CFD calculations are compared to particle image velocimetry (PIV) data from the experiments. As expected, turbulence model effects dominated the accuracy of the solutions, with the upwinding scheme selection having minimal effect.

  8. Benchmark experiments on neutron streaming through JET Torus Hall penetrations

    NASA Astrophysics Data System (ADS)

    Batistoni, P.; Conroy, S.; Lilley, S.; Naish, J.; Obryk, B.; Popovichev, S.; Stamatelatos, I.; Syme, B.; Vasilopoulou, T.; contributors, JET

    2015-05-01

    Neutronics experiments are performed at JET for validating in a real fusion environment the neutronics codes and nuclear data applied in ITER nuclear analyses. In particular, the neutron fluence through the penetrations of the JET torus hall is measured and compared with calculations to assess the capability of state-of-art numerical tools to correctly predict the radiation streaming in the ITER biological shield penetrations up to large distances from the neutron source, in large and complex geometries. Neutron streaming experiments started in 2012 when several hundreds of very sensitive thermo-luminescence detectors (TLDs), enriched to different levels in 6LiF/7LiF, were used to measure the neutron and gamma dose separately. Lessons learnt from this first experiment led to significant improvements in the experimental arrangements to reduce the effects due to directional neutron source and self-shielding of TLDs. Here we report the results of measurements performed during the 2013-2014 JET campaign. Data from new positions, at further locations in the South West labyrinth and down to the Torus Hall basement through the air duct chimney, were obtained up to about a 40 m distance from the plasma neutron source. In order to avoid interference between TLDs due to self-shielding effects, only TLDs containing natural Lithium and 99.97% 7Li were used. All TLDs were located in the centre of large polyethylene (PE) moderators, with natLi and 7Li crystals evenly arranged within two PE containers, one in horizontal and the other in vertical orientation, to investigate the shadowing effect in the directional neutron field. All TLDs were calibrated in the quantities of air kerma and neutron fluence. This improved experimental arrangement led to reduced statistical spread in the experimental data. The Monte Carlo N-Particle (MCNP) code was used to calculate the air kerma due to neutrons and the neutron fluence at detector positions, using a JET model validated up to the

  9. Benchmarking of CASMO-4 cross-section data against critical experiments

    SciTech Connect

    Knott, D.; Edenius, M.

    1997-12-01

    Benchmarking of the CASMO-4 lattice physics code has typically been broken into three parts: (a) comparisons against Monte Carlo assembly and supercell calculations, (b) comparisons against critical experiments, and (c) comparisons against measured isotopics. Each part is used to verify a different portion of the data library and methodology. This paper presents results from the critical experiment portion of the verification process for the CASMO-4 standard library and the new JEF-2.2 library.

  10. [Benchmark experiment to verify radiation transport calculations for dosimetry in radiation therapy].

    PubMed

    Renner, Franziska

    2016-09-01

    Monte Carlo simulations are regarded as the most accurate method of solving complex problems in the field of dosimetry and radiation transport. In (external) radiation therapy they are increasingly used for the calculation of dose distributions during treatment planning. In comparison to other algorithms for the calculation of dose distributions, Monte Carlo methods have the capability of improving the accuracy of dose calculations - especially under complex circumstances (e.g. consideration of inhomogeneities). However, there is a lack of knowledge of how accurate the results of Monte Carlo calculations are on an absolute basis. A practical verification of the calculations can be performed by direct comparison with the results of a benchmark experiment. This work presents such a benchmark experiment and compares its results (with detailed consideration of measurement uncertainty) with the results of Monte Carlo calculations using the well-established Monte Carlo code EGSnrc. The experiment was designed to have parallels to external beam radiation therapy with respect to the type and energy of the radiation, the materials used and the kind of dose measurement. Because the properties of the beam have to be well known in order to compare the results of the experiment and the simulation on an absolute basis, the benchmark experiment was performed using the research electron accelerator of the Physikalisch-Technische Bundesanstalt (PTB), whose beam was accurately characterized in advance. The benchmark experiment and the corresponding Monte Carlo simulations were carried out for two different types of ionization chambers and the results were compared. Considering the uncertainty, which is about 0.7 % for the experimental values and about 1.0 % for the Monte Carlo simulation, the results of the simulation and the experiment coincide.
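
    As a reading aid (a standard way of judging such agreement, not stated explicitly in the abstract), the quoted relative standard uncertainties of about 0.7 % for the experiment and 1.0 % for the simulation combine to

        u_{c} \;=\; \sqrt{u_{\mathrm{exp}}^{2} + u_{\mathrm{MC}}^{2}} \;=\; \sqrt{0.7^{2} + 1.0^{2}}\;\% \;\approx\; 1.2\;\%

    so the two results are consistent as long as their difference does not exceed this combined uncertainty (times an agreed coverage factor).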

  11. Evaluation of HEU-Beryllium Benchmark Experiments to Improve Computational Analysis of Space Reactors

    SciTech Connect

    Bess, John; Bledsoe, Keith C; Rearden, Bradley T

    2011-01-01

    An assessment was previously performed to evaluate modeling capabilities and quantify preliminary biases and uncertainties associated with the modeling methods and data utilized in designing a nuclear reactor such as a beryllium-reflected, highly-enriched-uranium (HEU)-O2 fission surface power (FSP) system for space nuclear power. The conclusion of the previous study was that current capabilities could preclude the necessity of a cold critical test of the FSP; however, additional testing would reduce uncertainties in the beryllium and uranium cross-section data and the overall uncertainty in the computational models. A series of critical experiments using HEU metal were performed in the 1960s and 1970s in support of criticality safety operations at the Y-12 Plant. Of the hundreds of experiments, three were identified as fast-fission configurations reflected by beryllium metal. These experiments have been evaluated as benchmarks for inclusion in the International Handbook of Evaluated Criticality Safety Benchmark Experiments (IHECSBE). Further evaluation of the benchmark experiments was performed using the sensitivity and uncertainty analysis capabilities of SCALE 6. The data adjustment methods of SCALE 6 have been employed in the validation of an example FSP design model to reduce the uncertainty due to the beryllium cross section data.

  12. Evaluation of HEU-Beryllium Benchmark Experiments to Improve Computational Analysis of Space Reactors

    SciTech Connect

    John D. Bess; Keith C. Bledsoe; Bradley T. Rearden

    2011-02-01

    An assessment was previously performed to evaluate modeling capabilities and quantify preliminary biases and uncertainties associated with the modeling methods and data utilized in designing a nuclear reactor such as a beryllium-reflected, highly-enriched-uranium (HEU)-O2 fission surface power (FSP) system for space nuclear power. The conclusion of the previous study was that current capabilities could preclude the necessity of a cold critical test of the FSP; however, additional testing would reduce uncertainties in the beryllium and uranium cross-section data and the overall uncertainty in the computational models. A series of critical experiments using HEU metal were performed in the 1960s and 1970s in support of criticality safety operations at the Y-12 Plant. Of the hundreds of experiments, three were identified as fast-fission configurations reflected by beryllium metal. These experiments have been evaluated as benchmarks for inclusion in the International Handbook of Evaluated Criticality Safety Benchmark Experiments (IHECSBE). Further evaluation of the benchmark experiments was performed using the sensitivity and uncertainty analysis capabilities of SCALE 6. The data adjustment methods of SCALE 6 have been employed in the validation of an example FSP design model to reduce the uncertainty due to the beryllium cross section data.

  13. Material Activation Benchmark Experiments at the NuMI Hadron Absorber Hall in Fermilab

    SciTech Connect

    Matsumura, H.; Matsuda, N.; Kasugai, Y.; Toyoda, A.; Yashima, H.; Sekimoto, S.; Iwase, H.; Oishi, K.; Sakamoto, Y.; Nakashima, H.; Leveling, A.; Boehnlein, D.; Lauten, G.; Mokhov, N.; Vaziri, K.

    2014-06-15

    In our previous study, double and mirror symmetric activation peaks found for Al and Au arranged spatially on the back of the Hadron absorber of the NuMI beamline in Fermilab were considerably higher than those expected purely from muon-induced reactions. From material activation benchmark experiments, we conclude that this activation is due to hadrons with energy greater than 3 GeV that had passed downstream through small gaps in the hadron absorber.

  14. Graphite and Beryllium Reflector Critical Assemblies of UO2 (Benchmark Experiments 2 and 3)

    SciTech Connect

    Margaret A. Marshall; John D. Bess

    2012-11-01

    A series of experiments was carried out in 1962-65 at the Oak Ridge National Laboratory Critical Experiments Facility (ORCEF) for use in space reactor research programs. A core containing 93.2 wt% enriched UO2 fuel rods was used in these experiments. The first part of the experimental series consisted of 252 tightly-packed fuel rods (1.27-cm triangular pitch) with graphite reflectors [1], the second part used 252 graphite-reflected fuel rods organized in a 1.506-cm triangular-pitch array [2], and the final part of the experimental series consisted of 253 beryllium-reflected fuel rods in a 1.506-cm-triangular-pitch configuration and in a 7-tube-cluster configuration [3]. Fission rate distribution and cadmium ratio measurements were taken for all three parts of the experimental series. Reactivity coefficient measurements were taken for various materials placed in the beryllium-reflected core. All three experiments in the series have been evaluated for inclusion in the International Reactor Physics Experiment Evaluation Project (IRPhEP) [4] and the International Criticality Safety Benchmark Evaluation Project (ICSBEP) Handbooks [5]. The evaluation of the first experiment in the series was discussed at the 2011 ANS Winter meeting [6]. The evaluations of the second and third experiments are discussed below. These experiments are of interest as benchmarks because they support the validation of compact reactor designs with similar characteristics to the design parameters for space nuclear fission surface power systems [7].

  15. Benchmark Shock Tube Experiments for Radiative Heating Relevant to Earth Re-Entry

    NASA Technical Reports Server (NTRS)

    Brandis, A. M.; Cruden, B. A.

    2017-01-01

    Detailed spectrally and spatially resolved radiance has been measured in the Electric Arc Shock Tube (EAST) facility for conditions relevant to high speed entry into a variety of atmospheres, including Earth, Venus, Titan, Mars and the Outer Planets. The tests that measured radiation relevant for Earth re-entry are the focus of this work and are taken from campaigns 47, 50, 52 and 57. These tests covered conditions from 8 km/s to 15.5 km/s at initial pressures ranging from 0.05 Torr to 1 Torr, of which shots at 0.1 and 0.2 Torr are analyzed in this paper. These conditions cover a range of points of interest for potential flight missions, including return from Low Earth Orbit, the Moon and Mars. The large volume of testing available from EAST is useful for statistical analysis of radiation data, but is problematic for identifying representative experiments for performing detailed analysis. Therefore, the intent of this paper is to select a subset of benchmark test data that can be considered for further detailed study. These benchmark shots are intended to provide more accessible data sets for future code validation studies and facility-to-facility comparisons. The shots that have been selected as benchmark data are the ones in closest agreement with a line of best fit through all of the EAST results, whilst also showing the best experimental characteristics, such as test time and convergence to equilibrium. The EAST data are presented in different formats for analysis. These data include the spectral radiance at equilibrium, the spatial dependence of radiance over defined wavelength ranges and the mean non-equilibrium spectral radiance (so-called 'spectral non-equilibrium metric'). All the information needed to simulate each experimental trace, including free-stream conditions, shock time of arrival (i.e. x-t) relation, and the spectral and spatial resolution functions, is provided.

  16. Quality in E-Learning--A Conceptual Framework Based on Experiences from Three International Benchmarking Projects

    ERIC Educational Resources Information Center

    Ossiannilsson, E.; Landgren, L.

    2012-01-01

    Between 2008 and 2010, Lund University took part in three international benchmarking projects, "E-xcellence+," the "eLearning Benchmarking Exercise 2009," and the "First Dual-Mode Distance Learning Benchmarking Club." A comparison of these models revealed a rather high level of correspondence. From this finding and…

  17. How to achieve and prove performance improvement - 15 years of experience in German wastewater benchmarking.

    PubMed

    Bertzbach, F; Franz, T; Möller, K

    2012-01-01

    This paper shows the results of performance improvement, which have been achieved in benchmarking projects in the wastewater industry in Germany over the last 15 years. A huge number of changes in operational practice and also in achieved annual savings can be shown, induced in particular by benchmarking at process level. Investigation of this question produces some general findings for the inclusion of performance improvement in a benchmarking project and for the communication of its results. Thus, we elaborate on the concept of benchmarking at both utility and process level, which is still a necessary distinction for the integration of performance improvement into our benchmarking approach. To achieve performance improvement via benchmarking it should be made quite clear that this outcome depends, on one hand, on a well conducted benchmarking programme and, on the other, on the individual situation within each participating utility.

  18. Analysis of experiments in the Phase III GCFR benchmark critical assembly

    SciTech Connect

    Hess, A.L.; Baylor, K.J.

    1980-04-01

    Experiments carried out in the third gas-cooled fast breeder reactor (GCFR) benchmark critical assembly on the Zero Power Reactor-9 at Argonne National Laboratory were analyzed using methods and computer codes employed routinely for design and performance evaluations on power-plant GCFR cores. The program for the Phase III GCFR assembly, with a 1900-liter, three-enrichment zone core, included measurements of reaction-rate profiles in a typical power-flattened design, studies of material reactivity coefficients, reaction ratio and breeding parameter determinations, and comparison of pin with plate fuel loadings. Calculated parameters to compare with all of the measured results were obtained using 10-group cross sections based on ENDF/B-4 and two-dimensional diffusion theory, with adjustments for fuel-cell heterogeneity and void-lattice streaming effects.

  19. Maternal immunization. Clinical experiences, challenges, and opportunities in vaccine acceptance.

    PubMed

    Moniz, Michelle H; Beigi, Richard H

    2014-01-01

    Maternal immunization holds tremendous promise to improve maternal and neonatal health for a number of infectious conditions. The unique susceptibilities of pregnant women to infectious conditions, as well as the ability of maternally-derived antibody to offer vital neonatal protection (via placental transfer), together have produced the recent increased attention on maternal immunization. The Advisory Committee on Immunization Practices (ACIP) currently recommends 2 immunizations for all pregnant women lacking contraindication, inactivated Influenza and tetanus toxoid, reduced diphtheria toxoid, and acellular pertussis (Tdap). Given ongoing research the number of vaccines recommended during pregnancy is likely to increase. Thus, achieving high vaccination coverage of pregnant women for all recommended immunizations is a key public health enterprise. This review will focus on the present state of vaccine acceptance in pregnancy, with attention to currently identified barriers and determinants of vaccine acceptance. Additionally, opportunities for improvement will be considered.

  20. Use of Student Ratings to Benchmark Universities: Multilevel Modeling of Responses to the Australian Course Experience Questionnaire (CEQ)

    ERIC Educational Resources Information Center

    Marsh, Herbert W.; Ginns, Paul; Morin, Alexandre J. S.; Nagengast, Benjamin; Martin, Andrew J.

    2011-01-01

    Recently graduated university students from all Australian Universities rate their overall departmental and university experiences (DUEs), and their responses (N = 44,932, 41 institutions) are used by the government to benchmark departments and universities. We evaluate this DUE strategy of rating overall departments and universities rather than…

  1. MELCOR-H2 Benchmarking of the SNL Transient Sulfuric Acid Decomposition Experiments

    SciTech Connect

    Rodriguez, Sal B.; Gauntt, Randall O.; Gelbard, Fred; Pickard, Paul; Cole, Randy; McFadden, Katherine; Drennen, Tom; Martin, Billy; Louie, David; Archuleta, Louis; Revankar, Shripad T.; Vierow, Karen; El-Genk, Mohamed; Tournier, Jean Michel

    2007-07-01

    MELCOR is a world-renowned nuclear reactor safety analysis code that is used to simulate both light water and gas-cooled reactors. MELCOR-H2 is an extension of MELCOR that can model detailed nuclear reactors that are fully coupled with modular secondary-system components and the sulfur iodine (SI) thermochemical cycle for the generation of hydrogen and electricity. The models are applicable to both steady state and transient calculations. Previous work has shown that the hydrogen generation rate calculated by MELCOR-H2 for the SI cycle was within the expected theoretical yield, thus providing a macroscopic confirmation that MELCOR-H2's computational approach is reasonable. However, in order to better quantify its adequacy, benchmarking of the code with experimental data is required. Sulfuric acid decomposition experiments were conducted during late 2006 at Sandia National Laboratories, and MELCOR-H2 was used to simulate them. We developed an input deck based on the experiment's geometry, as well as the initial and boundary conditions, and then proceeded to compare the experimental acid conversion efficiency and SO{sub 2} production data with the code output. The comparison showed that the simulation output was typically within 10% of the experimental data, and that key experimental data trends such as acid conversion efficiency, molar acid flow rate, and solution mole % were computed adequately by MELCOR-H2. (authors)
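
    For context, one common definition of the acid conversion efficiency compared in such benchmarks is the fraction of the sulfuric acid feed converted to SO{sub 2} through the overall decomposition H{sub 2}SO{sub 4} -> H{sub 2}O + SO{sub 2} + 1/2 O{sub 2}; the paper may use a different convention:

        \eta_{\mathrm{conv}} \;=\; \frac{\dot{n}_{\mathrm{SO_{2}},\,\mathrm{out}}}{\dot{n}_{\mathrm{H_{2}SO_{4}},\,\mathrm{in}}}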

  2. Criticality experiments and benchmarks for cross section evaluation: the neptunium case

    NASA Astrophysics Data System (ADS)

    Leong, L. S.; Tassan-Got, L.; Audouin, L.; Paradela, C.; Wilson, J. N.; Tarrio, D.; Berthier, B.; Duran, I.; Le Naour, C.; Stéphan, C.

    2013-03-01

    The 237Np neutron-induced fission cross section has recently been measured over a large energy range (from eV to GeV) at the n_TOF facility at CERN. When compared to previous measurements, the n_TOF fission cross section appears to be higher by 5-7% beyond the fission threshold. To check the relevance of the n_TOF data, we consider a criticality experiment performed at Los Alamos with a 6 kg sphere of 237Np, surrounded by enriched uranium (235U) so as to approach criticality with fast neutrons. The calculated multiplication factor k_eff is in better agreement with the experiment (the deviation of 750 pcm is reduced to 250 pcm) when we replace the ENDF/B-VII.0 evaluation of the 237Np fission cross section with the n_TOF data. We also explore the hypothesis of deficiencies in the 235U inelastic cross section, which has been invoked by some authors to explain the deviation of 750 pcm; however, the large distortion of the inelastic cross section that such a calculation requires is incompatible with existing measurements. We also show that the ν̄ (average neutron multiplicity) of 237Np can hardly be incriminated, because of the high accuracy of the existing data. Fission rate ratios or averaged fission cross sections measured in several fast neutron fields seem to give contradictory results on the validation of the 237Np cross section, but at least one of the benchmark experiments, in which the active deposits were well calibrated for the number of atoms, favors the n_TOF data set. These outcomes support the hypothesis of a higher fission cross section of 237Np.
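
    For reference (a standard convention, not defined in the abstract), the deviations of 750 pcm and 250 pcm quoted above are expressed in per cent mille, i.e. units of 10^-5 in reactivity:

        1\ \mathrm{pcm} \;=\; 10^{-5}\,\frac{\Delta k}{k},
        \qquad
        \rho \;=\; \frac{k_{\mathrm{eff}} - 1}{k_{\mathrm{eff}}}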

  3. Monte Carlo Simulation of the TRIGA Mark II Benchmark Experiment with Burned Fuel

    SciTech Connect

    Jeraj, Robert; Zagar, Tomaz; Ravnik, Matjaz

    2002-03-15

    Monte Carlo calculations of a criticality experiment with burned fuel on the TRIGA Mark II research reactor are presented. The main objective was to incorporate burned fuel composition calculated with the WIMSD4 deterministic code into the MCNP4B Monte Carlo code and compare the calculated k{sub eff} with the measurements. The criticality experiment was performed in 1998 at the ''Jozef Stefan'' Institute TRIGA Mark II reactor in Ljubljana, Slovenia, with the same fuel elements and loading pattern as in the TRIGA criticality benchmark experiment with fresh fuel performed in 1991. The only difference was that in 1998, the fuel elements had an average burnup of {approx}3%, corresponding to 1.3-MWd energy produced in the core in the period between 1991 and 1998. The fuel element burnup accumulated during 1991-1998 was calculated with TRIGLAV, an in-house-developed two-dimensional multigroup diffusion fuel management code. The burned fuel isotopic composition was calculated with the WIMSD4 code and compared to the ORIGEN2 calculations. Extensive comparison of burned fuel material composition was performed for both codes for burnups up to 20% burned {sup 235}U, and the differences were evaluated in terms of reactivity. The WIMSD4 and ORIGEN2 results agreed well for all isotopes important in reactivity calculations, giving increased confidence in the WIMSD4 calculation of the burned fuel material composition. The k{sub eff} calculated with the combined WIMSD4 and MCNP4B calculations showed good agreement with the experimental values. This shows that linking of WIMSD4 with MCNP4B for criticality calculations with burned fuel is feasible and gives reliable results.

  4. 2-D Circulation Control Airfoil Benchmark Experiments Intended for CFD Code Validation

    NASA Technical Reports Server (NTRS)

    Englar, Robert J.; Jones, Gregory S.; Allan, Brian G.; Lin, John C.

    2009-01-01

    A current NASA Research Announcement (NRA) project being conducted by Georgia Tech Research Institute (GTRI) personnel and NASA collaborators includes the development of Circulation Control (CC) blown airfoils to improve subsonic aircraft high-lift and cruise performance. The emphasis of this program is the development of CC active flow control concepts for high-lift augmentation, drag control, and cruise efficiency. The collaboration includes work by NASA research engineers; the CFD validation and flow-physics experimental research are part of NASA's systematic approach to developing design and optimization tools for CC applications to fixed-wing aircraft. The design space for CESTOL-type aircraft is focusing on geometries that depend on advanced flow control technologies, including Circulation Control aerodynamics. The ability to consistently predict advanced aircraft performance requires improvements in design tools to include these advanced concepts. Validation of these tools will be based on experimental methods applied to complex flows that go beyond conventional aircraft modeling techniques. This paper focuses on recent and ongoing benchmark high-lift experiments and CFD efforts intended to provide 2-D CFD validation data sets related to NASA's Cruise Efficient Short Take Off and Landing (CESTOL) study. Both the experimental data and related CFD predictions are discussed.

  5. Acceptance of cravings: how smoking cessation experiences affect craving beliefs.

    PubMed

    Nosen, Elizabeth; Woody, Sheila R

    2014-08-01

    Metacognitive models theorize that more negative appraisals of craving-related thoughts and feelings, and greater efforts to avoid or control these experiences, exacerbate suffering and increase the chances that the person will use substances to obtain relief. Thus far, little research has examined how attempts to quit smoking influence the way people perceive and respond to cravings. As part of a larger study, 176 adult smokers interested in quitting participated in two lab sessions, four days apart. Half the sample began a quit attempt the day after the first session; craving-related beliefs, metacognitive strategies, and negative affect were assessed at the second session. Participants who failed to abstain from smoking more strongly endorsed appraisals of craving-related thoughts as negative and personally relevant. Negative appraisals correlated strongly with distress and withdrawal symptoms. Attempting to quit smoking increased use of distraction, thought suppression and re-appraisal techniques, with no difference between successful and unsuccessful quitters. Negative beliefs about cravings and rumination predicted less change in smoking one month later. Results suggest that smoking cessation outcomes and metacognitive beliefs likely have a bidirectional relationship that is strongly related to negative affect. Greater consideration of the impact of cessation experiences on mood and craving beliefs is warranted.

  6. Acceptance of Vaccinations in Pandemic Outbreaks: A Discrete Choice Experiment

    PubMed Central

    Determann, Domino; Korfage, Ida J.; Lambooij, Mattijs S.; Bliemer, Michiel; Richardus, Jan Hendrik; Steyerberg, Ewout W.; de Bekker-Grob, Esther W.

    2014-01-01

    Background Preventive measures are essential to limit the spread of new viruses; their uptake is key to their success. However, the vaccination uptake in pandemic outbreaks is often low. We aim to elicit how disease and vaccination characteristics determine preferences of the general public for new pandemic vaccinations. Methods In an internet-based discrete choice experiment (DCE) a representative sample of 536 participants (49% participation rate) from the Dutch population was asked for their preference for vaccination programs in hypothetical communicable disease outbreaks. We used scenarios based on two disease characteristics (susceptibility to and severity of the disease) and five vaccination program characteristics (effectiveness, safety, advice regarding vaccination, media attention, and out-of-pocket costs). The DCE design was based on a literature review, expert interviews and focus group discussions. A panel latent class logit model was used to estimate which trade-offs individuals were willing to make. Results All of the above-mentioned characteristics proved to influence respondents’ preferences for vaccination. Preference heterogeneity was substantial. Females who stated that they were never in favor of vaccination made different trade-offs than males who stated that they were (possibly) willing to get vaccinated. As expected, respondents preferred and were willing to pay more for more effective vaccines, especially if the outbreak was more serious (€6–€39 for a 10% more effective vaccine). Changes in effectiveness, in out-of-pocket costs, and in the body that advises on vaccination all substantially influenced the predicted uptake. Conclusions We conclude that various disease and vaccination program characteristics influence respondents’ preferences for pandemic vaccination programs. Agencies responsible for preventive measures during pandemics can use the knowledge that out-of-pocket costs and the way advice is given affect vaccination uptake to improve

  7. Labeling Sexual Victimization Experiences: The Role of Sexism, Rape Myth Acceptance, and Tolerance for Sexual Harassment.

    PubMed

    LeMaire, Kelly L; Oswald, Debra L; Russell, Brenda L

    2016-01-01

    This study investigated whether attitudinal variables, such as benevolent and hostile sexism toward men and women, female rape myth acceptance, and tolerance of sexual harassment are related to women labeling their sexual assault experiences as rape. In a sample of 276 female college students, 71 (25.7%) reported at least one experience that met the operational definition of rape, although only 46.5% of those women labeled the experience "rape." Benevolent sexism, tolerance of sexual harassment, and rape myth acceptance, but not hostile sexism, significantly predicted labeling of previous sexual assault experiences by the victims. Specifically, those with more benevolent sexist attitudes toward both men and women, greater rape myth acceptance, and more tolerant attitudes of sexual harassment were less likely to label their past sexual assault experience as rape. The results are discussed for their clinical and theoretical implications.

  8. Overview of the 2014 Edition of the International Handbook of Evaluated Reactor Physics Benchmark Experiments (IRPhEP Handbook)

    SciTech Connect

    John D. Bess; J. Blair Briggs; Jim Gulliford; Ian Hill

    2014-10-01

    The International Reactor Physics Experiment Evaluation Project (IRPhEP) is a widely recognized, world-class program. The work of the IRPhEP is documented in the International Handbook of Evaluated Reactor Physics Benchmark Experiments (IRPhEP Handbook). Integral data from the IRPhEP Handbook are used by reactor safety and design, nuclear data, criticality safety, and analytical methods development specialists worldwide to perform necessary validations of their calculational techniques. The IRPhEP Handbook is among the most frequently cited references in the nuclear industry and is expected to be a valuable resource for future decades.

  9. Core Benchmarks Descriptions

    SciTech Connect

    Pavlovichev, A.M.

    2001-05-24

    Current regulations require that the design of new fuel cycles for nuclear power installations include a calculational justification performed with certified computer codes. This guarantees that the calculated results will lie within the limits of the declared uncertainties indicated in the certificate issued for the corresponding computer code by the Gosatomnadzor of the Russian Federation (GAN). The formal justification of the declared uncertainties is a comparison of the results obtained with a commercial code against experiments, or against calculational tests computed, with a defined uncertainty, by certified precision codes such as those of the MCU type. The current level of international cooperation is enlarging the bank of experimental and calculational benchmarks acceptable for the certification of commercial codes used to design fuel loadings with MOX fuel. In particular, work on forming the list of calculational benchmarks for the certification of the TVS-M code as applied to MOX fuel assembly calculations is practically finished. The results of these activities are presented.

  10. Benchmark Evaluation of HTR-PROTEUS Pebble Bed Experimental Program

    SciTech Connect

    Bess, John D.; Montierth, Leland; Köberl, Oliver; Snoj, Luka

    2014-10-09

    Benchmark models were developed to evaluate 11 critical core configurations of the HTR-PROTEUS pebble bed experimental program. Various additional reactor physics measurements were performed as part of this program; currently only a total of 37 absorber rod worth measurements have been evaluated as acceptable benchmark experiments for Cores 4, 9, and 10. Dominant uncertainties in the experimental keff for all core configurations come from uncertainties in the ²³⁵U enrichment of the fuel, impurities in the moderator pebbles, and the density and impurity content of the radial reflector. Calculations of keff with MCNP5 and ENDF/B-VII.0 neutron nuclear data are greater than the benchmark values but within 1% and also within the 3σ uncertainty, except for Core 4, which is the only randomly packed pebble configuration. Repeated calculations of keff with MCNP6.1 and ENDF/B-VII.1 are lower than the benchmark values and within 1% (~3σ) except for Cores 5 and 9, which calculate lower than the benchmark eigenvalues within 4σ. The primary difference between the two nuclear data libraries is the adjustment of the absorption cross section of graphite. Simulations of the absorber rod worth measurements are within 3σ of the benchmark experiment values. The complete benchmark evaluation details are available in the 2014 edition of the International Handbook of Evaluated Reactor Physics Benchmark Experiments.
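
    The acceptance criteria described above (bias within 1% of the benchmark keff and agreement within a few standard deviations) can be written down compactly. The sketch below uses invented numbers, not the evaluated HTR-PROTEUS values.

      # Illustrative comparison of a calculated keff against a benchmark value,
      # using the two criteria mentioned above: percentage bias and agreement
      # within n combined standard deviations. All numbers are hypothetical.
      def compare_keff(k_calc, k_bench, sigma_bench, sigma_calc=0.0, n_sigma=3.0):
          bias = k_calc - k_bench
          sigma = (sigma_bench**2 + sigma_calc**2) ** 0.5
          return 100.0 * bias / k_bench, abs(bias) <= n_sigma * sigma

      bias_percent, acceptable = compare_keff(1.0065, 1.0000, sigma_bench=0.0025)
      print(f"bias = {bias_percent:+.2f} %, within 3 sigma: {acceptable}")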

  11. Benchmark Evaluation of HTR-PROTEUS Pebble Bed Experimental Program

    DOE PAGES

    Bess, John D.; Montierth, Leland; Köberl, Oliver; ...

    2014-10-09

    Benchmark models were developed to evaluate 11 critical core configurations of the HTR-PROTEUS pebble bed experimental program. Various additional reactor physics measurements were performed as part of this program; currently only a total of 37 absorber rod worth measurements have been evaluated as acceptable benchmark experiments for Cores 4, 9, and 10. Dominant uncertainties in the experimental keff for all core configurations come from uncertainties in the ²³⁵U enrichment of the fuel, impurities in the moderator pebbles, and the density and impurity content of the radial reflector. Calculations of keff with MCNP5 and ENDF/B-VII.0 neutron nuclear data are greater than the benchmark values but within 1% and also within the 3σ uncertainty, except for Core 4, which is the only randomly packed pebble configuration. Repeated calculations of keff with MCNP6.1 and ENDF/B-VII.1 are lower than the benchmark values and within 1% (~3σ) except for Cores 5 and 9, which calculate lower than the benchmark eigenvalues within 4σ. The primary difference between the two nuclear data libraries is the adjustment of the absorption cross section of graphite. Simulations of the absorber rod worth measurements are within 3σ of the benchmark experiment values. The complete benchmark evaluation details are available in the 2014 edition of the International Handbook of Evaluated Reactor Physics Benchmark Experiments.

  12. Integrating CFD, CAA, and Experiments Towards Benchmark Datasets for Airframe Noise Problems

    NASA Technical Reports Server (NTRS)

    Choudhari, Meelan M.; Yamamoto, Kazuomi

    2012-01-01

    Airframe noise corresponds to the acoustic radiation due to turbulent flow in the vicinity of airframe components such as high-lift devices and landing gears. The combination of geometric complexity, high Reynolds number turbulence, multiple regions of separation, and a strong coupling with adjacent physical components makes the problem of airframe noise highly challenging. Since 2010, the American Institute of Aeronautics and Astronautics has organized an ongoing series of workshops devoted to Benchmark Problems for Airframe Noise Computations (BANC). The BANC workshops are aimed at enabling systematic progress in the understanding and high-fidelity predictions of airframe noise via collaborative investigations that integrate state-of-the-art computational fluid dynamics, computational aeroacoustics, and in-depth, holistic, and multifacility measurements targeting a selected set of canonical yet realistic configurations. This paper provides a brief summary of the BANC effort, including its technical objectives, strategy, and selective outcomes thus far.

  13. ENDF/B-VII.1 Neutron Cross Section Data Testing with Critical Assembly Benchmarks and Reactor Experiments

    SciTech Connect

    Kahler, A. C.; MacFarlane, R. E.; Mosteller, R. D.; Kiedrowski, B. C.; Frankle, S. C.; Chadwick, M. B.; McKnight, R. D.; Lell, R. M.; Palmiotti, G.; Hiruta, H.; Herman, M.; Arcilla, R.; Mughabghab, S. F.; Sublet, J. C.; Trkov, A.; Trumbull, T. H.; Dunn, M.

    2011-12-01

    The ENDF/B-VII.1 library is the latest revision to the United States Evaluated Nuclear Data File (ENDF). The ENDF library is currently in its seventh generation, with ENDF/B-VII.0 being released in 2006. This revision expands upon that library, including the addition of new evaluated files (was 393 neutron files previously, now 423 including replacement of elemental vanadium and zinc evaluations with isotopic evaluations) and extension or updating of many existing neutron data files. Complete details are provided in the companion paper [M. B. Chadwick et al., 'ENDF/B-VII.1 Nuclear Data for Science and Technology: Cross Sections, Covariances, Fission Product Yields and Decay Data,' Nuclear Data Sheets, 112, 2887 (2011)]. This paper focuses on how accurately application libraries may be expected to perform in criticality calculations with these data. Continuous energy cross section libraries, suitable for use with the MCNP Monte Carlo transport code, have been generated and applied to a suite of nearly one thousand critical benchmark assemblies defined in the International Criticality Safety Benchmark Evaluation Project's International Handbook of Evaluated Criticality Safety Benchmark Experiments. This suite covers uranium and plutonium fuel systems in a variety of forms such as metallic, oxide or solution, and under a variety of spectral conditions, including unmoderated (i.e., bare), metal reflected and water or other light element reflected. Assembly eigenvalues that were accurately predicted with ENDF/B-VII.0 cross sections such as unmoderated and uranium reflected {sup 235}U and {sup 239}Pu assemblies, HEU solution systems and LEU oxide lattice systems that mimic commercial PWR configurations continue to be accurately calculated with ENDF/B-VII.1 cross sections, and deficiencies in predicted eigenvalues for assemblies containing selected materials, including titanium, manganese, cadmium and tungsten are greatly reduced. Improvements are also confirmed for

  14. ENDF/B-VII.1 Neutron Cross Section Data Testing with Critical Assembly Benchmarks and Reactor Experiments

    SciTech Connect

    Kahler, A.; MacFarlane, R. E.; Mosteller, R. D.; Kiedrowski, B. C.; Frankle, S. C.; Chadwick, M. B.; McKnight, R. D.; Lell, R. M.; Palmiotti, G.; Hiruta, H.; Herman, Michael W.; Arcilla, R.; Mughabghab, S. F.; Sublet, J. C.; Trkov, A.; Trumbull, T. H.; Dunn, Michael E.

    2011-01-01

    The ENDF/B-VII.1 library is the latest revision to the United States' Evaluated Nuclear Data File (ENDF). The ENDF library is currently in its seventh generation, with ENDF/B-VII.0 being released in 2006. This revision expands upon that library, including the addition of new evaluated files (was 393 neutron files previously, now 423 including replacement of elemental vanadium and zinc evaluations with isotopic evaluations) and extension or updating of many existing neutron data files. Complete details are provided in the companion paper [1]. This paper focuses on how accurately application libraries may be expected to perform in criticality calculations with these data. Continuous energy cross section libraries, suitable for use with the MCNP Monte Carlo transport code, have been generated and applied to a suite of nearly one thousand critical benchmark assemblies defined in the International Criticality Safety Benchmark Evaluation Project's International Handbook of Evaluated Criticality Safety Benchmark Experiments. This suite covers uranium and plutonium fuel systems in a variety of forms such as metallic, oxide or solution, and under a variety of spectral conditions, including unmoderated (i.e., bare), metal reflected and water or other light element reflected. Assembly eigenvalues that were accurately predicted with ENDF/B-VII.0 cross sections such as unmoderated and uranium reflected (235)U and (239)Pu assemblies, HEU solution systems and LEU oxide lattice systems that mimic commercial PWR configurations continue to be accurately calculated with ENDF/B-VII.1 cross sections, and deficiencies in predicted eigenvalues for assemblies containing selected materials, including titanium, manganese, cadmium and tungsten are greatly reduced. Improvements are also confirmed for selected actinide reaction rates such as (236)U; (238,242)Pu and (241,243)Am capture in fast systems. Other deficiencies, such as the overprediction of Pu solution system critical eigenvalues

  15. ENDF/B-VII.1 Neutron Cross Section Data Testing with Critical Assembly Benchmarks and Reactor Experiments

    NASA Astrophysics Data System (ADS)

    Kahler, A. C.; MacFarlane, R. E.; Mosteller, R. D.; Kiedrowski, B. C.; Frankle, S. C.; Chadwick, M. B.; McKnight, R. D.; Lell, R. M.; Palmiotti, G.; Hiruta, H.; Herman, M.; Arcilla, R.; Mughabghab, S. F.; Sublet, J. C.; Trkov, A.; Trumbull, T. H.; Dunn, M.

    2011-12-01

    The ENDF/B-VII.1 library is the latest revision to the United States' Evaluated Nuclear Data File (ENDF). The ENDF library is currently in its seventh generation, with ENDF/B-VII.0 being released in 2006. This revision expands upon that library, including the addition of new evaluated files (was 393 neutron files previously, now 423 including replacement of elemental vanadium and zinc evaluations with isotopic evaluations) and extension or updating of many existing neutron data files. Complete details are provided in the companion paper [M. B. Chadwick et al., "ENDF/B-VII.1 Nuclear Data for Science and Technology: Cross Sections, Covariances, Fission Product Yields and Decay Data," Nuclear Data Sheets, 112, 2887 (2011)]. This paper focuses on how accurately application libraries may be expected to perform in criticality calculations with these data. Continuous energy cross section libraries, suitable for use with the MCNP Monte Carlo transport code, have been generated and applied to a suite of nearly one thousand critical benchmark assemblies defined in the International Criticality Safety Benchmark Evaluation Project's International Handbook of Evaluated Criticality Safety Benchmark Experiments. This suite covers uranium and plutonium fuel systems in a variety of forms such as metallic, oxide or solution, and under a variety of spectral conditions, including unmoderated (i.e., bare), metal reflected and water or other light element reflected. Assembly eigenvalues that were accurately predicted with ENDF/B-VII.0 cross sections such as unmoderated and uranium reflected 235U and 239Pu assemblies, HEU solution systems and LEU oxide lattice systems that mimic commercial PWR configurations continue to be accurately calculated with ENDF/B-VII.1 cross sections, and deficiencies in predicted eigenvalues for assemblies containing selected materials, including titanium, manganese, cadmium and tungsten are greatly reduced. Improvements are also confirmed for selected

  16. Three dimensional modeling of Laser-Plasma interaction: benchmarking our predictive modeling tools vs. experiments

    SciTech Connect

    Divol, L; Berger, R; Meezan, N; Froula, D H; Dixit, S; Suter, L; Glenzer, S H

    2007-11-08

    We have developed a new target platform to study Laser Plasma Interaction in ignition-relevant conditions at the Omega laser facility (LLE/Rochester)[1]. By shooting an interaction beam along the axis of a gas-filled hohlraum heated by up to 17 kJ of heater beam energy, we were able to create a millimeter-scale underdense uniform plasma at electron temperatures above 3 keV. Extensive Thomson scattering measurements allowed us to benchmark our hydrodynamic simulations performed with HYDRA[2]. As a result of this effort, we can use these simulations with much confidence as input parameters for our LPI simulation code pF3d[3]. In this paper, we show that by using accurate hydrodynamic profiles and full three-dimensional simulations including a realistic modeling of the laser intensity pattern generated by various smoothing options, whole beam three-dimensional linear kinetic modeling of stimulated Brillouin scattering reproduces quantitatively the experimental measurements (SBS thresholds, reflectivity values, and the absence of measurable SRS). This good agreement was made possible by the recent increase in computing power routinely available for such simulations. These simulations accurately predicted the strong reduction of SBS measured when polarization smoothing is used.

  17. Three-dimensional modeling of laser-plasma interaction: Benchmarking our predictive modeling tools versus experiments

    SciTech Connect

    Divol, L.; Berger, R. L.; Meezan, N. B.; Froula, D. H.; Dixit, S.; Michel, P.; London, R.; Strozzi, D.; Ross, J.; Williams, E. A.; Still, B.; Suter, L. J.; Glenzer, S. H.

    2008-05-15

    New experimental capabilities [Froula et al., Phys. Rev. Lett. 98, 085001 (2007)] have been developed to study laser-plasma interaction (LPI) in ignition-relevant conditions at the Omega laser facility (LLE/Rochester). By shooting an interaction beam along the axis of a gas-filled hohlraum heated by up to 17 kJ of heater beam energy, a millimeter-scale underdense uniform plasma at electron temperatures above 3 keV was created. Extensive Thomson scattering measurements made it possible to benchmark hydrodynamic simulations performed with HYDRA [Meezan et al., Phys. Plasmas 14, 056304 (2007)]. As a result of this effort, these simulations can be used with much confidence as input parameters for the LPI simulation code PF3D [Berger et al., Phys. Plasmas 5, 4337 (1998)]. In this paper, it is shown that by using accurate hydrodynamic profiles and full three-dimensional simulations including a realistic modeling of the laser intensity pattern generated by various smoothing options, whole beam three-dimensional linear kinetic modeling of stimulated Brillouin scattering (SBS) reproduces quantitatively the experimental measurements (SBS thresholds, reflectivity values, and the absence of measurable stimulated Raman scattering). This good agreement was made possible by the recent increase in computing power routinely available for such simulations. These simulations accurately predicted the strong reduction of SBS measured when polarization smoothing is used.

  18. Self-Compassion Promotes Personal Improvement From Regret Experiences via Acceptance.

    PubMed

    Zhang, Jia Wei; Chen, Serena

    2016-02-01

    Why do some people report more personal improvement from their regret experiences than others? Three studies examined whether self-compassion promotes personal improvement derived from recalled regret experiences. In Study 1, we coded anonymous regret descriptions posted on a blog website. People who spontaneously described their regret with greater self-compassion were also judged as having expressed more personal improvement. In Study 2, higher trait self-compassion predicted greater self-reported and observer-rated personal improvement derived from recalled regret experiences. In Study 3, people induced to take a self-compassionate perspective toward a recalled regret experience reported greater acceptance, forgiveness, and personal improvement. A multiple mediation analysis comparing acceptance and forgiveness showed self-compassion led to greater personal improvement, in part, through heightened acceptance. Furthermore, self-compassion's effects on personal improvement were distinct from self-esteem and were not explained by adaptive emotional responses. Overall, the results suggest that self-compassion spurs positive adjustment in the face of regrets.

  19. TWODANT benchmark. Progress report

    SciTech Connect

    Lee, Sung

    1994-01-11

    The TWODANT (Two-Dimensional, Diffusion-Accelerated, Neutral-Particle Transport) code has been benchmarked against six critical experiments (the Jezebel plutonium critical assembly), and its calculated k-effective values have been compared with those of the KENO and MCNP codes.

  20. Advances in code validation for mixed-oxide fuel use in light-water reactors through benchmark experiments in the VENUS critical facility

    SciTech Connect

    D'hondt, Pierre; Baeten, Peter; Lance, Bernard; Marloye, Daniel; Basselier, Jacques

    2004-07-01

    Based on the experience accumulated during 25 years of collaboration, SCK.CEN and Belgonucleaire decided to implement a series of benchmark experiments in the VENUS critical facility in Mol, Belgium, in order to give organizations concerned with MOX fuel the possibility to calibrate and improve their neutronic calculation tools. In this paper these benchmark programmes and their outcome are highlighted; they have demonstrated that VENUS is a very flexible and easy-to-use tool for the investigation of neutronic data as well as for the study of licensing, safety, and operation aspects of MOX use in LWRs. (authors)

  1. Physics of Colloids in Space--Plus (PCS+) Experiment Completed Flight Acceptance Testing

    NASA Technical Reports Server (NTRS)

    Doherty, Michael P.

    2004-01-01

    The Physics of Colloids in Space--Plus (PCS+) experiment successfully completed system-level flight acceptance testing in the fall of 2003. This testing included electromagnetic interference (EMI) testing, vibration testing, and thermal testing. PCS+, an Expedite the Process of Experiments to Space Station (EXPRESS) Rack payload, will deploy a second set of colloid samples within the PCS flight hardware system that flew on the International Space Station (ISS) from April 2001 to June 2002. PCS+ is slated to return to the ISS in late 2004 or early 2005.

  2. The effects of video compression on acceptability of images for monitoring life sciences experiments

    NASA Technical Reports Server (NTRS)

    Haines, Richard F.; Chuang, Sherry L.

    1992-01-01

    Future manned space operations for Space Station Freedom will call for a variety of carefully planned multimedia digital communications, including full-frame-rate color video, to support remote operations of scientific experiments. This paper presents the results of an investigation to determine if video compression is a viable solution to transmission bandwidth constraints. It reports on the impact of different levels of compression and associated calculational parameters on image acceptability to investigators in life-sciences research at ARC. Three nonhuman life-sciences disciplines (plant, rodent, and primate biology) were selected for this study. A total of 33 subjects viewed experimental scenes in their own scientific disciplines. Ten plant scientists viewed still images of wheat stalks at various stages of growth. Each image was compressed to four different compression levels using the Joint Photographic Experts Group (JPEG) standard algorithm, and the images were presented in random order. Twelve and eleven staff members viewed 30-sec videotaped segments showing small rodents and a small primate, respectively. Each segment was repeated at four different compression levels in random order using an inverse cosine transform (ICT) algorithm. Each viewer made a series of subjective image-quality ratings. There was a significant difference in image ratings according to the type of scene viewed within disciplines; thus, ratings were scene dependent. Image (still and motion) acceptability does, in fact, vary according to compression level. The JPEG still-image-compression levels, even with the large range of 5:1 to 120:1 in this study, yielded equally high levels of acceptability. In contrast, the ICT algorithm for motion compression yielded a sharp decline in acceptability below 768 kb/sec. Therefore, if video compression is to be used as a solution for overcoming transmission bandwidth constraints, the effective management of the ratio and compression parameters
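
    The still-image part of the study presented each scene at several JPEG compression levels. A minimal sketch of how such a stimulus set might be generated is shown below; it assumes the Pillow library and a file named wheat.png, and the quality settings are placeholders rather than the 5:1 to 120:1 ratios used at ARC.

      # Hypothetical stimulus-generation sketch (assumes Pillow and "wheat.png"):
      # save one scene at several JPEG quality levels and report the approximate
      # compression ratio relative to the uncompressed 24-bit image.
      import os
      from PIL import Image

      source = Image.open("wheat.png").convert("RGB")
      raw_bytes = source.width * source.height * 3          # uncompressed RGB size

      for quality in (90, 60, 30, 10):                      # placeholder quality levels
          out_name = f"wheat_q{quality}.jpg"
          source.save(out_name, "JPEG", quality=quality)
          ratio = raw_bytes / os.path.getsize(out_name)
          print(f"quality={quality:3d}  approx. ratio {ratio:5.1f}:1")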

  3. Pore-scale and continuum simulations of solute transport micromodel benchmark experiments

    DOE PAGES

    Oostrom, M.; Mehmani, Y.; Romero-Gomez, P.; ...

    2014-06-18

    Four sets of nonreactive solute transport experiments were conducted with micromodels. Each set comprised three experiments and varied one of the following: flow velocity, grain diameter, pore-aspect ratio, or flow-focusing heterogeneity. The data sets were offered to pore-scale modeling groups to test their numerical simulators. Each set consisted of two learning experiments, for which our results were made available, and one challenge experiment, for which only the experimental description and base input parameters were provided. The experimental results showed a nonlinear dependence of the transverse dispersion coefficient on the Peclet number, a negligible effect of the pore-aspect ratio on transverse mixing, and considerably enhanced mixing due to flow focusing. Five pore-scale models and one continuum-scale model were used to simulate the experiments. Of the pore-scale models, two used a pore-network (PN) method, two others are based on a lattice Boltzmann (LB) approach, and one used a computational fluid dynamics (CFD) technique. Furthermore, the learning experiments were used by the PN models to modify the standard perfect mixing approach in pore bodies into approaches that simulate the observed incomplete mixing. The LB and CFD models used the learning experiments to appropriately discretize the spatial grid representations. For the continuum modeling, the required dispersivity input values were estimated based on published nonlinear relations between transverse dispersion coefficients and Peclet number. Comparisons between experimental and numerical results for the four challenge experiments show that all pore-scale models were able to satisfactorily simulate the experiments. The continuum model underestimated the required dispersivity values, resulting in reduced dispersion. The PN models were able to complete the simulations in a few minutes, whereas the direct models, which account for the micromodel geometry and underlying flow and transport physics

  4. Pore-scale and continuum simulations of solute transport micromodel benchmark experiments

    SciTech Connect

    Oostrom, M.; Mehmani, Y.; Romero-Gomez, P.; Tang, Y.; Liu, H.; Yoon, H.; Kang, Q.; Joekar-Niasar, V.; Balhoff, M. T.; Dewers, T.; Tartakovsky, G. D.; Leist, E. A.; Hess, N. J.; Perkins, W. A.; Rakowski, C. L.; Richmond, M. C.; Serkowski, J. A.; Werth, C. J.; Valocchi, A. J.; Wietsma, T. W.; Zhang, C.

    2014-06-18

    Four sets of nonreactive solute transport experiments were conducted with micromodels. Each set comprised three experiments and varied one of the following: flow velocity, grain diameter, pore-aspect ratio, or flow-focusing heterogeneity. The data sets were offered to pore-scale modeling groups to test their numerical simulators. Each set consisted of two learning experiments, for which our results were made available, and one challenge experiment, for which only the experimental description and base input parameters were provided. The experimental results showed a nonlinear dependence of the transverse dispersion coefficient on the Peclet number, a negligible effect of the pore-aspect ratio on transverse mixing, and considerably enhanced mixing due to flow focusing. Five pore-scale models and one continuum-scale model were used to simulate the experiments. Of the pore-scale models, two used a pore-network (PN) method, two others are based on a lattice Boltzmann (LB) approach, and one used a computational fluid dynamics (CFD) technique. Furthermore, the learning experiments were used by the PN models to modify the standard perfect mixing approach in pore bodies into approaches that simulate the observed incomplete mixing. The LB and CFD models used the learning experiments to appropriately discretize the spatial grid representations. For the continuum modeling, the required dispersivity input values were estimated based on published nonlinear relations between transverse dispersion coefficients and Peclet number. Comparisons between experimental and numerical results for the four challenge experiments show that all pore-scale models were able to satisfactorily simulate the experiments. The continuum model underestimated the required dispersivity values, resulting in reduced dispersion. The PN models were able to complete the simulations in a few minutes, whereas the direct models, which account for the micromodel geometry and underlying flow and transport physics, needed

  5. Pore-scale and Continuum Simulations of Solute Transport Micromodel Benchmark Experiments

    SciTech Connect

    Oostrom, Martinus; Mehmani, Yashar; Romero Gomez, Pedro DJ; Tang, Y.; Liu, H.; Yoon, Hongkyu; Kang, Qinjun; Joekar Niasar, Vahid; Balhoff, Matthew; Dewers, T.; Tartakovsky, Guzel D.; Leist, Emily AE; Hess, Nancy J.; Perkins, William A.; Rakowski, Cynthia L.; Richmond, Marshall C.; Serkowski, John A.; Werth, Charles J.; Valocchi, Albert J.; Wietsma, Thomas W.; Zhang, Changyong

    2016-08-01

    Four sets of micromodel nonreactive solute transport experiments were conducted with flow velocity, grain diameter, pore-aspect ratio, and flow focusing heterogeneity as the variables. The data sets were offered to pore-scale modeling groups to test their simulators. Each set consisted of two learning experiments, for which all results were made available, and a challenge experiment, for which only the experimental description and base input parameters were provided. The experimental results showed a nonlinear dependence of the dispersion coefficient on the Peclet number, a negligible effect of the pore-aspect ratio on transverse mixing, and considerably enhanced mixing due to flow focusing. Five pore-scale models and one continuum-scale model were used to simulate the experiments. Of the pore-scale models, two used a pore-network (PN) method, two others are based on a lattice-Boltzmann (LB) approach, and one employed a computational fluid dynamics (CFD) technique. The learning experiments were used by the PN models to modify the standard perfect mixing approach in pore bodies into approaches to simulate the observed incomplete mixing. The LB and CFD models used these experiments to appropriately discretize the grid representations. The continuum model used published non-linear relations between transverse dispersion coefficients and Peclet numbers to compute the required dispersivity input values. Comparisons between experimental and numerical results for the four challenge experiments show that all pore-scale models were able to satisfactorily simulate the experiments. The continuum model underestimated the required dispersivity values, resulting in less dispersion. The PN models were able to complete the simulations in a few minutes, whereas the direct models needed up to several days on supercomputers to resolve the more complex problems.
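
    For the continuum-scale modeling, the dispersivity inputs were derived from published nonlinear relations between the transverse dispersion coefficient and the Peclet number. The sketch below only illustrates the general shape of such a relation with a generic power law; the coefficients are placeholders, not those used in the benchmark study.

      # Generic power-law illustration: D_T/D_m = 1 + a * Pe**b. The coefficients
      # a and b are placeholders; the benchmark study used its own published
      # correlations, which are not reproduced here.
      def transverse_dispersion(peclet, d_m=1.0e-9, a=0.1, b=0.9):
          return d_m * (1.0 + a * peclet**b)   # m^2/s: molecular diffusion plus a mechanical term

      for peclet in (1, 10, 100, 1000):
          print(f"Pe = {peclet:5d}   D_T ~ {transverse_dispersion(peclet):.2e} m^2/s")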

  6. Trust, confidence, procedural fairness, outcome fairness, moral conviction, and the acceptance of GM field experiments.

    PubMed

    Siegrist, Michael; Connor, Melanie; Keller, Carmen

    2012-08-01

    In 2005, Swiss citizens endorsed a moratorium on gene technology, resulting in the prohibition of the commercial cultivation of genetically modified crops and the growth of genetically modified animals until 2013. However, scientific research was not affected by this moratorium, and in 2008, GMO field experiments were conducted that allowed us to examine the factors that influence their acceptance by the public. In this study, trust and confidence items were analyzed using principal component analysis. The analysis revealed the following three factors: "economy/health and environment" (value similarity based trust), "trust and honesty of industry and scientists" (value similarity based trust), and "competence" (confidence). The results of a regression analysis showed that all the three factors significantly influenced the acceptance of GM field experiments. Furthermore, risk communication scholars have suggested that fairness also plays an important role in the acceptance of environmental hazards. We, therefore, included measures for outcome fairness and procedural fairness in our model. However, the impact of fairness may be moderated by moral conviction. That is, fairness may be significant for people for whom GMO is not an important issue, but not for people for whom GMO is an important issue. The regression analysis showed that, in addition to the trust and confidence factors, moral conviction, outcome fairness, and procedural fairness were significant predictors. The results suggest that the influence of procedural fairness is even stronger for persons having high moral convictions compared with persons having low moral convictions.
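
    The analysis described above combines a principal component analysis of the trust and confidence items with a regression of acceptance on the resulting factors. A small sketch of that two-step structure, on synthetic data rather than the survey responses, is given below (it assumes NumPy and scikit-learn are available).

      # Synthetic-data sketch of the two-step analysis above: PCA on the
      # trust/confidence items, then a regression of acceptance on the factor
      # scores. The data are random, not the Swiss survey data.
      import numpy as np
      from sklearn.decomposition import PCA
      from sklearn.linear_model import LinearRegression

      rng = np.random.default_rng(0)
      items = rng.normal(size=(300, 9))                       # 9 hypothetical questionnaire items
      acceptance = items[:, :3].mean(axis=1) + 0.3 * rng.normal(size=300)

      factors = PCA(n_components=3).fit_transform(items)      # three trust/confidence factors
      model = LinearRegression().fit(factors, acceptance)
      print("R^2 =", round(model.score(factors, acceptance), 2))
      print("coefficients:", np.round(model.coef_, 2))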

  7. Public acceptability of population-level interventions to reduce alcohol consumption: a discrete choice experiment.

    PubMed

    Pechey, Rachel; Burge, Peter; Mentzakis, Emmanouil; Suhrcke, Marc; Marteau, Theresa M

    2014-07-01

    Public acceptability influences policy action, but the most acceptable policies are not always the most effective. This discrete choice experiment provides a novel investigation of the acceptability of different interventions to reduce alcohol consumption and the effect of information on expected effectiveness, using a UK general population sample of 1202 adults. Policy options included high, medium and low intensity versions of: Minimum Unit Pricing (MUP) for alcohol; reducing numbers of alcohol retail outlets; and regulating alcohol advertising. Outcomes of interventions were predicted for: alcohol-related crimes; alcohol-related hospital admissions; and heavy drinkers. First, the models obtained were used to predict preferences if expected outcomes of interventions were not taken into account. In such models around half of participants or more were predicted to prefer the status quo over implementing outlet reductions or higher intensity MUP. Second, preferences were predicted when information on expected outcomes was considered, with most participants now choosing any given intervention over the status quo. Acceptability of MUP interventions increased by the greatest extent: from 43% to 63% preferring MUP of £1 to the status quo. Respondents' own drinking behaviour also influenced preferences, with around 90% of non-drinkers being predicted to choose all interventions over the status quo, and with more moderate than heavy drinkers favouring a given policy over the status quo. Importantly, the study findings suggest public acceptability of alcohol interventions is dependent on both the nature of the policy and its expected effectiveness. Policy-makers struggling to mobilise support for hitherto unpopular but promising policies should consider giving greater prominence to their expected outcomes.

  8. TESTING AND ACCEPTANCE OF FUEL PLATES FOR RERTR FUEL DEVELOPMENT EXPERIMENTS

    SciTech Connect

    J.M. Wight; G.A. Moore; S.C. Taylor

    2008-10-01

    This paper discusses how candidate fuel plates for RERTR Fuel Development experiments are examined and tested for acceptance prior to reactor insertion. These tests include destructive and nondestructive examinations (DE and NDE). The DE includes blister annealing for dispersion fuel plates, bend testing of adjacent cladding, and microscopic examination of archive fuel plates. The NDE includes ultrasonic (UT) scanning and radiography. UT tests include an ultrasonic scan for areas of "debonds" and a high frequency ultrasonic scan to determine the "minimum cladding" over the fuel. Radiography inspections include identifying fuel outside of the maximum fuel zone and measurements and calculations for fuel density. Details of each test are provided and acceptance criteria are defined. These tests help to provide a high level of confidence that the fuel plate will perform in the reactor without a breach in the cladding.

  9. Verification and validation benchmarks.

    SciTech Connect

    Oberkampf, William Louis; Trucano, Timothy Guy

    2007-02-01

    Verification and validation (V&V) are the primary means to assess the accuracy and reliability of computational simulations. V&V methods and procedures have fundamentally improved the credibility of simulations in several high-consequence fields, such as nuclear reactor safety, underground nuclear waste storage, and nuclear weapon safety. Although the terminology is not uniform across engineering disciplines, code verification deals with assessing the reliability of the software coding, and solution verification deals with assessing the numerical accuracy of the solution to a computational model. Validation addresses the physics modeling accuracy of a computational simulation by comparing the computational results with experimental data. Code verification benchmarks and validation benchmarks have been constructed for a number of years in every field of computational simulation. However, no comprehensive guidelines have been proposed for the construction and use of V&V benchmarks. For example, the field of nuclear reactor safety has not focused on code verification benchmarks, but it has placed great emphasis on developing validation benchmarks. Many of these validation benchmarks are closely related to the operations of actual reactors at near-safety-critical conditions, as opposed to being more fundamental-physics benchmarks. This paper presents recommendations for the effective design and use of code verification benchmarks based on manufactured solutions, classical analytical solutions, and highly accurate numerical solutions. In addition, this paper presents recommendations for the design and use of validation benchmarks, highlighting the careful design of building-block experiments, the estimation of experimental measurement uncertainty for both inputs and outputs to the code, validation metrics, and the role of model calibration in validation. It is argued that the understanding of predictive capability of a computational model is built on the level of

  10. Aquatic Life Benchmarks for Pesticide Registration

    EPA Pesticide Factsheets

    Each Aquatic Life Benchmark is based on the most sensitive, scientifically acceptable toxicity endpoint available to EPA for a given taxon (for example, freshwater fish), selected from all scientifically acceptable toxicity data available to EPA.

  11. The role of age and motivation for the experience of social acceptance and rejection.

    PubMed

    Nikitin, Jana; Schoch, Simone; Freund, Alexandra M

    2014-07-01

    A study with n = 55 younger (18-33 years, M = 23.67) and n = 58 older (61-85 years, M = 71.44) adults investigated age-related differences in social approach and avoidance motivation and their consequences for the experience of social interactions. Results confirmed the hypothesis that a predominant habitual approach motivation in younger adults shifts toward a stronger avoidance motivation in older adults. Moreover, age and momentary motivation predicted the experience of an actual social interaction. Younger adults reported stronger negative emotions in a rejection situation when striving to approach acceptance rather than avoid rejection. Conversely, older adults reported fewer positive emotions in a rejection situation when they attempted to avoid rejection rather than approach acceptance. Taken together, the present study demonstrates that the same motivation has different consequences for the experience of potentially threatening social situations in younger and older adults. People seem to react emotionally when the achievement of important developmental goals (approaching others in young adulthood, avoiding negative social interactions in older adulthood) is thwarted. Moreover, results suggest that approach and avoidance motivation play an important role for socioemotional development.

  12. Electron-impact ionization of helium: A comprehensive experiment benchmarks theory

    SciTech Connect

    Ren, X.; Pflueger, T.; Senftleben, A.; Xu, S.; Dorn, A.; Ullrich, J.; Bray, I.; Fursa, D.V.; Colgan, J.; Pindzola, M.S.

    2011-05-15

    Single ionization of helium by 70.6-eV electron impact is studied in a comprehensive experiment covering a major part of the entire collision kinematics and the full 4{pi} solid angle for the emitted electron. The absolutely normalized triple-differential experimental cross sections are compared with results from the convergent close-coupling (CCC) and the time-dependent close-coupling (TDCC) theories. Whereas excellent agreement with the TDCC prediction is only found for equal energy sharing, the CCC calculations are in excellent agreement with essentially all experimentally observed dynamical features, including the absolute magnitude of the cross sections.

  13. Limitations of Community College Benchmarking and Benchmarks

    ERIC Educational Resources Information Center

    Bers, Trudy H.

    2006-01-01

    This chapter distinguishes between benchmarks and benchmarking, describes a number of data and cultural limitations to benchmarking projects, and suggests that external demands for accountability are the dominant reason for growing interest in benchmarking among community colleges.

  14. NASA Controller Acceptability Study 1 (CAS-1) Experiment Description and Initial Observations

    NASA Technical Reports Server (NTRS)

    Chamberlain, James P.; Consiglio, Maria C.; Comstock, James R., Jr.; Ghatas, Rania W.; Munoz, Cesar

    2015-01-01

    This paper describes the Controller Acceptability Study 1 (CAS-1) experiment that was conducted by NASA Langley Research Center personnel from January through March 2014 and presents partial CAS-1 results. CAS-1 employed 14 air traffic controller volunteers as research subjects to assess the viability of simulated future unmanned aircraft systems (UAS) operating alongside manned aircraft in moderate-density, moderate-complexity Class E airspace. These simulated UAS were equipped with a prototype pilot-in-the-loop (PITL) Detect and Avoid (DAA) system, specifically the Self-Separation (SS) function of such a system based on Stratway+ software to replace the see-and-avoid capabilities of manned aircraft pilots. A quantitative CAS-1 objective was to determine horizontal miss distance (HMD) values for SS encounters that were most acceptable to air traffic controllers, specifically HMD values that were assessed as neither unsafely small nor disruptively large. HMD values between 0.5 and 3.0 nautical miles (nmi) were assessed for a wide array of encounter geometries between UAS and manned aircraft. The paper includes brief introductory material about DAA systems and their SS functions, followed by descriptions of the CAS-1 simulation environment, prototype PITL SS capability, and experiment design, and concludes with presentation and discussion of partial CAS-1 data and results.

  15. New level-resolved collision data for neutral argon, benchmarked against the ALEXIS plasma experiment

    NASA Astrophysics Data System (ADS)

    Arnold, Nicholas; Loch, Stuart; Ballance, Connor; Thomas, Ed

    2016-10-01

    Performing spectroscopic measurements of emission lines in low temperature laboratory plasmas is challenging because the plasma is often neutral-dominated and not in thermal equilibrium. The densities and temperatures are such that coronal models do not apply; meaning that generalized collisional-radiative (GCR) methods must be employed to theoretically analyze atomic processes. However, for most noble gases, detailed, level-resolved atomic data for neutral and low-charge states does not exist in the literature. We report on a new project, where we use existing atomic physics codes to calculate level-resolved atomic data for neutral and low charge states of argon and compare with previously published, term-resolved theoretical results. In addition, we use the Atomic Structure and Data Analysis (ADAS) suite of codes to calculate a GCR model for low temperature neutral argon, which we compare to published measurements of argon optical emission cross sections. Finally, we compare synthetic spectra generated from our data with observations taken from the Auburn Linear Experiment for Instability Studies (ALEXIS) in an attempt to develop new optical plasma diagnostics for electron temperature and plasma density measurements. This project is supported by the U.S. Department of Energy. Grant Number: DE-FG02-00ER54476.

  16. Effects of an Educational Experience Incorporating an Inventory of Factors Potentially Influencing Student Acceptance of Biological Evolution

    ERIC Educational Resources Information Center

    Wiles, Jason R.; Alters, Brian

    2011-01-01

    This investigation provides an extensive review of scientific, religious, and otherwise non-scientific factors that may influence student acceptance of biological evolution. We also measure the extent to which students' levels of acceptance changed following an educational experience designed to address an inclusive inventory of factors identified…

  17. Benchmark Experiments of Thermal Neutron and Capture Gamma-Ray Distributions in Concrete Using {sup 252}Cf

    SciTech Connect

    Asano, Yoshihiro; Sugita, Takeshi; Hirose, Hideyuki; Suzaki, Takenori

    2005-10-15

    The distributions of thermal neutrons and capture gamma rays in ordinary concrete were investigated by using {sup 252}Cf. Two subjects are considered. One is a set of benchmark experiments for the thermal neutron and capture gamma-ray distributions in ordinary concrete. The thermal neutron and capture gamma-ray distributions were measured using gold-foil activation detectors and thermoluminescence detectors. These were compared with simulations performed with the discrete ordinates code ANISN using two different group structures of the new Japanese cross-section library JENDL-3.3, showing reasonable agreement for both the fine and coarse thermal-neutron group structures. The other is a comparison of simulations with two different cross-section libraries, JENDL-3.3 and ENDF/B-VI, for the deep penetration of neutrons in the concrete, showing close agreement for concrete 0 to 100 cm thick. However, the differences in flux grow with increasing concrete thickness, reaching a factor of approximately eight near a thickness of 4 m.

  18. Willingness-To-Accept Pharmaceutical Retail Inconvenience: Evidence from a Contingent Choice Experiment

    PubMed Central

    Finlay, Keith; Stoecker, Charles; Cunningham, Scott

    2015-01-01

    Objectives Restrictions on retail purchases of pseudoephedrine are one regulatory approach to reduce the social costs of methamphetamine production and use, but may impose costs on legitimate users of nasal decongestants. This is the first study to evaluate the consumer-welfare costs of restricting access to medications. Our objective was to measure the inconvenience cost consumers place on restrictions on cold medication purchases, including identification requirements, purchase limits, over-the-counter availability, prescription requirements, and the active ingredient. Methods We conducted a contingent choice experiment with Amazon Mechanical Turk workers that presented participants with randomized, hypothetical product prices and combinations of restrictions that reflect the range of public policies. We used a conditional logit model to calculate willingness-to-accept for each restriction. Results Respondents’ willingness-to-accept was $14.17 ($9.76–$18.58) for prescription requirements and $9.68 ($7.03–$12.33) for behind-the-counter restrictions, per box of pseudoephedrine product. Participants were willing to pay $4.09 ($1.66–$6.52) per box to purchase pseudoephedrine-based products over phenylephrine-based products. Conclusions Restricting access to medicines as a means of reducing the social costs of non-medical use can imply large inconvenience costs for legitimate consumers. These results are relevant to discussions of retail access restrictions on other medications. PMID:26024444
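
    In a conditional logit model of this kind, the monetary value attached to an attribute is typically computed as the negative ratio of the attribute coefficient to the price coefficient. The sketch below uses invented coefficients of roughly the right scale, not the study's estimates.

      # Illustration of the willingness-to-pay / willingness-to-accept ratio from
      # a conditional logit model. The coefficients are invented placeholders.
      def marginal_wtp(beta_attribute, beta_price):
          return -beta_attribute / beta_price     # dollars per unit of the attribute

      beta_price = -0.08          # utility change per extra dollar of product price
      beta_prescription = -1.13   # disutility of requiring a prescription
      wtp = marginal_wtp(beta_prescription, beta_price)       # negative: the restriction is disliked
      print(f"WTA ~ ${-wtp:.2f} per box")                      # compensation required, roughly the $14 scale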

  19. 'Feel the Feeling': Psychological practitioners' experience of acceptance and commitment therapy well-being training in the workplace.

    PubMed

    Wardley, Matt Nj; Flaxman, Paul E; Willig, Carla; Gillanders, David

    2016-08-01

    This empirical study investigates psychological practitioners' experience of worksite training in acceptance and commitment therapy using an interpretative phenomenological analysis methodology. Semi-structured interviews were conducted with eight participants, and three themes emerged from the interpretative phenomenological analysis: influence of previous experiences, self and others, and impact and application. The significance of the experiential nature of the acceptance and commitment therapy training is explored, as well as the dual aspects of developing participants' self-care while also considering their own clinical practice. Consistencies and inconsistencies across acceptance and commitment therapy processes are considered, as well as clinical implications, study limitations and future research suggestions.

  20. Pain Management Experiences and the Acceptability of Cognitive Behavioral Strategies Among American Indians and Alaska Natives

    PubMed Central

    Haozous, Emily A.; Doorenbos, Ardith Z.; Stoner, Susan

    2014-01-01

    Purpose The purpose of this project was to explore the chronic pain experience and establish cultural appropriateness of cognitive behavioral pain management (CBPM) techniques in American Indians and Alaska Natives (AI/ANs). Design A semistructured interview guide was used with three focus groups of AI/AN patients in the U.S. Southwest and Pacific Northwest regions to explore pain and CBPM in AI/ANs. Findings The participants provided rich qualitative data regarding chronic pain and willingness to use CBPM. Themes included empty promises and health care insufficiencies, individuality, pain management strategies, and suggestions for health care providers. Conclusion Results suggest that there is room for improvement in chronic pain care among AI/ANs and that CBPM would likely be a viable and culturally appropriate approach for chronic pain management. Implications This research provides evidence that CBPM is culturally acceptable and in alignment with existing traditional AI/AN strategies for coping and healing. PMID:25403169

  1. A high resolution, broad energy acceptance spectrometer for laser wakefield acceleration experiments

    SciTech Connect

    Sears, Christopher M. S.; Cuevas, Sofia Benavides; Veisz, Laszlo; Schramm, Ulrich; Schmid, Karl; Buck, Alexander; Habs, Dieter; Krausz, Ferenc

    2010-07-15

    Laser wakefield experiments present a unique challenge in measuring the resulting electron energy properties due to the large energy range of interest, typically several hundred MeV, and the large electron beam divergence and pointing jitter of >1 mrad. In many experiments the energy resolution and accuracy are limited by the convolved transverse spot size and pointing jitter of the beam. In this paper we present an electron energy spectrometer consisting of two magnets designed specifically for laser wakefield experiments. In the primary magnet the field is produced by permanent magnets. A second optional electromagnet can be used to obtain better resolution for electron energies above 75 MeV. The spectrometer has an acceptance of 2.5-400 MeV (Emax/Emin > 100) with a resolution of better than 1% rms for electron energies above 25 MeV. This high resolution is achieved by refocusing electrons in the energy plane, without any postprocessing image deconvolution. Finally, the spectrometer employs two complementary detection mechanisms: (1) absolutely calibrated scintillation screens imaged by cameras outside the vacuum chamber and (2) an array of scintillating fibers coupled to a low-noise charge-coupled device.
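
    As a rough illustration of the magnet sizing involved (not taken from the paper), the bending radius of a relativistic electron in a uniform dipole field follows from r[m] ≈ p[GeV/c] / (0.2998 B[T]). The sketch below uses an assumed placeholder field value, not the instrument's actual field.

```python
import math

M_E = 0.000511  # electron rest mass energy in GeV

def bending_radius(kinetic_energy_gev, b_field_tesla):
    """Radius of curvature (m) of an electron in a uniform dipole field."""
    total_energy = kinetic_energy_gev + M_E
    momentum = math.sqrt(total_energy**2 - M_E**2)  # in GeV/c
    return momentum / (0.2998 * b_field_tesla)

# Assumed 0.9 T permanent-magnet field (placeholder, not the instrument's value)
for e_kin in (0.025, 0.075, 0.400):  # 25, 75, 400 MeV kinetic energy
    print(f"{e_kin * 1e3:5.0f} MeV -> r = {bending_radius(e_kin, 0.9):.3f} m")
```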

  2. Benchmark Evaluation of Start-Up and Zero-Power Measurements at the High-Temperature Engineering Test Reactor

    SciTech Connect

    Bess, John D.; Fujimoto, Nozomu

    2014-10-09

    Benchmark models were developed to evaluate six cold-critical and two warm-critical, zero-power measurements of the HTTR. Additional measurements of a fully-loaded subcritical configuration, core excess reactivity, shutdown margins, six isothermal temperature coefficients, and axial reaction-rate distributions were also evaluated as acceptable benchmark experiments. Insufficient information is publicly available to develop finely-detailed models of the HTTR as much of the design information is still proprietary. However, the uncertainties in the benchmark models are judged to be of sufficient magnitude to encompass any biases and bias uncertainties incurred through the simplification process used to develop the benchmark models. Dominant uncertainties in the experimental keff for all core configurations come from uncertainties in the impurity content of the various graphite blocks that comprise the HTTR. Monte Carlo calculations of keff are between approximately 0.9% and 2.7% greater than the benchmark values. Reevaluation of the HTTR models as additional information becomes available could improve the quality of this benchmark and possibly reduce the computational biases. High-quality characterization of graphite impurities would significantly improve the quality of the HTTR benchmark assessment. Simulations of the other reactor physics measurements are in good agreement with the benchmark experiment values. The complete benchmark evaluation details are available in the 2014 edition of the International Handbook of Evaluated Reactor Physics Benchmark Experiments.
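
    The computational bias quoted here is simply the relative difference between the calculated and benchmark keff values. A minimal sketch of that comparison, using hypothetical keff values rather than the evaluation's actual results:

```python
def keff_bias_percent(k_calc, k_benchmark):
    """Relative bias of a calculated eigenvalue against the benchmark value, in percent."""
    return 100.0 * (k_calc - k_benchmark) / k_benchmark

# Hypothetical values for illustration only
print(f"{keff_bias_percent(k_calc=1.0185, k_benchmark=1.0025):.2f} % high")
```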

  3. Benchmark Evaluation of Start-Up and Zero-Power Measurements at the High-Temperature Engineering Test Reactor

    DOE PAGES

    Bess, John D.; Fujimoto, Nozomu

    2014-10-09

    Benchmark models were developed to evaluate six cold-critical and two warm-critical, zero-power measurements of the HTTR. Additional measurements of a fully-loaded subcritical configuration, core excess reactivity, shutdown margins, six isothermal temperature coefficients, and axial reaction-rate distributions were also evaluated as acceptable benchmark experiments. Insufficient information is publicly available to develop finely-detailed models of the HTTR as much of the design information is still proprietary. However, the uncertainties in the benchmark models are judged to be of sufficient magnitude to encompass any biases and bias uncertainties incurred through the simplification process used to develop the benchmark models. Dominant uncertainties in the experimental keff for all core configurations come from uncertainties in the impurity content of the various graphite blocks that comprise the HTTR. Monte Carlo calculations of keff are between approximately 0.9% and 2.7% greater than the benchmark values. Reevaluation of the HTTR models as additional information becomes available could improve the quality of this benchmark and possibly reduce the computational biases. High-quality characterization of graphite impurities would significantly improve the quality of the HTTR benchmark assessment. Simulations of the other reactor physics measurements are in good agreement with the benchmark experiment values. The complete benchmark evaluation details are available in the 2014 edition of the International Handbook of Evaluated Reactor Physics Benchmark Experiments.

  4. Benchmark Evaluation of the NRAD Reactor LEU Core Startup Measurements

    SciTech Connect

    J. D. Bess; T. L. Maddock; M. A. Marshall

    2011-09-01

    The Neutron Radiography (NRAD) reactor is a 250-kW TRIGA (Training, Research, Isotope Production, General Atomics) conversion-type reactor at the Idaho National Laboratory; it is primarily used for neutron radiography analysis of irradiated and unirradiated fuels and materials. The NRAD reactor was converted from HEU to LEU fuel with 60 fuel elements and brought critical on March 31, 2010. This configuration of the NRAD reactor has been evaluated as an acceptable benchmark experiment and is available in the 2011 editions of the International Handbook of Evaluated Criticality Safety Benchmark Experiments (ICSBEP Handbook) and the International Handbook of Evaluated Reactor Physics Benchmark Experiments (IRPhEP Handbook). Significant effort went into precisely characterizing all aspects of the reactor core dimensions and material properties; detailed analyses of reactor parameters minimized experimental uncertainties. The largest contributors to the total benchmark uncertainty were the ²³⁴U, ²³⁶U, Er, and Hf content in the fuel; the manganese content in the stainless steel cladding; and the unknown level of water saturation in the graphite reflector blocks. A simplified benchmark model of the NRAD reactor was prepared with a keff of 1.0012 ± 0.0029 (1σ). Monte Carlo calculations with MCNP5 and KENO-VI and various neutron cross section libraries were performed and compared with the benchmark eigenvalue for the 60-fuel-element core configuration; all calculated eigenvalues are between 0.3% and 0.8% greater than the benchmark value. Benchmark evaluations of the NRAD reactor are beneficial in understanding biases and uncertainties affecting criticality safety analyses of storage, handling, or transportation applications with LEU-Er-Zr-H fuel.
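
    Independent uncertainty contributors of the kind listed above are conventionally combined in quadrature to form the total benchmark uncertainty. A short sketch with placeholder component values (not the evaluation's actual breakdown):

```python
import math

# Hypothetical 1-sigma contributions to benchmark keff uncertainty (placeholders)
components = {
    "fuel impurity content": 0.0018,
    "cladding manganese content": 0.0010,
    "graphite water saturation": 0.0008,
}

total = math.sqrt(sum(v**2 for v in components.values()))
print(f"combined 1-sigma uncertainty: {total:.4f}")
```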

  5. Acceptability of Financial Incentives for Health Behaviours: A Discrete Choice Experiment

    PubMed Central

    Giles, Emma L.; Becker, Frauke; Ternent, Laura; Sniehotta, Falko F.; McColl, Elaine

    2016-01-01

    Background Healthy behaviours are important determinants of health and disease, but many people find it difficult to perform these behaviours. Systematic reviews support the use of personal financial incentives to encourage healthy behaviours. There is concern that financial incentives may be unacceptable to the public, those delivering services and policymakers, but this has been poorly studied. Without widespread acceptability, financial incentives are unlikely to be widely implemented. We sought to answer two questions: what are the relative preferences of UK adults for attributes of financial incentives for healthy behaviours? Do preferences vary according to the respondents’ socio-demographic characteristics? Methods We conducted an online discrete choice experiment. Participants were adult members of a market research panel living in the UK selected using quota sampling. Preferences were examined for financial incentives for: smoking cessation, regular physical activity, attendance for vaccination, and attendance for screening. Attributes of interest (and their levels) were: type of incentive (none, cash, shopping vouchers or lottery tickets); value of incentive (a continuous variable); schedule of incentive (same value each week, or value increases as behaviour change is sustained); other information provided (none, written information, face-to-face discussion, or both); and recipients (all eligible individuals, people living in low-income households, or pregnant women). Results Cash or shopping voucher incentives were preferred as much as, or more than, no incentive in all cases. Lower value incentives and those offered to all eligible individuals were preferred. Preferences for additional information provided alongside incentives varied between behaviours. Younger participants and men were more likely to prefer incentives. There were no clear differences in preference according to educational attainment. Conclusions Cash or shopping voucher
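
    In a discrete choice experiment of this kind, preferences are typically estimated with a logit-family model in which each respondent chooses the alternative with the highest utility; under the conditional logit assumption the choice probabilities take the familiar form below (standard background, not a formula quoted from the paper).

```latex
U_{ij} = \beta' x_{ij} + \varepsilon_{ij}, \qquad
\Pr(i \text{ chooses } j) = \frac{\exp(\beta' x_{ij})}{\sum_{k \in C_i} \exp(\beta' x_{ik})},
```

    where $x_{ij}$ collects the incentive attributes (type, value, schedule, accompanying information, recipient group) and $C_i$ is the choice set shown to respondent $i$.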

  6. Microbicide Acceptability Among High-Risk Urban U.S. Women: Experiences and Perceptions of Sexually Transmitted HIV Prevention

    PubMed Central

    Weeks, Margaret R.; Mosack, Katie E.; Abbott, Maryann; Sylla, Laurie Novick; Valdes, Barbara; Prince, Mary

    2005-01-01

    SUMMARY A study of microbicide acceptability among high-risk African American, Puerto Rican, non-Hispanic White, and other women in Hartford, Connecticut indicated limited experience with vaginal contraceptives but significant interest in and willingness to adopt vaginal microbicides for HIV/STI prevention. Objectives To measure microbicide acceptability among high-risk women in Hartford, Connecticut and contextual factors likely to affect acceptability and use. Goal To assess usefulness of microbicides for HIV/STI prevention for high-risk women. Study Design Ethnographic interviews (n=75) and a survey (n=471) explored women’s perspectives on HIV/STI prevention, vaginal contraceptives similar to microbicides, and microbicide acceptability. Participants (n=94) in a two-week behavioral trial used an over-the-counter vaginal moisturizer to simulate microbicide use during sex with primary, casual, and/or paying partners. Results Findings showed limited experience with vaginal contraceptives, but high interest in microbicides as an alternative to condoms, indicated by an acceptability index score of 2.73 (SD .49, scale 1–4) in the overall sample. General microbicide acceptability varied by ethnicity, prior contraceptive and violence/abuse experiences, relationship power, and other attitudinal factors. The simulation trial indicated significant willingness to use the product in various locations and with all types of partners. Conclusions Vaginal microbicides may improve prevention outcomes for high-risk inner-city women. PMID:15502677

  7. Relationship of maternal and general self-acceptance to pre- and postpartum affective experience.

    PubMed

    Dimitrovsky, L; Lev, S; Itskowitz, R

    1998-09-01

    Forty-nine married primiparous Israeli women responded to W. W. K. Zung's (1965) Self-Rating Depression Scale, N. M. Bradburn's (1969) Affect Balance Scale, and measures of general and maternal self-acceptance during the last trimester of pregnancy and again 6 to 8 weeks following childbirth. There was a significant decrease in depression from pre- to postpartum for the total group. Women high in general self-acceptance were less depressed and displayed less negative affect than those low in general self-acceptance. There were no corresponding differences between the high and low maternal self-acceptance groups. Both pre- and postpartum women tended to rate themselves significantly higher for maternal self-acceptance than for general self-acceptance.

  8. Benchmarking: a method for continuous quality improvement in health.

    PubMed

    Ettorchi-Tardy, Amina; Levif, Marie; Michel, Philippe

    2012-05-01

    Benchmarking, a management approach for implementing best practices at best cost, is a recent concept in the healthcare system. The objectives of this paper are to better understand the concept and its evolution in the healthcare sector, to propose an operational definition, and to describe some French and international experiences of benchmarking in the healthcare sector. To this end, we reviewed the literature on this approach's emergence in the industrial sector, its evolution, its fields of application and examples of how it has been used in the healthcare sector. Benchmarking is often thought to consist simply of comparing indicators and is not perceived in its entirety, that is, as a tool based on voluntary and active collaboration among several organizations to create a spirit of competition and to apply best practices. The key feature of benchmarking is its integration within a comprehensive and participatory policy of continuous quality improvement (CQI). Conditions for successful benchmarking focus essentially on careful preparation of the process, monitoring of the relevant indicators, staff involvement and inter-organizational visits. Compared to methods previously implemented in France (CQI and collaborative projects), benchmarking has specific features that set it apart as a healthcare innovation. This is especially true for healthcare or medical-social organizations, as the principle of inter-organizational visiting is not part of their culture. Thus, this approach will need to be assessed for feasibility and acceptability before it is more widely promoted.

  9. Benchmarking: A Method for Continuous Quality Improvement in Health

    PubMed Central

    Ettorchi-Tardy, Amina; Levif, Marie; Michel, Philippe

    2012-01-01

    Benchmarking, a management approach for implementing best practices at best cost, is a recent concept in the healthcare system. The objectives of this paper are to better understand the concept and its evolution in the healthcare sector, to propose an operational definition, and to describe some French and international experiences of benchmarking in the healthcare sector. To this end, we reviewed the literature on this approach's emergence in the industrial sector, its evolution, its fields of application and examples of how it has been used in the healthcare sector. Benchmarking is often thought to consist simply of comparing indicators and is not perceived in its entirety, that is, as a tool based on voluntary and active collaboration among several organizations to create a spirit of competition and to apply best practices. The key feature of benchmarking is its integration within a comprehensive and participatory policy of continuous quality improvement (CQI). Conditions for successful benchmarking focus essentially on careful preparation of the process, monitoring of the relevant indicators, staff involvement and inter-organizational visits. Compared to methods previously implemented in France (CQI and collaborative projects), benchmarking has specific features that set it apart as a healthcare innovation. This is especially true for healthcare or medical–social organizations, as the principle of inter-organizational visiting is not part of their culture. Thus, this approach will need to be assessed for feasibility and acceptability before it is more widely promoted. PMID:23634166

  10. Growth and Expansion of the International Criticality Safety Benchmark Evaluation Project and the Newly Organized International Reactor Physics Experiment Evaluation Project

    SciTech Connect

    J. Blair Briggs; Lori Scott; Yolanda Rugama; Enrico Satori

    2007-05-01

    Since ICNC 2003, the International Criticality Safety Benchmark Evaluation Project (ICSBEP) has continued to expand its efforts and broaden its scope. Criticality-alarm/shielding-type benchmarks and fundamental physics measurements that are relevant to criticality safety applications are not only included in the scope of the project, but benchmark data are also included in the latest version of the handbook. A considerable number of improvements have been made to the searchable database, DICE, and the criticality-alarm/shielding benchmarks and fundamental physics measurements have been included in the database. There were 12 countries participating in the ICSBEP in 2003. That number has increased to 18 with recent contributions of data and/or resources from Brazil, Czech Republic, Poland, India, Canada, and China. South Africa, Germany, Argentina, and Australia have been invited to participate. Since ICNC 2003, the contents of the “International Handbook of Evaluated Criticality Safety Benchmark Experiments” have increased from 350 evaluations (28,000 pages) containing benchmark specifications for 3070 critical or subcritical configurations to 442 evaluations (over 38,000 pages) containing benchmark specifications for 3957 critical or subcritical configurations, 23 criticality-alarm-placement/shielding configurations with multiple dose points for each, and 20 configurations that have been categorized as fundamental physics measurements that are relevant to criticality safety applications in the 2006 Edition of the ICSBEP Handbook. Approximately 30 new evaluations and 250 additional configurations are expected to be added to the 2007 Edition of the Handbook. Since ICNC 2003, a reactor physics counterpart to the ICSBEP, the International Reactor Physics Experiment Evaluation Project (IRPhEP), was initiated. Beginning in 1999, the IRPhEP was conducted as a pilot activity by the Organization for Economic Cooperation and Development (OECD) Nuclear Energy

  11. Orientation of Oblique Airborne Image Sets - Experiences from the Isprs/eurosdr Benchmark on Multi-Platform Photogrammetry

    NASA Astrophysics Data System (ADS)

    Gerke, M.; Nex, F.; Remondino, F.; Jacobsen, K.; Kremer, J.; Karel, W.; Hu, H.; Ostrowski, W.

    2016-06-01

    During the last decade the use of airborne multi-camera systems has increased significantly. Developments in digital camera technology allow several mid- or small-format cameras to be mounted efficiently onto one platform and thus enable image capture under different angles. Those oblique images turn out to be interesting for a number of applications since lateral parts of elevated objects, like buildings or trees, are visible. However, occlusion or illumination differences might challenge image processing. From an image orientation point of view, those multi-camera systems bring the advantage of a better ray intersection geometry compared to nadir-only image blocks. On the other hand, varying scale, occlusion and atmospheric influences which are difficult to model impose problems on the image matching and bundle adjustment tasks. In order to understand current limitations of image orientation approaches and the influence of different parameters such as image overlap or GCP distribution, a commonly available dataset was released. The originally captured data comprise a state-of-the-art image block with very high overlap, but in the first stage of the so-called ISPRS/EUROSDR benchmark on multi-platform photogrammetry only a reduced set of images was released. In this paper some first results obtained with this dataset are presented. They refer to different aspects such as tie point matching across the viewing directions, the influence of the oblique images on the bundle adjustment, and the role of image overlap and GCP distribution. As far as the tie point matching is concerned, we observed that matching of overlapping images pointing to the same cardinal direction, or between nadir and oblique views in general, is quite successful. Due to the quite different perspective between images of different viewing directions, standard tie point matching, for instance based on interest points, does not work well. How to address occlusion and ambiguities due to different views onto
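
    As a hedged illustration of the kind of interest-point pipeline such a benchmark exercises (not the specific matchers evaluated in the paper), the sketch below matches two images with ORB features and a ratio test using OpenCV; the file names are placeholders.

```python
import cv2

# Placeholder file names; any nadir/oblique image pair would do
img_nadir = cv2.imread("nadir.tif", cv2.IMREAD_GRAYSCALE)
img_oblique = cv2.imread("oblique.tif", cv2.IMREAD_GRAYSCALE)

orb = cv2.ORB_create(nfeatures=5000)
kp1, des1 = orb.detectAndCompute(img_nadir, None)
kp2, des2 = orb.detectAndCompute(img_oblique, None)

# Brute-force Hamming matching with a Lowe-style ratio test
matcher = cv2.BFMatcher(cv2.NORM_HAMMING)
raw = matcher.knnMatch(des1, des2, k=2)
good = []
for pair in raw:
    if len(pair) == 2 and pair[0].distance < 0.75 * pair[1].distance:
        good.append(pair[0])

# Large view-angle differences between nadir and oblique images typically
# leave few surviving matches, which is the limitation discussed above.
print(f"{len(good)} tie-point candidates survive the ratio test")
```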

  12. Applications of Integral Benchmark Data

    SciTech Connect

    Giuseppe Palmiotti; Teruhiko Kugo; Fitz Trumble; Albert C. Kahler; Dale Lancaster

    2014-10-09

    The International Reactor Physics Experiment Evaluation Project (IRPhEP) and the International Criticality Safety Benchmark Evaluation Project (ICSBEP) provide evaluated integral benchmark data that may be used for validation of reactor physics / nuclear criticality safety analytical methods and data, nuclear data testing, advanced modeling and simulation, and safety analysis licensing activities. The handbooks produced by these programs are used in over 30 countries. Five example applications are presented in this paper: (1) Use of IRPhEP Data in Uncertainty Analyses and Cross Section Adjustment, (2) Uncertainty Evaluation Methods for Reactor Core Design at JAEA Using Reactor Physics Experimental Data, (3) Application of Benchmarking Data to a Broad Range of Criticality Safety Problems, (4) Cross Section Data Testing with ICSBEP Benchmarks, and (5) Use of the International Handbook of Evaluated Reactor Physics Benchmark Experiments to Support the Power Industry.

  13. Evaluation of the concrete shield compositions from the 2010 criticality accident alarm system benchmark experiments at the CEA Valduc SILENE facility

    SciTech Connect

    Miller, Thomas Martin; Celik, Cihangir; Dunn, Michael E; Wagner, John C; McMahan, Kimberly L; Authier, Nicolas; Jacquet, Xavier; Rousseau, Guillaume; Wolff, Herve; Savanier, Laurence; Baclet, Nathalie; Lee, Yi-kang; Trama, Jean-Christophe; Masse, Veronique; Gagnier, Emmanuel; Naury, Sylvie; Blanc-Tranchant, Patrick; Hunter, Richard; Kim, Soon; Dulik, George Michael; Reynolds, Kevin H.

    2015-01-01

    In October 2010, a series of benchmark experiments were conducted at the French Commissariat a l'Energie Atomique et aux Energies Alternatives (CEA) Valduc SILENE facility. These experiments were a joint effort between the United States Department of Energy Nuclear Criticality Safety Program and the CEA. The purpose of these experiments was to create three benchmarks for the verification and validation of radiation transport codes and evaluated nuclear data used in the analysis of criticality accident alarm systems. This series of experiments consisted of three single-pulsed experiments with the SILENE reactor. For the first experiment, the reactor was bare (unshielded), whereas in the second and third experiments, it was shielded by lead and polyethylene, respectively. The polyethylene shield of the third experiment had a cadmium liner on its internal and external surfaces, which vertically was located near the fuel region of SILENE. During each experiment, several neutron activation foils and thermoluminescent dosimeters (TLDs) were placed around the reactor. Nearly half of the foils and TLDs had additional high-density magnetite concrete, high-density barite concrete, standard concrete, and/or BoroBond shields. CEA Saclay provided all the concrete, and the US Y-12 National Security Complex provided the BoroBond. Measurement data from the experiments were published at the 2011 International Conference on Nuclear Criticality (ICNC 2011) and the 2013 Nuclear Criticality Safety Division (NCSD 2013) topical meeting. Preliminary computational results for the first experiment were presented in the ICNC 2011 paper, which showed poor agreement between the computational results and the measured values of the foils shielded by concrete. Recently the hydrogen content, boron content, and density of these concrete shields were further investigated within the constraints of the previously available data. New computational results for the first experiment are now available that

  14. High School Students' Perceptions of Evolution Instruction: Acceptance and Evolution Learning Experiences

    ERIC Educational Resources Information Center

    Donnelly, Lisa A.; Kazempour, Mahsa; Amirshokoohi, Aidin

    2009-01-01

    Evolution is an important and sometimes controversial component of high school biology. In this study, we used a mixed methods approach to explore students' evolution acceptance and views of evolution teaching and learning. Students explained their acceptance and rejection of evolution in terms of evidence and conflicts with religion and…

  15. A Quantitative Examination of User Experience as an Antecedent to Student Perception in Technology Acceptance Modeling

    ERIC Educational Resources Information Center

    Butler, Rory

    2013-01-01

    Internet-enabled mobile devices have increased the accessibility of learning content for students. Given the ubiquitous nature of mobile computing technology, a thorough understanding of the acceptance factors that impact a learner's intention to use mobile technology as an augment to their studies is warranted. Student acceptance of mobile…

  16. Effects of an Educational Experience Incorporating an Inventory of Factors Potentially Influencing Student Acceptance of Biological Evolution

    NASA Astrophysics Data System (ADS)

    Wiles, Jason R.; Alters, Brian

    2011-12-01

    This investigation provides an extensive review of scientific, religious, and otherwise non-scientific factors that may influence student acceptance of biological evolution. We also measure the extent to which students' levels of acceptance changed following an educational experience designed to address an inclusive inventory of factors identified as potentially affecting student acceptance of evolution (n = 81, pre-test/post-test; n = 37, one-year longitudinal). Acceptance of evolution was measured using the Measure of Acceptance of the Theory of Evolution (MATE) instrument among participants enrolled in a secondary-level academic programme during the summer prior to their final year of high school and as they transitioned to the post-secondary level. Student acceptance of evolution was measured to be significantly higher than initial levels both immediately following and over one year after the educational experience. Results reported herein carry implications for future quantitative and qualitative research as well as for cross-disciplinary instruction plans related to evolutionary science and non-scientific factors which may influence student understanding of evolution.

  17. FLOWTRAN benchmarking with onset of flow instability data from 1988 Columbia University single-tube OFI experiment

    SciTech Connect

    Chen, K.; Paul, P.K.; Barbour, K.L.

    1990-06-01

    Benchmarking FLOWTRAN, Version 16.2, with an Onset of Significant Voiding (OSV) criterion against measured Onset of Flow Instability (OFI) data from the 1988–89 Columbia University downflow tests has shown that FLOWTRAN with OSV is a conservative OFI predictor. Calculated limiting flow rates based on the Savannah River Site (SRS) OSV criterion were always higher than the measured flow rates at OFI. This work supplements recent FLOWTRAN benchmarking against 1963 downflow tests at Columbia University and 1988 downflow tests at the Heat Transfer Laboratory. These studies provide confidence that using FLOWTRAN with an OSV-based criterion for SRS reactor limits analyses will generate operating limits that are conservative with respect to OFI, the criterion selected to prevent fuel damage.

  18. Final acceptance of the 200 GHz telescope unit for the QUIJOTE CMB experiment

    NASA Astrophysics Data System (ADS)

    Sanquirce-García, R.; Sainz-Pardo, I.; Etxeita-Arriaga, B.; Murga-Llano, Gaizka; Fernandez-Santos, E.; Sánchez-de-la-Rosa, V.; Viera-Curbelo, T. A.; Gómez-Reñasco, F.; Aguiar-González, M.; Pérez de Taoro, M. R.; Rubiño-Martín, J. A.

    2016-07-01

    The QUIJOTE (Q-U-I JOint TEnerife) experiment is a scientific collaboration, led by the Instituto de Astrofísica de Canarias (IAC), with the aim of measuring the polarization of the Cosmic Microwave Background (CMB) in the frequency range 10-40 GHz and at large angular scales (around 1°). The project is composed of 2 telescopes and 3 instruments, located at Teide Observatory (Tenerife, Spain). Idom's contribution to this project is divided into two phases. Phase I consisted of the design, assembly and factory testing of the first telescope (2008), the integration and functional tests for the 5 polarimeters of the first instrument (2009), and the design and construction supervision of the building which protects both telescopes (2009), including the installation and commissioning of the mechanism for dome apertures. Phase II comprised the design, factory assembly and testing, transport and final commissioning on site of the second telescope, which finished in January 2015. The optical design of both telescopes should allow them to reach up to 200 GHz. The required opto-mechanical performance was checked under nominal conditions, reaching a pointing and tracking accuracy lower than 5 arcsec in both axes, 8 times better than specified. Particular inspections and tests were carried out for critical systems, such as the rotary joint that transmits fluid, power and signal to the rotary elements, and the safety system that ensures personnel and hardware protection under emergency conditions. This paper contains a comprehensive description of the power electronics and acquisition/control design required for safe operation under nominal and emergency conditions, as well as a detailed description of the factory and observatory tests required for the final acceptance of the telescope.

  19. Testing (Validating?) Cross Sections with ICSBEP Benchmarks

    SciTech Connect

    Kahler, Albert C. III

    2012-06-28

    We discuss how to use critical benchmarks from the International Handbook of Evaluated Criticality Safety Benchmark Experiments to determine the applicability of specific cross sections to the end-user's problem of interest. Particular attention is paid to making sure the selected suite of benchmarks includes the user's range of applicability (ROA).

  20. Phase-covariant quantum benchmarks

    SciTech Connect

    Calsamiglia, J.; Aspachs, M.; Munoz-Tapia, R.; Bagan, E.

    2009-05-15

    We give a quantum benchmark for teleportation and quantum storage experiments suited for pure and mixed test states. The benchmark is based on the average fidelity over a family of phase-covariant states and certifies that an experiment cannot be emulated by a classical setup, i.e., by a measure-and-prepare scheme. We give an analytical solution for qubits, which shows important differences with standard state estimation approach, and compute the value of the benchmark for coherent and squeezed states, both pure and mixed.
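
    For context, a teleportation or storage benchmark of this type is an average input-output fidelity computed over a specified family of test states; an experiment beats any classical measure-and-prepare strategy if it exceeds the classical optimum for that family. A generic form of the figure of merit (standard background, not the paper's specific expressions) is:

```latex
\bar{F} = \int \mathrm{d}\mu(\psi)\, \langle \psi |\, \mathcal{E}\bigl(|\psi\rangle\langle\psi|\bigr) \,| \psi \rangle
\;>\; F_{\mathrm{cl}} ,
```

    where $\mathcal{E}$ is the channel realized by the experiment, $\mathrm{d}\mu$ is the distribution over the phase-covariant test states, and $F_{\mathrm{cl}}$ is the best average fidelity attainable by any measure-and-prepare scheme.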

  1. Coming to terms with it all: adult burn survivors' 'lived experience' of acknowledgement and acceptance during rehabilitation.

    PubMed

    Kornhaber, R; Wilson, A; Abu-Qamar, M Z; McLean, L

    2014-06-01

    Although studies have explored the 'lived experience' of burn survivors, little is known about the experiences they encounter during rehabilitation. A descriptive phenomenological study was conducted to gain an in-depth insight into burn survivors' experiences of acknowledgement and acceptance of their injury and the challenges experienced during their rehabilitation journey. A descriptive phenomenological methodology was used to construct themes depicting how burn survivors endeavoured to acknowledge and accept their injury and subsequent altered body image. Twenty men and one woman, up to eight years post-burn, were selected in Australia through purposeful sampling, and data were collected through in-depth individual interviews conducted in 2011 (N = 21). Interviews were analysed using Colaizzi's method of data analysis. The emergent theme of acknowledgement identified four cluster themes that represented how burn survivors came to terms with their injury and an altered body image: (1) reasoning, (2) humour, (3) the challenge of acceptance and (4) self-awareness. Coming to terms with a severe burn is a challenging experience. Reasoning and humour are strategies utilised by burn survivors that facilitate acknowledgement and acceptance. Understanding these concepts from the burn survivors' perspective will, potentially, facilitate a better understanding of how best to provide for this cohort of patients.

  2. A Qualitative Case Study to Investigate the Technology Acceptance Experience Outlined in the TAM Using the Kubler-Ross Stages of Grieving and Acceptance

    ERIC Educational Resources Information Center

    Sotelo, Benjamin Eladio

    2015-01-01

    The Technology Acceptance Model (TAM) has been an important model for the understanding of end user acceptance regarding technology and a framework used in thousands of researched scenarios since publication in 1986. Similarly, the Kubler-Ross model of death and dying has also been used as a model for the study of acceptance within the medical…

  3. The ORSphere Benchmark Evaluation and Its Potential Impact on Nuclear Criticality Safety

    SciTech Connect

    John D. Bess; Margaret A. Marshall; J. Blair Briggs

    2013-10-01

    In the early 1970s, critical experiments using an unreflected metal sphere of highly enriched uranium (HEU) were performed with the focus to provide a “very accurate description…as an ideal benchmark for calculational methods and cross-section data files.” Two near-critical configurations of the Oak Ridge Sphere (ORSphere) were evaluated as acceptable benchmark experiments for inclusion in the International Handbook of Evaluated Criticality Safety Benchmark Experiments (ICSBEP Handbook). The results from those benchmark experiments were then compared with additional unmoderated and unreflected HEU metal benchmark experiment configurations currently found in the ICSBEP Handbook. For basic geometries (spheres, cylinders, and slabs) the eigenvalues calculated using MCNP5 and ENDF/B-VII.0 were within 3σ of their respective benchmark values. There appears to be generally good agreement between calculated and benchmark values for spherical and slab geometry systems. Cylindrical geometry configurations tended to calculate low, including more complex bare HEU metal systems containing cylinders. The ORSphere experiments do not calculate within their 1σ uncertainty, and there is a possibility that the effect of the measured uncertainties for the GODIVA I benchmark may need to be reevaluated. There is significant scatter in the calculations for the highly-correlated ORCEF cylinder experiments, which are constructed from close-fitting HEU discs and annuli. Selection of a nuclear data library can have a larger impact on calculated eigenvalue results than the variation found within calculations of a given experimental series, such as the ORCEF cylinders, using a single nuclear data set.
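
    Agreement statements such as "within 3σ" compare the calculation-minus-benchmark difference against the combined uncertainty of the two values. A minimal sketch, with hypothetical numbers rather than the ORSphere results:

```python
import math

def sigma_agreement(k_calc, sig_calc, k_bench, sig_bench):
    """Difference between calculated and benchmark keff in units of combined 1-sigma uncertainty."""
    return abs(k_calc - k_bench) / math.sqrt(sig_calc**2 + sig_bench**2)

# Hypothetical values for illustration only
z = sigma_agreement(k_calc=0.9990, sig_calc=0.0002, k_bench=1.0000, sig_bench=0.0011)
print(f"{z:.1f} sigma", "-> within 3 sigma" if z <= 3 else "-> outside 3 sigma")
```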

  4. High School Students' Perceptions of Evolution Instruction: Acceptance and Evolution Learning Experiences

    NASA Astrophysics Data System (ADS)

    Donnelly, Lisa A.; Kazempour, Mahsa; Amirshokoohi, Aidin

    2009-11-01

    Evolution is an important and sometimes controversial component of high school biology. In this study, we used a mixed methods approach to explore students’ evolution acceptance and views of evolution teaching and learning. Students explained their acceptance and rejection of evolution in terms of evidence and conflicts with religion and authority. Students largely supported the teaching of evolution and offered several reasons for its inclusion in high school biology. Students also offered several suggestions for improving evolution instruction. Evolution acceptors’ and rejecters’ views of evolution teaching and learning differed in a number of respects, and these differences may be explained using border crossing as a theoretical lens. Relevant implications for evolution instruction are discussed.

  5. Benchmarking in Student Affairs.

    ERIC Educational Resources Information Center

    Mosier, Robert E.; Schwarzmueller, Gary J.

    2002-01-01

    Discusses the use of benchmarking in student affairs, focusing on issues related to student housing. Provides examples of how benchmarking has influenced administrative practice at many institutions. (EV)

  6. Correlation of nuclear criticality safety computer codes with plutonium benchmark experiments and derivation of subcritical limits. [MGBS, TGAN, KEFF, HRXN, GLASS, ANISN, SPBL, and KENO

    SciTech Connect

    Clark, H.K.

    1981-10-01

    A compilation of benchmark critical experiments was made for essentially one-dimensional systems containing plutonium. The systems consist of spheres, series of experiments with cylinders and cuboids that permit extrapolation to infinite cylinders and slabs, and large cylinders for which separability of the neutron flux into a product of spatial components is a good approximation. Data from the experiments were placed in a form readily usable as computer code input. Aqueous solutions of Pu(NO₃)₄ are treated as solutions of PuO₂ in nitric acid. The apparent molal volume of PuO₂ as a function of plutonium concentration was derived from analyses of solution density data and was incorporated in the Savannah River Laboratory computer codes along with density tables for nitric acid. The biases of three methods of calculation were established by correlation with the benchmark experiments. The oldest method involves two-group diffusion theory and has been used extensively at the Savannah River Laboratory. The other two involve Sₙ transport theory with, in one method, Hansen-Roach cross sections and, in the other, cross sections derived from ENDF/B-IV. Subcritical limits were calculated by all three methods. Significant differences were found among the results and between the results and limits currently in the American National Standard for Nuclear Criticality Safety in Operations with Fissionable Materials Outside Reactor (ANSI N16.1), which were calculated by yet another method, despite the normalization of all four methods to the same experimental data. The differences were studied, and a set of subcritical limits was proposed to supplement and in some cases to replace those in the ANSI Standard, which is currently being reviewed.

  7. Laser-plasma interaction in ignition relevant plasmas: benchmarking our 3D modelling capabilities versus recent experiments

    SciTech Connect

    Divol, L; Froula, D H; Meezan, N; Berger, R; London, R A; Michel, P; Glenzer, S H

    2007-09-27

    We have developed a new target platform to study laser-plasma interaction under ignition-relevant conditions at the Omega laser facility (LLE/Rochester) [1]. By shooting an interaction beam along the axis of a gas-filled hohlraum heated by up to 17 kJ of heater beam energy, we were able to create a millimeter-scale underdense uniform plasma at electron temperatures above 3 keV. Extensive Thomson scattering measurements allowed us to benchmark our hydrodynamic simulations performed with HYDRA [1]. As a result of this effort, we can use these simulations with much confidence as input parameters for our LPI simulation code pF3d [2]. In this paper, we show that by using accurate hydrodynamic profiles and full three-dimensional simulations including a realistic modeling of the laser intensity pattern generated by various smoothing options, fluid LPI theory reproduces the SBS thresholds and absolute reflectivity values and the absence of measurable SRS. This good agreement was made possible by the recent increase in computing power routinely available for such simulations.

  8. Vegetable and Fruit Acceptance during Infancy: Impact of Ontogeny, Genetics, and Early Experiences.

    PubMed

    Mennella, Julie A; Reiter, Ashley R; Daniels, Loran M

    2016-01-01

    Many of the chronic illnesses that plague modern society derive in large part from poor food choices. Thus, it is not surprising that the Dietary Guidelines for Americans, aimed at the population ≥2 y of age, recommends limiting consumption of salt, fat, and simple sugars, all of which have sensory properties that we humans find particularly palatable, and increasing the variety and contribution of fruits and vegetables in the diet, to promote health and prevent disease. Similar recommendations may soon be targeted at even younger Americans: the B-24 Project, led by the US Department of Health and Human Services and the USDA, is currently evaluating evidence to include infants and children from birth to 2 y of age in the dietary guidelines. This article reviews the underinvestigated behavioral phenomena surrounding how to introduce vegetables and fruits into infants' diets, for which there is much medical lore but, to our knowledge, little evidence-based research. Because the chemical senses are the major determinants of whether young children will accept a food (e.g., they eat only what they like), these senses take on even greater importance in understanding the bases for food choices in children. We focus on early life, in contrast with many other studies that attempt to modify food habits in older children and thus may miss sensitive periods that modulate long-term acceptance. Our review also takes into consideration ontogeny and sources of individual differences in taste perception, in particular, the role of genetic variation in bitter taste perception.

  9. Benchmarks for GADRAS performance validation.

    SciTech Connect

    Mattingly, John K.; Mitchell, Dean James; Rhykerd, Charles L., Jr.

    2009-09-01

    The performance of the Gamma Detector Response and Analysis Software (GADRAS) was validated by comparing GADRAS model results to experimental measurements for a series of benchmark sources. Sources for the benchmark include a plutonium metal sphere, bare and shielded in polyethylene, plutonium oxide in cans, a highly enriched uranium sphere, bare and shielded in polyethylene, a depleted uranium shell and spheres, and a natural uranium sphere. The benchmark experimental data were previously acquired and consist of careful collection of background and calibration source spectra along with the source spectra. The calibration data were fit with GADRAS to determine response functions for the detector in each experiment. A one-dimensional model (pie chart) was constructed for each source based on the dimensions of the benchmark source. The GADRAS code made a forward calculation from each model to predict the radiation spectrum for the detector used in the benchmark experiment. The comparisons between the GADRAS calculation and the experimental measurements are excellent, validating that GADRAS can correctly predict the radiation spectra for these well-defined benchmark sources.

  10. Effects of azimuth-symmetric acceptance cutoffs on the measured asymmetry in unpolarized Drell-Yan fixed-target experiments

    NASA Astrophysics Data System (ADS)

    Bianconi, A.; Bussa, M. P.; Destefanis, M.; Ferrero, L.; Greco, M.; Maggiora, M.; Spataro, S.

    2013-04-01

    Fixed-target unpolarized Drell-Yan experiments often feature an acceptance depending on the polar angle of the lepton tracks in the laboratory frame. Typically leptons are detected in a defined angular range, with a dead zone in the forward region. If the cutoffs imposed by the angular acceptance are independent of the azimuth, at first sight they do not appear dangerous for a measurement of the cos(2 φ) asymmetry, which is relevant because of its association with the violation of the Lam-Tung rule and with the Boer-Mulders function. On the contrary, direct simulations show that up to 10 percent asymmetries are produced by these cutoffs. These artificial asymmetries present qualitative features that allow them to mimic the physical ones. They introduce some model dependence in the measurements of the cos(2 φ) asymmetry, since a precise reconstruction of the acceptance in the Collins-Soper frame requires a Monte Carlo simulation, that in turn requires some detailed physical input to generate event distributions. Although experiments in the eighties seem to have been aware of this problem, the possibility of using the Boer-Mulders function as an input parameter in the extraction of transversity has much increased the requirements of precision on this measurement. Our simulations show that the safest approach to these measurements is a strong cutoff on the Collins-Soper polar angle. This reduces statistics, but does not necessarily decrease the precision in a measurement of the Boer-Mulders function.
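
    For reference, the cos(2φ) asymmetry discussed here is the ν coefficient of the standard unpolarized Drell-Yan angular distribution, conventionally written in the Collins-Soper frame as below (standard notation, not a formula reproduced from the paper); the Lam-Tung relation corresponds to 1 - λ = 2ν.

```latex
\frac{d\sigma}{d\Omega} \;\propto\; 1 + \lambda \cos^2\theta + \mu \sin 2\theta \cos\phi + \frac{\nu}{2} \sin^2\theta \cos 2\phi ,
```

    where $\theta$ and $\phi$ are the lepton polar and azimuthal angles in the Collins-Soper frame; an azimuth-symmetric laboratory acceptance can still distort the measured $\nu$ even though the cut carries no explicit $\phi$ dependence.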

  11. Large Area Crop Inventory Experiment (LACIE). Review of LACIE methodology, a project evaluation of technical acceptability

    NASA Technical Reports Server (NTRS)

    1976-01-01

    The author has identified the following significant results. Results indicated that the LANDSAT data and the classification technology can estimate the small grains area within a sample segment accurately and reliably enough to meet the LACIE goals. Overall, the LACIE estimates in a 9 x 11 kilometer segment agree well with ground and aircraft determined area within these segments. The estimated c.v. of the random classification error was acceptably small. These analyses confirmed that bias introduced by various factors, such as LANDSAT spatial resolution, lack of spectral resolution, classifier bias, and repeatability, was not excessive in terms of the required performance criterion. Results of these tests did indicate a difficulty in differentiating wheat from other closely related small grains. However, satisfactory wheat area estimates were obtained through the reduction of the small grain area estimates in accordance with relative amounts of these crops as determined from historic data; these procedures are being further refined.

  12. Preliminary experiments on the acceptability of animaloid companion robots by older people with early dementia.

    PubMed

    Odetti, Luca; Anerdi, Giuseppe; Barbieri, Maria Paola; Mazzei, Debora; Rizza, Elisa; Dario, Paolo; Rodriguez, Guido; Micera, Silvestro

    2007-01-01

    Animaloid companion robots represent a very interesting paradigm. An increasing number of studies on this topic have been carried out in the past, involving such robots and older users affected by some kind of cognitive disease, from mild cognitive impairment (MCI) to more severe stages of Alzheimer's disease and other types of dementia. In the study described in this paper, an AIBO robotic dog was programmed and used to show simple reactive behaviors during interaction with older adults. Experimental sessions were carried out with a group of 24 older subjects with relatively mild cognitive deficits (MMSE > 23). Preliminary results seem to show the acceptability of this approach, especially in subjects with a good relationship with technology. In the near future, the interaction between the robot and older adults will be tested in more complex situations.

  13. Justice and the moral acceptability of rationing medical care: the Oregon experiment.

    PubMed

    Nelson, R M; Drought, T

    1992-02-01

    The Oregon Basic Health Services Act of 1989 seeks to establish universal access to basic medical care for all currently uninsured Oregon residents. To control the increasing cost of medical care, the Oregon plan will restrict funding according to a priority list of medical interventions. The basic level of medical care provided to residents with incomes below the federal poverty line will vary according to the funds made available by the Oregon legislature. A rationing plan such as Oregon's, which potentially excludes medically necessary procedures from the basic level of health care, may be just, for the right to publicly sponsored medical care is restricted by opposing rights of private property. However, the moral acceptability of the Oregon plan cannot be determined without knowing the level of resources to be provided. Finally, Oregon to date has failed to include the individuals being rationed in discussions as to how the scarce resources are to be distributed.

  14. A parallel high-order accurate finite element nonlinear Stokes ice sheet model and benchmark experiments

    SciTech Connect

    Leng, Wei; Ju, Lili; Gunzburger, Max; Price, Stephen; Ringler, Todd

    2012-01-04

    The numerical modeling of glacier and ice sheet evolution is a subject of growing interest, in part because of the potential for models to inform estimates of global sea level change. This paper focuses on the development of a numerical model that determines the velocity and pressure fields within an ice sheet. Our numerical model features a high-fidelity mathematical model involving the nonlinear Stokes system and combinations of no-sliding and sliding basal boundary conditions, high-order accurate finite element discretizations based on variable resolution grids, and highly scalable parallel solution strategies, all of which contribute to a numerical model that can achieve accurate velocity and pressure approximations in a highly efficient manner. We demonstrate the accuracy and efficiency of our model by analytical solution tests, established ice sheet benchmark experiments, and comparisons with other well-established ice sheet models.
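
    For readers unfamiliar with the model class, the velocity and pressure fields in such solvers are typically obtained from the nonlinear Stokes system with a Glen-type power-law viscosity, of the general form below (standard formulation; the paper's exact boundary conditions and parameters are not reproduced here):

```latex
-\nabla \cdot \bigl[ 2\,\eta(\dot{\varepsilon})\, \dot{\varepsilon}(\mathbf{u}) \bigr] + \nabla p = \rho \mathbf{g},
\qquad \nabla \cdot \mathbf{u} = 0,
\qquad \eta = \tfrac{1}{2} A^{-1/n}\, \dot{\varepsilon}_e^{(1-n)/n},
```

    where $\mathbf{u}$ is the velocity, $p$ the pressure, $\dot{\varepsilon}(\mathbf{u})$ the strain-rate tensor with effective value $\dot{\varepsilon}_e$, $A$ the temperature-dependent rate factor, and $n \approx 3$ the Glen flow-law exponent.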

  15. In situ and real time characterization of interface microstructure in 3D alloy solidification: benchmark microgravity experiments in the DECLIC-Directional Solidification Insert on ISS

    NASA Astrophysics Data System (ADS)

    Ramirez, A.; Chen, L.; Bergeon, N.; Billia, B.; Gu, Jiho; Trivedi, R.

    2012-01-01

    Dynamic microstructure formation and selection during solidification processing, which have a major influence on the properties of the processed materials, occur during the growth process. In situ observation of the solid-liquid interface morphology evolution is thus necessary. On Earth, convection effects dominate in bulk samples and may strongly interact with microstructure dynamics and alter pattern characterization. A series of solidification experiments with 3D cylindrical sample geometry was conducted in succinonitrile (SCN)-0.24 wt% camphor (a transparent model system), in a microgravity environment in the Directional Solidification Insert of the DECLIC facility of CNES (the French space agency) on the International Space Station (ISS). Microgravity enabled homogeneous values of the control parameters over the whole interface, allowing homogeneous patterns suitable for quantitative benchmark data to be obtained. First analyses of the characteristics of the pattern (spacing, order, etc.) and of its dynamics in microgravity will be presented.

  16. Vegetable and Fruit Acceptance during Infancy: Impact of Ontogeny, Genetics, and Early Experiences

    PubMed Central

    Mennella, Julie A; Reiter, Ashley R

    2016-01-01

    Many of the chronic illnesses that plague modern society derive in large part from poor food choices. Thus, it is not surprising that the Dietary Guidelines for Americans, aimed at the population ≥2 y of age, recommends limiting consumption of salt, fat, and simple sugars, all of which have sensory properties that we humans find particularly palatable, and increasing the variety and contribution of fruits and vegetables in the diet, to promote health and prevent disease. Similar recommendations may soon be targeted at even younger Americans: the B-24 Project, led by the US Department of Health and Human Services and the USDA, is currently evaluating evidence to include infants and children from birth to 2 y of age in the dietary guidelines. This article reviews the underinvestigated behavioral phenomena surrounding how to introduce vegetables and fruits into infants’ diets, for which there is much medical lore but, to our knowledge, little evidence-based research. Because the chemical senses are the major determinants of whether young children will accept a food (e.g., they eat only what they like), these senses take on even greater importance in understanding the bases for food choices in children. We focus on early life, in contrast with many other studies that attempt to modify food habits in older children and thus may miss sensitive periods that modulate long-term acceptance. Our review also takes into consideration ontogeny and sources of individual differences in taste perception, in particular, the role of genetic variation in bitter taste perception. PMID:26773029

  17. Breeding matters: Natal experience influences population state-dependent host acceptance by an eruptive insect herbivore

    PubMed Central

    2017-01-01

    Eruptive forest insects are highly influential agents of change in forest ecosystems, and their effects have increased with recent climate change. State-dependent life histories contribute significantly to the population dynamics of eruptive forest insect herbivores; however, the proximate mechanisms by which these species shift between states is poorly understood. Laboratory bioassays were conducted using the mountain pine beetle (Dendroctonus ponderosae) to determine the effect of maternal host selection on offspring host preferences, as they apply to population state-dependent behaviors. Female mountain pine beetles exhibited state-dependent preference for artificial host material amended with monoterpenes in the absence of other cues, such that individuals reared in high-density epidemic-state simulations rejected low monoterpene conditions, while low-density endemic-state beetles accepted low monoterpene conditions. State-specific behavior in offspring was dependent on rearing conditions, as a function of maternal host selection, and these effects were observed within one generation. Density-dependent host selection behaviors exhibited by female mountain pine beetle offspring is reinforced by context-dependent maternal effects arising from parental host selection, and in situ exposure to conspecifics. These results demonstrate potential proximate mechanisms that control population dynamics in eruptive forest insects, and will allow for more accurate predictions of continued impact and spread of these species. PMID:28207862

  18. Acceptance testing of the prototype electrometer for the SAMPIE flight experiment

    NASA Technical Reports Server (NTRS)

    Hillard, G. Barry

    1992-01-01

    The Solar Array Module Plasma Interaction Experiment (SAMPIE) has two key instruments at the heart of its data acquisition capability. One of these, the electrometer, is designed to measure both ion and electron current from most of the samples included in the experiment. The accuracy requirement, specified by the project's Principal Investigator, is for agreement within 10 percent with a calibrated laboratory instrument. Plasma chamber testing was performed to assess the capabilities of the prototype design. Agreement was determined to be within 2 percent for electron collection and within 3 percent for ion collection.

  19. The Role of Age and Motivation for the Experience of Social Acceptance and Rejection

    ERIC Educational Resources Information Center

    Nikitin, Jana; Schoch, Simone; Freund, Alexandra M.

    2014-01-01

    A study with n = 55 younger (18-33 years, M = 23.67) and n = 58 older (61-85 years, M = 71.44) adults investigated age-related differences in social approach and avoidance motivation and their consequences for the experience of social interactions. Results confirmed the hypothesis that a predominant habitual approach motivation in younger adults…

  20. Use of Sensitivity and Uncertainty Analysis in the Design of Reactor Physics and Criticality Benchmark Experiments for Advanced Nuclear Fuel

    SciTech Connect

    Rearden, B.T.; Anderson, W.J.; Harms, G.A.

    2005-08-15

    Framatome ANP, Sandia National Laboratories (SNL), Oak Ridge National Laboratory (ORNL), and the University of Florida are cooperating on the U.S. Department of Energy Nuclear Energy Research Initiative (NERI) project 2001-0124 to design, assemble, execute, analyze, and document a series of critical experiments to validate reactor physics and criticality safety codes for the analysis of commercial power reactor fuels consisting of UO₂ with ²³⁵U enrichments ≥5 wt%. The experiments will be conducted at the SNL Pulsed Reactor Facility. Framatome ANP and SNL produced two series of conceptual experiment designs based on typical parameters, such as fuel-to-moderator ratios, that meet the programmatic requirements of this project within the given restraints on available materials and facilities. ORNL used the Tools for Sensitivity and Uncertainty Analysis Methodology Implementation (TSUNAMI) to assess, from a detailed physics-based perspective, the similarity of the experiment designs to the commercial systems they are intended to validate. Based on the results of the TSUNAMI analysis, one series of experiments was found to be preferable to the other and will provide significant new data for the validation of reactor physics and criticality safety codes.
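
    TSUNAMI-style similarity assessments are commonly summarized by an integral index such as c_k, the correlation coefficient between the sensitivity profiles of the experiment and the application weighted by cross-section covariance data. A generic form is shown below as background (hedged; the project's reports should be consulted for the exact indices used):

```latex
c_k \;=\; \frac{ S_A^{\mathsf{T}} \, C_{\alpha\alpha} \, S_B }
               { \sqrt{\, \bigl(S_A^{\mathsf{T}} C_{\alpha\alpha} S_A\bigr)\,\bigl(S_B^{\mathsf{T}} C_{\alpha\alpha} S_B\bigr) \,} } ,
```

    where $S_A$ and $S_B$ are the keff sensitivity vectors of the application and the proposed experiment and $C_{\alpha\alpha}$ is the nuclear data covariance matrix; values near 1 indicate that the experiment stresses the same nuclear data as the application it is meant to validate.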

  1. The NAS parallel benchmarks

    NASA Technical Reports Server (NTRS)

    Bailey, David (Editor); Barton, John (Editor); Lasinski, Thomas (Editor); Simon, Horst (Editor)

    1993-01-01

    A new set of benchmarks was developed for the performance evaluation of highly parallel supercomputers. These benchmarks consist of a set of kernels, the 'Parallel Kernels,' and a simulated application benchmark. Together they mimic the computation and data movement characteristics of large scale computational fluid dynamics (CFD) applications. The principal distinguishing feature of these benchmarks is their 'pencil and paper' specification - all details of these benchmarks are specified only algorithmically. In this way many of the difficulties associated with conventional benchmarking approaches on highly parallel systems are avoided.

  2. Proton Form Factor Puzzle and the CEBAF Large Acceptance Spectrometer (CLAS) two-photon exchange experiment

    NASA Astrophysics Data System (ADS)

    Rimal, Dipak

    The electromagnetic form factors are the most fundamental observables that encode information about the internal structure of the nucleon. The electric (GE) and the magnetic (GM) form factors contain information about the spatial distribution of the charge and magnetization inside the nucleon. A significant discrepancy exists between the Rosenbluth and the polarization transfer measurements of the electromagnetic form factors of the proton. One possible explanation for the discrepancy is the contribution of two-photon exchange (TPE) effects. Theoretical calculations estimating the magnitude of the TPE effect are highly model dependent, and limited experimental evidence for such effects exists. Experimentally, the TPE effect can be measured by comparing the positron-proton elastic scattering cross section to that of the electron-proton [R = sigma(e+p)/sigma(e-p)]. The ratio R was measured over a wide range of kinematics, utilizing a 5.6 GeV primary electron beam produced by the Continuous Electron Beam Accelerator Facility (CEBAF) at Jefferson Lab. This dissertation explored the dependence of R on kinematic variables such as the squared four-momentum transfer (Q2) and the virtual photon polarization parameter (epsilon). A mixed electron-positron beam was produced from the primary electron beam in experimental Hall B. The mixed beam was scattered from a liquid hydrogen (LH2) target. Both the scattered lepton and the recoil proton were detected by the CEBAF Large Acceptance Spectrometer (CLAS). The elastic events were then identified by using elastic scattering kinematics. This work extracted the Q2 dependence of R at high epsilon (epsilon > 0.8) and the epsilon dependence of R at Q2 ≈ 0.85 GeV2. In these kinematics, our data confirm the validity of the hadronic calculations of the TPE effect by Blunden, Melnitchouk, and Tjon. This hadronic TPE effect, with additional corrections contributed by higher excitations of the intermediate state nucleon, largely
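
    For reference, the Rosenbluth separation mentioned above extracts GE and GM from the reduced elastic cross section measured at fixed Q2 while varying epsilon; a minimal sketch of the standard one-photon-exchange relations, in conventional notation not spelled out in the abstract:

      \sigma_R(Q^2, \varepsilon) = \tau \, G_M^2(Q^2) + \varepsilon \, G_E^2(Q^2),
      \qquad \tau = \frac{Q^2}{4 M_p^2},
      \qquad \varepsilon = \left[ 1 + 2 (1 + \tau) \tan^2(\theta_e / 2) \right]^{-1}

    Two-photon exchange perturbs this linear epsilon dependence, and the charge-asymmetry ratio R = \sigma(e^+ p) / \sigma(e^- p) is sensitive to it because the interference between the one-photon and two-photon amplitudes changes sign with the lepton charge.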

  3. SAS Code for Calculating Intraclass Correlation Coefficients and Effect Size Benchmarks for Site-Randomized Education Experiments

    ERIC Educational Resources Information Center

    Brandon, Paul R.; Harrison, George M.; Lawton, Brian E.

    2013-01-01

    When evaluators plan site-randomized experiments, they must conduct the appropriate statistical power analyses. These analyses are most likely to be valid when they are based on data from the jurisdictions in which the studies are to be conducted. In this method note, we provide software code, in the form of a SAS macro, for producing statistical…
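
    The SAS macro itself is not reproduced in the abstract; as a rough illustration of the key quantity it reports, the following Python sketch (hypothetical data, not the authors' code) estimates a one-way random-effects intraclass correlation coefficient from site-clustered outcomes with equal cluster sizes:

      import numpy as np

      def icc_oneway(groups):
          # One-way random-effects ICC(1) from equal-sized clusters
          # (e.g., students nested within schools or sites).
          k, n = len(groups), len(groups[0])
          grand = np.mean([x for g in groups for x in g])
          msb = n * sum((np.mean(g) - grand) ** 2 for g in groups) / (k - 1)
          msw = sum((x - np.mean(g)) ** 2 for g in groups for x in g) / (k * (n - 1))
          return (msb - msw) / (msb + (n - 1) * msw)

      # Hypothetical outcomes for 3 sites with 4 students each
      print(icc_oneway([[10, 12, 11, 13], [20, 19, 21, 22], [15, 14, 16, 15]]))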

  4. The effects of video compression on acceptability of images for monitoring life sciences' experiments

    NASA Technical Reports Server (NTRS)

    Haines, Richard F.; Chuang, Sherry L.

    1993-01-01

    Current plans indicate that there will be a large number of life science experiments carried out during the thirty-year-long mission of the Biological Flight Research Laboratory (BFRL) on board Space Station Freedom (SSF). Non-human life science experiments will be performed in the BFRL. Two distinct types of activities have already been identified for this facility: (1) collect, store, distribute, analyze and manage engineering and science data from the Habitats, Glovebox and Centrifuge, and (2) perform a broad range of remote science activities in the Glovebox and Habitat chambers in conjunction with the remotely located principal investigator (PI). These activities require extensive video coverage, viewing and/or recording and distribution to video displays on board SSF and to the ground. This paper concentrates mainly on the second type of activity. Each of the two BFRL habitat racks is designed to be configurable for either six rodent habitats per rack, four plant habitats per rack, or a combination of the above. Two video cameras will be installed in each habitat, with a spare attachment for a third camera when needed. Therefore, a video system that can accommodate up to 12-18 camera inputs per habitat rack must be considered.

  5. A Uranium Bioremediation Reactive Transport Benchmark

    SciTech Connect

    Yabusaki, Steven B.; Sengor, Sevinc; Fang, Yilin

    2015-06-01

    A reactive transport benchmark problem set has been developed based on in situ uranium bio-immobilization experiments that have been performed at a former uranium mill tailings site in Rifle, Colorado, USA. Acetate-amended groundwater stimulates indigenous microorganisms to catalyze the reduction of U(VI) to a sparingly soluble U(IV) mineral. The interplay between the flow, acetate loading periods and rates, microbially-mediated and geochemical reactions leads to dynamic behavior in metal- and sulfate-reducing bacteria, pH, alkalinity, and reactive mineral surfaces. The benchmark is based on an 8.5 m long one-dimensional model domain with constant saturated flow and uniform porosity. The 159-day simulation introduces acetate and bromide through the upgradient boundary in 14-day and 85-day pulses separated by a 10-day interruption. Acetate loading is tripled during the second pulse, which is followed by a 50-day recovery period. Terminal electron accepting processes for goethite, phyllosilicate Fe(III), U(VI), and sulfate are modeled using Monod-type rate laws. Major ion geochemistry modeled includes mineral reactions, as well as aqueous and surface complexation reactions for UO2++, Fe++, and H+. In addition to the dynamics imparted by the transport of the acetate pulses, U(VI) behavior involves the interplay between bioreduction, which is dependent on acetate availability, and speciation-controlled surface complexation, which is dependent on pH, alkalinity and available surface complexation sites. The general difficulty of this benchmark is the large number of reactions (74), multiple rate law formulations, a multisite uranium surface complexation model, and the strong interdependency and sensitivity of the reaction processes. Results are presented for three simulators: HYDROGEOCHEM, PHT3D, and PHREEQC.
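
    The Monod-type rate laws referred to above couple the microbially mediated terminal electron accepting processes to substrate availability. A minimal sketch of a dual-Monod rate for acetate-driven U(VI) reduction, with hypothetical parameter values rather than those of the benchmark specification:

      def dual_monod_rate(acetate, u_vi, biomass, k_max=1e-5, K_ac=1e-4, K_u=1e-6):
          # Rate (mol/L/s) scales with biomass and saturates in both the
          # electron donor (acetate) and the terminal electron acceptor (U(VI)).
          return k_max * biomass * (acetate / (K_ac + acetate)) * (u_vi / (K_u + u_vi))

      # Forward-Euler update of aqueous U(VI) over one day (hypothetical values)
      dt, u_vi, acetate, biomass = 3600.0, 1e-6, 3e-3, 1e-4
      for _ in range(24):
          u_vi = max(u_vi - dual_monod_rate(acetate, u_vi, biomass) * dt, 0.0)
      print(u_vi)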

  6. Nuclear data verification based on Monte Carlo simulations of the LLNL pulsed-sphere benchmark experiments (1979 & 1986) using the Mercury code

    SciTech Connect

    Descalle, M; Pruet, J

    2008-06-09

    Livermore's nuclear data group developed a new verification and validation test suite to ensure the quality of data used in application codes. This is based on models of LLNL's pulsed sphere fusion shielding benchmark experiments. Simulations were done with Mercury, a 3D particle transport Monte Carlo code using continuous-energy cross-section libraries. Results were compared to measurements of neutron leakage spectra generated by 14 MeV neutrons in 17 target assemblies (for a blank target assembly, H{sub 2}O, Teflon, C, N{sub 2}, Al, Si, Ti, Fe, Cu, Ta, W, Au, Pb, {sup 232}Th, {sup 235}U, {sup 238}U, and {sup 239}Pu). We also tested the fidelity of simulations for photon production associated with neutron interactions in the different materials. Gamma-ray leakage energy per neutron was obtained from a simple 1D spherical geometry assembly and compared to three codes (TART, COG, MCNP5) and several versions of the Evaluated Nuclear Data File (ENDF) and Evaluated Nuclear Data Libraries (ENDL) cross-section libraries. These tests uncovered a number of errors in photon production cross-sections, and were instrumental in the V&V of different cross-section libraries. Development of the pulsed sphere tests also uncovered the need for new Mercury capabilities. To enable simulations of neutron time-of-flight experiments, the nuclear data group implemented an improved treatment of biased angular scattering in MCAPM.

  7. Benchmark experiment for electron-impact ionization of argon: Absolute triple-differential cross sections via three-dimensional electron emission images

    SciTech Connect

    Ren Xueguang; Senftleben, Arne; Pflueger, Thomas; Dorn, Alexander; Ullrich, Joachim; Bartschat, Klaus

    2011-05-15

    Single ionization of argon by 195-eV electron impact is studied in an experiment, where the absolute triple-differential cross sections are presented as three-dimensional electron emission images for a series of kinematic conditions. Thereby a comprehensive set of experimental data for electron-impact ionization of a many-electron system is produced to provide a benchmark for comparison with theoretical predictions. Theoretical models using a hybrid first-order and second-order distorted-wave Born plus R-matrix approach are employed to compare their predictions with the experimental data. While the relative shape of the calculated cross section is generally in reasonable agreement with experiment, the magnitude appears to be the most significant problem with the theoretical treatment for the conditions studied in the present work. This suggests that the most significant challenge in the further development of theory for this process may lie in the reproduction of the absolute scale rather than the angular dependence of the cross section.

  8. Benchmark experiment for the cross section of the 100Mo(p,2n)99mTc and 100Mo(p,pn)99Mo reactions

    NASA Astrophysics Data System (ADS)

    Takács, S.; Ditrói, F.; Aikawa, M.; Haba, H.; Otuka, N.

    2016-05-01

    As the nuclear medicine community has shown an increasing interest in the accelerator-produced 99mTc radionuclide, possible alternative direct production routes for producing 99mTc were investigated intensively. One of these accelerator production routes is based on the 100Mo(p,2n)99mTc reaction. The cross section of this nuclear reaction was studied by several laboratories earlier, but the available data sets are not in good agreement. For large-scale accelerator production of 99mTc based on the 100Mo(p,2n)99mTc reaction, a well-defined excitation function is required to optimise the production process effectively. One of our recent publications pointed out that most of the available experimental excitation functions for the 100Mo(p,2n)99mTc reaction have the same general shape while their amplitudes are different. To confirm the proper amplitude of the excitation function, results of three independent experiments were presented (Takács et al., 2015). In this work we present results of a thick-target count rate measurement of the Eγ = 140.5 keV gamma line from molybdenum irradiated by an Ep = 17.9 MeV proton beam, as an integral benchmark experiment, to confirm the cross section data reported for the 100Mo(p,2n)99mTc and 100Mo(p,pn)99Mo reactions in Takács et al. (2015).
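
    The thick-target count rate serves as an integral benchmark because it folds the excitation function with the slowing-down of protons in molybdenum. A minimal numerical sketch of that fold, using placeholder cross-section and stopping-power tables rather than the evaluated data of Takács et al. (2015):

      import numpy as np

      # Placeholder tables (illustrative only): proton energy grid (MeV),
      # 100Mo(p,2n)99mTc cross section (barn), mass stopping power (MeV cm2/g)
      E = np.linspace(8.0, 17.9, 200)
      sigma_b = np.interp(E, [8.0, 12.0, 16.0, 17.9], [0.05, 0.25, 0.30, 0.28])
      S_mass = np.interp(E, [8.0, 12.0, 16.0, 17.9], [40.0, 30.0, 25.0, 23.0])

      N_A, M_Mo, barn = 6.022e23, 100.0, 1e-24   # atoms/mol, g/mol, cm2

      # Thick-target yield per incident proton: Y = (N_A / M) * integral sigma(E) / S(E) dE,
      # integrated from near the reaction threshold up to the 17.9 MeV entrance energy.
      yield_per_proton = (N_A / M_Mo) * np.trapz(sigma_b * barn / S_mass, E)
      print(yield_per_proton)   # 99mTc nuclei produced per incident proton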

  9. Patient-specific IMRT verification using independent fluence-based dose calculation software: experimental benchmarking and initial clinical experience

    NASA Astrophysics Data System (ADS)

    Georg, Dietmar; Stock, Markus; Kroupa, Bernhard; Olofsson, Jörgen; Nyholm, Tufve; Ahnesjö, Anders; Karlsson, Mikael

    2007-08-01

    Experimental methods are commonly used for patient-specific intensity-modulated radiotherapy (IMRT) verification. The purpose of this study was to investigate the accuracy and performance of independent dose calculation software (denoted as 'MUV' (monitor unit verification)) for patient-specific quality assurance (QA). 52 patients receiving step-and-shoot IMRT were considered. IMRT plans were recalculated by the treatment planning systems (TPS) in a dedicated QA phantom, in which an experimental 1D and 2D verification (0.3 cm3 ionization chamber; films) was performed. Additionally, an independent dose calculation was performed. The fluence-based algorithm of MUV accounts for collimator transmission, rounded leaf ends, tongue-and-groove effect, backscatter to the monitor chamber and scatter from the flattening filter. The dose calculation utilizes a pencil beam model based on a beam quality index. DICOM RT files from patient plans, exported from the TPS, were directly used as patient-specific input data in MUV. For composite IMRT plans, average deviations in the high dose region between ionization chamber measurements and point dose calculations performed with the TPS and MUV were 1.6 ± 1.2% and 0.5 ± 1.1% (1 S.D.). The dose deviations between MUV and TPS slightly depended on the distance from the isocentre position. For individual intensity-modulated beams (total 367), an average deviation of 1.1 ± 2.9% was determined between calculations performed with the TPS and with MUV, with maximum deviations up to 14%. However, absolute dose deviations were mostly less than 3 cGy. Based on the current results, we aim to apply a confidence limit of 3% (with respect to the prescribed dose) or 6 cGy for routine IMRT verification. For off-axis points at distances larger than 5 cm and for low dose regions, we consider 5% dose deviation or 10 cGy acceptable. The time needed for an independent calculation compares very favourably with the net time for an experimental approach
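
    As a simple illustration of how the confidence limits quoted above could be applied in routine verification (a sketch of the acceptance test only, not of the MUV software, and under one plausible reading of the stated limits), a point is accepted when the independent calculation agrees with the TPS within either the percentage limit or the absolute limit:

      def point_dose_ok(d_tps, d_indep, prescribed, off_axis_or_low_dose=False):
          # 3% of the prescribed dose or 6 cGy for standard points;
          # 5% or 10 cGy for off-axis (>5 cm) or low-dose points. Doses in cGy.
          pct_limit, abs_limit = (5.0, 10.0) if off_axis_or_low_dose else (3.0, 6.0)
          diff = abs(d_indep - d_tps)
          return diff <= abs_limit or diff <= pct_limit / 100.0 * prescribed

      print(point_dose_ok(198.0, 203.5, prescribed=200.0))                              # True
      print(point_dose_ok(50.0, 62.0, prescribed=200.0, off_axis_or_low_dose=True))     # False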

  10. NASA Software Engineering Benchmarking Study

    NASA Technical Reports Server (NTRS)

    Rarick, Heather L.; Godfrey, Sara H.; Kelly, John C.; Crumbley, Robert T.; Wifl, Joel M.

    2013-01-01

    Consolidate, collect, and, if needed, develop common processes, principles, and other assets across the Agency in order to provide more consistency in software development and acquisition practices and to reduce the overall cost of maintaining or increasing current NASA CMMI maturity levels. 6. Provide additional support for small projects that includes: (a) guidance for appropriate tailoring of requirements for small projects, (b) availability of suitable tools, including support tool set-up and training, and (c) training for small project personnel, assurance personnel and technical authorities on the acceptable options for tailoring requirements and performing assurance on small projects. 7. Develop software training classes for the more experienced software engineers using on-line training, videos, or small separate modules of training that can be accommodated as needed throughout a project. 8. Create guidelines to structure non-classroom training opportunities such as mentoring, peer reviews, lessons learned sessions, and on-the-job training. 9. Develop a set of predictive software defect data and a process for assessing software testing metric data against it. 10. Assess Agency-wide licenses for commonly used software tools. 11. Fill the knowledge gap in common software engineering practices for new hires and co-ops. 12. Work through the Science, Technology, Engineering and Mathematics (STEM) program with universities in strengthening education in the use of common software engineering practices and standards. 13. Follow up this benchmark study with a deeper look into what both internal and external organizations perceive as the scope of software assurance, the value they expect to obtain from it, and the shortcomings they experience in the current practice. 14. Continue interactions with the external software engineering environment through collaborations, knowledge sharing, and benchmarking.

  11. Impact of Dialectical Behavior Therapy versus Community Treatment by Experts on Emotional Experience, Expression, and Acceptance in Borderline Personality Disorder

    PubMed Central

    Neacsiu, Andrada D.; Lungu, Anita; Harned, Melanie S.; Rizvi, Shireen L.; Linehan, Marsha M.

    2014-01-01

    Evidence suggests that heightened negative affectivity is a prominent feature of Borderline Personality Disorder (BPD) that often leads to maladaptive behaviors. Nevertheless, there is little research examining treatment effects on the experience and expression of specific negative emotions. Dialectical Behavior Therapy (DBT) is an effective treatment for BPD, hypothesized to reduce negative affectivity (Linehan, 1993a). The present study analyzes secondary data from a randomized controlled trial with the aim to assess the unique effectiveness of DBT when compared to Community Treatment by Experts (CTBE) in changing the experience, expression, and acceptance of negative emotions. Suicidal and/or self-injuring women with BPD (n = 101) were randomly assigned to DBT or CTBE for one year of treatment and one year of follow-up. Several indices of emotional experience and expression were assessed. Results indicate that DBT decreased experiential avoidance and expressed anger significantly more than CTBE. No differences between DBT and CTBE were found in improving guilt, shame, anxiety, or anger suppression, trait, and control. These results suggest that DBT has unique effects on improving the expression of anger and experiential avoidance, whereas changes in the experience of specific negative emotions may be accounted for by general factors associated with expert therapy. Implications of the findings are discussed. PMID:24418652

  12. Benchmarking: The New Tool.

    ERIC Educational Resources Information Center

    Stralser, Steven

    1995-01-01

    This article suggests that benchmarking, the process of comparing one's own operation with the very best, can be used to make improvements in colleges and universities. Six steps are outlined: determining what to benchmark, forming a team, discovering who to benchmark, collecting and analyzing data, using the data to redesign one's own operation,…

  13. Benchmarking for Higher Education.

    ERIC Educational Resources Information Center

    Jackson, Norman, Ed.; Lund, Helen, Ed.

    The chapters in this collection explore the concept of benchmarking as it is being used and developed in higher education (HE). Case studies and reviews show how universities in the United Kingdom are using benchmarking to aid in self-regulation and self-improvement. The chapters are: (1) "Introduction to Benchmarking" (Norman Jackson…

  14. A comparison of five benchmarks

    NASA Technical Reports Server (NTRS)

    Huss, Janice E.; Pennline, James A.

    1987-01-01

    Five benchmark programs were obtained and run on the NASA Lewis CRAY X-MP/24. A comparison was made between the programs' codes and between the methods for calculating performance figures. Several multitasking jobs were run to gain experience in how parallel performance is measured.

  15. A Blind Test Experiment in Volcano Geodesy: a Benchmark for Inverse Methods of Ground Deformation and Gravity Data

    NASA Astrophysics Data System (ADS)

    D'Auria, Luca; Fernandez, Jose; Puglisi, Giuseppe; Rivalta, Eleonora; Camacho, Antonio; Nikkhoo, Mehdi; Walter, Thomas

    2016-04-01

    The inversion of ground deformation and gravity data is affected by an intrinsic ambiguity because of the mathematical formulation of the inverse problem. Current methods for the inversion of geodetic data rely on both parametric (i.e. assuming a source geometry) and non-parametric approaches. The former are able to catch the fundamental features of the ground deformation source but, if the assumptions are wrong or oversimplified, they could provide misleading results. On the other hand, the latter class of methods, even if not relying on stringent assumptions, could suffer from artifacts, especially when dealing with poor datasets. In the framework of the EC-FP7 MED-SUV project we aim at comparing different inverse approaches to verify how they cope with basic goals of Volcano Geodesy: determining the source depth, the source shape (size and geometry), the nature of the source (magmatic/hydrothermal) and hinting at the complexity of the source. Other aspects that are important in volcano monitoring are: volume/mass transfer toward shallow depths, propagation of dikes/sills, forecasting the opening of eruptive vents. On the basis of similar experiments already done in the fields of seismic tomography and geophysical imaging, we have devised a blind test experiment. Our group was divided into one model design team and several inversion teams. The model design team devised two physical models representing volcanic events at two distinct volcanoes (one stratovolcano and one caldera). They provided the inversion teams with: the topographic reliefs, the calculated deformation field (on a set of simulated GPS stations and as InSAR interferograms) and the gravity change (on a set of simulated campaign stations). The nature of the volcanic events remained unknown to the inversion teams until after the submission of the inversion results. Here we present the preliminary results of this comparison in order to determine which features of the ground deformation and gravity source

  16. A Blind Test Experiment in Volcano Geodesy: a Benchmark for Inverse Methods of Ground Deformation and Gravity Data

    NASA Astrophysics Data System (ADS)

    D'Auria, L.; Fernandez, J.; Puglisi, G.; Rivalta, E.; Camacho, A. G.; Nikkhoo, M.; Walter, T. R.

    2015-12-01

    The inversion of ground deformation and gravity data is affected by an intrinsic ambiguity because of the mathematical formulation of the inverse problem. Current methods for the inversion of geodetic data rely on both parametric (i.e. assuming a source geometry) and non-parametric approaches. The former are able to catch the fundamental features of the ground deformation source but, if the assumptions are wrong or oversimplified, they could provide misleading results. On the other hand, the latter class of methods, even if not relying on stringent assumptions, could suffer from artifacts, especially when dealing with poor datasets. In the framework of the EC-FP7 MED-SUV project we aim at comparing different inverse approaches to verify how they cope with basic goals of Volcano Geodesy: determining the source depth, the source shape (size and geometry), the nature of the source (magmatic/hydrothermal) and hinting at the complexity of the source. Other aspects that are important in volcano monitoring are: volume/mass transfer toward shallow depths, propagation of dikes/sills, forecasting the opening of eruptive vents. On the basis of similar experiments already done in the fields of seismic tomography and geophysical imaging, we have devised a blind test experiment. Our group was divided into one model design team and several inversion teams. The model design team devised two physical models representing volcanic events at two distinct volcanoes (one stratovolcano and one caldera). They provided the inversion teams with: the topographic reliefs, the calculated deformation field (on a set of simulated GPS stations and as InSAR interferograms) and the gravity change (on a set of simulated campaign stations). The nature of the volcanic events remained unknown to the inversion teams until after the submission of the inversion results. Here we present the preliminary results of this comparison in order to determine which features of the ground deformation and gravity source

  17. Benchmark Data Set for Wheat Growth Models: Field Experiments and AgMIP Multi-Model Simulations.

    NASA Technical Reports Server (NTRS)

    Asseng, S.; Ewert, F.; Martre, P.; Rosenzweig, C.; Jones, J. W.; Hatfield, J. L.; Ruane, A. C.; Boote, K. J.; Thorburn, P.J.; Rotter, R. P.

    2015-01-01

    The data set includes a current representative management treatment from detailed, quality-tested sentinel field experiments with wheat from four contrasting environments including Australia, The Netherlands, India and Argentina. Measurements include local daily climate data (solar radiation, maximum and minimum temperature, precipitation, surface wind, dew point temperature, relative humidity, and vapor pressure), soil characteristics, frequent growth, nitrogen in crop and soil, crop and soil water and yield components. Simulations include results from 27 wheat models and a sensitivity analysis with 26 models and 30 years (1981-2010) for each location, for elevated atmospheric CO2 and temperature changes, a heat stress sensitivity analysis at anthesis, and a sensitivity analysis with soil and crop management variations and a Global Climate Model end-century scenario.

  18. Criticality Benchmark Analysis of Water-Reflected Uranium Oxyfluoride Slabs

    SciTech Connect

    Margaret A. Marshall; John D. Bess

    2009-11-01

    A series of twelve experiments was conducted in the mid-1950s at the Oak Ridge National Laboratory Critical Experiments Facility to determine the critical conditions of a semi-infinite water-reflected slab of aqueous uranium oxyfluoride (UO2F2). A different slab thickness was used for each experiment. Results from the twelve experiments recorded in the laboratory notebook were published in Reference 1. Seven of the twelve experiments were determined to be acceptable benchmark experiments for inclusion in the International Handbook of Evaluated Criticality Safety Benchmark Experiments. This evaluation will not only be available to handbook users for the validation of computer codes and integral cross-section data, but also for the reevaluation of experimental data used in the ANSI/ANS-8.1 standard. This evaluation is important as part of the technical basis of the subcritical slab limits in ANSI/ANS-8.1. The original publication of the experimental results was used for the determination of bias and bias uncertainties for subcritical slab limits, as documented by Hugh Clark's paper 'Subcritical Limits for Uranium-235 Systems'.

  19. A framework for benchmarking land models

    SciTech Connect

    Luo, Yiqi; Randerson, J.; Abramowitz, G.; Bacour, C.; Blyth, E.; Carvalhais, N.; Ciais, Philippe; Dalmonech, D.; Fisher, J.B.; Fisher, R.; Friedlingstein, P.; Hibbard, Kathleen A.; Hoffman, F. M.; Huntzinger, Deborah; Jones, C.; Koven, C.; Lawrence, David M.; Li, D.J.; Mahecha, M.; Niu, S.L.; Norby, Richard J.; Piao, S.L.; Qi, X.; Peylin, P.; Prentice, I.C.; Riley, William; Reichstein, M.; Schwalm, C.; Wang, Y.; Xia, J. Y.; Zaehle, S.; Zhou, X. H.

    2012-10-09

    Land models, which have been developed by the modeling community in the past few decades to predict future states of ecosystems and climate, have to be critically evaluated for their performance skills of simulating ecosystem responses and feedback to climate change. Benchmarking is an emerging procedure to measure performance of models against a set of defined standards. This paper proposes a benchmarking framework for evaluation of land model performances and, meanwhile, highlights major challenges at this infant stage of benchmark analysis. The framework includes (1) targeted aspects of model performance to be evaluated, (2) a set of benchmarks as defined references to test model performance, (3) metrics to measure and compare performance skills among models so as to identify model strengths and deficiencies, and (4) model improvement. Land models are required to simulate exchange of water, energy, carbon and sometimes other trace gases between the atmosphere and land surface, and should be evaluated for their simulations of biophysical processes, biogeochemical cycles, and vegetation dynamics in response to climate change across broad temporal and spatial scales. Thus, one major challenge is to select and define a limited number of benchmarks to effectively evaluate land model performance. The second challenge is to develop metrics of measuring mismatches between models and benchmarks. The metrics may include (1) a priori thresholds of acceptable model performance and (2) a scoring system to combine data–model mismatches for various processes at different temporal and spatial scales. The benchmark analyses should identify clues of weak model performance to guide future development, thus enabling improved predictions of future states of ecosystems and climate. The near-future research effort should be on development of a set of widely acceptable benchmarks that can be used to objectively, effectively, and reliably evaluate fundamental properties of land models

  20. A framework for benchmarking land models

    SciTech Connect

    Luo, Yiqi; Randerson, James T.; Hoffman, Forrest; Norby, Richard J

    2012-01-01

    Land models, which have been developed by the modeling community in the past few decades to predict future states of ecosystems and climate, have to be critically evaluated for their performance skills of simulating ecosystem responses and feedback to climate change. Benchmarking is an emerging procedure to measure performance of models against a set of defined standards. This paper proposes a benchmarking framework for evaluation of land model performances and, meanwhile, highlights major challenges at this infant stage of benchmark analysis. The framework includes (1) targeted aspects of model performance to be evaluated, (2) a set of benchmarks as defined references to test model performance, (3) metrics to measure and compare performance skills among models so as to identify model strengths and deficiencies, and (4) model improvement. Land models are required to simulate exchange of water, energy, carbon and sometimes other trace gases between the atmosphere and land surface, and should be evaluated for their simulations of biophysical processes, biogeochemical cycles, and vegetation dynamics in response to climate change across broad temporal and spatial scales. Thus, one major challenge is to select and define a limited number of benchmarks to effectively evaluate land model performance. The second challenge is to develop metrics of measuring mismatches between models and benchmarks. The metrics may include (1) a priori thresholds of acceptable model performance and (2) a scoring system to combine data model mismatches for various processes at different temporal and spatial scales. The benchmark analyses should identify clues of weak model performance to guide future development, thus enabling improved predictions of future states of ecosystems and climate. The near-future research effort should be on development of a set of widely acceptable benchmarks that can be used to objectively, effectively, and reliably evaluate fundamental properties of land models

  1. Nomenclatural Benchmarking: The roles of digital typification and telemicroscopy

    Technology Transfer Automated Retrieval System (TEKTRAN)

    The process of nomenclatural benchmarking is the examination of type specimens of all available names to ascertain which currently accepted species the specimen bearing the name falls within. We propose a strategy for addressing four challenges for nomenclatural benchmarking. First, there is the mat...

  2. The Acceptability of Acupuncture for Low Back Pain: A Qualitative Study of Patient’s Experiences Nested within a Randomised Controlled Trial

    PubMed Central

    Hopton, Ann; Thomas, Kate; MacPherson, Hugh

    2013-01-01

    Introduction The National Institute for Health and Clinical Excellence guidelines recommend acupuncture as a clinically effective treatment for chronic back pain. However, there is insufficient knowledge of what factors contribute to patients’ positive and negative experiences of acupuncture, and how those factors interact in terms of the acceptability of treatment. This study used patient interviews following acupuncture treatment for back pain to identify, understand and describe the elements that contribute to or detract from the acceptability of treatment. Methods The study used semi-structured interviews. Twelve patients were interviewed using an interview schedule as a sub-study nested within a randomised controlled trial of acupuncture for chronic back pain. The interviews were analysed using thematic analysis. Results and Discussion Three over-arching themes emerged from the analysis. The first, entitled facilitators of acceptability, contained five subthemes: experience of pain relief, improvements in physical activity, relaxation, psychological benefit, and reduced reliance on medication. The second over-arching theme identified barriers to acceptability, which included needle-related discomfort and temporary worsening of symptoms, pressure to continue treatment and financial cost. The third over-arching theme comprised mediators of acceptability, which included pre-treatment mediators such as expectation and previous experience, and treatment-related mediators of time, therapeutic alliance, lifestyle advice and the patient’s active involvement in recovery. These themes inform our understanding of the acceptability of acupuncture to patients with low back pain. Conclusion The acceptability of acupuncture treatment for low back pain is complex and multifaceted. The therapeutic relationship between the practitioner and patient emerged as a strong driver for acceptability, and as a useful vehicle to develop the patients’ self-efficacy in pain management in the

  3. Acceptability of Interventions Delivered Online and Through Mobile Phones for People Who Experience Severe Mental Health Problems: A Systematic Review

    PubMed Central

    Lobban, Fiona; Emsley, Richard; Bucci, Sandra

    2016-01-01

    Background Psychological interventions are recommended for people with severe mental health problems (SMI). However, barriers exist in the provision of these services and access is limited. Therefore, researchers are beginning to develop and deliver interventions online and via mobile phones. Previous research has indicated that interventions delivered in this format are acceptable for people with SMI. However, a comprehensive systematic review is needed to investigate the acceptability of online and mobile phone-delivered interventions for SMI in depth. Objective This systematic review aimed to 1) identify the hypothetical acceptability (acceptability prior to or without the delivery of an intervention) and actual acceptability (acceptability where an intervention was delivered) of online and mobile phone-delivered interventions for SMI, 2) investigate the impact of factors such as demographic and clinical characteristics on acceptability, and 3) identify common participant views in qualitative studies that pinpoint factors influencing acceptability. Methods We conducted a systematic search of the databases PubMed, Embase, PsycINFO, CINAHL, and Web of Science in April 2015, which yielded a total of 8017 search results, with 49 studies meeting the full inclusion criteria. Studies were included if they measured acceptability through participant views, module completion rates, or intervention use. Studies delivering interventions were included if the delivery method was online or via mobile phones. Results The hypothetical acceptability of online and mobile phone-delivered interventions for SMI was relatively low, while actual acceptability tended to be high. Hypothetical acceptability was higher for interventions delivered via text messages than by emails. The majority of studies that assessed the impact of demographic characteristics on acceptability reported no significant relationships between the two. Additionally, actual acceptability was higher when

  4. Thermal Performance Benchmarking (Presentation)

    SciTech Connect

    Moreno, G.

    2014-11-01

    This project will benchmark the thermal characteristics of automotive power electronics and electric motor thermal management systems. Recent vehicle systems will be benchmarked to establish baseline metrics, evaluate advantages and disadvantages of different thermal management systems, and identify areas of improvement to advance the state-of-the-art.

  5. Benchmarks in Management Training.

    ERIC Educational Resources Information Center

    Paddock, Susan C.

    1997-01-01

    Data were collected from 12 states with Certified Public Manager training programs to establish benchmarks. The 38 benchmarks were in the following areas: program leadership, stability of administrative/financial support, consistent management philosophy, administrative control, participant selection/support, accessibility, application of…

  6. A Seafloor Benchmark for 3-dimensional Geodesy

    NASA Astrophysics Data System (ADS)

    Chadwell, C. D.; Webb, S. C.; Nooner, S. L.

    2014-12-01

    We have developed an inexpensive, permanent seafloor benchmark to increase the longevity of seafloor geodetic measurements. The benchmark provides a physical tie to the sea floor lasting for decades (perhaps longer) on which geodetic sensors can be repeatedly placed and removed with millimeter resolution. Global coordinates estimated with seafloor geodetic techniques will remain attached to the benchmark allowing for the interchange of sensors as they fail or become obsolete, or for the sensors to be removed and used elsewhere, all the while maintaining a coherent series of positions referenced to the benchmark. The benchmark has been designed to free fall from the sea surface with transponders attached. The transponder can be recalled via an acoustic command sent from the surface to release from the benchmark and freely float to the sea surface for recovery. The duration of the sensor attachment to the benchmark will last from a few days to a few years depending on the specific needs of the experiment. The recovered sensors are then available to be reused at other locations, or again at the same site in the future. Three pins on the sensor frame mate precisely and unambiguously with three grooves on the benchmark. To reoccupy a benchmark a Remotely Operated Vehicle (ROV) uses its manipulator arm to place the sensor pins into the benchmark grooves. In June 2014 we deployed four benchmarks offshore central Oregon. We used the ROV Jason to successfully demonstrate the removal and replacement of packages onto the benchmark. We will show the benchmark design and its operational capabilities. Presently models of megathrust slip within the Cascadia Subduction Zone (CSZ) are mostly constrained by the sub-aerial GPS vectors from the Plate Boundary Observatory, a part of Earthscope. More long-lived seafloor geodetic measures are needed to better understand the earthquake and tsunami risk associated with a large rupture of the thrust fault within the Cascadia subduction zone

  7. Performance Evaluation and Benchmarking of Intelligent Systems

    SciTech Connect

    Madhavan, Raj; Messina, Elena; Tunstel, Edward

    2009-09-01

    To design and develop capable, dependable, and affordable intelligent systems, their performance must be measurable. Scientific methodologies for standardization and benchmarking are crucial for quantitatively evaluating the performance of emerging robotic and intelligent systems technologies. There is currently no accepted standard for quantitatively measuring the performance of these systems against user-defined requirements; and furthermore, there is no consensus on what objective evaluation procedures need to be followed to understand the performance of these systems. The lack of reproducible and repeatable test methods has precluded researchers working towards a common goal from exchanging and communicating results, inter-comparing system performance, and leveraging previous work that could otherwise avoid duplication and expedite technology transfer. Currently, this lack of cohesion in the community hinders progress in many domains, such as manufacturing, service, healthcare, and security. By providing the research community with access to standardized tools, reference data sets, and open source libraries of solutions, researchers and consumers will be able to evaluate the cost and benefits associated with intelligent systems and associated technologies. In this vein, the edited book volume addresses performance evaluation and metrics for intelligent systems, in general, while emphasizing the need and solutions for standardized methods. To the knowledge of the editors, there is not a single book on the market that is solely dedicated to the subject of performance evaluation and benchmarking of intelligent systems. Even books that address this topic do so only marginally or are out of date. The research work presented in this volume fills this void by drawing from the experiences and insights of experts gained both through theoretical development and practical implementation of intelligent systems in a variety of diverse application domains. The book presents

  8. Enhancing user acceptance of mandated mobile health information systems: the ePOC (electronic point-of-care project) experience.

    PubMed

    Burgess, Lois; Sargent, Jason

    2007-01-01

    From a clinical perspective, the use of mobile technologies, such as Personal Digital Assistants (PDAs) within hospital environments is not new. A paradigm shift however is underway towards the acceptance and utility of these systems within mobile-based healthcare environments. Introducing new technologies and associated work practices has intrinsic risks which must be addressed. This paper contends that intervening to address user concerns as they arise throughout the system development lifecycle will lead to greater levels of user acceptance, while ultimately enhancing the deliverability of a system that provides a best fit with end user needs. It is envisaged this research will lead to the development of a formalised user acceptance framework based on an agile approach to user acceptance measurement. The results of an ongoing study of user perceptions towards a mandated electronic point-of-care information system in the Northern Illawarra Ambulatory Care Team (TACT) are presented.

  9. Benchmark Evaluation of Plutonium Nitrate Solution Arrays

    SciTech Connect

    M. A. Marshall; J. D. Bess

    2011-09-01

    In October and November of 1981, thirteen approach-to-critical experiments were performed on a remote split table machine (RSTM) in the Critical Mass Laboratory of Pacific Northwest Laboratory (PNL) in Richland, Washington, using planar arrays of polyethylene bottles filled with plutonium (Pu) nitrate solution. Arrays of up to sixteen bottles were used to measure the critical number of bottles and critical array spacing with a tight-fitting Plexiglas{reg_sign} reflector on all sides of the arrays except the top. Some experiments used Plexiglas shells fitted around each bottle to determine the effect of moderation on criticality. Each bottle contained approximately 2.4 L of Pu(NO3)4 solution with a Pu content of 105 g Pu/L and a free acid molarity H+ of 5.1. The plutonium was of low 240Pu (2.9 wt.%) content. These experiments were performed to fill a gap in experimental data regarding criticality limits for storing and handling arrays of Pu solution in reprocessing facilities. Of the thirteen approach-to-critical experiments, eleven resulted in extrapolations to critical configurations. Four of the approaches were extrapolated to the critical number of bottles; these were not evaluated further due to the large uncertainty associated with the modeling of a fraction of a bottle. The remaining seven approaches were extrapolated to the critical array spacing of 3 x 4 and 4 x 4 arrays; these seven critical configurations were evaluated for inclusion as acceptable benchmark experiments in the International Criticality Safety Benchmark Evaluation Project (ICSBEP) Handbook. Detailed and simple models of these configurations were created, and the associated bias of these simplifications was determined to range from 0.00116 to 0.00162 {+-} 0.00006 Δkeff. Monte Carlo analysis of all models was completed using MCNP5 with ENDF/B-VII.0 neutron cross section libraries. A thorough uncertainty analysis of all critical, geometric, and material parameters was performed using parameter
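
    The approach-to-critical extrapolations described above are conventionally made by driving the inverse neutron multiplication (1/M) toward zero as bottles are added or the array spacing is tightened. A minimal sketch with hypothetical count-rate data (not the PNL measurements):

      import numpy as np

      # Hypothetical approach-to-critical data: array spacing (cm) vs. detector
      # count rate, with 1/M taken relative to the most subcritical configuration.
      spacing = np.array([8.0, 6.0, 4.0, 3.0])
      counts = np.array([120.0, 200.0, 480.0, 950.0])
      inv_m = counts[0] / counts

      # Linear extrapolation of the last few points to 1/M = 0 estimates the
      # critical array spacing.
      slope, intercept = np.polyfit(spacing[-3:], inv_m[-3:], 1)
      print(-intercept / slope)   # extrapolated critical spacing (cm)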

  10. Experiences and Acceptance of Intimate Partner Violence: Associations with STI Symptoms and Ability to Negotiate Sexual Safety among Young Liberian Women

    PubMed Central

    Callands, Tamora A.; Sipsma, Heather L.; Betancourt, Theresa S.; Hansen, Nathan B.

    2013-01-01

    Women who experience intimate partner violence may be at elevated risk for poor sexual health outcomes including sexual transmitted infections (STIs). This association however, has not been consistently demonstrated in low-income or post-conflict countries; furthermore, the role that attitudes towards intimate partner violence play in sexual health outcomes and behaviour has rarely been examined. We examined associations between intimate partner violence experiences, accepting attitudes towards physical intimate partner violence, and sexual health and behavioural outcomes among 592 young women in post-conflict Liberia. Participants’ experiences with either moderate or severe physical violence or sexual violence were common. Additionally, accepting attitudes towards physical intimate partner violence were positively associated with reporting STI symptoms, intimate partner violence experiences and the ability to negotiate safe sex. Findings suggest that for sexual health promotion and risk reduction intervention efforts to achieve full impact, interventions must address the contextual influence of violence, including individual attitudes toward intimate partner violence. PMID:23586393

  11. Benchmarking expert system tools

    NASA Technical Reports Server (NTRS)

    Riley, Gary

    1988-01-01

    As part of its evaluation of new technologies, the Artificial Intelligence Section of the Mission Planning and Analysis Div. at NASA-Johnson has made timing tests of several expert system building tools. Among the production systems tested were Automated Reasoning Tool, several versions of OPS5, and CLIPS (C Language Integrated Production System), an expert system builder developed by the AI section. Also included in the test were a Zetalisp version of the benchmark along with four versions of the benchmark written in Knowledge Engineering Environment, an object oriented, frame based expert system tool. The benchmarks used for testing are studied.

  12. Toxicological Benchmarks for Wildlife

    SciTech Connect

    Sample, B.E. Opresko, D.M. Suter, G.W.

    1993-01-01

    Ecological risks of environmental contaminants are evaluated by using a two-tiered process. In the first tier, a screening assessment is performed where concentrations of contaminants in the environment are compared to no observed adverse effects level (NOAEL)-based toxicological benchmarks. These benchmarks represent concentrations of chemicals (i.e., concentrations presumed to be nonhazardous to the biota) in environmental media (water, sediment, soil, food, etc.). While exceedance of these benchmarks does not indicate any particular level or type of risk, concentrations below the benchmarks should not result in significant effects. In practice, when contaminant concentrations in food or water resources are less than these toxicological benchmarks, the contaminants may be excluded from further consideration. However, if the concentration of a contaminant exceeds a benchmark, that contaminant should be retained as a contaminant of potential concern (COPC) and investigated further. The second tier in ecological risk assessment, the baseline ecological risk assessment, may use toxicological benchmarks as part of a weight-of-evidence approach (Suter 1993). Under this approach, based toxicological benchmarks are one of several lines of evidence used to support or refute the presence of ecological effects. Other sources of evidence include media toxicity tests, surveys of biota (abundance and diversity), measures of contaminant body burdens, and biomarkers. This report presents NOAEL- and lowest observed adverse effects level (LOAEL)-based toxicological benchmarks for assessment of effects of 85 chemicals on 9 representative mammalian wildlife species (short-tailed shrew, little brown bat, meadow vole, white-footed mouse, cottontail rabbit, mink, red fox, and whitetail deer) or 11 avian wildlife species (American robin, rough-winged swallow, American woodcock, wild turkey, belted kingfisher, great blue heron, barred owl, barn owl, Cooper's hawk, and red-tailed hawk
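
    As a sketch of the first-tier screening step described above (illustrative values only, not those tabulated in the report), a contaminant is retained as a COPC when its measured concentration in an environmental medium exceeds the NOAEL-based benchmark:

      # Hypothetical NOAEL-based benchmarks and measured concentrations (same units, e.g. mg/kg)
      benchmarks = {"cadmium": 1.0, "lead": 5.0, "zinc": 50.0}
      measured = {"cadmium": 0.4, "lead": 12.0, "zinc": 30.0}

      copcs = [c for c, conc in measured.items() if conc > benchmarks.get(c, float("inf"))]
      print(copcs)   # contaminants carried forward into the baseline ecological risk assessment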

  13. Geothermal Heat Pump Benchmarking Report

    SciTech Connect

    1997-01-17

    A benchmarking study was conducted on behalf of the Department of Energy to determine the critical factors in successful utility geothermal heat pump programs. A successful program is one that has achieved significant market penetration. Successfully marketing geothermal heat pumps has presented some major challenges to the utility industry. However, select utilities have developed programs that generate significant GHP sales. This benchmarking study concludes that there are three factors critical to the success of utility GHP marketing programs: (1) top management marketing commitment; (2) an understanding of the fundamentals of marketing and business development; and (3) an aggressive competitive posture. To generate significant GHP sales, competitive market forces must be used. However, because utilities have functioned only in a regulated arena, these companies and their leaders are unschooled in competitive business practices. Therefore, a lack of experience coupled with an intrinsically non-competitive culture yields an industry environment that impedes the generation of significant GHP sales in many, but not all, utilities.

  14. ICSBEP Benchmarks For Nuclear Data Applications

    NASA Astrophysics Data System (ADS)

    Briggs, J. Blair

    2005-05-01

    The International Criticality Safety Benchmark Evaluation Project (ICSBEP) was initiated in 1992 by the United States Department of Energy. The ICSBEP became an official activity of the Organization for Economic Cooperation and Development (OECD) — Nuclear Energy Agency (NEA) in 1995. Representatives from the United States, United Kingdom, France, Japan, the Russian Federation, Hungary, Republic of Korea, Slovenia, Serbia and Montenegro (formerly Yugoslavia), Kazakhstan, Spain, Israel, Brazil, Poland, and the Czech Republic are now participating. South Africa, India, China, and Germany are considering participation. The purpose of the ICSBEP is to identify, evaluate, verify, and formally document a comprehensive and internationally peer-reviewed set of criticality safety benchmark data. The work of the ICSBEP is published as an OECD handbook entitled "International Handbook of Evaluated Criticality Safety Benchmark Experiments." The 2004 Edition of the Handbook contains benchmark specifications for 3331 critical or subcritical configurations that are intended for use in validation efforts and for testing basic nuclear data. New to the 2004 Edition of the Handbook is a draft criticality alarm / shielding type benchmark that should be finalized in 2005 along with two other similar benchmarks. The Handbook is being used extensively for nuclear data testing and is expected to be a valuable resource for code and data validation and improvement efforts for decades to come. Specific benchmarks that are useful for testing structural materials such as iron, chromium, nickel, and manganese; beryllium; lead; thorium; and 238U are highlighted.

  15. Diagnostic Algorithm Benchmarking

    NASA Technical Reports Server (NTRS)

    Poll, Scott

    2011-01-01

    A poster for the NASA Aviation Safety Program Annual Technical Meeting. It describes empirical benchmarking on diagnostic algorithms using data from the ADAPT Electrical Power System testbed and a diagnostic software framework.

  16. Benchmarking TENDL-2012

    NASA Astrophysics Data System (ADS)

    van der Marck, S. C.; Koning, A. J.; Rochman, D. A.

    2014-04-01

    The new release of the TENDL nuclear data library, TENDL-2012, was tested by performing many benchmark calculations. Close to 2000 criticality safety benchmark cases were used, as well as many benchmark shielding cases. All the runs could be compared with similar runs based on the nuclear data libraries ENDF/B-VII.1, JENDL-4.0, and JEFF-3.1.1 respectively. The results are that many of the criticality safety results obtained with TENDL-2012 are close to the ones for the other libraries. In particular the results for the thermal spectrum cases with LEU fuel are good. Nevertheless, there is a fair number of cases for which the TENDL-2012 results are not as good as those of the other libraries. Notably, a number of fast-spectrum cases with reflectors are not well described. The results for the shielding benchmarks are mostly similar to the ones for the other libraries. Some isolated cases with differences are identified.

  17. Benchmarking in Foodservice Operations.

    DTIC Science & Technology

    2007-11-02

    Benchmarking studies lasted from nine to twelve months, and could extend beyond that time for numerous reasons. Benchmarking was not simply data comparison, a fad, a means for reducing resources, a quick-fix program, or industrial tourism; it was a complete process.

  18. XWeB: The XML Warehouse Benchmark

    NASA Astrophysics Data System (ADS)

    Mahboubi, Hadj; Darmont, Jérôme

    With the emergence of XML as a standard for representing business data, new decision support applications are being developed. These XML data warehouses aim at supporting On-Line Analytical Processing (OLAP) operations that manipulate irregular XML data. To ensure feasibility of these new tools, important performance issues must be addressed. Performance is customarily assessed with the help of benchmarks. However, decision support benchmarks do not currently support XML features. In this paper, we introduce the XML Warehouse Benchmark (XWeB), which aims at filling this gap. XWeB derives from the relational decision support benchmark TPC-H. It is mainly composed of a test data warehouse that is based on a unified reference model for XML warehouses and that features XML-specific structures, and its associated XQuery decision support workload. XWeB's usage is illustrated by experiments on several XML database management systems.

  19. 2016 Senior Researcher Award Acceptance Address: Developing Productive Researchers Through Mentoring, Rethinking Doctoral Dissertations, and Facilitating Positive Publishing Experiences

    ERIC Educational Resources Information Center

    Sims, Wendy L.

    2016-01-01

    In her acceptance address, Wendy Sims provides a unique perspective based on thoughts and reflections resulting from her 8 years of service as the ninth Editor of the "Journal of Research in Music Education" ("JRME"). Specifically, she addresses how college-level music education researchers can promote positive attitudes toward…

  20. The MCNP6 Analytic Criticality Benchmark Suite

    SciTech Connect

    Brown, Forrest B.

    2016-06-16

    Analytical benchmarks provide an invaluable tool for verifying computer codes used to simulate neutron transport. Several collections of analytical benchmark problems [1-4] are used routinely in the verification of production Monte Carlo codes such as MCNP® [5,6]. Verification of a computer code is a necessary prerequisite to the more complex validation process. The verification process confirms that a code performs its intended functions correctly. The validation process involves determining the absolute accuracy of code results vs. nature. In typical validations, results are computed for a set of benchmark experiments using a particular methodology (code, cross-section data with uncertainties, and modeling) and compared to the measured results from the set of benchmark experiments. The validation process determines bias, bias uncertainty, and possibly additional margins. Verification is generally performed by the code developers, while validation is generally performed by code users for a particular application space. The VERIFICATION_KEFF suite of criticality problems [1,2] was originally a set of 75 criticality problems found in the literature for which exact analytical solutions are available. Even though the spatial and energy detail is necessarily limited in analytical benchmarks, typically to a few regions or energy groups, the exact solutions obtained can be used to verify that the basic algorithms, mathematics, and methods used in complex production codes perform correctly. The present work has focused on revisiting this benchmark suite. A thorough review of the problems resulted in discarding some of them as not suitable for MCNP benchmarking. For the remaining problems, many of them were reformulated to permit execution in either multigroup mode or in the normal continuous-energy mode for MCNP. Execution of the benchmarks in continuous-energy mode provides a significant advance to MCNP verification methods.

  1. Benchmarking ENDF/B-VII.0

    NASA Astrophysics Data System (ADS)

    van der Marck, Steven C.

    2006-12-01

    The new major release VII.0 of the ENDF/B nuclear data library has been tested extensively using benchmark calculations. These were based upon MCNP-4C3 continuous-energy Monte Carlo neutronics simulations, together with nuclear data processed using the code NJOY. Three types of benchmarks were used, viz., criticality safety benchmarks, (fusion) shielding benchmarks, and reference systems for which the effective delayed neutron fraction is reported. For criticality safety, more than 700 benchmarks from the International Handbook of Criticality Safety Benchmark Experiments were used. Benchmarks from all categories were used, ranging from low-enriched uranium, compound fuel, thermal spectrum ones (LEU-COMP-THERM), to mixed uranium-plutonium, metallic fuel, fast spectrum ones (MIX-MET-FAST). For fusion shielding many benchmarks were based on IAEA specifications for the Oktavian experiments (for Al, Co, Cr, Cu, LiF, Mn, Mo, Si, Ti, W, Zr), Fusion Neutronics Source in Japan (for Be, C, N, O, Fe, Pb), and Pulsed Sphere experiments at Lawrence Livermore National Laboratory (for 6Li, 7Li, Be, C, N, O, Mg, Al, Ti, Fe, Pb, D 2O, H 2O, concrete, polyethylene and teflon). For testing delayed neutron data more than thirty measurements in widely varying systems were used. Among these were measurements in the Tank Critical Assembly (TCA in Japan) and IPEN/MB-01 (Brazil), both with a thermal spectrum, and two cores in Masurca (France) and three cores in the Fast Critical Assembly (FCA, Japan), all with fast spectra. In criticality safety, many benchmarks were chosen from the category with a thermal spectrum, low-enriched uranium, compound fuel (LEU-COMP-THERM), because this is typical of most current-day reactors, and because these benchmarks were previously underpredicted by as much as 0.5% by most nuclear data libraries (such as ENDF/B-VI.8, JEFF-3.0). The calculated results presented here show that this underprediction is no longer there for ENDF/B-VII.0. The average over 257

  2. Measurement Analysis When Benchmarking Java Card Platforms

    NASA Astrophysics Data System (ADS)

    Paradinas, Pierre; Cordry, Julien; Bouzefrane, Samia

    The advent of the Java Card standard has been a major turning point in smart card technology. With the growing acceptance of this standard, understanding the performance behaviour of these platforms is becoming crucial. To meet this need, we present in this paper a benchmark framework that enables performance evaluation at the bytecode level. This paper focuses on the validity of our time measurements on smart cards.

  3. Pregnant and Postpartum Women's Experiences and Perspectives on the Acceptability and Feasibility of Copackaged Medicine for Antenatal Care and PMTCT in Lesotho

    PubMed Central

    Gill, Michelle M.; Hoffman, Heather J.; Tiam, Appolinaire; Mohai, Florence M.; Mokone, Majoalane; Isavwa, Anthony; Mohale, Sesomo; Makhohlisa, Matela; Ankrah, Victor; Luo, Chewe; Guay, Laura

    2015-01-01

    Objective. To improve PMTCT and antenatal care-related service delivery, a pack with centrally prepackaged medicine was rolled out to all pregnant women in Lesotho in 2011. This study assessed acceptability and feasibility of this copackaging mechanism for drug delivery among pregnant and postpartum women. Methods. Acceptability and feasibility were assessed in a mixed method, cross-sectional study through structured interviews (SI) and semistructured interviews (SSI) conducted in 2012 and 2013. Results. 290 HIV-negative women and 437 HIV-positive women (n = 727) participated. Nearly all SI participants found prepackaged medicines acceptable, though modifications such as size reduction of the pack were suggested. Positive experiences included that the pack helped women take pills as instructed and contents promoted healthy pregnancies. Negative experiences included inadvertent pregnancy disclosure and discomfort carrying the pack in communities. Implementation was also feasible; 85.2% of SI participants reported adequate counseling time, though 37.8% felt pack use caused clinic delays. SSI participants reported improvement in service quality following pack introduction, due to more comprehensive counseling. Conclusions. A prepackaged drug delivery mechanism for ANC/PMTCT medicines was acceptable and feasible. Findings support continued use of this approach in Lesotho with improved design modifications to reflect the current PMTCT program of lifelong treatment for all HIV-positive pregnant women. PMID:26649193

  4. Application of surface-harmonics code SUHAM-U and Monte-Carlo code UNK-MC for calculations of 2D light water benchmark-experiment VENUS-2 with UO{sub 2} and MOX fuel

    SciTech Connect

    Boyarinov, V. F.; Davidenko, V. D.; Nevinitsa, V. A.; Tsibulsky, V. F.

    2006-07-01

    Verification of the SUHAM-U code has been carried out by calculating the two-dimensional benchmark experiment on the critical light-water facility VENUS-2. Comparisons have been made with experimental data and with calculations by the Monte Carlo code UNK using the same nuclear data library B645 for the basic isotopes. Calculations of the two-dimensional facility were carried out using experimentally measured buckling values. The applicability of the SUHAM code to computations of PWR reactors with uranium and MOX fuel has been demonstrated. (authors)

  5. HLW Return from France to Germany - 15 Years of Experience in Public Acceptance and Technical Aspects - 12149

    SciTech Connect

    Graf, Wilhelm

    2012-07-01

    Germany over the whole 15-year-long project running time could be faced efficiently. It has to be concluded that, despite all the problems that anti-nuclear activities have caused so far, all transports of vitrified HLW have been completed successfully by adapting the commonly established safety, security and public acceptance measures to the special conditions and needs in Germany and by coordinating the activities of all parties involved, though at the expense of high costs for industry and government and of a challenging operational complexity. Apart from anticipatory project planning, good communication between all involved industrial parties and the French and German governments was the key to the effective management of such shipments and to minimizing the radiological, economic, environmental, public and political impact. The future will show how efficiently the experience gained can be used for further return projects, which are still to be realized, since no reprocessed waste has yet been returned from the UK and neither the medium-level nor the low-level radioactive waste has been transferred from France to Germany. (author)

  6. PNNL Information Technology Benchmarking

    SciTech Connect

    DD Hostetler

    1999-09-08

    Benchmarking is a methodology for searching out industry best practices that lead to superior performance. It involves exchanging information, not just with any organization, but with organizations known to be the best within PNNL, in industry, or in dissimilar industries with equivalent functions. It is used as a continuous improvement tool for business and technical processes, products, and services. Information technology--comprising all computer and electronic communication products and services--underpins the development and/or delivery of many PNNL products and services. This document describes the Pacific Northwest National Laboratory's (PNNL's) approach to information technology (IT) benchmarking. The purpose is to engage other organizations in the collaborative process of benchmarking in order to improve the value of IT services provided to customers. The document's intended audience consists of other US Department of Energy (DOE) national laboratories and their IT staff. Although the individual participants must define the scope of collaborative benchmarking, an outline of IT service areas for possible benchmarking is described.

  7. Translational benchmark risk analysis

    PubMed Central

    Piegorsch, Walter W.

    2010-01-01

    Translational development – in the sense of translating a mature methodology from one area of application to another, evolving area – is discussed for the use of benchmark doses in quantitative risk assessment. Illustrations are presented with traditional applications of the benchmark paradigm in biology and toxicology, and also with risk endpoints that differ from traditional toxicological archetypes. It is seen that the benchmark approach can apply to a diverse spectrum of risk management settings. This suggests a promising future for this important risk-analytic tool. Extensions of the method to a wider variety of applications represent a significant opportunity for enhancing environmental, biomedical, industrial, and socio-economic risk assessments. PMID:20953283

  8. Benchmarking infrastructure for mutation text mining

    PubMed Central

    2014-01-01

    Background Experimental research on the automatic extraction of information about mutations from texts is greatly hindered by the lack of consensus evaluation infrastructure for the testing and benchmarking of mutation text mining systems. Results We propose a community-oriented annotation and benchmarking infrastructure to support development, testing, benchmarking, and comparison of mutation text mining systems. The design is based on semantic standards, where RDF is used to represent annotations, an OWL ontology provides an extensible schema for the data and SPARQL is used to compute various performance metrics, so that in many cases no programming is needed to analyze results from a text mining system. While large benchmark corpora for biological entity and relation extraction are focused mostly on genes, proteins, diseases, and species, our benchmarking infrastructure fills the gap for mutation information. The core infrastructure comprises (1) an ontology for modelling annotations, (2) SPARQL queries for computing performance metrics, and (3) a sizeable collection of manually curated documents, that can support mutation grounding and mutation impact extraction experiments. Conclusion We have developed the principal infrastructure for the benchmarking of mutation text mining tasks. The use of RDF and OWL as the representation for corpora ensures extensibility. The infrastructure is suitable for out-of-the-box use in several important scenarios and is ready, in its current state, for initial community adoption. PMID:24568600
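
    The SPARQL-computed performance metrics mentioned above reduce, in the simplest case, to precision, recall, and F-measure over gold and predicted annotation sets; below is a minimal Python sketch with made-up annotation identifiers (in the real infrastructure these sets would come from queries over the RDF annotations):

      # Precision/recall/F1 over gold vs. predicted (document, mutation) pairs.
      gold      = {("doc1", "E545K"), ("doc1", "V600E"), ("doc2", "G12D")}
      predicted = {("doc1", "E545K"), ("doc2", "G12D"), ("doc2", "G13D")}

      tp = len(gold & predicted)
      precision = tp / len(predicted) if predicted else 0.0
      recall = tp / len(gold) if gold else 0.0
      f1 = 2 * precision * recall / (precision + recall) if precision + recall else 0.0

      print(f"P={precision:.2f} R={recall:.2f} F1={f1:.2f}")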

  9. Workshops and problems for benchmarking eddy current codes

    SciTech Connect

    Turner, L.R.; Davey, K.; Ida, N.; Rodger, D.; Kameari, A.; Bossavit, A.; Emson, C.R.I.

    1988-08-01

    A series of six workshops was held in 1986 and 1987 to compare eddy current codes, using six benchmark problems. The problems included transient and steady-state ac magnetic fields, close and far boundary conditions, magnetic and non-magnetic materials. All the problems were based either on experiments or on geometries that can be solved analytically. The workshops and solutions to the problems are described. Results show that many different methods and formulations give satisfactory solutions, and that in many cases reduced dimensionality or coarse discretization can give acceptable results while reducing the computer time required. A second two-year series of TEAM (Testing Electromagnetic Analysis Methods) workshops, using six more problems, is underway. 12 refs., 15 figs., 4 tabs.

  10. COG validation: SINBAD Benchmark Problems

    SciTech Connect

    Lent, E M; Sale, K E; Buck, R M; Descalle, M

    2004-02-23

    We validated COG, a 3D Monte Carlo radiation transport code, against experimental data and MCNP4C simulations from the Shielding Integral Benchmark Archive Database (SINBAD) compiled by RSICC. We modeled three experiments: the Osaka Nickel and Aluminum sphere experiments conducted at the OKTAVIAN facility, and the liquid oxygen experiment conducted at the FNS facility. COG results are in good agreement with experimental data and generally within a few percent of MCNP results. There are several possible sources of discrepancy between MCNP and COG results: (1) the cross-section database versions are different, MCNP uses ENDFB VI 1.1 while COG uses ENDFB VI R7, (2) the code implementations are different, and (3) the models may differ slightly. We also limited the use of variance reduction methods when running the COG version of the problems.

  11. The role of integral experiments and nuclear cross section evaluations in space nuclear reactor design

    NASA Astrophysics Data System (ADS)

    Moses, David L.; McKnight, Richard D.

    The importance of the nuclear and neutronic properties of candidate space reactor materials to the design process has been acknowledged as has been the use of benchmark reactor physics experiments to verify and qualify analytical tools used in design, safety, and performance evaluation. Since June 1966, the Cross Section Evaluation Working Group (CSEWG) has acted as an interagency forum for the assessment and evaluation of nuclear reaction data used in the nuclear design process. CSEWG data testing has involved the specification and calculation of benchmark experiments which are used widely for commercial reactor design and safety analysis. These benchmark experiments preceded the issuance of the industry standards for acceptance, but the benchmarks exceed the minimum acceptance criteria for such data. Thus, a starting place has been provided in assuring the accuracy and uncertainty of nuclear data important to space reactor applications.

  12. Mask Waves Benchmark

    DTIC Science & Technology

    2007-10-01

    [List-of-figures fragment. Recoverable captions: measured frequency vs. set frequency for all data; Benchmark Probe #1 wave amplitude variation; wave amplitude by probe, blower speed, and lip setting for 0.768 Hz on the short bank; coefficient of variation as a percentage for all conditions for the long bank and bridge.]

  13. Benchmarks: WICHE Region 2012

    ERIC Educational Resources Information Center

    Western Interstate Commission for Higher Education, 2013

    2013-01-01

    Benchmarks: WICHE Region 2012 presents information on the West's progress in improving access to, success in, and financing of higher education. The information is updated annually to monitor change over time and encourage its use as a tool for informed discussion in policy and education communities. To establish a general context for the…

  14. Python/Lua Benchmarks

    SciTech Connect

    Busby, L.

    2014-08-01

    This is an adaptation of the pre-existing Scimark benchmark code to a variety of Python and Lua implementations. It also measures performance of the Fparser expression parser and C and C++ code on a variety of simple scientific expressions.
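
    As a minimal, purely illustrative sketch of timing a simple scientific expression in Python (this is not the Scimark adaptation itself), one might use the standard timeit module:

      # Time a simple scientific expression with the standard timeit module.
      import math
      import timeit

      def kernel(n=10_000):
          s = 0.0
          for i in range(1, n):
              s += math.sin(i) / i      # a simple scientific expression
          return s

      elapsed = timeit.timeit(kernel, number=100)
      print(f"100 runs: {elapsed:.3f} s total, {elapsed * 10:.3f} ms per run")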

  15. Monte Carlo Benchmark

    SciTech Connect

    2010-10-20

    The "Monte Carlo Benchmark" (MCB) is intended to model the computatiional performance of Monte Carlo algorithms on parallel architectures. It models the solution of a simple heuristic transport equation using a Monte Carlo technique. The MCB employs typical features of Monte Carlo algorithms such as particle creation, particle tracking, tallying particle information, and particle destruction. Particles are also traded among processors using MPI calls.

  16. Benchmarking the World's Best

    ERIC Educational Resources Information Center

    Tucker, Marc S.

    2012-01-01

    A century ago, the United States was a world leader in industrial benchmarking. However, after World War II, once no one could compete with the U.S., it became complacent. Many industrialized countries now have higher student achievement and more equitable and efficient education systems. A higher proportion of young people in their workforces…

  17. HPCS HPCchallenge Benchmark Suite

    DTIC Science & Technology

    2007-11-02

    Measured HPCchallenge Benchmark performance on various HPC architectures, from Cray X1s to Beowulf clusters, is reported in the presentation and paper, with updated results available at http://icl.cs.utk.edu/hpcc/hpcc_results.cgi

  18. From traditional cognitive-behavioural therapy to acceptance and commitment therapy for chronic pain: a mixed-methods study of staff experiences of change.

    PubMed

    Barker, Estelle; McCracken, Lance M

    2014-08-01

    Health care organizations, both large and small, frequently undergo processes of change. In fact, if health care organizations are to improve over time, they must change; this includes pain services. The purpose of the present study was to examine a process of change in treatment model within a specialty interdisciplinary pain service in the UK. This change entailed a switch from traditional cognitive-behavioural therapy to a form of cognitive-behavioural therapy called acceptance and commitment therapy. An anonymous online survey, including qualitative and quantitative components, was carried out approximately 15 months after the initial introduction of the new treatment model and methods. Fourteen out of 16 current clinical staff responded to the survey. Three themes emerged in qualitative analyses: positive engagement in change; uncertainty and discomfort; and group cohesion versus discord. Quantitative results from closed questions showed a pattern of uncertainty about the superiority of one model over the other, combined with more positive views on progress reflected, and the experience of personal benefits, from adopting the new model. The psychological flexibility model, the model behind acceptance and commitment therapy, may clarify both processes in patient behaviour and processes of staff experience and skilful treatment delivery. This integration of processes on both sides of treatment delivery may be a strength of acceptance and commitment therapy.

  19. The Lasting Influences of Early Food-Related Variety Experience: A Longitudinal Study of Vegetable Acceptance from 5 Months to 6 Years in Two Populations.

    PubMed

    Maier-Nöth, Andrea; Schaal, Benoist; Leathwood, Peter; Issanchou, Sylvie

    2016-01-01

    Children's vegetable consumption falls below current recommendations, highlighting the need to identify strategies that can successfully promote better acceptance of vegetables. Recently, experimental studies have reported promising interventions that increase acceptance of vegetables. The first, offering infants a high variety of vegetables at weaning, increased acceptance of new foods, including vegetables. The second, offering an initially disliked vegetable at 8 subsequent meals markedly increased acceptance for that vegetable. So far, these effects have been shown to persist for at least several weeks. We now present follow-up data at 15 months, 3 and 6 years obtained through questionnaire (15 mo, 3y) and experimental (6y) approaches. At 15 months, participants who had been breast-fed were reported as eating and liking more vegetables than those who had been formula-fed. The initially disliked vegetable that became accepted after repeated exposure was still liked and eaten by 79% of the children. At 3 years, the initially disliked vegetable was still liked and eaten by 73% of the children. At 6 years, observations in an experimental setting showed that children who had been breast-fed and children who had experienced high vegetable variety at the start of weaning ate more of new vegetables and liked them more. They were also more willing to taste vegetables than formula-fed children or the no or low variety groups. The initially disliked vegetable was still liked by 57% of children. This follow-up study suggests that experience with chemosensory variety in the context of breastfeeding or at the onset of complementary feeding can influence chemosensory preferences for vegetables into childhood.

  20. Storage-Intensive Supercomputing Benchmark Study

    SciTech Connect

    Cohen, J; Dossa, D; Gokhale, M; Hysom, D; May, J; Pearce, R; Yoo, A

    2007-10-30

    DBE Xeon Dual Socket Blackford Server Motherboard; 2 Intel Xeon Dual-Core 2.66 GHz processors; 1 GB DDR2 PC2-5300 RAM (2 x 512); 80GB Hard Drive (Seagate SATA II Barracuda). The Fusion board is presently capable of 4X in a PCIe slot. The image resampling benchmark was run on a dual Xeon workstation with an NVIDIA graphics card (see Chapter 5 for full specification). An XtremeData Opteron+FPGA was used for the language classification application. We observed that these benchmarks are not uniformly I/O intensive. The only benchmark that showed greater than 50% of the time in I/O was the graph algorithm when it accessed data files over NFS. When local disk was used, the graph benchmark spent at most 40% of its time in I/O. The other benchmarks were CPU dominated. The image resampling benchmark and language classification showed an order-of-magnitude speedup over software by using co-processor technology to offload the CPU-intensive kernels. Our experiments to date suggest that emerging hardware technologies offer significant benefit in boosting the performance of data-intensive algorithms. Using GPU and FPGA co-processors, we were able to improve performance by more than an order of magnitude on the benchmark algorithms, eliminating the processor bottleneck of CPU-bound tasks. Experiments with a prototype solid state nonvolatile memory available today show 10X better throughput on random reads than disk, with a 2X speedup on a graph processing benchmark when compared to the use of local SATA disk.
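
    To illustrate the kind of I/O-time-fraction measurement reported above (this is not the study's instrumentation), here is a small sketch that times file reads separately from a stand-in computation; the file name is hypothetical:

      # Measure the fraction of wall time spent in file I/O vs. computation.
      import time

      PATH = "data.bin"                      # hypothetical large local input file

      io_time = cpu_time = 0.0
      with open(PATH, "rb") as f:
          while True:
              t0 = time.perf_counter()
              chunk = f.read(1 << 20)        # 1 MiB read
              io_time += time.perf_counter() - t0
              if not chunk:
                  break
              t0 = time.perf_counter()
              _ = sum(chunk)                 # stand-in for the real computation
              cpu_time += time.perf_counter() - t0

      print(f"I/O fraction: {io_time / (io_time + cpu_time):.1%}")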

  1. Principles for an ETL Benchmark

    NASA Astrophysics Data System (ADS)

    Wyatt, Len; Caufield, Brian; Pol, Daniel

    Conditions in the marketplace for ETL tools suggest that an industry standard benchmark is needed. The benchmark should provide useful data for comparing the performance of ETL systems, be based on a meaningful scenario, and be scalable over a wide range of data set sizes. This paper gives a general scoping of the proposed benchmark and outlines some key decision points. The Transaction Processing Performance Council (TPC) has formed a development subcommittee to define and produce such a benchmark.

  2. The Snowmass points and slopes: Benchmarks for SUSY searches

    SciTech Connect

    M. Battaglia et al.

    2002-03-04

    The "Snowmass Points and Slopes" (SPS) are a set of benchmark points and parameter lines in the MSSM parameter space corresponding to different scenarios in the search for Supersymmetry at present and future experiments. This set of benchmarks was agreed upon at the 2001 "Snowmass Workshop on the Future of Particle Physics" as a consensus based on different existing proposals.

  3. Sequoia Messaging Rate Benchmark

    SciTech Connect

    Friedley, Andrew

    2008-01-22

    The purpose of this benchmark is to measure the maximal message rate of a single compute node. The first num_cores ranks are expected to reside on the 'core' compute node for which message rate is being tested. After that, the next num_nbors ranks are neighbors for the first core rank, the next set of num_nbors ranks are neighbors for the second core rank, and so on. For example, testing an 8-core node (num_cores = 8) with 4 neighbors (num_nbors = 4) requires 8 + 8 * 4 = 40 ranks. The first 8 of those 40 ranks are expected to be on the 'core' node being benchmarked, while the rest of the ranks are on separate nodes.
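
    The rank layout described above can be written down directly; in the small sketch below the helper name is ours, not part of the benchmark:

      # Sketch of the rank layout: the first num_cores ranks sit on the 'core'
      # node, followed by num_nbors neighbor ranks for each core rank in turn.
      def layout(num_cores, num_nbors):
          total = num_cores + num_cores * num_nbors
          neighbors = {
              c: list(range(num_cores + c * num_nbors,
                            num_cores + (c + 1) * num_nbors))
              for c in range(num_cores)
          }
          return total, neighbors

      total, neighbors = layout(8, 4)
      print(total)          # 40 ranks, matching the example above
      print(neighbors[0])   # [8, 9, 10, 11] serve core rank 0
      print(neighbors[1])   # [12, 13, 14, 15] serve core rank 1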

  4. Benchmark Airport Charges

    NASA Technical Reports Server (NTRS)

    de Wit, A.; Cohn, N.

    1999-01-01

    The Netherlands Directorate General of Civil Aviation (DGCA) commissioned Hague Consulting Group (HCG) to complete a benchmark study of airport charges at twenty eight airports in Europe and around the world, based on 1996 charges. This study followed previous DGCA research on the topic but included more airports in much more detail. The main purpose of this new benchmark study was to provide insight into the levels and types of airport charges worldwide and into recent changes in airport charge policy and structure. This paper describes the 1996 analysis. It is intended that this work be repeated every year in order to follow developing trends and provide the most up-to-date information possible.

  5. Algebraic Multigrid Benchmark

    SciTech Connect

    2013-05-06

    AMG2013 is a parallel algebraic multigrid solver for linear systems arising from problems on unstructured grids. It has been derived directly from the Boomer AMG solver in the hypre library, a large linear solvers library that is being developed in the Center for Applied Scientific Computing (CASC) at LLNL. The driver provided in the benchmark can build various test problems. The default problem is a Laplace type problem on an unstructured domain with various jumps and an anisotropy in one part.

  6. MPI Multicore Linktest Benchmark

    SciTech Connect

    Schulz, Martin

    2008-01-25

    The MPI Multicore Linktest (LinkTest) measures the aggregate bandwidth from/to a multicore node in a parallel system. It allows the user to specify a variety of different node layout and communication routine variations and reports the maximal observed bandwidth across all specified options. In particular, this benchmark is able to vary the number of tasks on the root node and thereby allows users to study the impact of multicore architectures on MPI communication performance.
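
    For a sense of how such a measurement looks in practice, here is a much-simplified two-rank bandwidth probe written with mpi4py; it is not the LinkTest code itself, and the message size and repetition count are arbitrary:

      # Simplified point-to-point bandwidth probe between ranks 0 and 1.
      # Run with: mpiexec -n 2 python bandwidth_probe.py
      from mpi4py import MPI
      import numpy as np

      comm = MPI.COMM_WORLD
      rank = comm.Get_rank()
      nbytes, reps = 1 << 20, 100            # 1 MiB messages, 100 repetitions
      buf = np.zeros(nbytes, dtype=np.uint8)

      comm.Barrier()
      t0 = MPI.Wtime()
      for _ in range(reps):
          if rank == 0:
              comm.Send([buf, MPI.BYTE], dest=1)
          elif rank == 1:
              comm.Recv([buf, MPI.BYTE], source=0)
      comm.Barrier()
      t1 = MPI.Wtime()

      if rank == 0:
          print(f"~{nbytes * reps / (t1 - t0) / 1e9:.2f} GB/s one-way")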

  7. Shame and self-acceptance in continued flux: qualitative study of the embodied experience of significant weight loss and removal of resultant excess skin by plastic surgery.

    PubMed

    Smith, Fran; Farrants, Jacqui R

    2013-09-01

    This study explored the embodied experience of body change using a qualitative design. Eight previous plastic surgery patients of a London hospital took part in in-depth, semi-structured interviews 1 year post a plastic surgery procedure to remove excess skin around their abdomen, resulting from weight loss. Participant interviews were analysed using Interpretative Phenomenological Analysis. Two sub-themes titled 'Shame of the hidden body' and 'Lack of acceptance; the future focused body' are presented in this article. Findings are considered in relation to theories of 'Body Shame' and in the current cultural context.

  8. 75 FR 36710 - The Texas Engineering Experiment Station/Texas A&M University System; Notice of Acceptance for...

    Federal Register 2010, 2011, 2012, 2013, 2014

    2010-06-28

    ... for the Nuclear Science Center Reactor and Order Imposing Procedures for Access To Safeguards... Experiment Station/Texas A&M University System (TEES, the licensee) to operate the Nuclear Science Center... constitute a Fair Use application, participants are requested not to include copyrighted materials in...

  9. From Being Bullied to Being Accepted: The Lived Experiences of a Student with Asperger's Enrolled in a Christian University

    ERIC Educational Resources Information Center

    Reid, Denise P.

    2015-01-01

    Thirteen participants from two private universities located in the western region of the United States shared their lived experiences of being a college student who does not request accommodations. In one's educational pursuit, bullying is often experienced. While the rates of bullying have increased, students with disabilities are more likely to…

  10. The NAS Parallel Benchmarks

    SciTech Connect

    Bailey, David H.

    2009-11-15

    The NAS Parallel Benchmarks (NPB) are a suite of parallel computer performance benchmarks. They were originally developed at the NASA Ames Research Center in 1991 to assess high-end parallel supercomputers. Although they are no longer used as widely as they once were for comparing high-end system performance, they continue to be studied and analyzed a great deal in the high-performance computing community. The acronym 'NAS' originally stood for the Numerical Aeronautical Simulation Program at NASA Ames. The name of this organization was subsequently changed to the Numerical Aerospace Simulation Program, and more recently to the NASA Advanced Supercomputing Center, although the acronym remains 'NAS.' The developers of the original NPB suite were David H. Bailey, Eric Barszcz, John Barton, David Browning, Russell Carter, LeoDagum, Rod Fatoohi, Samuel Fineberg, Paul Frederickson, Thomas Lasinski, Rob Schreiber, Horst Simon, V. Venkatakrishnan and Sisira Weeratunga. The original NAS Parallel Benchmarks consisted of eight individual benchmark problems, each of which focused on some aspect of scientific computing. The principal focus was in computational aerophysics, although most of these benchmarks have much broader relevance, since in a much larger sense they are typical of many real-world scientific computing applications. The NPB suite grew out of the need for a more rational procedure to select new supercomputers for acquisition by NASA. The emergence of commercially available highly parallel computer systems in the late 1980s offered an attractive alternative to parallel vector supercomputers that had been the mainstay of high-end scientific computing. However, the introduction of highly parallel systems was accompanied by a regrettable level of hype, not only on the part of the commercial vendors but even, in some cases, by scientists using the systems. As a result, it was difficult to discern whether the new systems offered any fundamental performance advantage

  11. Patients' experience of a telephone booster intervention to support weight management in Type 2 diabetes and its acceptability.

    PubMed

    Wu, Lihua; Forbes, Angus; While, Alison

    2010-01-01

    We studied the patient experience of a telephone booster intervention, i.e. weekly reinforcement of the clinic's lifestyle modification advice to support weight loss. Forty-six adults with Type 2 diabetes and a body mass index >28 kg/m² were randomised into either intervention (n = 25) or control (n = 21) groups. Semi-structured interviews were conducted with the intervention group participants to explore their views and experiences. The patients were satisfied or very satisfied with the telephone calls and most would recommend the intervention to others in a similar situation. The content of the telephone follow-up met their need for on-going support. The benefits arising from the telephone calls included: being reminded to comply with their regimen; prompting and motivating adherence to diabetes self-care behaviours; improved self-esteem; and feeling 'worthy of interest'. The convenience and low cost of telephone support have much potential in chronic disease management.

  12. Inventory of Safety-related Codes and Standards for Energy Storage Systems with some Experiences related to Approval and Acceptance

    SciTech Connect

    Conover, David R.

    2014-09-11

    The purpose of this document is to identify laws, rules, model codes, codes, standards, regulations, and specifications (CSR) related to safety that could apply to stationary energy storage systems (ESS), and experiences to date in securing approval of ESS in relation to CSR. This information is intended to assist in securing approval of ESS under current CSR and in identifying new CSR, or revisions to existing CSR, and the necessary supporting research and documentation that can foster the deployment of safe ESS.

  13. Benchmarking: applications to transfusion medicine.

    PubMed

    Apelseth, Torunn Oveland; Molnar, Laura; Arnold, Emmy; Heddle, Nancy M

    2012-10-01

    Benchmarking is a structured continuous collaborative process in which comparisons for selected indicators are used to identify factors that, when implemented, will improve transfusion practices. This study aimed to identify transfusion medicine studies reporting on benchmarking, summarize the benchmarking approaches used, and identify important considerations to move the concept of benchmarking forward in the field of transfusion medicine. A systematic review of published literature was performed to identify transfusion medicine-related studies that compared at least 2 separate institutions or regions with the intention of benchmarking, focusing on 4 areas: blood utilization, safety, operational aspects, and blood donation. Forty-five studies were included: blood utilization (n = 35), safety (n = 5), operational aspects of transfusion medicine (n = 5), and blood donation (n = 0). Based on predefined criteria, 7 publications were classified as benchmarking, 2 as trending, and 36 as single-event studies. Three models of benchmarking are described: (1) a regional benchmarking program that collects and links relevant data from existing electronic sources, (2) a sentinel site model where data from a limited number of sites are collected, and (3) an institution-initiated model where a site identifies indicators of interest and approaches other institutions. Benchmarking approaches are needed in the field of transfusion medicine. Major challenges include defining best practices and developing cost-effective methods of data collection. For those interested in initiating a benchmarking program, the sentinel site model may be most effective and sustainable as a starting point, although the regional model would be the ideal goal.

  14. Self-benchmarking Guide for Data Centers: Metrics, Benchmarks, Actions

    SciTech Connect

    Mathew, Paul; Ganguly, Srirupa; Greenberg, Steve; Sartor, Dale

    2009-07-13

    This guide describes energy efficiency metrics and benchmarks that can be used to track the performance of and identify potential opportunities to reduce energy use in data centers. This guide is primarily intended for personnel who have responsibility for managing energy use in existing data centers - including facilities managers, energy managers, and their engineering consultants. Additionally, data center designers may also use the metrics and benchmarks described in this guide for goal-setting in new construction or major renovation. This guide provides the following information: (1) A step-by-step outline of the benchmarking process. (2) A set of performance metrics for the whole building as well as individual systems. For each metric, the guide provides a definition, performance benchmarks, and potential actions that can be inferred from evaluating this metric. (3) A list and descriptions of the data required for computing the metrics. This guide is complemented by spreadsheet templates for data collection and for computing the benchmarking metrics. This guide builds on prior data center benchmarking studies supported by the California Energy Commission. Much of the benchmarking data are drawn from the LBNL data center benchmarking database that was developed from these studies. Additional benchmark data were obtained from engineering experts including facility designers and energy managers. This guide also builds on recent research supported by the U.S. Department of Energy's Save Energy Now program.
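
    As one concrete example of a whole-building metric of the kind the guide covers, power usage effectiveness (PUE) is the ratio of total facility energy to IT equipment energy; a minimal sketch with made-up annual meter readings:

      # Minimal PUE computation from hypothetical annual meter readings (kWh).
      total_facility_kwh = 4_200_000   # utility meter: IT + cooling + fans + lighting + losses
      it_equipment_kwh = 2_800_000     # UPS/PDU output delivered to IT equipment

      pue = total_facility_kwh / it_equipment_kwh
      print(f"PUE = {pue:.2f}")        # closer to 1.0 means less overhead energy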

  15. RECENT ADDITIONS OF CRITICALITY SAFETY RELATED INTEGRAL BENCHMARK DATA TO THE ICSBEP AND IRPHEP HANDBOOKS

    SciTech Connect

    J. Blair Briggs; Lori Scott; Yolanda Rugama; Enrico Sartori

    2009-09-01

    High-quality integral benchmark experiments have always been a priority for criticality safety. However, interest in integral benchmark data is increasing as efforts to quantify and reduce calculational uncertainties accelerate to meet the demands of future criticality safety needs to support next generation reactor and advanced fuel cycle concepts. The importance of drawing upon existing benchmark data is becoming more apparent because of dwindling availability of critical facilities worldwide and the high cost of performing new experiments. Integral benchmark data from the International Handbook of Evaluated Criticality Safety Benchmark Experiments and the International Handbook of Reactor Physics Benchmark Experiments are widely used. Benchmark data have been added to these two handbooks since the last Nuclear Criticality Safety Division Topical Meeting in Knoxville, Tennessee (September 2005). This paper highlights these additions.

  16. Benchmarking concentrating photovoltaic systems

    NASA Astrophysics Data System (ADS)

    Duerr, Fabian; Muthirayan, Buvaneshwari; Meuret, Youri; Thienpont, Hugo

    2010-08-01

    Integral to photovoltaics is the need to provide improved economic viability. To achieve this goal, photovoltaic technology has to be able to harness more light at less cost. A large variety of concentrating photovoltaic concepts has provided cause for pursuit. To obtain a detailed profitability analysis, a flexible evaluation is crucial for benchmarking the cost-performance of this variety of concentrating photovoltaic concepts. To save time and capital, a way to estimate the cost-performance of a complete solar energy system is to use computer aided modeling. In this work a benchmark tool is introduced based on a modular programming concept. The overall implementation is done in MATLAB whereas Advanced Systems Analysis Program (ASAP) is used for ray tracing calculations. This allows for a flexible and extendable structuring of all important modules, namely an advanced source modeling including time and local dependence, and an advanced optical system analysis of various optical designs to obtain an evaluation of the figure of merit. An important figure of merit: the energy yield for a given photovoltaic system at a geographical position over a specific period, can be calculated.

  17. Air Traffic Management Technology Demonstration Phase 1 (ATD) Interval Management for Near-Term Operations Validation of Acceptability (IM-NOVA) Experiment

    NASA Technical Reports Server (NTRS)

    Kibler, Jennifer L.; Wilson, Sara R.; Hubbs, Clay E.; Smail, James W.

    2015-01-01

    The Interval Management for Near-term Operations Validation of Acceptability (IM-NOVA) experiment was conducted at the National Aeronautics and Space Administration (NASA) Langley Research Center (LaRC) in support of the NASA Airspace Systems Program's Air Traffic Management Technology Demonstration-1 (ATD-1). ATD-1 is intended to showcase an integrated set of technologies that provide an efficient arrival solution for managing aircraft using Next Generation Air Transportation System (NextGen) surveillance, navigation, procedures, and automation for both airborne and ground-based systems. The goal of the IMNOVA experiment was to assess if procedures outlined by the ATD-1 Concept of Operations were acceptable to and feasible for use by flight crews in a voice communications environment when used with a minimum set of Flight Deck-based Interval Management (FIM) equipment and a prototype crew interface. To investigate an integrated arrival solution using ground-based air traffic control tools and aircraft Automatic Dependent Surveillance-Broadcast (ADS-B) tools, the LaRC FIM system and the Traffic Management Advisor with Terminal Metering and Controller Managed Spacing tools developed at the NASA Ames Research Center (ARC) were integrated into LaRC's Air Traffic Operations Laboratory (ATOL). Data were collected from 10 crews of current 757/767 pilots asked to fly a high-fidelity, fixed-based simulator during scenarios conducted within an airspace environment modeled on the Dallas-Fort Worth (DFW) Terminal Radar Approach Control area. The aircraft simulator was equipped with the Airborne Spacing for Terminal Area Routes (ASTAR) algorithm and a FIM crew interface consisting of electronic flight bags and ADS-B guidance displays. Researchers used "pseudo-pilot" stations to control 24 simulated aircraft that provided multiple air traffic flows into the DFW International Airport, and recently retired DFW air traffic controllers served as confederate Center, Feeder, Final

  18. A new numerical benchmark of a freshwater lens

    NASA Astrophysics Data System (ADS)

    Stoeckl, L.; Walther, M.; Graf, T.

    2016-04-01

    A numerical benchmark for 2-D variable-density flow and solute transport in a freshwater lens is presented. The benchmark is based on results of laboratory experiments conducted by Stoeckl and Houben (2012) using a sand tank on the meter scale. This benchmark describes the formation and degradation of a freshwater lens over time as it can be found under real-world islands. An error analysis gave the appropriate spatial and temporal discretization of 1 mm and 8.64 s, respectively. The calibrated parameter set was obtained using the parameter estimation tool PEST. Comparing density-coupled and density-uncoupled results showed that the freshwater-saltwater interface position is strongly dependent on density differences. A benchmark that adequately represents saltwater intrusion and that includes realistic features of coastal aquifers or freshwater lenses was lacking. This new benchmark was thus developed and is demonstrated to be suitable to test variable-density groundwater models applied to saltwater intrusion investigations.

  19. Experience of domestic violence and acceptance of intimate partner violence among out-of-school adolescent girls in Iwaya Community, Lagos State.

    PubMed

    Kunnuji, Michael O N

    2015-02-01

    Gender-based domestic violence (DV) comes at great costs to the victims and society at large. Yet, many women hold the view that intimate partner violence (IPV) against women is appropriate behavior. This study aimed at exploring the nexus of experience of different forms of DV and acceptance of IPV as appropriate behavior. Using data from a survey of 480 out-of-school adolescent girls, the researcher shows that psychological abuse is a significant predictor of approval of DV resulting from the wife's failure to make food available for her husband with victims of abuse approving of violence against women. Conversely, victims of sexual abuse, more than nonvictims, disapproved of wife beating resulting from the wife going out without informing the husband. The implications of the findings are discussed and the study recommends deconstructing women's negative beliefs upon which DV rests.

  20. Experiences of stigma among people with severe mental illness. Reliability, acceptability and construct validity of the Swedish versions of two stigma scales measuring devaluation/discrimination and rejection experiences.

    PubMed

    Björkman, Tommy; Svensson, Bengt; Lundberg, Bertil

    2007-01-01

    Stigma has been identified as one of the most important obstacles to the successful integration of people with mental illness into society. Research about stigma has shown negative attitudes among the public towards people with mental illness. Studies so far have, however, put little emphasis on how these negative attitudes are perceived by people with mental illness themselves. The aim of the present study was to investigate the acceptability and internal consistency of the Swedish versions of two stigma scales, the Devaluation and Discrimination scale and the Rejection experiences scale. Forty individuals took part in an interview, which also comprised assessments of needs for care, quality of life, therapeutic relationship and empowerment. The results showed that both the Devaluation and Discrimination scale and the Rejection experiences scale had good internal consistency and acceptability. Stigma in terms of perceived devaluation and discrimination was found to be most markedly associated with empowerment, while rejection experiences were found to be most associated with the number of previous psychiatric admissions. It is concluded that the Swedish versions of the Devaluation and Discrimination scale and the Rejection experiences scale may well be used in further studies of stigma among people with mental illness.

  1. Seismo-acoustic ray model benchmarking against experimental tank data.

    PubMed

    Camargo Rodríguez, Orlando; Collis, Jon M; Simpson, Harry J; Ey, Emanuel; Schneiderwind, Joseph; Felisberto, Paulo

    2012-08-01

    Acoustic predictions of the recently developed traceo ray model, which accounts for bottom shear properties, are benchmarked against tank experimental data from the EPEE-1 and EPEE-2 (Elastic Parabolic Equation Experiment) experiments. Both experiments are representative of signal propagation in a Pekeris-like shallow-water waveguide over a non-flat isotropic elastic bottom, where significant interaction of the signal with the bottom can be expected. The benchmarks show, in particular, that the ray model can be as accurate as a parabolic approximation model benchmarked in similar conditions. The results of benchmarking are important, on the one hand, as a preliminary experimental validation of the model and, on the other hand, demonstrate the reliability of the ray approach for seismo-acoustic applications.

  2. Aeroelasticity Benchmark Assessment: Subsonic Fixed Wing Program

    NASA Technical Reports Server (NTRS)

    Florance, Jennifer P.; Chwalowski, Pawel; Wieseman, Carol D.

    2010-01-01

    The fundamental technical challenge in computational aeroelasticity is the accurate prediction of unsteady aerodynamic phenomena and the effect on the aeroelastic response of a vehicle. Currently, a benchmarking standard for use in validating the accuracy of computational aeroelasticity codes does not exist. Many aeroelastic data sets have been obtained in wind-tunnel and flight testing throughout the world; however, none have been globally presented or accepted as an ideal data set. There are numerous reasons for this. One reason is that often, such aeroelastic data sets focus on the aeroelastic phenomena alone (flutter, for example) and do not contain associated information such as unsteady pressures and time-correlated structural dynamic deflections. Other available data sets focus solely on the unsteady pressures and do not address the aeroelastic phenomena. Other discrepancies can include omission of relevant data, such as flutter frequency and / or the acquisition of only qualitative deflection data. In addition to these content deficiencies, all of the available data sets present both experimental and computational technical challenges. Experimental issues include facility influences, nonlinearities beyond those being modeled, and data processing. From the computational perspective, technical challenges include modeling geometric complexities, coupling between the flow and the structure, grid issues, and boundary conditions. The Aeroelasticity Benchmark Assessment task seeks to examine the existing potential experimental data sets and ultimately choose the one that is viewed as the most suitable for computational benchmarking. An initial computational evaluation of that configuration will then be performed using the Langley-developed computational fluid dynamics (CFD) software FUN3D1 as part of its code validation process. In addition to the benchmarking activity, this task also includes an examination of future research directions. Researchers within the

  3. Cleanroom energy benchmarking results

    SciTech Connect

    Tschudi, William; Xu, Tengfang

    2001-09-01

    A utility market transformation project studied energy use and identified energy efficiency opportunities in cleanroom HVAC design and operation for fourteen cleanrooms. This paper presents the results of this work and relevant observations. Cleanroom owners and operators know that cleanrooms are energy intensive but have little information to compare their cleanroom's performance over time, or to others. Direct comparison of energy performance by traditional means, such as watts/ft², is not a good indicator with the wide range of industrial processes and cleanliness levels occurring in cleanrooms. In this project, metrics allow direct comparison of the efficiency of HVAC systems and components. Energy and flow measurements were taken to determine actual HVAC system energy efficiency. The results confirm a wide variation in operating efficiency and they identify other non-energy operating problems. Improvement opportunities were identified at each of the benchmarked facilities. Analysis of the best performing systems and components is summarized, as are areas for additional investigation.

  4. Benchmarking monthly homogenization algorithms

    NASA Astrophysics Data System (ADS)

    Venema, V. K. C.; Mestre, O.; Aguilar, E.; Auer, I.; Guijarro, J. A.; Domonkos, P.; Vertacnik, G.; Szentimrey, T.; Stepanek, P.; Zahradnicek, P.; Viarre, J.; Müller-Westermeier, G.; Lakatos, M.; Williams, C. N.; Menne, M.; Lindau, R.; Rasol, D.; Rustemeier, E.; Kolokythas, K.; Marinova, T.; Andresen, L.; Acquaotta, F.; Fratianni, S.; Cheval, S.; Klancar, M.; Brunetti, M.; Gruber, C.; Prohom Duran, M.; Likso, T.; Esteban, P.; Brandsma, T.

    2011-08-01

    The COST (European Cooperation in Science and Technology) Action ES0601: Advances in homogenization methods of climate series: an integrated approach (HOME) has executed a blind intercomparison and validation study for monthly homogenization algorithms. Time series of monthly temperature and precipitation were evaluated because of their importance for climate studies and because they represent two important types of statistics (additive and multiplicative). The algorithms were validated against a realistic benchmark dataset. The benchmark contains real inhomogeneous data as well as simulated data with inserted inhomogeneities. Random break-type inhomogeneities were added to the simulated datasets modeled as a Poisson process with normally distributed breakpoint sizes. To approximate real world conditions, breaks were introduced that occur simultaneously in multiple station series within a simulated network of station data. The simulated time series also contained outliers, missing data periods and local station trends. Further, a stochastic nonlinear global (network-wide) trend was added. Participants provided 25 separate homogenized contributions as part of the blind study as well as 22 additional solutions submitted after the details of the imposed inhomogeneities were revealed. These homogenized datasets were assessed by a number of performance metrics including (i) the centered root mean square error relative to the true homogeneous value at various averaging scales, (ii) the error in linear trend estimates and (iii) traditional contingency skill scores. The metrics were computed both using the individual station series as well as the network average regional series. The performance of the contributions depends significantly on the error metric considered. Contingency scores by themselves are not very informative. Although relative homogenization algorithms typically improve the homogeneity of temperature data, only the best ones improve precipitation data
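
    Two of the error metrics mentioned, the centered root mean square error against the true homogeneous series and the error in the linear trend estimate, can be computed as in the short numpy sketch below (the series are synthetic and for illustration only):

      # Centered RMSE and linear-trend error of a homogenized series vs. truth.
      import numpy as np

      rng = np.random.default_rng(1)
      t = np.arange(600)                            # 50 years of monthly values
      truth = 0.002 * t + rng.normal(0.0, 0.5, t.size)
      homogenized = truth + rng.normal(0.05, 0.2, t.size)    # imperfect correction

      centered_rmse = np.sqrt(np.mean(((homogenized - homogenized.mean())
                                       - (truth - truth.mean())) ** 2))
      slope_err = np.polyfit(t, homogenized, 1)[0] - np.polyfit(t, truth, 1)[0]

      print(f"centered RMSE: {centered_rmse:.3f}")
      print(f"trend error:   {slope_err * 120:+.4f} per decade")   # 120 months per decade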

  5. Benchmarking foreign electronics technologies

    SciTech Connect

    Bostian, C.W.; Hodges, D.A.; Leachman, R.C.; Sheridan, T.B.; Tsang, W.T.; White, R.M.

    1994-12-01

    This report has been drafted in response to a request from the Japanese Technology Evaluation Center's (JTEC) Panel on Benchmarking Select Technologies. Since April 1991, the Competitive Semiconductor Manufacturing (CSM) Program at the University of California at Berkeley has been engaged in a detailed study of quality, productivity, and competitiveness in semiconductor manufacturing worldwide. The program is a joint activity of the College of Engineering, the Haas School of Business, and the Berkeley Roundtable on the International Economy, under sponsorship of the Alfred P. Sloan Foundation, and with the cooperation of semiconductor producers from Asia, Europe and the United States. Professors David A. Hodges and Robert C. Leachman are the project's Co-Directors. The present report for JTEC is primarily based on data and analysis drawn from that continuing program. The CSM program is being conducted by faculty, graduate students and research staff from UC Berkeley's Schools of Engineering and Business, and Department of Economics. Many of the participating firms are represented on the program's Industry Advisory Board. The Board played an important role in defining the research agenda. A pilot study was conducted in 1991 with the cooperation of three semiconductor plants. The research plan and survey documents were thereby refined. The main phase of the CSM benchmarking study began in mid-1992 and will continue at least through 1997. Reports are presented on the manufacture of integrated circuits; data storage; wireless technology; human-machine interfaces; and optoelectronics. Selected papers are indexed separately for inclusion in the Energy Science and Technology Database.

  6. Internal Benchmarking for Institutional Effectiveness

    ERIC Educational Resources Information Center

    Ronco, Sharron L.

    2012-01-01

    Internal benchmarking is an established practice in business and industry for identifying best in-house practices and disseminating the knowledge about those practices to other groups in the organization. Internal benchmarking can be done with structures, processes, outcomes, or even individuals. In colleges or universities with multicampuses or a…

  7. The NAS kernel benchmark program

    NASA Technical Reports Server (NTRS)

    Bailey, D. H.; Barton, J. T.

    1985-01-01

    A collection of benchmark test kernels that measure supercomputer performance has been developed for the use of the NAS (Numerical Aerodynamic Simulation) program at the NASA Ames Research Center. This benchmark program is described in detail and the specific ground rules are given for running the program as a performance test.

  8. Benchmarking: A Process for Improvement.

    ERIC Educational Resources Information Center

    Peischl, Thomas M.

    One problem with the outcome-based measures used in higher education is that they measure quantity but not quality. Benchmarking, or the use of some external standard of quality to measure tasks, processes, and outputs, is partially solving that difficulty. Benchmarking allows for the establishment of a systematic process to indicate if outputs…

  9. FireHose Streaming Benchmarks

    SciTech Connect

    Karl Anderson, Steve Plimpton

    2015-01-27

    The FireHose Streaming Benchmarks are a suite of stream-processing benchmarks defined to enable comparison of streaming software and hardware, both quantitatively vis-a-vis the rate at which they can process data, and qualitatively by judging the effort involved to implement and run the benchmarks. Each benchmark has two parts. The first is a generator which produces and outputs datums at a high rate in a specific format. The second is an analytic which reads the stream of datums and is required to perform a well-defined calculation on the collection of datums, typically to find anomalous datums that have been created in the stream by the generator. The FireHose suite provides code for the generators, sample code for the analytics (which users are free to re-implement in their own custom frameworks), and a precise definition of each benchmark calculation.
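
    A toy illustration of this two-part structure, with a generator emitting datums and an analytic flagging anomalies; the datum format and the anomaly rule below are invented, not those of the FireHose suite:

      # Toy generator/analytic pair: the generator streams (key, value) datums,
      # a few of which are anomalous; the analytic flags out-of-range values.
      import random

      def generator(n=10_000, anomaly_rate=0.001):
          for key in range(n):
              value = random.gauss(0.0, 1.0)
              if random.random() < anomaly_rate:
                  value += 50.0                 # inject an anomalous datum
              yield key, value

      def analytic(stream, threshold=10.0):
          return [key for key, value in stream if abs(value) > threshold]

      random.seed(2)
      print("anomalous keys:", analytic(generator()))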

  10. Motives and decisions for and against having children among nonheterosexuals and the impact of experiences of discrimination, internalized stigma, and social acceptance.

    PubMed

    Kleinert, Evelyn; Martin, Olaf; Brähler, Elmar; Stöbel-Richter, Yve

    2015-01-01

    Same-sex parents are increasingly a topic of public discourse. A growing number of homosexuals openly speak about their desire to have children or are already living together in different family constellations. The current study examined the decisions for or against having children and the motivations behind those decisions among nonheterosexuals living in Germany. A sample of 1,283 nonheterosexuals participated by means of an online survey. As some nonheterosexual individuals do not identify themselves with a male or female gender identity, a third category, "gender different," was generated. Motives for (not) having children, perceptions of social acceptance, experiences of discrimination in relation to one's sexual orientation, and levels of internalized stigma were taken into account regarding their influence on the decision about parenthood. Most respondents (80%) reported that they did not have children. However, among this group, 43% stated that they had decided to have children later in their lives, 24% were undecided, and 11% had already decided against having children. The most important influences on the decision of whether to have children were respondents' age and their desire for emotional stabilization. Negative experiences as a result of sexual orientation and internalized stigma had no impact on the decisions regarding parenthood.

  11. International land Model Benchmarking (ILAMB) Package v001.00

    DOE Data Explorer

    Mu, Mingquan [University of California, Irvine]; Randerson, James T. [University of California, Irvine]; Riley, William J. [Lawrence Berkeley National Laboratory]; Hoffman, Forrest M. [Oak Ridge National Laboratory]

    2016-05-02

    As a contribution to the International Land Model Benchmarking (ILAMB) Project, we are providing new analysis approaches, benchmarking tools, and science leadership. The goal of ILAMB is to assess and improve the performance of land models through international cooperation and to inform the design of new measurement campaigns and field studies to reduce uncertainties associated with key biogeochemical processes and feedbacks. ILAMB is expected to be a primary analysis tool for CMIP6 and future model-data intercomparison experiments. This team has developed initial prototype benchmarking systems for ILAMB, which will be improved and extended to include ocean model metrics and diagnostics.

  12. International land Model Benchmarking (ILAMB) Package v002.00

    SciTech Connect

    Collier, Nathaniel; Hoffman, Forrest M.; Mu, Mingquan; Randerson, James T.; Riley, William J.

    2016-05-09

    As a contribution to the International Land Model Benchmarking (ILAMB) Project, we are providing new analysis approaches, benchmarking tools, and science leadership. The goal of ILAMB is to assess and improve the performance of land models through international cooperation and to inform the design of new measurement campaigns and field studies to reduce uncertainties associated with key biogeochemical processes and feedbacks. ILAMB is expected to be a primary analysis tool for CMIP6 and future model-data intercomparison experiments. This team has developed initial prototype benchmarking systems for ILAMB, which will be improved and extended to include ocean model metrics and diagnostics.

  13. Acceptance speech.

    PubMed

    Yusuf, C K

    1994-01-01

    I am proud and honored to accept this award on behalf of the Government of Bangladesh, and the millions of Bangladeshi children saved by oral rehydration solution. The Government of Bangladesh is grateful for this recognition of its commitment to international health and population research and cost-effective health care for all. The Government of Bangladesh has already made remarkable strides forward in the health and population sector, and this was recognized in UNICEF's 1993 "State of the World's Children". The national contraceptive prevalence rate, at 40%, is higher than that of many developed countries. It is appropriate that Bangladesh, where ORS was discovered, has the largest ORS production capacity in the world. It was remarkable that after the devastating cyclone in 1991, the country was able to produce enough ORS to meet the needs and remain self-sufficient. Similarly, Bangladesh has one of the most effective, flexible and efficient control of diarrheal disease and epidemic response program in the world. Through the country, doctors have been trained in diarrheal disease management, and stores of ORS are maintained ready for any outbreak. Despite grim predictions after the 1991 cyclone and the 1993 floods, relatively few people died from diarrheal disease. This is indicative of the strength of the national program. I want to take this opportunity to acknowledge the contribution of ICDDR, B and the important role it plays in supporting the Government's efforts in the health and population sector. The partnership between the Government of Bangladesh and ICDDR, B has already borne great fruit, and I hope and believe that it will continue to do so for many years in the future. Thank you.

  14. Benchmarking in academic pharmacy departments.

    PubMed

    Bosso, John A; Chisholm-Burns, Marie; Nappi, Jean; Gubbins, Paul O; Ross, Leigh Ann

    2010-10-11

    Benchmarking in academic pharmacy, and recommendations for the potential uses of benchmarking in academic pharmacy departments are discussed in this paper. Benchmarking is the process by which practices, procedures, and performance metrics are compared to an established standard or best practice. Many businesses and industries use benchmarking to compare processes and outcomes, and ultimately plan for improvement. Institutions of higher learning have embraced benchmarking practices to facilitate measuring the quality of their educational and research programs. Benchmarking is used internally as well to justify the allocation of institutional resources or to mediate among competing demands for additional program staff or space. Surveying all chairs of academic pharmacy departments to explore benchmarking issues such as department size and composition, as well as faculty teaching, scholarly, and service productivity, could provide valuable information. To date, attempts to gather this data have had limited success. We believe this information is potentially important, urge that efforts to gather it should be continued, and offer suggestions to achieve full participation.

  15. A Heterogeneous Medium Analytical Benchmark

    SciTech Connect

    Ganapol, B.D.

    1999-09-27

    A benchmark, called benchmark BLUE, has been developed for one-group neutral particle (neutron or photon) transport in a one-dimensional sub-critical heterogeneous plane parallel medium with surface illumination. General anisotropic scattering is accommodated through the Green's Function Method (GFM). Numerical Fourier transform inversion is used to generate the required Green's functions, which are kernels to coupled integral equations that give the exiting angular fluxes. The interior scalar flux is then obtained through quadrature. A compound iterative procedure for quadrature order and slab surface source convergence provides highly accurate, benchmark-quality results (4 to 5 places of accuracy).
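
    For orientation, the class of problem such a benchmark addresses can be stated compactly. A minimal form, assuming isotropic scattering, illumination on the left face, and a vacuum right boundary (the BLUE benchmark itself treats general anisotropic scattering), is

      \mu \frac{\partial \psi(x,\mu)}{\partial x} + \sigma_t(x)\,\psi(x,\mu)
        = \frac{\sigma_s(x)}{2} \int_{-1}^{1} \psi(x,\mu')\, d\mu', \qquad 0 < x < L,

      \psi(0,\mu) = f(\mu) \quad (\mu > 0), \qquad \psi(L,\mu) = 0 \quad (\mu < 0),

    where \psi is the angular flux, \sigma_t and \sigma_s are the total and scattering cross sections, and f(\mu) is the prescribed surface illumination.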

  16. California commercial building energy benchmarking

    SciTech Connect

    Kinney, Satkartar; Piette, Mary Ann

    2003-07-01

    Building energy benchmarking is the comparison of whole-building energy use relative to a set of similar buildings. It provides a useful starting point for individual energy audits and for targeting buildings for energy-saving measures in multiple-site audits. Benchmarking is of interest and practical use to a number of groups. Energy service companies and performance contractors communicate energy savings potential with "typical" and "best-practice" benchmarks while control companies and utilities can provide direct tracking of energy use and combine data from multiple buildings. Benchmarking is also useful in the design stage of a new building or retrofit to determine if a design is relatively efficient. Energy managers and building owners have an ongoing interest in comparing energy performance to others. Large corporations, schools, and government agencies with numerous facilities also use benchmarking methods to compare their buildings to each other. The primary goal of Task 2.1.1 Web-based Benchmarking was the development of a web-based benchmarking tool, dubbed Cal-Arch, for benchmarking energy use in California commercial buildings. While there were several other benchmarking tools available to California consumers prior to the development of Cal-Arch, there were none that were based solely on California data. Most available benchmarking information, including the Energy Star performance rating, was developed using DOE's Commercial Building Energy Consumption Survey (CBECS), which does not provide state-level data. Each database and tool has advantages as well as limitations, such as the number of buildings and the coverage by type, climate regions and end uses. There is considerable commercial interest in benchmarking because it provides an inexpensive method of screening buildings for tune-ups and retrofits. However, private companies who collect and manage consumption data are concerned that the identities of building owners might be revealed and
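
    The core operation described here, comparing one building's consumption with a peer distribution, reduces to a simple percentile-rank calculation. A hypothetical sketch (all numbers invented; this is not the Cal-Arch implementation):

      # Rank one building's energy use intensity (EUI) against a peer distribution.
      def eui(annual_kwh, floor_area_sqft):
          return annual_kwh / floor_area_sqft          # kWh per square foot per year

      def percentile_rank(value, peers):
          """Percentage of peer buildings using less energy than this one."""
          return 100.0 * sum(1 for p in peers if p < value) / len(peers)

      peer_euis = [12.4, 15.8, 18.2, 21.0, 24.7, 29.3, 35.1]    # hypothetical peer EUIs
      my_eui = eui(annual_kwh=450_000, floor_area_sqft=20_000)  # 22.5 kWh/sqft-yr
      print(f"EUI {my_eui:.1f} sits at the {percentile_rank(my_eui, peer_euis):.0f}th percentile of its peer group")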

  17. Data-Intensive Benchmarking Suite

    SciTech Connect

    2008-11-26

    The Data-Intensive Benchmark Suite is a set of programs written for the study of data- or storage-intensive science and engineering problems. The benchmark sets cover: general graph searching (basic and Hadoop Map/Reduce breadth-first search), genome sequence searching, HTTP request classification (basic and Hadoop Map/Reduce), low-level data communication, and storage device micro-benchmarking.

  18. Benchmarking hypercube hardware and software

    NASA Technical Reports Server (NTRS)

    Grunwald, Dirk C.; Reed, Daniel A.

    1986-01-01

    It was long a truism in computer systems design that balanced systems achieve the best performance. Message passing parallel processors are no different. To quantify the balance of a hypercube design, an experimental methodology was developed and the associated suite of benchmarks was applied to several existing hypercubes. The benchmark suite includes tests of both processor speed in the absence of internode communication and message transmission speed as a function of communication patterns.
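
    The two kinds of measurement described above can be illustrated with a small sketch. The snippet below, which assumes an MPI environment with mpi4py and at least two ranks and is not drawn from the original suite, times a purely local compute kernel and a simple two-rank message exchange at several message sizes.

      # Sketch of the two measurement classes: (1) processor speed with no
      # communication, (2) message round-trip time versus message size.
      from mpi4py import MPI
      import numpy as np
      import time

      comm = MPI.COMM_WORLD
      rank = comm.Get_rank()          # run with at least two ranks

      # (1) local compute rate in the absence of internode communication
      a = np.random.rand(512, 512)
      t0 = time.perf_counter()
      np.dot(a, a)
      compute_time = time.perf_counter() - t0

      # (2) message transmission time between ranks 0 and 1
      for nbytes in (1 << 10, 1 << 16, 1 << 20):
          buf = np.zeros(nbytes, dtype=np.uint8)
          comm.Barrier()
          t0 = time.perf_counter()
          if rank == 0:
              comm.Send(buf, dest=1)
              comm.Recv(buf, source=1)
              rtt = time.perf_counter() - t0
              print(f"{nbytes} B round trip: {rtt*1e6:.1f} us; local compute block: {compute_time*1e3:.1f} ms")
          elif rank == 1:
              comm.Recv(buf, source=0)
              comm.Send(buf, dest=0)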

  19. Research on computer systems benchmarking

    NASA Technical Reports Server (NTRS)

    Smith, Alan Jay (Principal Investigator)

    1996-01-01

    This grant addresses the topic of research on computer systems benchmarking and is more generally concerned with performance issues in computer systems. This report reviews work in those areas during the period of NASA support under this grant. The bulk of the work performed concerned benchmarking and analysis of CPUs, compilers, caches, and benchmark programs. The first part of this work concerned the issue of benchmark performance prediction. A new approach to benchmarking and machine characterization was reported, using a machine characterizer that measures the performance of a given system in terms of a Fortran abstract machine. Another report focused on analyzing compiler performance. The performance impact of optimization in the context of our methodology for CPU performance characterization was based on the abstract machine model. Benchmark programs are analyzed in another paper. A machine-independent model of program execution was developed to characterize both machine performance and program execution. By merging these machine and program characterizations, execution time can be estimated for arbitrary machine/program combinations. The work was continued into the domain of parallel and vector machines, including the issue of caches in vector processors and multiprocessors. All of the afore-mentioned accomplishments are more specifically summarized in this report, as well as those smaller in magnitude supported by this grant.

  20. Issues in Benchmarking Human Reliability Analysis Methods: A Literature Review

    SciTech Connect

    Ronald L. Boring; Stacey M. L. Hendrickson; John A. Forester; Tuan Q. Tran; Erasmia Lois

    2010-06-01

    There is a diversity of human reliability analysis (HRA) methods available for use in assessing human performance within probabilistic risk assessments (PRA). Due to the significant differences in the methods, including the scope, approach, and underlying models, there is a need for an empirical comparison investigating the validity and reliability of the methods. To accomplish this empirical comparison, a benchmarking study comparing and evaluating HRA methods in assessing operator performance in simulator experiments is currently underway. In order to account for as many effects as possible in the construction of this benchmarking study, a literature review was conducted, reviewing past benchmarking studies in the areas of psychology and risk assessment. A number of lessons learned through these studies are presented in order to aid in the design of future HRA benchmarking endeavors.

  1. Status and Benchmarking of the Free Boundary Equilibrium Code FREEBIE

    NASA Astrophysics Data System (ADS)

    Urban, Jakub; Artaud, Jean-Francois; Basiuk, Vincent; Besseghir, Karim; Huynh, Philippe; Kim, Sunhee; Lister, Jonathan Bryan; Nardon, Eric

    2013-10-01

    FREEBIE is a recent free boundary equilibrium (FBE) code, which solves the temporal evolution of tokamak equilibrium, described by the Grad-Shafranov equation and circuit equations for active and passive poloidal field components. FREEBIE can be run stand-alone, within the transport code CRONOS or on the ITM (European Integrated Tokamak Modelling) platform. FREEBIE with prescribed plasma profiles has already been successfully benchmarked against DINA simulations and TCV experiments. Here we report on the current status of the code coupling with transport solvers and benchmarking of fully consistent transport-FBE simulations. A benchmarking procedure is developed and applied to several ITER cases using FREEBIE, DINA and CEDRES++. The benchmarks indicate that because of the different methods and the complexity of the problem, results obtained from the different codes are comparable only to a certain extent. Supported by GACR 13-38121P, EURATOM, AS CR AV0Z 20430508, MSMT 7G10072 and MSMT LM2011021.

  2. Benchmarking on Tsunami Currents with ComMIT

    NASA Astrophysics Data System (ADS)

    Sharghi vand, N.; Kanoglu, U.

    2015-12-01

    There were no standards for the validation and verification of tsunami numerical models before the 2004 Indian Ocean tsunami. Even so, a number of numerical models had been used for inundation mapping efforts, evaluation of critical structures, etc., without validation and verification. After 2004, the NOAA Center for Tsunami Research (NCTR) established standards for the validation and verification of tsunami numerical models (Synolakis et al. 2008 Pure Appl. Geophys. 165, 2197-2228), which will be used for the evaluation of critical structures such as nuclear power plants against tsunami attack. NCTR presented analytical, experimental and field benchmark problems aimed at estimating maximum runup, which are widely accepted by the community. Recently, benchmark problems were suggested by the US National Tsunami Hazard Mitigation Program Mapping & Modeling Benchmarking Workshop: Tsunami Currents on February 9-10, 2015 at Portland, Oregon, USA (http://nws.weather.gov/nthmp/index.html). These benchmark problems concentrate on the validation and verification of tsunami numerical models for tsunami currents. Three of the benchmark problems were: current measurements of the Japan 2011 tsunami in Hilo Harbor, Hawaii, USA and in Tauranga Harbor, New Zealand, and a single long-period wave propagating onto a small-scale experimental model of the town of Seaside, Oregon, USA. These benchmark problems were implemented in the Community Modeling Interface for Tsunamis (ComMIT) (Titov et al. 2011 Pure Appl. Geophys. 168, 2121-2131), which is a user-friendly interface to the validated and verified Method of Splitting Tsunami (MOST) model (Titov and Synolakis 1995 J. Waterw. Port Coastal Ocean Eng. 121, 308-316) developed by NCTR. The modeling results are compared with the required benchmark data, showing good agreement, and the results are discussed. Acknowledgment: The research leading to these results has received funding from the European Union's Seventh Framework Programme (FP7/2007-2013) under grant

  3. Improving Mass Balance Modeling of Benchmark Glaciers

    NASA Astrophysics Data System (ADS)

    van Beusekom, A. E.; March, R. S.; O'Neel, S.

    2009-12-01

    The USGS monitors long-term glacier mass balance at three benchmark glaciers in different climate regimes. The coastal and continental glaciers are represented by Wolverine and Gulkana Glaciers in Alaska, respectively. Field measurements began in 1966 and continue. We have reanalyzed the published balance time series with more modern methods and recomputed reference surface and conventional balances. Addition of the most recent data shows a continuing trend of mass loss. We compare the updated balances to the previously accepted balances and discuss differences. Not all balance quantities can be determined from the field measurements. For surface processes, we model missing information with an improved degree-day model. Degree-day models predict ablation from the sum of daily mean temperatures and an empirical degree-day factor. We modernize the traditional degree-day model as well as derive new degree-day factors in an effort to more closely match the balance time series and thus better predict the future state of the benchmark glaciers. For subsurface processes, we model the refreezing of meltwater for internal accumulation. We examine the sensitivity of the balance time series to the subsurface process of internal accumulation, with the goal of determining the best way to incorporate internal accumulation into balance estimates.
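
    The degree-day relation mentioned above is simple enough to state directly: melt over a period is an empirical degree-day factor multiplied by the sum of positive daily mean temperatures. A minimal sketch, with invented placeholder factors rather than the calibrated USGS values:

      # Classical temperature-index (degree-day) ablation estimate.
      def ablation_mm_we(daily_mean_temp_c, ddf_mm_per_degday):
          """Melt (mm water equivalent) = degree-day factor * positive degree-day sum."""
          pdd = sum(t for t in daily_mean_temp_c if t > 0.0)
          return ddf_mm_per_degday * pdd

      week_of_temps = [2.1, 4.5, 6.0, -0.5, 3.2, 7.8, 5.5]   # hypothetical daily means, deg C
      print("snow melt:", ablation_mm_we(week_of_temps, ddf_mm_per_degday=3.5), "mm w.e.")
      print("ice melt: ", ablation_mm_we(week_of_temps, ddf_mm_per_degday=7.0), "mm w.e.")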

  4. How to avoid 'death by benchmarking'.

    PubMed

    Wofford, Dave; Libby, Darin

    2015-08-01

    Hospitals and health systems should adopt four key principles and practices when applying benchmarks to determine physician compensation: Acknowledge that a lower percentile may be appropriate. Use the median as the all-in benchmark. Use peer benchmarks when available. Use alternative benchmarks.

  5. Benchmarking for the Learning and Skills Sector.

    ERIC Educational Resources Information Center

    Owen, Jane

    This document is designed to introduce practitioners in the United Kingdom's learning and skills sector to the principles and practice of benchmarking. The first section defines benchmarking and differentiates metric, diagnostic, and process benchmarking. The remainder of the booklet details the following steps of the benchmarking process: (1) get…

  6. A Simplified HTTR Diffusion Theory Benchmark

    SciTech Connect

    Rodolfo M. Ferrer; Abderrafi M. Ougouag; Farzad Rahnema

    2010-10-01

    The Georgia Institute of Technology (GA-Tech) recently developed a transport theory benchmark based closely on the geometry and the features of the HTTR reactor that is operational in Japan. Though simplified, the benchmark retains all the principal physical features of the reactor and thus provides a realistic and challenging test for the codes. The purpose of this paper is twofold. The first goal is an extension of the benchmark to diffusion theory applications by generating the additional data not provided in the GA-Tech prior work. The second goal is to use the benchmark on the HEXPEDITE code available to the INL. The HEXPEDITE code is a Green’s function-based neutron diffusion code in 3D hexagonal-z geometry. The results showed that the HEXPEDITE code accurately reproduces the effective multiplication factor of the reference HELIOS solution. A secondary, but no less important, conclusion is that in the testing against actual HTTR data of a full sequence of codes that would include HEXPEDITE, in the apportioning of inevitable discrepancies between experiment and models, the portion of error attributable to HEXPEDITE would be expected to be modest. If large discrepancies are observed, they would have to be explained by errors in the data fed into HEXPEDITE. Results based on a fully realistic model of the HTTR reactor are presented in a companion paper. The suite of codes used in that paper also includes HEXPEDITE. The results shown here should help that effort in the decision making process for refining the modeling steps in the full sequence of codes.

  7. The impact and applicability of critical experiment evaluations

    SciTech Connect

    Brewer, R.

    1997-06-01

    This paper very briefly describes a project to evaluate previously performed critical experiments. The evaluation is intended for use by criticality safety engineers to verify calculations, and may also be used to identify data which need further investigation. The evaluation process is briefly outlined; the accepted benchmark critical experiments will be used as a standard for verification and validation. The end result of the project will be a comprehensive reference document.

  8. POTENTIAL BENCHMARKS FOR ACTINIDE PRODUCTION IN HANFORD REACTORS

    SciTech Connect

    PUIGH RJ; TOFFER H

    2011-10-19

    A significant experimental program was conducted in the early Hanford reactors to understand the reactor production of actinides. These experiments were conducted with sufficient rigor, in some cases, to provide useful information that can be utilized today in development of benchmark experiments that may be used for the validation of present computer codes for the production of these actinides in low enriched uranium fuel.

  9. Benchmark simulation models, quo vadis?

    PubMed

    Jeppsson, U; Alex, J; Batstone, D J; Benedetti, L; Comas, J; Copp, J B; Corominas, L; Flores-Alsina, X; Gernaey, K V; Nopens, I; Pons, M-N; Rodríguez-Roda, I; Rosen, C; Steyer, J-P; Vanrolleghem, P A; Volcke, E I P; Vrecko, D

    2013-01-01

    As the work of the IWA Task Group on Benchmarking of Control Strategies for wastewater treatment plants (WWTPs) is coming to an end, it is essential to disseminate the knowledge gained. For this reason, all authors of the IWA Scientific and Technical Report on benchmarking have come together to provide their insights, highlighting areas where knowledge may still be deficient and where new opportunities are emerging, and to propose potential avenues for future development and application of the general benchmarking framework and its associated tools. The paper focuses on the topics of temporal and spatial extension, process modifications within the WWTP, the realism of models, control strategy extensions and the potential for new evaluation tools within the existing benchmark system. We find that there are major opportunities for application within all of these areas, either from existing work already being done within the context of the benchmarking simulation models (BSMs) or applicable work in the wider literature. Of key importance is increasing capability, usability and transparency of the BSM package while avoiding unnecessary complexity.

  10. Closed-Loop Neuromorphic Benchmarks

    PubMed Central

    Stewart, Terrence C.; DeWolf, Travis; Kleinhans, Ashley; Eliasmith, Chris

    2015-01-01

    Evaluating the effectiveness and performance of neuromorphic hardware is difficult. It is even more difficult when the task of interest is a closed-loop task; that is, a task where the output from the neuromorphic hardware affects some environment, which then in turn affects the hardware's future input. However, closed-loop situations are one of the primary potential uses of neuromorphic hardware. To address this, we present a methodology for generating closed-loop benchmarks that makes use of a hybrid of real physical embodiment and a type of “minimal” simulation. Minimal simulation has been shown to lead to robust real-world performance, while still maintaining the practical advantages of simulation, such as making it easy for the same benchmark to be used by many researchers. This method is flexible enough to allow researchers to explicitly modify the benchmarks to identify specific task domains where particular hardware excels. To demonstrate the method, we present a set of novel benchmarks that focus on motor control for an arbitrary system with unknown external forces. Using these benchmarks, we show that an error-driven learning rule can consistently improve motor control performance across a randomly generated family of closed-loop simulations, even when there are up to 15 interacting joints to be controlled. PMID:26696820
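
    The closed-loop setting described above can be miniaturized for illustration: the controller's output drives a plant, and the plant's state becomes the controller's next input. The toy sketch below is not the published benchmark suite; the plant, gains, and learning rule are invented to show an error-driven adaptive term compensating an unknown external force.

      # Toy closed-loop motor-control benchmark: 1-D point mass, unknown constant
      # disturbance, PD controller plus an error-driven adaptive bias term.
      dt, mass, unknown_force = 0.01, 1.0, 2.0
      kp, kd, learn_rate = 40.0, 10.0, 5.0
      pos, vel, adapt = 0.0, 0.0, 0.0
      target = 1.0

      for step in range(2000):
          err = target - pos
          u = kp * err - kd * vel + adapt      # controller output fed to the plant
          adapt += learn_rate * err * dt       # error-driven update of the bias term
          acc = (u + unknown_force) / mass     # plant dynamics close the loop
          vel += acc * dt
          pos += vel * dt

      print(f"final position {pos:.3f}, learned bias {adapt:.3f}, true disturbance {-unknown_force}")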

  11. Defining acceptable conditions in wilderness

    NASA Astrophysics Data System (ADS)

    Roggenbuck, J. W.; Williams, D. R.; Watson, A. E.

    1993-03-01

    The limits of acceptable change (LAC) planning framework recognizes that forest managers must decide what indicators of wilderness conditions best represent resource naturalness and high-quality visitor experiences and how much change from the pristine is acceptable for each indicator. Visitor opinions on the aspects of the wilderness that have great impact on their experience can provide valuable input to selection of indicators. Cohutta, Georgia; Caney Creek, Arkansas; Upland Island, Texas; and Rattlesnake, Montana, wilderness visitors have high shared agreement that littering and damage to trees in campsites, noise, and seeing wildlife are very important influences on wilderness experiences. Camping within sight or sound of other people influences experience quality more than do encounters on the trails. Visitors’ standards of acceptable conditions within wilderness vary considerably, suggesting a potential need to manage different zones within wilderness for different clientele groups and experiences. Standards across wildernesses, however, are remarkably similar.

  12. Benchmarking boiler tube failures - Part 1

    SciTech Connect

    Patrick, J.; Oldani, R.; von Behren, D.

    2005-10-01

    Boiler tube failures continue to be the leading cause of downtime for steam power plants. That should not be a surprise; a typical steam generator has miles of tubes that operate at high temperatures and pressures. Are your experiences comparable to those of your peers? Could you learn something from tube-leak benchmarking data that could improve the operation of your plant? The Electric Utility Cost Group (EUCG) recently completed a boiler-tube failure study that is available only to its members. But Power magazine has been given exclusive access to some of the results, published in this article. 4 figs.

  13. FLOWTRAN-TF code benchmarking

    SciTech Connect

    Flach, G.P.

    1990-12-01

    FLOWTRAN-TF is a two-component (air-water), two-phase thermal-hydraulics code designed for performing accident analyses of SRS reactor fuel assemblies during the Emergency Cooling System (ECS) phase of a Double Ended Guillotine Break (DEGB) Loss Of Coolant Accident (LOCA). A description of the code is given by Flach et al. (1990). This report provides benchmarking results for the version of FLOWTRAN-TF used to compute the Recommended K-Reactor Restart ECS Power Limit (Smith et al., 1990a; 1990b). Individual constitutive relations are benchmarked in Sections 2 through 5 while in Sections 6 and 7 integral code benchmarking results are presented. An overall assessment of FLOWTRAN-TF for its intended use in computing the ECS power limit completes the document.

  14. Radiation Detection Computational Benchmark Scenarios

    SciTech Connect

    Shaver, Mark W.; Casella, Andrew M.; Wittman, Richard S.; McDonald, Ben S.

    2013-09-24

    Modeling forms an important component of radiation detection development, allowing for testing of new detector designs, evaluation of existing equipment against a wide variety of potential threat sources, and assessing operation performance of radiation detection systems. This can, however, result in large and complex scenarios which are time consuming to model. A variety of approaches to radiation transport modeling exist with complementary strengths and weaknesses for different problems. This variety of approaches, and the development of promising new tools (such as ORNL’s ADVANTG) which combine benefits of multiple approaches, illustrates the need for a means of evaluating or comparing different techniques for radiation detection problems. This report presents a set of 9 benchmark problems for comparing different types of radiation transport calculations, identifying appropriate tools for classes of problems, and testing and guiding the development of new methods. The benchmarks were drawn primarily from existing or previous calculations with a preference for scenarios which include experimental data, or otherwise have results with a high level of confidence, are non-sensitive, and represent problem sets of interest to NA-22. From a technical perspective, the benchmarks were chosen to span a range of difficulty and to include gamma transport, neutron transport, or both and represent different important physical processes and a range of sensitivity to angular or energy fidelity. Following benchmark identification, existing information about geometry, measurements, and previous calculations were assembled. Monte Carlo results (MCNP decks) were reviewed or created and re-run in order to attain accurate computational times and to verify agreement with experimental data, when present. Benchmark information was then conveyed to ORNL in order to guide testing and development of hybrid calculations. The results of those ADVANTG calculations were then sent to PNNL for

  15. SPICE benchmark for global tomographic methods

    NASA Astrophysics Data System (ADS)

    Qin, Yilong; Capdeville, Yann; Maupin, Valerie; Montagner, Jean-Paul; Lebedev, Sergei; Beucler, Eric

    2008-11-01

    The existing global tomographic methods result in different models due to different parametrization, scale resolution and theoretical approach. To test how current imaging techniques are limited by approximations in theory and by the inadequacy of data quality and coverage, it is necessary to perform a global-scale benchmark to understand the resolving properties of each specific imaging algorithm. In the framework of the Seismic wave Propagation and Imaging in Complex media: a European network (SPICE) project, it was decided to perform a benchmark experiment of global inversion algorithms. First, a preliminary benchmark with a simple isotropic model is carried out to check the feasibility in terms of acquisition geometry and numerical accuracy. Then, to fully validate tomographic schemes with a challenging synthetic data set, we constructed one complex anisotropic global model, which is characterized by 21 elastic constants and includes 3-D heterogeneities in velocity, anisotropy (radial and azimuthal anisotropy), attenuation, density, as well as surface topography and bathymetry. The intermediate-period (>32 s), high fidelity anisotropic modelling was performed by using state-of-the-art anisotropic anelastic modelling code, that is, coupled spectral element method (CSEM), on modern massively parallel computing resources. The benchmark data set consists of 29 events and three-component seismograms are recorded by 256 stations. Because of the limitation of the available computing power, synthetic seismograms have a minimum period of 32 s and a length of 10 500 s. The inversion of the benchmark data set demonstrates several well-known problems of classical surface wave tomography, such as the importance of crustal correction to recover the shallow structures, the loss of resolution with depth, the smearing effect, both horizontal and vertical, the inaccuracy of amplitude of isotropic S-wave velocity variation, the difficulty of retrieving the magnitude of azimuthal

  16. Integral Benchmark Data for Nuclear Data Testing Through the ICSBEP & IRPhEP

    SciTech Connect

    J. Blair Briggs; John D. Bess; Jim Gulliford; Ian Hill

    2013-10-01

    The status of the International Criticality Safety Benchmark Evaluation Project (ICSBEP) and International Reactor Physics Experiment Evaluation Project (IRPhEP) was last discussed directly with the nuclear data community at ND2007. Since ND2007, integral benchmark data that are available for nuclear data testing have increased significantly. The status of the ICSBEP and the IRPhEP is discussed and selected benchmark configurations that have been added to the ICSBEP and IRPhEP Handbooks since ND2007 are highlighted.

  17. Integral Benchmark Data for Nuclear Data Testing Through the ICSBEP & IRPhEP

    NASA Astrophysics Data System (ADS)

    Briggs, J. B.; Bess, J. D.; Gulliford, J.

    2014-04-01

    The status of the International Criticality Safety Benchmark Evaluation Project (ICSBEP) and International Reactor Physics Experiment Evaluation Project (IRPhEP) was last discussed directly with the nuclear data community at ND2007. Since ND2007, integral benchmark data that are available for nuclear data testing have increased significantly. The status of the ICSBEP and the IRPhEP is discussed and selected benchmark configurations that have been added to the ICSBEP and IRPhEP Handbooks since ND2007 are highlighted.

  18. High acceptance recoil polarimeter

    SciTech Connect

    The HARP Collaboration

    1992-12-05

    In order to detect neutrons and protons in the 50 to 600 MeV energy range and measure their polarization, an efficient, low-noise, self-calibrating device is being designed. This detector, known as the High Acceptance Recoil Polarimeter (HARP), is based on the recoil principle of proton detection from np → n′p′ or pp → p′p′ scattering (detected particles are underlined) which intrinsically yields polarization information on the incoming particle. HARP will be commissioned to carry out experiments in 1994.

  19. Current Reactor Physics Benchmark Activities at the Idaho National Laboratory

    SciTech Connect

    John D. Bess; Margaret A. Marshall; Mackenzie L. Gorham; Joseph Christensen; James C. Turnbull; Kim Clark

    2011-11-01

    The International Reactor Physics Experiment Evaluation Project (IRPhEP) [1] and the International Criticality Safety Benchmark Evaluation Project (ICSBEP) [2] were established to preserve integral reactor physics and criticality experiment data for present and future research. These valuable assets provide the basis for recording, developing, and validating our integral nuclear data, and experimental and computational methods. These projects are managed through the Idaho National Laboratory (INL) and the Organisation for Economic Co-operation and Development Nuclear Energy Agency (OECD-NEA). Staff and students at the Department of Energy - Idaho (DOE-ID) and INL are engaged in the development of benchmarks to support ongoing research activities. These benchmarks include reactors or assemblies that support Next Generation Nuclear Plant (NGNP) research, space nuclear Fission Surface Power System (FSPS) design validation, and currently operational facilities in Southeastern Idaho.

  20. PRISMATIC CORE COUPLED TRANSIENT BENCHMARK

    SciTech Connect

    J. Ortensi; M.A. Pope; G. Strydom; R.S. Sen; M.D. DeHart; H.D. Gougar; C. Ellis; A. Baxter; V. Seker; T.J. Downar; K. Vierow; K. Ivanov

    2011-06-01

    The Prismatic Modular Reactor (PMR) is one of the High Temperature Reactor (HTR) design concepts that have existed for some time. Several prismatic units have operated in the world (DRAGON, Fort St. Vrain, Peach Bottom) and one unit is still in operation (HTTR). The deterministic neutronics and thermal-fluids transient analysis tools and methods currently available for the design and analysis of PMRs have lagged behind the state of the art compared to LWR reactor technologies. This has motivated the development of more accurate and efficient tools for the design and safety evaluations of the PMR. In addition to the work invested in new methods, it is essential to develop appropriate benchmarks to verify and validate the new methods in computer codes. The purpose of this benchmark is to establish a well-defined problem, based on a common given set of data, to compare methods and tools in core simulation and thermal hydraulics analysis with a specific focus on transient events. The benchmark-working group is currently seeking OECD/NEA sponsorship. This benchmark is being pursued and is heavily based on the success of the PBMR-400 exercise.

  1. Engine Benchmarking - Final CRADA Report

    SciTech Connect

    Wallner, Thomas

    2016-01-01

    Detailed benchmarking of the powertrains of three light-duty vehicles was performed. Results were presented and provided to CRADA partners. The vehicles included a MY2011 Audi A4, a MY2012 Mini Cooper and a MY2014 Nissan Versa.

  2. Benchmark Lisp And Ada Programs

    NASA Technical Reports Server (NTRS)

    Davis, Gloria; Galant, David; Lim, Raymond; Stutz, John; Gibson, J.; Raghavan, B.; Cheesema, P.; Taylor, W.

    1992-01-01

    Suite of nonparallel benchmark programs, ELAPSE, designed for three tests: comparing efficiency of computer processing via Lisp vs. Ada; comparing efficiencies of several computers processing via Lisp; or comparing several computers processing via Ada. Tests efficiency with which a computer executes routines in each language. Available for computers equipped with a validated Ada compiler and/or Common Lisp system.

  3. Processor Emulator with Benchmark Applications

    SciTech Connect

    Lloyd, G. Scott; Pearce, Roger; Gokhale, Maya

    2015-11-13

    A processor emulator and a suite of benchmark applications have been developed to assist in characterizing the performance of data-centric workloads on current and future computer architectures. Some of the applications have been collected from other open source projects. For more details on the emulator and an example of its usage, see reference [1].

  4. Austin Community College Benchmarking Update.

    ERIC Educational Resources Information Center

    Austin Community Coll., TX. Office of Institutional Effectiveness.

    Austin Community College contracted with MGT of America, Inc. in spring 1999 to develop a peer and benchmark (best) practices analysis on key indicators. These indicators were updated in spring 2002 using data from eight Texas community colleges and four non-Texas institutions that represent large, comprehensive, urban community colleges, similar…

  5. Iraq: Politics, Elections, and Benchmarks

    DTIC Science & Technology

    2009-12-08

    Kirkuk (Tamim province) will join the Kurdish region (Article 140); designation...security control over areas inhabited by Kurds, and the Kurds’ claim that the province of Tamim (Kirkuk) be formally integrated into the KRG. These

  6. Iraq: Politics, Elections, and Benchmarks

    DTIC Science & Technology

    2010-01-15

    referendum on whether Kirkuk (Tamim province) would join the Kurdish...areas inhabited by Kurds, and the Kurds’ claim that the province of Tamim (Kirkuk) be formally integrated into the KRG. These disputes were aggravated

  7. Iraq: Politics, Elections, and Benchmarks

    DTIC Science & Technology

    2010-04-28

    Kirkuk (Tamim province) would join the Kurdish region (Article 140...18 Maliki: 8; INA: 9; Iraqiyya: 1; Sulaymaniyah 17 Kurdistan Alliance: 8; other Kurds: 9; Kirkuk (Tamim) 12 Iraqiyya: 6; Kurdistan Alliance: 6

  8. Iraq: Politics, Elections, and Benchmarks

    DTIC Science & Technology

    2009-10-21

    Kirkuk (Tamim province) will join the Kurdish region (Article 140); designation of Islam as “a main source” of...security control over areas inhabited by Kurds, and the Kurds’ claim that the province of Tamim (Kirkuk) be formally integrated into the KRG. These

  9. Benchmark 3 - Incremental sheet forming

    NASA Astrophysics Data System (ADS)

    Elford, Michael; Saha, Pradip; Seong, Daeyong; Haque, MD Ziaul; Yoon, Jeong Whan

    2013-12-01

    Benchmark-3 is designed to predict strains, punch load and deformed profile after spring-back during single tool incremental sheet forming. AA 7075-O material has been selected. A cone shape is formed to 45 mm depth with an angle of 45°. Problem description, material properties, and simulation reports with experimental data are summarized.

  10. Benchmarking ETL Workflows

    NASA Astrophysics Data System (ADS)

    Simitsis, Alkis; Vassiliadis, Panos; Dayal, Umeshwar; Karagiannis, Anastasios; Tziovara, Vasiliki

    Extraction-Transform-Load (ETL) processes comprise complex data workflows, which are responsible for the maintenance of a Data Warehouse. A plethora of ETL tools is currently available, constituting a multi-million dollar market. Each ETL tool uses its own technique for the design and implementation of an ETL workflow, making the task of assessing ETL tools extremely difficult. In this paper, we identify common characteristics of ETL workflows in an effort to propose a unified evaluation method for ETL. We also identify the main points of interest in designing, implementing, and maintaining ETL workflows. Finally, we propose a principled organization of test suites based on the TPC-H schema for the problem of experimenting with ETL workflows.

  11. Simplified two and three dimensional HTTR benchmark problems

    SciTech Connect

    Zhan Zhang; Dingkang Zhang; Justin M. Pounders; Abderrafi M. Ougouag

    2011-05-01

    To assess the accuracy of diffusion or transport methods for reactor calculations, it is desirable to create heterogeneous benchmark problems that are typical of whole core configurations. In this paper we have created two and three dimensional numerical benchmark problems typical of high temperature gas cooled prismatic cores. Additionally, single-cell and single-block benchmark problems are also included. These problems were derived from the HTTR start-up experiment. Since the primary utility of the benchmark problems is in code-to-code verification, minor details regarding geometry and material specification of the original experiment have been simplified while retaining the heterogeneity and the major physics properties of the core from a neutronics viewpoint. A six-group material (macroscopic) cross section library has been generated for the benchmark problems using the lattice depletion code HELIOS. Using this library, Monte Carlo solutions are presented for three configurations (all-rods-in, partially-controlled and all-rods-out) for both the 2D and 3D problems. These solutions include the core eigenvalues, the block (assembly) averaged fission densities, local peaking factors, the absorption densities in the burnable poison and control rods, and pin fission density distribution for selected blocks. Also included are the solutions for the single-cell and single-block problems.

  12. Benchmarking neuromorphic vision: lessons learnt from computer vision.

    PubMed

    Tan, Cheston; Lallee, Stephane; Orchard, Garrick

    2015-01-01

    Neuromorphic Vision sensors have improved greatly since the first silicon retina was presented almost three decades ago. They have recently matured to the point where they are commercially available and can be operated by laymen. However, despite improved availability of sensors, there remains a lack of good datasets, while algorithms for processing spike-based visual data are still in their infancy. On the other hand, frame-based computer vision algorithms are far more mature, thanks in part to widely accepted datasets which allow direct comparison between algorithms and encourage competition. We are presented with a unique opportunity to shape the development of Neuromorphic Vision benchmarks and challenges by leveraging what has been learnt from the use of datasets in frame-based computer vision. Taking advantage of this opportunity, in this paper we review the role that benchmarks and challenges have played in the advancement of frame-based computer vision, and suggest guidelines for the creation of Neuromorphic Vision benchmarks and challenges. We also discuss the unique challenges faced when benchmarking Neuromorphic Vision algorithms, particularly when attempting to provide direct comparison with frame-based computer vision.

  13. Benchmarking of the FENDL-3 Neutron Cross-section Data Starter Library for Fusion Applications

    SciTech Connect

    Fischer, U.; Angelone, M.; Bohm, T.; Kondo, K.; Konno, C.; Sawan, M.; Villari, R.; Walker, B.

    2014-06-15

    This paper summarizes the benchmark analyses performed in a joint effort of ENEA (Italy), JAEA (Japan), KIT (Germany), and the University of Wisconsin (USA) on a computational ITER benchmark and a series of 14 MeV neutron benchmark experiments. The computational benchmark revealed a modest increase of the neutron flux levels in the deep penetration regions and a substantial increase of the gas production in steel components. The comparison to experimental results showed good agreement with no substantial differences between FENDL-3.0 and FENDL-2.1 for most of the responses. In general, FENDL-3 shows an improved performance for fusion neutronics applications.

  14. Criticality safety benchmark evaluation project: Recovering the past

    SciTech Connect

    Trumble, E.F.

    1997-06-01

    A very brief summary of the Criticality Safety Benchmark Evaluation Project of the Westinghouse Savannah River Company is provided in this paper. The purpose of the project is to provide a source of evaluated criticality safety experiments in an easily usable format. Another project goal is to search for any experiments that may have been lost or contain discrepancies, and to determine if they can be used. Results of evaluated experiments are being published as US DOE handbooks.

  15. Towards Systematic Benchmarking of Climate Model Performance

    NASA Astrophysics Data System (ADS)

    Gleckler, P. J.

    2014-12-01

    The process by which climate models are evaluated has evolved substantially over the past decade, with the Coupled Model Intercomparison Project (CMIP) serving as a centralizing activity for coordinating model experimentation and enabling research. Scientists with a broad spectrum of expertise have contributed to the CMIP model evaluation process, resulting in many hundreds of publications that have served as a key resource for the IPCC process. For several reasons, efforts are now underway to further systematize some aspects of the model evaluation process. First, some model evaluation can now be considered routine and should not require "re-inventing the wheel" or a journal publication simply to update results with newer models. Second, the benefit of CMIP research to model development has not been optimal because the publication of results generally takes several years and is usually not reproducible for benchmarking newer model versions. And third, there are now hundreds of model versions and many thousands of simulations, but there is no community-based mechanism for routinely monitoring model performance changes. An important change in the design of CMIP6 can help address these limitations. CMIP6 will include a small set of standardized experiments as an ongoing exercise (CMIP "DECK": ongoing Diagnostic, Evaluation and Characterization of Klima), so that modeling groups can submit them at any time and not be overly constrained by deadlines. In this presentation, efforts to establish routine benchmarking of existing and future CMIP simulations will be described. To date, some benchmarking tools have been made available to all CMIP modeling groups to enable them to readily compare with CMIP5 simulations during the model development process. A natural extension of this effort is to make results from all CMIP simulations widely available, including the results from newer models as soon as the simulations become available for research. Making the results from routine

  16. Benchmarking Evaluation Results for Prototype Extravehicular Activity Gloves

    NASA Technical Reports Server (NTRS)

    Aitchison, Lindsay; McFarland, Shane

    2012-01-01

    The Space Suit Assembly (SSA) Development Team at NASA Johnson Space Center has invested heavily in the advancement of rear-entry planetary exploration suit design but largely deferred development of extravehicular activity (EVA) glove designs, and accepted the risk of using the current flight gloves, Phase VI, for unique mission scenarios outside the Space Shuttle and International Space Station (ISS) Program realm of experience. However, as design reference missions mature, the risks of using heritage hardware have highlighted the need for developing robust new glove technologies. To address the technology gap, the NASA Game-Changing Technology group provided start-up funding for the High Performance EVA Glove (HPEG) Project in the spring of 2012. The overarching goal of the HPEG Project is to develop a robust glove design that increases human performance during EVA and creates a pathway for future implementation of emergent technologies, with specific aims of increasing pressurized mobility to 60% of barehanded capability, increasing the durability by 100%, and decreasing the potential of gloves to cause injury during use. The HPEG Project focused initial efforts on identifying potential new technologies and benchmarking the performance of current state-of-the-art gloves to identify trends in design and fit, leading to the establishment of standards and metrics against which emerging technologies can be assessed at both the component and assembly levels. The first of the benchmarking tests evaluated the quantitative mobility performance and subjective fit of four prototype gloves developed by Flagsuit LLC, Final Frontier Designs, LLC Dover, and David Clark Company as compared to the Phase VI. All of the companies were asked to design and fabricate gloves to the same set of NASA-provided hand measurements (which corresponded to a single size of Phase VI glove) and focus their efforts on improving mobility in the metacarpal phalangeal and carpometacarpal joints. Four test

  17. Benchmarking clinical photography services in the NHS.

    PubMed

    Arbon, Giles

    2015-01-01

    Benchmarking is used in services across the National Health Service (NHS) using various benchmarking programs. Clinical photography services do not have a program in place and services have to rely on ad hoc surveys of other services. A trial benchmarking exercise was undertaken with 13 services in NHS Trusts. This highlights valuable data and comparisons that can be used to benchmark and improve services throughout the profession.

  18. Gatemon Benchmarking and Two-Qubit Operation

    NASA Astrophysics Data System (ADS)

    Casparis, Lucas; Larsen, Thorvald; Olsen, Michael; Petersson, Karl; Kuemmeth, Ferdinand; Krogstrup, Peter; Nygard, Jesper; Marcus, Charles

    Recent experiments have demonstrated superconducting transmon qubits with semiconductor nanowire Josephson junctions. These hybrid gatemon qubits utilize field effect tunability singular to semiconductors to allow complete qubit control using gate voltages, potentially a technological advantage over conventional flux-controlled transmons. Here, we present experiments with a two-qubit gatemon circuit. We characterize qubit coherence and stability and use randomized benchmarking to demonstrate single-qubit gate errors of ~0.5 % for all gates, including voltage-controlled Z rotations. We show coherent capacitive coupling between two gatemons and coherent SWAP operations. Finally, we perform a two-qubit controlled-phase gate with an estimated fidelity of ~91 %, demonstrating the potential of gatemon qubits for building scalable quantum processors. We acknowledge financial support from Microsoft Project Q and the Danish National Research Foundation.

  19. Benchmarking in Czech Higher Education: The Case of Schools of Economics

    ERIC Educational Resources Information Center

    Placek, Michal; Ochrana, František; Pucek, Milan

    2015-01-01

    This article describes the use of benchmarking in universities in the Czech Republic and academics' experiences with it. It is based on research conducted among academics from economics schools in Czech public and private universities. The results identified several issues regarding the utilisation and understanding of benchmarking in the Czech…

  20. How Benchmarking and Higher Education Came Together

    ERIC Educational Resources Information Center

    Levy, Gary D.; Ronco, Sharron L.

    2012-01-01

    This chapter introduces the concept of benchmarking and how higher education institutions began to use benchmarking for a variety of purposes. Here, benchmarking is defined as a strategic and structured approach whereby an organization compares aspects of its processes and/or outcomes to those of another organization or set of organizations to…

  1. Internal Quality Assurance Benchmarking. ENQA Workshop Report 20

    ERIC Educational Resources Information Center

    Blackstock, Douglas; Burquel, Nadine; Comet, Nuria; Kajaste, Matti; dos Santos, Sergio Machado; Marcos, Sandra; Moser, Marion; Ponds, Henri; Scheuthle, Harald; Sixto, Luis Carlos Velon

    2012-01-01

    The Internal Quality Assurance group of ENQA (IQA Group) has been organising a yearly seminar for its members since 2007. The main objective is to share experiences concerning the internal quality assurance of work processes in the participating agencies. The overarching theme of the 2011 seminar was how to use benchmarking as a tool for…

  2. BENCHMARKING ORTEC ISOTOPIC MEASUREMENTS AND CALCULATIONS

    SciTech Connect

    Dewberry, R.; Sigg, R.; Casella, V.; Bhatt, N.

    2008-09-29

    In these cases the ISOTOPIC analysis program is especially valuable because it allows a rapid, defensible, reproducible analysis of radioactive content without tedious and repetitive experimental measurement of γ-ray transmission through the sample and container at multiple photon energies. The ISOTOPIC analysis technique is also especially valuable in facility holdup measurements where the acquisition configuration does not fit the accepted generalized geometries where detector efficiencies have been solved exactly with good calculus. Generally in facility passive γ-ray holdup measurements the acquisition geometry is only approximately reproducible, and the sample (object) is an extensive glovebox or HEPA filter component. In these cases accuracy of analyses is rarely possible; however, demonstrating fissile Pu and U content within criticality safety guidelines yields valuable operating information. Demonstrating such content can be performed with broad assumptions and within broad factors (e.g. 2-8) of conservatism. The ISOTOPIC analysis program yields rapid, defensible analyses of content within acceptable uncertainty and within acceptable conservatism without extensive repetitive experimental measurements. In addition to transmission correction determinations based on the mass and composition of objects, the ISOTOPIC program performs finite geometry corrections based on object shape and dimensions. These geometry corrections are based upon finite element summation to approximate exact closed form calculus. In this report we provide several benchmark comparisons to the same technique provided by the Canberra In Situ Object Counting System (ISOCS) and to the finite thickness calculations described by Russo in reference 10. This report describes the benchmark comparisons we have performed to demonstrate and to document that the ISOTOPIC analysis program yields the results we claim to our customers.

  3. Benchmarking neuromorphic systems with Nengo

    PubMed Central

    Bekolay, Trevor; Stewart, Terrence C.; Eliasmith, Chris

    2015-01-01

    Nengo is a software package for designing and simulating large-scale neural models. Nengo is architected such that the same Nengo model can be simulated on any of several Nengo backends with few to no modifications. Backends translate a model to specific platforms, which include GPUs and neuromorphic hardware. Nengo also contains a large test suite that can be run with any backend and focuses primarily on functional performance. We propose that Nengo's large test suite can be used to benchmark neuromorphic hardware's functional performance and simulation speed in an efficient, unbiased, and future-proof manner. We implement four benchmark models and show that Nengo can collect metrics across five different backends that identify situations in which some backends perform more accurately or quickly. PMID:26539076
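
    The backend-swapping pattern the abstract describes looks roughly like the sketch below, built on the core Nengo API; the alternative backend named in the comment is a placeholder, and the RMSE line is only one example of a functional-performance metric.

      # Build a model once, then run it on an interchangeable backend simulator.
      import numpy as np
      import nengo

      with nengo.Network() as model:
          stim = nengo.Node(lambda t: np.sin(2 * np.pi * t))    # 1 Hz reference signal
          ens = nengo.Ensemble(n_neurons=100, dimensions=1)     # spiking population
          nengo.Connection(stim, ens)
          probe = nengo.Probe(ens, synapse=0.01)                # filtered decoded output

      Simulator = nengo.Simulator        # swap in a backend's Simulator class here
      with Simulator(model) as sim:
          sim.run(1.0)

      rmse = np.sqrt(np.mean((sim.data[probe][:, 0] - np.sin(2 * np.pi * sim.trange())) ** 2))
      print("functional accuracy (RMSE):", rmse)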

  4. New NAS Parallel Benchmarks Results

    NASA Technical Reports Server (NTRS)

    Yarrow, Maurice; Saphir, William; VanderWijngaart, Rob; Woo, Alex; Kutler, Paul (Technical Monitor)

    1997-01-01

    NPB2 (NAS (NASA Advanced Supercomputing) Parallel Benchmarks 2) is an implementation, based on Fortran and the MPI (message passing interface) message passing standard, of the original NAS Parallel Benchmark specifications. NPB2 programs are run with little or no tuning, in contrast to NPB vendor implementations, which are highly optimized for specific architectures. NPB2 results complement, rather than replace, NPB results. Because they have not been optimized by vendors, NPB2 implementations approximate the performance a typical user can expect for a portable parallel program on distributed memory parallel computers. Together these results provide an insightful comparison of the real-world performance of high-performance computers. New NPB2 features: New implementation (CG), new workstation class problem sizes, new serial sample versions, more performance statistics.

  5. RASSP Benchmark 4 Technical Description.

    DTIC Science & Technology

    1998-01-09

    of both application and VHDL code. 5.3.4.1 Lines of Code. The lines of code for each application and VHDL source file shall be reported. This... Developer shall provide source files for the VHDL files used in defining the Virtual Prototype as well as in programming the FPGAs. Benchmark-4... programmable devices running application code written in a high-level source language such as C, except that more detailed models may be required to

  6. MPI Multicore Torus Communication Benchmark

    SciTech Connect

    Schulz, M.

    2008-02-05

    The MPI Multicore Torus Communications Benchmark (TorusTest) measures the aggregate bandwidth across all six links from/to any multicore node in a logical torus. It can run in two modes: using a static or a random mapping of tasks to torus locations. The former can be used to achieve optimal mappings, while the latter samples the aggregate bandwidths that can be achieved with varying node mappings.

  7. Restaurant Energy Use Benchmarking Guideline

    SciTech Connect

    Hedrick, R.; Smith, V.; Field, K.

    2011-07-01

    A significant operational challenge for food service operators is defining energy use benchmark metrics to compare against the performance of individual stores. Without metrics, multiunit operators and managers have difficulty identifying which stores in their portfolios require extra attention to bring their energy performance in line with expectations. This report presents a method whereby multiunit operators may use their own utility data to create suitable metrics for evaluating their operations.
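
    The following minimal Python sketch illustrates the general idea of benchmarking stores from utility data by normalizing energy use against a business driver and flagging outliers; the store data, the kWh-per-transaction metric, and the 25% review threshold are invented for illustration and are not the report's method.

        # Hypothetical example: benchmark stores by energy use per transaction.
        # Store data, metric choice, and the 25% review threshold are invented.
        import statistics

        stores = {
            "store_A": {"kwh": 42_000, "transactions": 30_000},
            "store_B": {"kwh": 55_000, "transactions": 31_000},
            "store_C": {"kwh": 39_000, "transactions": 28_000},
        }

        # Normalize: kWh per transaction is one possible benchmarking metric.
        intensity = {s: v["kwh"] / v["transactions"] for s, v in stores.items()}
        median = statistics.median(intensity.values())

        for store, kwh_per_txn in sorted(intensity.items(), key=lambda kv: kv[1]):
            flag = "REVIEW" if kwh_per_txn > 1.25 * median else "ok"
            print(f"{store}: {kwh_per_txn:.2f} kWh/transaction ({flag})")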

  8. Performance Benchmarking Tsunami Models for NTHMP's Inundation Mapping Activities

    NASA Astrophysics Data System (ADS)

    Horrillo, Juan; Grilli, Stéphan T.; Nicolsky, Dmitry; Roeber, Volker; Zhang, Joseph

    2015-03-01

    The coastal states and territories of the United States (US) are vulnerable to devastating tsunamis from near-field or far-field coseismic and underwater/subaerial landslide sources. Following the catastrophic 2004 Indian Ocean tsunami, the National Tsunami Hazard Mitigation Program (NTHMP) accelerated the development of public safety products for the mitigation of these hazards. In response to this initiative, US coastal states and territories sped up the process of developing/enhancing/adopting tsunami models that can be used for developing inundation maps and evacuation plans. One of NTHMP's requirements is that all operational and inundation-based numerical (O&I) models used for such purposes be properly validated against established standards to ensure the reliability of tsunami inundation maps as well as to achieve a basic level of consistency between parallel efforts. The validation of several O&I models was considered during a workshop held in 2011 at Texas A&M University (Galveston). This validation was performed based on the existing standard (OAR-PMEL-135), which provides a list of benchmark problems (BPs) covering various tsunami processes that models must meet to be deemed acceptable. Here, we summarize key approaches followed, results, and conclusions of the workshop. Eight distinct tsunami models were validated and cross-compared by using a subset of the BPs listed in the OAR-PMEL-135 standard. Of the several BPs available, only two based on laboratory experiments are detailed here for the sake of brevity, since they are considered sufficiently comprehensive. Average relative errors associated with expected parameter values such as maximum surface amplitude/runup are estimated. The level of agreement with the reference data, reasons for discrepancies between model results, and some of the limitations are discussed. In general, dispersive models were found to perform better than nondispersive models, but differences were relatively small, in part

  9. Non-Markovianity in Randomized Benchmarking

    NASA Astrophysics Data System (ADS)

    Ball, Harrison; Stace, Tom M.; Biercuk, Michael J.

    2015-03-01

    Randomized benchmarking is routinely employed to recover information about the fidelity of a quantum operation by exploiting probabilistic twirling errors over an implementation of the Clifford group. Standard assumptions of Markovianity in the underlying noise environment, however, remain at odds with realistic, correlated noise encountered in real systems. We model single-qubit randomized benchmarking experiments as a sequence of ideal Clifford operations interleaved with stochastic dephasing errors, implemented as unitary rotations about σz. Successive error rotations map to a sequence of random variables whose correlations introduce non-Markovian effects emulating realistic colored-noise environments. The Markovian limit is recovered by turning off all correlations, reducing each error to an independent Gaussian-distributed random variable. We examine the dependence of the statistical distribution of fidelity outcomes on these noise correlations, deriving analytic expressions for probability density functions and related statistics for relevant fidelity metrics. This enables us to characterize and bear out the distinction between the Markovian and non-Markovian cases, with implications for interpretation and handling of experimental data.
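
    The following Python sketch is only a loose numerical caricature of the setup described above: it ignores the Clifford twirl entirely and simply accumulates small dephasing angles along a sequence, comparing independent Gaussian errors (the Markovian limit) with AR(1)-correlated errors. The survival probability cos²(Θ/2) of an equatorial state under a net z-rotation Θ is used as a stand-in fidelity, and all parameters are arbitrary.

        # Loose caricature, not the paper's model: compare a stand-in "fidelity"
        # when dephasing-angle errors are i.i.d. (Markovian) vs. AR(1)-correlated.
        import numpy as np

        rng = np.random.default_rng(0)
        n_seq, length, sigma, phi = 5000, 100, 0.02, 0.9   # arbitrary parameters

        def total_error_angle(correlated: bool) -> np.ndarray:
            """Net z-rotation error angle accumulated over each sequence realization."""
            if not correlated:                  # Markovian limit: independent errors
                errs = rng.normal(0.0, sigma, size=(n_seq, length))
            else:                               # AR(1)-correlated errors, same marginal std
                errs = np.zeros((n_seq, length))
                errs[:, 0] = rng.normal(0.0, sigma, n_seq)
                innov = sigma * (1.0 - phi ** 2) ** 0.5
                for k in range(1, length):
                    errs[:, k] = phi * errs[:, k - 1] + rng.normal(0.0, innov, n_seq)
            return errs.sum(axis=1)

        for label, corr in [("Markovian", False), ("correlated", True)]:
            theta = total_error_angle(corr)
            fidelity = np.cos(theta / 2.0) ** 2  # survival prob. of an equatorial state
            print(f"{label:10s}: mean={fidelity.mean():.4f}, std={fidelity.std():.4f}")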

  10. Analyzing the BBOB results by means of benchmarking concepts.

    PubMed

    Mersmann, O; Preuss, M; Trautmann, H; Bischl, B; Weihs, C

    2015-01-01

    We present methods to answer two basic questions that arise when benchmarking optimization algorithms. The first one is: which algorithm is the "best" one? and the second one is: which algorithm should I use for my real-world problem? Both are connected and neither is easy to answer. We present a theoretical framework for designing and analyzing the raw data of such benchmark experiments. This represents a first step in answering the aforementioned questions. The 2009 and 2010 BBOB benchmark results are analyzed by means of this framework and we derive insight regarding the answers to the two questions. Furthermore, we discuss how to properly aggregate rankings from algorithm evaluations on individual problems into a consensus, its theoretical background and which common pitfalls should be avoided. Finally, we address the grouping of test problems into sets with similar optimizer rankings and investigate whether these are reflected by already proposed test problem characteristics, finding that this is not always the case.
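
    As a toy illustration of the ranking-aggregation step discussed above, the following Python sketch turns per-problem scores into ranks and combines them with a simple mean-rank (Borda-style) consensus; the data and the choice of mean rank as the consensus rule are illustrative assumptions, not the authors' procedure.

        # Hypothetical scores (lower is better) of three optimizers on four problems;
        # mean rank is used as a simple Borda-style consensus.
        import numpy as np

        algorithms = ["algo_A", "algo_B", "algo_C"]
        scores = np.array([        # rows: problems, columns: algorithms
            [1.2, 0.9, 1.5],
            [0.4, 0.5, 0.3],
            [2.0, 1.8, 1.9],
            [0.7, 0.6, 0.9],
        ])

        # Rank the algorithms on each problem (1 = best), then average across problems.
        ranks = scores.argsort(axis=1).argsort(axis=1) + 1
        mean_rank = ranks.mean(axis=0)

        for name, r in sorted(zip(algorithms, mean_rank), key=lambda kv: kv[1]):
            print(f"{name}: mean rank {r:.2f}")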

  11. Thermal Performance Benchmarking: Annual Report

    SciTech Connect

    Moreno, Gilbert

    2016-04-08

    The goal for this project is to thoroughly characterize the performance of state-of-the-art (SOA) automotive power electronics and electric motor thermal management systems. Information obtained from these studies will be used to: Evaluate advantages and disadvantages of different thermal management strategies; establish baseline metrics for the thermal management systems; identify methods of improvement to advance the SOA; increase the publicly available information related to automotive traction-drive thermal management systems; help guide future electric drive technologies (EDT) research and development (R&D) efforts. The performance results combined with component efficiency and heat generation information obtained by Oak Ridge National Laboratory (ORNL) may then be used to determine the operating temperatures for the EDT components under drive-cycle conditions. In FY15, the 2012 Nissan LEAF power electronics and electric motor thermal management systems were benchmarked. Testing of the 2014 Honda Accord Hybrid power electronics thermal management system started in FY15; however, due to time constraints it was not possible to include results for this system in this report. The focus of this project is to benchmark the thermal aspects of the systems. ORNL's benchmarking of electric and hybrid electric vehicle technology reports provide detailed descriptions of the electrical and packaging aspects of these automotive systems.

  12. An introduction to benchmarking in healthcare.

    PubMed

    Benson, H R

    1994-01-01

    Benchmarking--the process of establishing a standard of excellence and comparing a business function or activity, a product, or an enterprise as a whole with that standard--will be used increasingly by healthcare institutions to reduce expenses and simultaneously improve product and service quality. As a component of total quality management, benchmarking is a continuous process by which an organization can measure and compare its own processes with those of organizations that are leaders in a particular area. Benchmarking should be viewed as a part of quality management programs, not as a replacement. There are four kinds of benchmarking: internal, competitive, functional and generic. With internal benchmarking, functions within an organization are compared with each other. Competitive benchmarking partners do business in the same market and provide a direct comparison of products or services. Functional and generic benchmarking are performed with organizations which may have a specific similar function, such as payroll or purchasing, but which otherwise are in a different business. Benchmarking must be a team process because the outcome will involve changing current practices, with effects felt throughout the organization. The team should include members who have subject knowledge; communications and computer proficiency; skills as facilitators and outside contacts; and sponsorship of senior management. Benchmarking requires quantitative measurement of the subject. The process or activity that you are attempting to benchmark will determine the types of measurements used. Benchmarking metrics usually can be classified in one of four categories: productivity, quality, time and cost-related.

  13. Benchmarking analogue models of brittle thrust wedges

    NASA Astrophysics Data System (ADS)

    Schreurs, Guido; Buiter, Susanne J. H.; Boutelier, Jennifer; Burberry, Caroline; Callot, Jean-Paul; Cavozzi, Cristian; Cerca, Mariano; Chen, Jian-Hong; Cristallini, Ernesto; Cruden, Alexander R.; Cruz, Leonardo; Daniel, Jean-Marc; Da Poian, Gabriela; Garcia, Victor H.; Gomes, Caroline J. S.; Grall, Céline; Guillot, Yannick; Guzmán, Cecilia; Hidayah, Triyani Nur; Hilley, George; Klinkmüller, Matthias; Koyi, Hemin A.; Lu, Chia-Yu; Maillot, Bertrand; Meriaux, Catherine; Nilfouroushan, Faramarz; Pan, Chang-Chih; Pillot, Daniel; Portillo, Rodrigo; Rosenau, Matthias; Schellart, Wouter P.; Schlische, Roy W.; Take, Andy; Vendeville, Bruno; Vergnaud, Marine; Vettori, Matteo; Wang, Shih-Hsien; Withjack, Martha O.; Yagupsky, Daniel; Yamada, Yasuhiro

    2016-11-01

    We performed a quantitative comparison of brittle thrust wedge experiments to evaluate the variability among analogue models and to appraise the reproducibility and limits of model interpretation. Fifteen analogue modeling laboratories participated in this benchmark initiative. Each laboratory received a shipment of the same type of quartz and corundum sand and all laboratories adhered to a stringent model building protocol and used the same type of foil to cover base and sidewalls of the sandbox. Sieve structure, sifting height, filling rate, and details on off-scraping of excess sand followed prescribed procedures. Our analogue benchmark shows that even for simple plane-strain experiments with prescribed stringent model construction techniques, quantitative model results show variability, most notably for surface slope, thrust spacing and number of forward and backthrusts. One of the sources of the variability in model results is related to slight variations in how sand is deposited in the sandbox. Small changes in sifting height, sifting rate, and scraping will result in slightly heterogeneous material bulk densities, which will affect the mechanical properties of the sand, and will result in lateral and vertical differences in peak and boundary friction angles, as well as cohesion values once the model is constructed. Initial variations in basal friction are inferred to play the most important role in causing model variability. Our comparison shows that the human factor plays a decisive role, and even when one modeler repeats the same experiment, quantitative model results still show variability. Our observations highlight the limits of up-scaling quantitative analogue model results to nature or for making comparisons with numerical models. The frictional behavior of sand is highly sensitive to small variations in material state or experimental set-up, and hence, it will remain difficult to scale quantitative results such as number of thrusts, thrust spacing

  14. The First Benchmarking of ITER BR Nb3Sn Strand of CNDA

    NASA Astrophysics Data System (ADS)

    Long, Feng; Liu, Fang; Wu, Yu; Ni, Zhipeng

    2012-09-01

    According to the International Thermonuclear Experimental Reactor (ITER) Procurement Arrangement (PA) of Cable-In-Conduit Conductor (CICC) unit lengths for the Toroidal Field (TF) and Poloidal Field (PF) magnet systems of ITER, at the start of process qualification, the Domestic Agency (DA) shall be required to conduct a benchmarking of the room and low temperature acceptance tests carried out at the Strand Suppliers and/or at its Reference Laboratories designated by the ITER Organization (IO). The first benchmarking was carried out successfully in 2009. Nineteen participants from six DAs (China, European Union, Japan, South Korea, Russia, and the United States) participated in the first benchmarking. Bronze-route (BR) Nb3Sn strand and samples prepared by the ITER reference lab (CERN) were sent out to each participant by CERN. In this paper, the test facility and test results of the first benchmarking by the Chinese DA (CNDA) are presented.

  15. Successful database benchmarking: what do we need?

    PubMed

    Backiel, R B

    2001-01-01

    As the technology of on-line healthcare information advances, hospitals and data vendors are faced with a variety of challenges. What are the accepted standard fields? What kind of DSS system should we use? Which system will give us the information we need? Does the server have enough space to handle increased business? Healthcare organizations are now looking at comparative information through the Internet instead of buying data and loading it onto their own servers. They are asking: Are servers necessary now? Is the software user-friendly? How current and accurate is the information being offered? How secure is the Web site it is on? Data vendors are asking: Is our server large enough to handle the volume of data we now have and as the company grows? How do we make sure the data are accurate? How do we keep the data secure? This article educates and informs healthcare facilities about the factors that should be considered when comparing their own data with those of other hospitals in an on-line benchmarking database warehouse.

  16. Offer/Acceptance Ratio.

    ERIC Educational Resources Information Center

    Collins, Mimi

    1997-01-01

    Explores how human resource professionals, with above average offer/acceptance ratios, streamline their recruitment efforts. Profiles company strategies with internships, internal promotion, cooperative education programs, and how to get candidates to accept offers. Also discusses how to use the offer/acceptance ratio as a measure of program…

  17. Gaia FGK benchmark stars: Metallicity

    NASA Astrophysics Data System (ADS)

    Jofré, P.; Heiter, U.; Soubiran, C.; Blanco-Cuaresma, S.; Worley, C. C.; Pancino, E.; Cantat-Gaudin, T.; Magrini, L.; Bergemann, M.; González Hernández, J. I.; Hill, V.; Lardo, C.; de Laverny, P.; Lind, K.; Masseron, T.; Montes, D.; Mucciarelli, A.; Nordlander, T.; Recio Blanco, A.; Sobeck, J.; Sordo, R.; Sousa, S. G.; Tabernero, H.; Vallenari, A.; Van Eck, S.

    2014-04-01

    Context. To calibrate automatic pipelines that determine atmospheric parameters of stars, one needs a sample of stars, or "benchmark stars", with well-defined parameters to be used as a reference. Aims: We provide detailed documentation of the iron abundance determination of the 34 FGK-type benchmark stars that are selected to be the pillars for calibration of the one billion Gaia stars. They cover a wide range of temperatures, surface gravities, and metallicities. Methods: Up to seven different methods were used to analyze an observed spectral library of high resolution and high signal-to-noise ratio. The metallicity was determined by assuming a value of effective temperature and surface gravity obtained from fundamental relations; that is, these parameters were known a priori and independently from the spectra. Results: We present a set of metallicity values obtained in a homogeneous way for our sample of benchmark stars. In addition to this value, we provide detailed documentation of the associated uncertainties. Finally, we report a value of the metallicity of the cool giant ψ Phe for the first time. Based on NARVAL and HARPS data obtained within the Gaia DPAC (Data Processing and Analysis Consortium) and coordinated by the GBOG (Ground-Based Observations for Gaia) working group and on data retrieved from the ESO-ADP database. Tables 6-76 are only available at the CDS via anonymous ftp to http://cdsarc.u-strasbg.fr (ftp://130.79.128.5) or via http://cdsarc.u-strasbg.fr/viz-bin/qcat?J/A+A/564/A133

  18. NASA Software Engineering Benchmarking Effort

    NASA Technical Reports Server (NTRS)

    Godfrey, Sally; Rarick, Heather

    2012-01-01

    Benchmarking was very interesting and provided a wealth of information: (1) we did see potential solutions to some of our "top 10" issues; (2) we have an assessment of where NASA stands in relation to other aerospace/defense groups. We formed new contacts and potential collaborations: (1) several organizations sent us examples of their templates and processes; (2) many of the organizations were interested in future collaboration, such as sharing of training, metrics, Capability Maturity Model Integration (CMMI) appraisers, instructors, etc. We received feedback from some of our contractors/partners: (1) desires to participate in our training and to provide feedback on procedures; (2) they welcomed the opportunity to provide feedback on working with NASA.

  19. [Benchmarking in health care: conclusions and recommendations].

    PubMed

    Geraedts, Max; Selbmann, Hans-Konrad

    2011-01-01

    The German Health Ministry funded 10 demonstration projects and accompanying research of benchmarking in health care. The accompanying research work aimed to infer generalisable findings and recommendations. We performed a meta-evaluation of the demonstration projects and analysed national and international approaches to benchmarking in health care. It was found that the typical benchmarking sequence is hardly ever realised. Most projects lack a detailed analysis of structures and processes of the best performers as a starting point for the process of learning from and adopting best practice. To tap the full potential of benchmarking in health care, participation in voluntary benchmarking projects should be promoted that have been demonstrated to follow all the typical steps of a benchmarking process.

  20. Benchmarking pathology services: implementing a longitudinal study.

    PubMed

    Gordon, M; Holmes, S; McGrath, K; Neil, A

    1999-05-01

    This paper details the benchmarking process and its application to the activities of pathology laboratories participating in a benchmark pilot study [the Royal College of Pathologists of Australasia (RCPA) Benchmarking Project]. The discussion highlights the primary issues confronted in collecting, processing, analysing and comparing benchmark data. The paper outlines the benefits of engaging in a benchmarking exercise and provides a framework which can be applied across a range of public health settings. This information is then applied to a review of the development of the RCPA Benchmarking Project. Consideration is also given to the nature of the preliminary results of the project and the implications of these results for the on-going conduct of the study.

  1. Pynamic: the Python Dynamic Benchmark

    SciTech Connect

    Lee, G L; Ahn, D H; de Supinksi, B R; Gyllenhaal, J C; Miller, P J

    2007-07-10

    Python is widely used in scientific computing to facilitate application development and to support features such as computational steering. Making full use of some of Python's popular features, which improve programmer productivity, leads to applications that access extremely high numbers of dynamically linked libraries (DLLs). As a result, some important Python-based applications severely stress a system's dynamic linking and loading capabilities and also cause significant difficulties for most development environment tools, such as debuggers. Furthermore, using the Python paradigm for large scale MPI-based applications can create significant file I/O and further stress tools and operating systems. In this paper, we present Pynamic, the first benchmark program to support configurable emulation of a wide range of the DLL usage of Python-based applications for large scale systems. Pynamic has already accurately reproduced system software and tool issues encountered by important large Python-based scientific applications on our supercomputers. Pynamic provided insight for our system software and tool vendors, and our application developers, into the impact of several design decisions. As we describe the Pynamic benchmark, we will highlight some of the issues discovered in our large scale system software and tools using Pynamic.

  2. BENCHMARK DOSE TECHNICAL GUIDANCE DOCUMENT ...

    EPA Pesticide Factsheets

    The U.S. EPA conducts risk assessments for an array of health effects that may result from exposure to environmental agents, and that require an analysis of the relationship between exposure and health-related outcomes. The dose-response assessment is essentially a two-step process, the first being the definition of a point of departure (POD), and the second extrapolation from the POD to low environmentally-relevant exposure levels. The benchmark dose (BMD) approach provides a more quantitative alternative to the first step in the dose-response assessment than the current NOAEL/LOAEL process for noncancer health effects, and is similar to that for determining the POD proposed for cancer endpoints. As the Agency moves toward harmonization of approaches for human health risk assessment, the dichotomy between cancer and noncancer health effects is being replaced by consideration of mode of action and whether the effects of concern are likely to be linear or nonlinear at low doses. Thus, the purpose of this project is to provide guidance for the Agency and the outside community on the application of the BMD approach in determining the POD for all types of health effects data, whether a linear or nonlinear low dose extrapolation is used. A guidance document is being developed under the auspices of EPA's Risk Assessment Forum. The purpose of this project is to provide guidance for the Agency and the outside community on the application of the benchmark dose (BMD) approach.
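
    A minimal Python sketch of the general idea behind a benchmark dose calculation follows: fit a quantal dose-response model and solve for the dose that gives a chosen benchmark response in extra risk. The logistic model, the synthetic data, and the 10% benchmark response are illustrative assumptions; the guidance itself covers many model forms, maximum-likelihood fitting, confidence limits (the BMDL), and model selection criteria that are not shown here.

        # Illustrative only: least-squares fit of a two-parameter logistic model to
        # made-up quantal data, then solve for the benchmark dose at 10% extra risk.
        # Real BMD analyses use maximum-likelihood fits, several model forms, and a
        # lower confidence limit (BMDL); none of that is shown here.
        import numpy as np
        from scipy.optimize import brentq, curve_fit

        doses = np.array([0.0, 5.0, 10.0, 25.0, 50.0])
        frac  = np.array([2, 4, 8, 19, 34]) / 50.0      # responders out of 50 animals

        def logistic(d, a, b):
            """P(response | dose d) for a simple logistic dose-response model."""
            return 1.0 / (1.0 + np.exp(-(a + b * d)))

        (a_hat, b_hat), _ = curve_fit(logistic, doses, frac, p0=(-3.0, 0.1))

        p_bg = logistic(0.0, a_hat, b_hat)              # background response rate
        extra_risk = lambda d: (logistic(d, a_hat, b_hat) - p_bg) / (1.0 - p_bg)

        bmr = 0.10                                      # benchmark response: 10% extra risk
        bmd = brentq(lambda d: extra_risk(d) - bmr, 1e-6, doses.max())
        print(f"fitted a={a_hat:.3f}, b={b_hat:.3f}, BMD10 ≈ {bmd:.1f} dose units")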

  3. Benchmarking for Excellence and the Nursing Process

    NASA Technical Reports Server (NTRS)

    Sleboda, Claire

    1999-01-01

    Nursing is a service profession. The services provided are essential to life and welfare. Therefore, setting the benchmark for high quality care is fundamental. Exploring the definition of a benchmark value will help to determine a best practice approach. A benchmark is the descriptive statement of a desired level of performance against which quality can be judged. It must be sufficiently well understood by managers and personnel in order that it may serve as a standard against which to measure value.

  4. Computational Chemistry Comparison and Benchmark Database

    National Institute of Standards and Technology Data Gateway

    SRD 101 NIST Computational Chemistry Comparison and Benchmark Database (Web, free access)   The NIST Computational Chemistry Comparison and Benchmark Database is a collection of experimental and ab initio thermochemical properties for a selected set of molecules. The goals are to provide a benchmark set of molecules for the evaluation of ab initio computational methods and allow the comparison between different ab initio computational methods for the prediction of thermochemical properties.

  5. Method and system for benchmarking computers

    DOEpatents

    Gustafson, John L.

    1993-09-14

    A testing system and method for benchmarking computer systems. The system includes a store containing a scalable set of tasks to be performed to produce a solution in ever-increasing degrees of resolution as a larger number of the tasks are performed. A timing and control module allots to each computer a fixed benchmarking interval in which to perform the stored tasks. Means are provided for determining, after completion of the benchmarking interval, the degree of progress through the scalable set of tasks and for producing a benchmarking rating relating to the degree of progress for each computer.
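
    The abstract describes a fixed-time benchmarking idea: give every machine the same time budget and score how far it progresses through an ever-finer-resolution task set. The following Python sketch imitates that idea with an arbitrary workload (repeatedly refining a trapezoid-rule integral); the workload, interval length, and scoring are illustrative assumptions, not the patented method.

        # Illustrative fixed-time benchmark: how many refinement levels of an arbitrary
        # task (a trapezoid-rule integral) complete within the allotted interval?
        # The workload and the 2-second budget stand in for the patent's scalable task set.
        import time

        def trapezoid(f, a, b, n):
            h = (b - a) / n
            return h * (0.5 * f(a) + 0.5 * f(b) + sum(f(a + i * h) for i in range(1, n)))

        def fixed_time_benchmark(interval_s=2.0):
            f = lambda x: x ** 0.5
            deadline = time.perf_counter() + interval_s
            level, n = 0, 2
            while time.perf_counter() < deadline:
                trapezoid(f, 0.0, 1.0, n)   # solve the task at ever-increasing resolution
                level += 1
                n *= 2                      # double the resolution for the next level
            return level                    # degree of progress = the benchmark rating

        print("levels completed in the interval:", fixed_time_benchmark())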

  6. INTEGRAL BENCHMARK DATA FOR NUCLEAR DATA TESTING THROUGH THE ICSBEP AND THE NEWLY ORGANIZED IRPHEP

    SciTech Connect

    J. Blair Briggs; Lori Scott; Yolanda Rugama; Enrico Satori

    2007-04-01

    The status of the International Criticality Safety Benchmark Evaluation Project (ICSBEP) was last reported in a nuclear data conference at the International Conference on Nuclear Data for Science and Technology, ND-2004, in Santa Fe, New Mexico. Since that time the number and type of integral benchmarks have increased significantly. Included in the ICSBEP Handbook are criticality-alarm/shielding and fundamental physics benchmarks in addition to the traditional critical/subcritical benchmark data. Since ND 2004, a reactor physics counterpart to the ICSBEP, the International Reactor Physics Experiment Evaluation Project (IRPhEP), was initiated. The IRPhEP is patterned after the ICSBEP, but focuses on other integral measurements, such as buckling, spectral characteristics, reactivity effects, reactivity coefficients, kinetics measurements, reaction-rate and power distributions, nuclide compositions, and other miscellaneous-type measurements in addition to the critical configuration. The status of these two projects is discussed and selected benchmarks highlighted in this paper.

  7. The Impact of Previous Schooling Experiences on a Quaker High School's Graduating Students' College Entrance Exam Scores, Parents' Expectations, and College Acceptance Outcomes

    ERIC Educational Resources Information Center

    Galusha, Debbie K.

    2010-01-01

    The purpose of the study is to determine the impact of previous private, public, home, or international schooling experiences on a Quaker high school's graduating students' college entrance composite exam scores, parents' expectations, and college attendance outcomes. The study's results suggest that regardless of previous private, public, home,…

  8. Nuclear Data Performance Testing Using Sensitive, but Less Frequently Used ICSBEP Benchmarks

    SciTech Connect

    J. Blair Briggs; John D. Bess

    2011-08-01

    The International Criticality Safety Benchmark Evaluation Project (ICSBEP) has published the International Handbook of Evaluated Criticality Safety Benchmark Experiments annually since 1995. The Handbook now spans over 51,000 pages with benchmark specifications for 4,283 critical, near critical, or subcritical configurations; 24 criticality alarm placement/shielding configurations with multiple dose points for each; and 200 configurations that have been categorized as fundamental physics measurements relevant to criticality safety applications. Benchmark data in the ICSBEP Handbook were originally intended for validation of criticality safety methods and data; however, the benchmark specifications are now used extensively for nuclear data testing. There are several, less frequently used benchmarks within the Handbook that are very sensitive to thorium and certain key structural and moderating materials. Calculated results for many of those benchmarks using modern nuclear data libraries suggest there is still room for improvement. These and other highly sensitive, but rarely quoted benchmarks are highlighted and data testing results provided using the Monte Carlo N-Particle Version 5 (MCNP5) code and continuous energy ENDF/B-V, VI.8, and VII.0, JEFF-3.1, and JENDL-3.3 nuclear data libraries.

  9. Sieve of Eratosthenes benchmarks for the Z8 FORTH microcontroller

    SciTech Connect

    Edwards, R.

    1989-02-01

    This report presents benchmarks for the Z8 FORTH microcontroller system that ORNL uses extensively in proving concepts and developing prototype test equipment for the Smart House Project. The results are based on the sieve of Eratosthenes algorithm, a calculation used extensively to rate computer systems and programming languages. Three benchmark refinements are presented, each showing how the execution speed of a FORTH program can be improved by use of a particular optimization technique. The last version of the FORTH benchmark shows that optimization is worth the effort: it executes 20 times faster than the Gilbreaths' widely-published FORTH benchmark program. The National Association of Home Builders Smart House Project is a cooperative research and development effort being undertaken by American home builders and a number of major corporations serving the home building industry. The major goal of the project is to help the participating organizations incorporate advanced technology in communications, energy distribution, and appliance control products for American homes. This information is provided to help project participants use the Z8 FORTH prototyping microcontroller in developing Smart House concepts and equipment. The discussion is technical in nature and assumes some experience with microcontroller devices and the techniques used to develop software for them. 7 refs., 5 tabs.
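
    For readers unfamiliar with the calculation being benchmarked, a plain Python version of the sieve of Eratosthenes with simple wall-clock timing is given below; it is only a reference implementation of the algorithm, not a port of the Z8 FORTH benchmark or of the Gilbreath program mentioned above.

        # Reference sieve of Eratosthenes with rough timing; not the FORTH benchmark itself.
        import time

        def sieve(limit: int) -> int:
            """Return the number of primes strictly below `limit`."""
            is_prime = bytearray([1]) * limit
            is_prime[0:2] = b"\x00\x00"
            for p in range(2, int(limit ** 0.5) + 1):
                if is_prime[p]:
                    is_prime[p * p::p] = bytearray(len(range(p * p, limit, p)))
            return sum(is_prime)

        start = time.perf_counter()
        count = sieve(8192)                 # array size chosen for illustration only
        elapsed = time.perf_counter() - start
        print(f"{count} primes below 8192 in {elapsed * 1e3:.2f} ms")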

  10. LIMS user acceptance testing.

    PubMed

    Klein, Corbett S

    2003-01-01

    Laboratory Information Management Systems (LIMS) play a key role in the pharmaceutical industry. Thorough and accurate validation of such systems is critical and is a regulatory requirement. LIMS user acceptance testing is one aspect of this testing and enables the user to make a decision to accept or reject implementation of the system. This paper discusses key elements in facilitating the development and execution of a LIMS User Acceptance Test Plan (UATP).

  11. Reconceptualizing Benchmarks for Residency Training

    PubMed Central

    2017-01-01

    Postgraduate medical education (PGME) is currently transitioning to a competency-based framework. This model clarifies the desired outcome of residency training - competence. However, since the popularization of Ericsson's work on the effect of time and deliberate practice on performance level, his findings have been applied in some areas of residency training. Though this may be grounded in a noble effort to maximize patient well-being, it imposes unrealistic expectations on trainees. This work aims to demonstrate the fundamental flaws of this application and therefore the lack of validity in using Ericsson's work to develop training benchmarks at the postgraduate level as well as expose potential harms in doing so.

  12. Benchmarking Multipacting Simulations in VORPAL

    SciTech Connect

    C. Nieter, C. Roark, P. Stoltz, K. Tian

    2009-05-01

    We will present the results of benchmarking simulations run to test the ability of VORPAL to model multipacting processes in Superconducting Radio Frequency structures. VORPAL is an electromagnetic (FDTD) particle-in-cell simulation code originally developed for applications in plasma and beam physics. The addition of conformal boundaries and algorithms for secondary electron emission allow VORPAL to be applied to multipacting processes. We start with simulations of multipacting between parallel plates where there are well understood theoretical predictions for the frequency bands where multipacting is expected to occur. We reproduce the predicted multipacting bands and demonstrate departures from the theoretical predictions when a more sophisticated model of secondary emission is used. Simulations of existing cavity structures developed at Jefferson National Laboratories will also be presented where we compare results from VORPAL to experimental data.

  13. Benchmarking ICRF simulations for ITER

    SciTech Connect

    R. V. Budny, L. Berry, R. Bilato, P. Bonoli, M. Brambilla, R.J. Dumont, A. Fukuyama, R. Harvey, E.F. Jaeger, E. Lerche, C.K. Phillips, V. Vdovin, J. Wright, and members of the ITPA-IOS

    2010-09-28

    Benchmarking of full-wave solvers for ICRF simulations is performed using plasma profiles and equilibria obtained from integrated self-consistent modeling predictions of four ITER plasmas. One is for a high performance baseline (5.3 T, 15 MA) DT H-mode plasma. The others are for half-field, half-current plasmas of interest for the pre-activation phase with bulk plasma ion species being either hydrogen or He4. The predicted profiles are used by seven groups to predict the ICRF electromagnetic fields and heating profiles. Approximate agreement is achieved for the predicted heating power partitions for the DT and He4 cases. Profiles of the heating powers and electromagnetic fields are compared.

  14. Benchmark Evaluation of Fuel Effect and Material Worth Measurements for a Beryllium-Reflected Space Reactor Mockup

    SciTech Connect

    Marshall, Margaret A.; Bess, John D.

    2015-02-01

    The critical configurations of the small, compact critical assembly (SCCA) experiments performed at the Oak Ridge Critical Experiments Facility (ORCEF) in 1962-1965 have been evaluated as acceptable benchmark experiments for inclusion in the International Handbook of Evaluated Criticality Safety Benchmark Experiments. The initial intent of these experiments was to support the design of the Medium Power Reactor Experiment (MPRE) program, whose purpose was to study “power plants for the production of electrical power in space vehicles.” The third configuration in this series of experiments was a beryllium-reflected assembly of stainless-steel-clad, highly enriched uranium (HEU)-O2 fuel, a mockup of a potassium-cooled space power reactor. Reactivity measurements, cadmium ratio spectral measurements, and fission rate measurements were performed through the core and top reflector. Fuel effect worth measurements and neutron moderating and absorbing material worths were also measured in the assembly fuel region. The cadmium ratios, fission rates, and worth measurements were evaluated for inclusion in the International Handbook of Evaluated Criticality Safety Benchmark Experiments. The fuel tube effect and neutron moderating and absorbing material worth measurements are the focus of this paper. Additionally, a measurement of the worth of potassium filling the core region was performed but has not yet been evaluated. Pellets of 93.15 wt.% enriched uranium dioxide (UO2) were stacked in 30.48 cm tall stainless steel fuel tubes (0.3 cm tall end caps). Each fuel tube had 26 pellets with a total mass of 295.8 g UO2 per tube. 253 tubes were arranged in a 1.506-cm triangular lattice. An additional 7-tube cluster critical configuration was also measured but not used for any physics measurements. The core was surrounded on all sides by a beryllium reflector. The fuel effect worths were measured by removing fuel tubes at various radii. An accident scenario

  15. On Maximum FODO Acceptance

    SciTech Connect

    Batygin, Yuri Konstantinovich

    2014-12-24

    This note illustrates the maximum acceptance of a FODO quadrupole focusing channel. Acceptance is the largest Floquet ellipse of a matched beam: A = a²/β_max, where a is the aperture of the channel and β_max is the largest value of the beta-function in the channel. If the aperture of the channel is restricted by a circle of radius a, this acceptance is available for particles oscillating in the median plane, y = 0. Particles outside the median plane occupy a smaller phase-space area. In the x-y plane, the cross section of the accepted beam has the shape of an ellipse with truncated boundaries.
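
    A one-line numerical reading of the acceptance formula quoted above, with invented aperture and beta-function values purely for illustration:

        # Illustrative numbers only: acceptance A = a^2 / beta_max of a FODO channel.
        a_m = 0.015          # channel aperture radius, 15 mm (assumed value)
        beta_max_m = 2.5     # largest beta-function in the channel, 2.5 m (assumed value)

        acceptance = a_m ** 2 / beta_max_m          # units: m·rad
        print(f"acceptance ≈ {acceptance * 1e6:.0f} mm·mrad")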

  16. Optimized selection of benchmark test parameters for image watermark algorithms based on Taguchi methods and corresponding influence on design decisions for real-world applications

    NASA Astrophysics Data System (ADS)

    Rodriguez, Tony F.; Cushman, David A.

    2003-06-01

    With the growing commercialization of watermarking techniques in various application scenarios, it has become increasingly important to quantify the performance of watermarking products. The quantification of the relative merits of various products is not only essential in enabling further adoption of the technology by society as a whole, but will also drive the industry to develop testing plans/methodologies to ensure quality and minimize cost (to both vendors and customers). While the research community understands the theoretical need for a publicly available benchmarking system to quantify performance, there has been less discussion on the practical application of these systems. By providing a standard set of acceptance criteria, benchmarking systems can dramatically increase the quality of a particular watermarking solution, validating product performance when they are used efficiently and frequently during the design process. In this paper we describe how to leverage specific design of experiments techniques to increase the quality of a watermarking scheme, to be used with the benchmark tools being developed by the Ad-Hoc Watermark Verification Group. A Taguchi Loss Function is proposed for an application, and orthogonal arrays are used to isolate optimal levels for a multi-factor experimental situation. Finally, the results are generalized to a population of cover works and validated through an exhaustive test.
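
    To make the Taguchi loss idea concrete, the following Python sketch evaluates the standard quadratic loss L(y) = k(y - m)² for a hypothetical watermark-robustness measurement; the target value, cost constant, and measurements are invented for illustration and are not taken from the paper.

        # Quadratic Taguchi loss L(y) = k * (y - m)**2 applied to hypothetical watermark
        # robustness scores; the target m, the constant k, and the data are invented.
        target_m = 0.95      # desired detection rate after attack (assumed target)
        k = 400.0            # cost constant (assumed)

        measurements = [0.97, 0.92, 0.88, 0.95, 0.90]

        losses = [k * (y - target_m) ** 2 for y in measurements]
        print(f"average Taguchi loss per item: {sum(losses) / len(losses):.3f}")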

  17. Data Acceptance Criteria for Standardized Human-Associated Fecal Source Identification Quantitative Real-Time PCR Methods

    PubMed Central

    Kelty, Catherine A.; Oshiro, Robin; Haugland, Richard A.; Madi, Tania; Brooks, Lauren; Field, Katharine G.; Sivaganesan, Mano

    2016-01-01

    There is growing interest in the application of human-associated fecal source identification quantitative real-time PCR (qPCR) technologies for water quality management. The transition from a research tool to a standardized protocol requires a high degree of confidence in data quality across laboratories. Data quality is typically determined through a series of specifications that ensure good experimental practice and the absence of bias in the results due to DNA isolation and amplification interferences. However, there is currently a lack of consensus on how best to evaluate and interpret human fecal source identification qPCR experiments. This is, in part, due to the lack of standardized protocols and information on interlaboratory variability under conditions for data acceptance. The aim of this study is to provide users and reviewers with a complete series of conditions for data acceptance derived from a multiple laboratory data set using standardized procedures. To establish these benchmarks, data from HF183/BacR287 and HumM2 human-associated qPCR methods were generated across 14 laboratories. Each laboratory followed a standardized protocol utilizing the same lot of reference DNA materials, DNA isolation kits, amplification reagents, and test samples to generate comparable data. After removal of outliers, a nested analysis of variance (ANOVA) was used to establish proficiency metrics that include lab-to-lab, replicate testing within a lab, and random error for amplification inhibition and sample processing controls. Other data acceptance measurements included extraneous DNA contamination assessments (no-template and extraction blank controls) and calibration model performance (correlation coefficient, amplification efficiency, and lower limit of quantification). To demonstrate the implementation of the proposed standardized protocols and data acceptance criteria, comparable data from two additional laboratories were reviewed. The data acceptance criteria

  18. Data Acceptance Criteria for Standardized Human-Associated Fecal Source Identification Quantitative Real-Time PCR Methods.

    PubMed

    Shanks, Orin C; Kelty, Catherine A; Oshiro, Robin; Haugland, Richard A; Madi, Tania; Brooks, Lauren; Field, Katharine G; Sivaganesan, Mano

    2016-05-01

    There is growing interest in the application of human-associated fecal source identification quantitative real-time PCR (qPCR) technologies for water quality management. The transition from a research tool to a standardized protocol requires a high degree of confidence in data quality across laboratories. Data quality is typically determined through a series of specifications that ensure good experimental practice and the absence of bias in the results due to DNA isolation and amplification interferences. However, there is currently a lack of consensus on how best to evaluate and interpret human fecal source identification qPCR experiments. This is, in part, due to the lack of standardized protocols and information on interlaboratory variability under conditions for data acceptance. The aim of this study is to provide users and reviewers with a complete series of conditions for data acceptance derived from a multiple laboratory data set using standardized procedures. To establish these benchmarks, data from HF183/BacR287 and HumM2 human-associated qPCR methods were generated across 14 laboratories. Each laboratory followed a standardized protocol utilizing the same lot of reference DNA materials, DNA isolation kits, amplification reagents, and test samples to generate comparable data. After removal of outliers, a nested analysis of variance (ANOVA) was used to establish proficiency metrics that include lab-to-lab, replicate testing within a lab, and random error for amplification inhibition and sample processing controls. Other data acceptance measurements included extraneous DNA contamination assessments (no-template and extraction blank controls) and calibration model performance (correlation coefficient, amplification efficiency, and lower limit of quantification). To demonstrate the implementation of the proposed standardized protocols and data acceptance criteria, comparable data from two additional laboratories were reviewed. The data acceptance criteria
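
    One of the acceptance measurements mentioned above, calibration model performance, is straightforward to illustrate: fit a standard curve of Cq versus log10 of target copies and derive the slope, correlation coefficient, and amplification efficiency E = 10^(-1/slope) - 1. In the following Python sketch the data are invented and the acceptance thresholds are placeholders, not the values established in the study.

        # Illustrative standard-curve check; data and thresholds are placeholders only.
        import numpy as np

        copies = np.array([1e1, 1e2, 1e3, 1e4, 1e5])        # target copies per reaction
        cq     = np.array([33.1, 29.8, 26.4, 23.1, 19.7])   # hypothetical Cq values

        x = np.log10(copies)
        slope, intercept = np.polyfit(x, cq, 1)
        r = np.corrcoef(x, cq)[0, 1]
        efficiency = 10 ** (-1.0 / slope) - 1.0              # E = 10^(-1/slope) - 1

        print(f"slope={slope:.3f}, R^2={r**2:.4f}, efficiency={efficiency * 100:.1f}%")
        # Placeholder acceptance check (thresholds are examples, not the study's criteria):
        print("calibration acceptable:", r ** 2 >= 0.98 and 0.90 <= efficiency <= 1.10)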

  19. Benchmarking Learning and Teaching: Developing a Method

    ERIC Educational Resources Information Center

    Henderson-Smart, Cheryl; Winning, Tracey; Gerzina, Tania; King, Shalinie; Hyde, Sarah

    2006-01-01

    Purpose: To develop a method for benchmarking teaching and learning in response to an institutional need to validate a new program in Dentistry at the University of Sydney, Australia. Design/methodology/approach: After a collaborative partner, University of Adelaide, was identified, the areas of teaching and learning to be benchmarked, PBL…

  20. Beyond Benchmarking: Value-Adding Metrics

    ERIC Educational Resources Information Center

    Fitz-enz, Jac

    2007-01-01

    HR metrics has grown up a bit over the past two decades, moving away from simple benchmarking practices and toward a more inclusive approach to measuring institutional performance and progress. In this article, the acknowledged "father" of human capital performance benchmarking provides an overview of several aspects of today's HR metrics…

  1. Benchmarking can add up for healthcare accounting.

    PubMed

    Czarnecki, M T

    1994-09-01

    In 1993, a healthcare accounting and finance benchmarking survey of hospital and nonhospital organizations gathered statistics about key common performance areas. A low response did not allow for statistically significant findings, but the survey identified performance measures that can be used in healthcare financial management settings. This article explains the benchmarking process and examines some of the 1993 study's findings.

  2. HyspIRI Low Latency Concept and Benchmarks

    NASA Technical Reports Server (NTRS)

    Mandl, Dan

    2010-01-01

    Topics include HyspIRI low latency data ops concept, HyspIRI data flow, ongoing efforts, experiment with Web Coverage Processing Service (WCPS) approach to injecting new algorithms into SensorWeb, low fidelity HyspIRI IPM testbed, compute cloud testbed, open cloud testbed environment, Global Lambda Integrated Facility (GLIF) and OCC collaboration with Starlight, delay tolerant network (DTN) protocol benchmarking, and EO-1 configuration for preliminary DTN prototype.

  3. A performance benchmark test for geodynamo simulations

    NASA Astrophysics Data System (ADS)

    Matsui, H.; Heien, E. M.

    2013-12-01

    In the last ten years, a number of numerical dynamo models have successfully represented basic characteristics of the geomagnetic field. As new models and numerical methods continue to be developed, it is important to update and extend benchmarks for testing these models. The first dynamo benchmark of Christensen et al. (2001) was applied to models based on spherical harmonic expansion methods. However, only a few groups have reported results of the dynamo benchmark using local methods (Harder and Hansen, 2005; Matsui and Okuda, 2005; Chan et al., 2007) because of the difficulty of treating magnetic boundary conditions with local methods. On the other hand, spherical harmonics expansion methods perform poorly on massively parallel computers because global data communications are required for the spherical harmonics expansions to evaluate nonlinear terms. We perform benchmark tests to assess various numerical methods for the next generation of geodynamo simulations. The purpose of this benchmark test is to assess numerical geodynamo models on a massively parallel computational platform. To compare as many numerical methods as possible, we consider the model with the insulated magnetic boundary of Christensen et al. (2001) and with the pseudo-vacuum magnetic boundary, because pseudo-vacuum boundaries are easier to implement with local methods than insulated magnetic boundaries. In the present study, we consider two kinds of benchmarks, the so-called accuracy benchmark and performance benchmark. In the accuracy benchmark, we compare the dynamo models by using the modest Ekman and Rayleigh numbers proposed by Christensen et al. (2001). We investigate the spatial resolution required for each dynamo code to obtain less than 1% difference from the suggested solution of the benchmark test using the two magnetic boundary conditions. In the performance benchmark, we investigate computational performance under the same computational environment. We perform these

  4. Performance measures and their benchmarks for assessing organizational cultural competency in behavioral health care service delivery.

    PubMed

    Siegel, Carole; Haugland, Gary; Chambers, Ethel Davis

    2003-11-01

    A project is described in which performance measures of cultural competency in behavioral health care were selected and benchmarked. Input from an Expert Panel representing the four major ethnic and racial groups in the U.S. and persons with extensive experience in implementing cultural competency in health care, along with survey data from 21 sites were used in the process. Measures and benchmarks are made specific to organizations that administrate care networks, and to service entities that deliver care. Measures were selected to parallel an implementation process, and benchmarks were set at "gold standard" levels.

  5. Specifications for the Large Core Code Evaluation Working Group Benchmark Problem Four. [LMFBR

    SciTech Connect

    Cowan, C.L.; Protsik, R.

    1981-09-01

    Benchmark studies have been carried out by the members of the Large Core Code Evaluation Working Group (LCCEWG) as part of a broad effort to systematically evaluate the important steps in the reactor design and analysis process for large fast breeder reactors. The specific objectives of the LCCEWG benchmark studies have been: to quantify the accuracy and efficiency of current neutronics methods for large cores; to identify neutronic design problems unique to large breeder reactors; to identify computer code development requirements; and to provide support for large core critical benchmark experiments.

  6. The He + H2+ --> HeH+ + H reaction: Ab initio studies of the potential energy surface, benchmark time-independent quantum dynamics in an extended energy range and comparison with experiments

    NASA Astrophysics Data System (ADS)

    De Fazio, Dario; de Castro-Vitores, Miguel; Aguado, Alfredo; Aquilanti, Vincenzo; Cavalli, Simonetta

    2012-12-01

    In this work we critically revise several aspects of previous ab initio quantum chemistry studies [P. Palmieri et al., Mol. Phys. 98, 1835 (2000), doi:10.1080/00268970009483387; C. N. Ramachandran et al., Chem. Phys. Lett. 469, 26 (2009), doi:10.1016/j.cplett.2008.12.035] of the HeH_2^+ system. New diatomic curves for the H_2^+ and HeH+ molecular ions, which provide vibrational frequencies at a near spectroscopic level of accuracy, have been generated to test the quality of the diatomic terms employed in the previous analytical fittings. The reliability of the global potential energy surfaces has also been tested by performing benchmark quantum scattering calculations within the time-independent approach in an extended interval of energies. In particular, the total integral cross sections have been calculated in the total collision energy range 0.955-2.400 eV for the scattering of the He atom by the ortho- and para-hydrogen molecular ion. The energy profiles of the total integral cross sections for selected vibro-rotational states of H_2^+ (v = 0, …,5 and j = 1, …,7) show a strong rotational enhancement for the lower vibrational states, which becomes weaker as the vibrational quantum number increases. Comparison with several available experimental data is presented and discussed.

  7. CSRL-V ENDF/B-V library and thermal reactor and criticality safety benchmarks

    SciTech Connect

    Ford, W.E. III; Diggs, B.R.; Knight, J.R.; Greene, N.M.; Petrie, L.M.; Williams, M.L.

    1982-01-01

    CSRL-V, an ENDF/B-V 227-group neutron cross-section library which has recently been expanded to include Bondarenko factor data for unresolved resonance processing, was used to calculate performance parameters for a series of thermal reactor and criticality safety benchmarks. Among the thermal benchmarks calculated were the Babcock and Wilcox lattice critical experiments B&W-XIII and B&W-XX. These two slightly-enriched (2.46%) UO2, water-moderated, tight-pitch lattice experiments were chosen because (a) they have U-238 resonance shielding characteristics similar to those of power reactor cores, and (b) they provide benchmark results representative of high-leakage and low-leakage lattices, respectively. Among the criticality safety benchmarks calculated were homogeneous, highly-enriched (93.2%) uranyl fluoride spheres with hydrogen-to-uranium ratios varying from 76 to 972.

  8. Acceptability and Preferences for Hypothetical Rectal Microbicides among a Community Sample of Young Men Who Have Sex with Men and Transgender Women in Thailand: A Discrete Choice Experiment.

    PubMed

    Newman, Peter A; Cameron, Michael P; Roungprakhon, Surachet; Tepjan, Suchon; Scarpa, Riccardo

    2016-11-01

    Rectal microbicides (RMs) may offer substantial benefits in expanding HIV prevention options for key populations. From April to August 2013, we conducted Tablet-Assisted Survey Interviewing, including a discrete choice experiment, with participants recruited from gay entertainment venues and community-based organizations in Chiang Mai and Pattaya, Thailand. Among 408 participants, 74.5 % were young men who have sex with men, 25.5 % transgender women, with mean age = 24.3 years. One-third (35.5 %) had ≤9th grade education; 63.4 % engaged in sex work. Overall, 83.4 % reported they would definitely use a RM, with more than 2-fold higher odds of choice of a RM with 99 versus 50 % efficacy, and significantly higher odds of choosing gel versus suppository, intermittent versus daily dosing, and prescription versus over-the-counter. Sex workers were significantly more likely to use a RM immediately upon availability, with greater tolerance for moderate efficacy and daily dosing. Engaging key populations in assessing RM preferences may support biomedical research and evidence-informed interventions to optimize the effectiveness of RMs in HIV prevention.

  9. ``Observation, Experiment, and the Future of Physics'' John G. King's acceptance speech for the 2000 Oersted Medal presented by the American Association of Physics Teachers, 18 January 2000

    NASA Astrophysics Data System (ADS)

    King, John G.

    2001-01-01

    Looking at our built world, most physicists see order where many others see magic. This view of order should be available to all, and physics would flourish better in an appreciative society. Despite the remarkable developments in the teaching of physics in the last half century, too many people, whether they've had physics courses or not, don't have an inkling of the power and value of our subject, whose importance ranges from the practical to the psychological. We need to supplement people's experiences in ways that are applicable to different groups, from physics majors to people without formal education. I will describe and explain an ambitious program to stimulate scientific, engineering, and technological interest and understanding through direct observation of a wide range of phenomena and experimentation with them. For the very young: toys, playgrounds, kits, projects. For older students: indoor showcases, projects, and courses taught in intensive form. For all ages: more instructive everyday surroundings with outdoor showcases and large demonstrations.

  10. Benchmarking--Measuring and Comparing for Continuous Improvement.

    ERIC Educational Resources Information Center

    Henczel, Sue

    2002-01-01

    Discussion of benchmarking focuses on the use of internal and external benchmarking by special librarians. Highlights include defining types of benchmarking; historical development; benefits, including efficiency, improved performance, increased competitiveness, and better decision making; problems, including inappropriate adaptation; developing a…

  11. Commercially available antibodies directed against α-adrenergic receptor subtypes and other G protein-coupled receptors with acceptable selectivity in flow cytometry experiments.

    PubMed

    Tripathi, Abhishek; Gaponenko, Vadim; Majetschak, Matthias

    2016-02-01

    Several previous reports suggested that many commercially available antibodies directed against G protein-coupled receptors (GPCR) lack sufficient selectivity. Accordingly, it has been proposed that receptor antibodies should be validated by at least one of several criteria, such as testing tissues or cells after knockout or silencing of the corresponding gene. Here, we tested whether 12 commercially available antibodies directed against α-adrenergic receptor (AR) subtypes (α1A/B/D, α2A/B/C), atypical chemokine receptor 3 (ACKR3), and vasopressin receptor 1A (AVPR1A) satisfy these criteria. In flow cytometry experiments with human vascular smooth muscle cells, we found that the fluorescence signals from each of these antibodies were reduced by 46 ± 10 %-91 ± 2 % in cells treated with commercially available small interfering RNA (siRNA) specific for each receptor, as compared with cells that were incubated with non-targeting siRNA. The tested antibodies included anti-ACKR3 (R&D Systems, mab42273), for which specificity has previously been demonstrated. Staining with this antibody resulted in 72 ± 5 % reduction of the fluorescence signal after ACKR3 siRNA treatment. Furthermore, staining with anti-α1A-AR (Santa Cruz, sc1477) and anti-ACKR3 (Abcam, ab38089), which have previously been reported to be non-specific, resulted in 70 ± 19 % and 80 ± 4 % loss of the fluorescence signal after α1A-AR and ACKR3 siRNA treatment, respectively. Our findings demonstrate that the tested antibodies show reasonable selectivity for their receptor target under our experimental conditions. Furthermore, our observations suggest that the selectivity of GPCR antibodies depends on the method for which the antibody is employed, the species from which cells/tissues are obtained, and on the type of specimens (cell, tissue/cell homogenate, or section) tested.

  12. Effective File I/O Bandwidth Benchmark

    SciTech Connect

    Rabenseifner, R.; Koniges, A.E.

    2000-02-15

    The effective I/O bandwidth benchmark (b_eff_io) covers two goals: (1) to achieve a characteristic average number for the I/O bandwidth achievable with parallel MPI-I/O applications, and (2) to get detailed information about several access patterns and buffer lengths. The benchmark examines "first write", "rewrite" and "read" access, strided (individual and shared pointers) and segmented collective patterns on one file per application, and non-collective access to one file per process. The number of parallel accessing processes is also varied, and well-formed I/O is compared with non-well-formed I/O. On systems meeting the rule that the total memory can be written to disk in 10 minutes, the benchmark should not need more than 15 minutes for a first pass of all patterns. The benchmark is designed analogously to the effective bandwidth benchmark for message passing (b_eff) that characterizes the message passing capabilities of a system in a few minutes. First results of the b_eff_io benchmark are given for IBM SP and Cray T3E systems and compared with existing benchmarks based on parallel Posix-I/O.
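
    As a hedged, greatly simplified illustration of how a characteristic average bandwidth number can be formed from timed access patterns (this is serial Python, not the actual MPI-based b_eff_io code), one might time writes with several buffer lengths and average the resulting rates:

```python
# Hedged, serial illustration (not the actual b_eff_io benchmark): time several
# write patterns with different buffer lengths and report an average bandwidth.
import os
import time

def write_pattern(path, buffer_len, total_bytes):
    """Write `total_bytes` in chunks of `buffer_len` and return MB/s."""
    chunk = b"x" * buffer_len
    start = time.perf_counter()
    with open(path, "wb") as f:
        written = 0
        while written < total_bytes:
            f.write(chunk)
            written += buffer_len
        f.flush()
        os.fsync(f.fileno())
    elapsed = time.perf_counter() - start
    return (written / 1e6) / elapsed

buffer_lengths = [1 << 10, 1 << 16, 1 << 20]          # 1 KiB, 64 KiB, 1 MiB
rates = [write_pattern("bench.tmp", b, 64 * (1 << 20)) for b in buffer_lengths]
print("average write bandwidth: %.1f MB/s" % (sum(rates) / len(rates)))
os.remove("bench.tmp")
```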

  13. Benchmarking Measures of Network Influence

    NASA Astrophysics Data System (ADS)

    Bramson, Aaron; Vandermarliere, Benjamin

    2016-09-01

    Identifying key agents for the transmission of diseases (ideas, technology, etc.) across social networks has predominantly relied on measures of centrality on a static base network or a temporally flattened graph of agent interactions. Various measures have been proposed as the best trackers of influence, such as degree centrality, betweenness, and k-shell, depending on the structure of the connectivity. We consider SIR and SIS propagation dynamics on a temporally-extruded network of observed interactions and measure the conditional marginal spread as the change in the magnitude of the infection given the removal of each agent at each time: its temporal knockout (TKO) score. We argue that this TKO score is an effective benchmark measure for evaluating the accuracy of other, often more practical, measures of influence. We find that none of the network measures applied to the induced flat graphs are accurate predictors of network propagation influence on the systems studied; however, temporal networks and the TKO measure provide the requisite targets for the search for effective predictive measures.
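
    A minimal sketch of the temporal-knockout idea, assuming a toy contact list and a simplified SI spread (no recovery) in place of the paper's SIR/SIS dynamics: the TKO-style score of a node is the drop in final outbreak size when that node is removed.

```python
# Hedged sketch of a temporal-knockout (TKO) style score on a toy temporal
# network (a list of (time, u, v) contacts). The network, parameters, and the
# simplified SI spread are assumptions; the real study averages many runs.
import random

contacts = [(0, "a", "b"), (1, "b", "c"), (1, "b", "d"), (2, "d", "e"), (3, "c", "e")]

def outbreak_size(contacts, seed_node, beta=0.9, removed=None, rng=None):
    """Final number of ever-infected nodes, processing contacts in time order."""
    rng = rng or random.Random(0)
    infected = {seed_node}
    for _, u, v in sorted(contacts):
        if removed in (u, v):
            continue
        for src, dst in ((u, v), (v, u)):
            if src in infected and dst not in infected and rng.random() < beta:
                infected.add(dst)
    return len(infected)

baseline = outbreak_size(contacts, "a")
tko = {n: baseline - outbreak_size(contacts, "a", removed=n)
       for n in ("b", "c", "d", "e")}
print(tko)  # nodes whose removal shrinks the outbreak most score highest
```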

  14. Benchmarking pKa prediction

    PubMed Central

    Davies, Matthew N; Toseland, Christopher P; Moss, David S; Flower, Darren R

    2006-01-01

    Background: pKa values are a measure of the protonation of ionizable groups in proteins. Ionizable groups are involved in intra-protein, protein-solvent and protein-ligand interactions as well as solubility, protein folding and catalytic activity. The pKa shift of a group from its intrinsic value is determined by the perturbation of the residue by the environment and can be calculated from three-dimensional structural data. Results: Here we use a large dataset of experimentally-determined pKas to analyse the performance of different prediction techniques. Our work provides a benchmark of available software implementations: MCCE, MEAD, PROPKA and UHBD. Combinatorial and regression analysis is also used in an attempt to find a consensus approach towards pKa prediction. The tendency of individual programs to over- or underpredict the pKa value is related to the underlying methodology of the individual programs. Conclusion: Overall, PROPKA is more accurate than the other three programs. Key to developing accurate predictive software will be a complete sampling of conformations accessible to protein structures. PMID:16749919
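
    A small sketch of the benchmarking step itself, assuming hypothetical predicted and experimental pKa values: each program is scored by its root-mean-square error against experiment.

```python
# Hedged sketch of the benchmarking idea: compare predicted pKa values from
# several programs against experimental values using RMSE. All numbers here
# are hypothetical placeholders, not results from the paper.
import math

experimental = {"Asp26": 3.5, "Glu35": 6.2, "His15": 5.4}

predictions = {
    "PROPKA": {"Asp26": 3.9, "Glu35": 5.8, "His15": 5.9},
    "MCCE":   {"Asp26": 4.6, "Glu35": 5.1, "His15": 6.3},
}

def rmse(pred, expt):
    errs = [(pred[r] - expt[r]) ** 2 for r in expt]
    return math.sqrt(sum(errs) / len(errs))

for program, pred in predictions.items():
    print(f"{program}: RMSE = {rmse(pred, experimental):.2f} pKa units")
```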

  15. Benchmark problems in computational aeroacoustics

    NASA Technical Reports Server (NTRS)

    Porter-Locklear, Freda

    1994-01-01

    A recent directive at NASA Langley is aimed at numerically predicting principal noise sources. During my summer stay, I worked with a high-order ENO code, developed by Dr. Harold Atkins, for solving the unsteady compressible Navier-Stokes equations, as it applies to computational aeroacoustics (CAA). A CAA workshop, composed of six categories of benchmark problems, has been organized to test various numerical properties of code. My task was to determine the robustness of Atkins' code for these test problems. In one category, we tested the nonlinear wave propagation of the code for the one-dimensional Euler equations, with initial pressure, density, and velocity conditions. Using freestream boundary conditions, our results were plausible. In another category, we solved the linearized two-dimensional Euler equations to test the effectiveness of radiation boundary conditions. Here we utilized MAPLE to compute eigenvalues and eigenvectors of the Jacobian given variable and flux vectors. We experienced a minor problem with inflow and outflow boundary conditions. Next, we solved the quasi-one-dimensional unsteady flow equations with an incoming acoustic wave of amplitude 10(exp -6). The small amplitude sound wave was incident on a convergent-divergent nozzle. After finding a steady-state solution and then marching forward, our solution indicated that after 30 periods the acoustic wave had dissipated (a period is the time required for the sound wave to traverse the nozzle from one end to the other).

  16. Benchmark problems in computational aeroacoustics

    NASA Astrophysics Data System (ADS)

    Porter-Locklear, Freda

    1994-12-01

    A recent directive at NASA Langley is aimed at numerically predicting principal noise sources. During my summer stay, I worked with a high-order ENO code, developed by Dr. Harold Atkins, for solving the unsteady compressible Navier-Stokes equations, as it applies to computational aeroacoustics (CAA). A CAA workshop, composed of six categories of benchmark problems, has been organized to test various numerical properties of code. My task was to determine the robustness of Atkins' code for these test problems. In one category, we tested the nonlinear wave propagation of the code for the one-dimensional Euler equations, with initial pressure, density, and velocity conditions. Using freestream boundary conditions, our results were plausible. In another category, we solved the linearized two-dimensional Euler equations to test the effectiveness of radiation boundary conditions. Here we utilized MAPLE to compute eigenvalues and eigenvectors of the Jacobian given variable and flux vectors. We experienced a minor problem with inflow and outflow boundary conditions. Next, we solved the quasi-one-dimensional unsteady flow equations with an incoming acoustic wave of amplitude 10(exp -6). The small amplitude sound wave was incident on a convergent-divergent nozzle. After finding a steady-state solution and then marching forward, our solution indicated that after 30 periods the acoustic wave had dissipated (a period is the time required for the sound wave to traverse the nozzle from one end to the other).

  17. Benchmarking Measures of Network Influence

    PubMed Central

    Bramson, Aaron; Vandermarliere, Benjamin

    2016-01-01

    Identifying key agents for the transmission of diseases (ideas, technology, etc.) across social networks has predominantly relied on measures of centrality on a static base network or a temporally flattened graph of agent interactions. Various measures have been proposed as the best trackers of influence, such as degree centrality, betweenness, and k-shell, depending on the structure of the connectivity. We consider SIR and SIS propagation dynamics on a temporally-extruded network of observed interactions and measure the conditional marginal spread as the change in the magnitude of the infection given the removal of each agent at each time: its temporal knockout (TKO) score. We argue that this TKO score is an effective benchmark measure for evaluating the accuracy of other, often more practical, measures of influence. We find that none of the network measures applied to the induced flat graphs are accurate predictors of network propagation influence on the systems studied; however, temporal networks and the TKO measure provide the requisite targets for the search for effective predictive measures. PMID:27670635

  18. Benchmarking for Bayesian Reinforcement Learning

    PubMed Central

    Ernst, Damien; Couëtoux, Adrien

    2016-01-01

    In the Bayesian Reinforcement Learning (BRL) setting, agents try to maximise the collected rewards while interacting with their environment, using some prior knowledge that is accessed beforehand. Many BRL algorithms have already been proposed, but the benchmarks used to compare them are only relevant for specific cases. The paper addresses this problem and provides a new BRL comparison methodology along with the corresponding open source library. In this methodology, a comparison criterion that measures the performance of algorithms on large sets of Markov Decision Processes (MDPs) drawn from some probability distributions is defined. In order to enable the comparison of non-anytime algorithms, our methodology also includes a detailed analysis of the computation time requirement of each algorithm. Our library is released with all source code and documentation: it includes three test problems, each of which has two different prior distributions, and seven state-of-the-art RL algorithms. Finally, our library is illustrated by comparing all the available algorithms and the results are discussed. PMID:27304891
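
    As a hedged sketch of the comparison criterion (toy two-state MDPs and a random placeholder agent stand in for the library's test problems and BRL algorithms), one can draw MDPs from a prior distribution and score an agent by its mean collected reward:

```python
# Hedged sketch of the comparison idea: draw many small MDPs from a prior
# distribution, run an agent on each, and score it by the mean collected
# reward. The toy MDPs and the random agent are placeholders, not the
# library's actual test problems or algorithms.
import random

rng = random.Random(42)

def sample_mdp():
    """Two states, two actions; transition and reward parameters drawn from a prior."""
    return {
        "p_stay":  [rng.uniform(0.1, 0.9) for _ in range(2)],   # per action
        "rewards": [rng.uniform(0.0, 1.0) for _ in range(2)],   # per state
    }

def run_agent(mdp, horizon=50):
    state, total = 0, 0.0
    for _ in range(horizon):
        action = rng.randrange(2)                   # random policy (placeholder agent)
        if rng.random() > mdp["p_stay"][action]:
            state = 1 - state
        total += mdp["rewards"][state]
    return total

scores = [run_agent(sample_mdp()) for _ in range(200)]
print("mean return over sampled MDPs:", sum(scores) / len(scores))
```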

  19. Benchmarking for Bayesian Reinforcement Learning.

    PubMed

    Castronovo, Michael; Ernst, Damien; Couëtoux, Adrien; Fonteneau, Raphael

    2016-01-01

    In the Bayesian Reinforcement Learning (BRL) setting, agents try to maximise the collected rewards while interacting with their environment, using some prior knowledge that is accessed beforehand. Many BRL algorithms have already been proposed, but the benchmarks used to compare them are only relevant for specific cases. The paper addresses this problem and provides a new BRL comparison methodology along with the corresponding open source library. In this methodology, a comparison criterion that measures the performance of algorithms on large sets of Markov Decision Processes (MDPs) drawn from some probability distributions is defined. In order to enable the comparison of non-anytime algorithms, our methodology also includes a detailed analysis of the computation time requirement of each algorithm. Our library is released with all source code and documentation: it includes three test problems, each of which has two different prior distributions, and seven state-of-the-art RL algorithms. Finally, our library is illustrated by comparing all the available algorithms and the results are discussed.

  20. Developing integrated benchmarks for DOE performance measurement

    SciTech Connect

    Barancik, J.I.; Kramer, C.F.; Thode, Jr. H.C.

    1992-09-30

    The objectives of this task were to describe and evaluate selected existing sources of information on occupational safety and health, with emphasis on hazard and exposure assessment, abatement, training, reporting, and control, identifying sources of exposure and outcome information in preparation for developing DOE performance benchmarks. Existing resources and methodologies were assessed for their potential use as practical performance benchmarks. Strengths and limitations of current data resources were identified. Guidelines were outlined for developing new or improved performance factors, which then could become the basis for selecting performance benchmarks. Data bases for non-DOE comparison populations were identified so that DOE performance could be assessed relative to non-DOE occupational and industrial groups. Systems approaches were described which can be used to link hazards and exposure, event occurrence, and adverse outcome factors, as needed to generate valid, reliable, and predictive performance benchmarks. Data bases were identified which contain information relevant to one or more performance assessment categories. A list of 72 potential performance benchmarks was prepared to illustrate the kinds of information that can be produced through a benchmark development program. Current information resources which may be used to develop potential performance benchmarks are limited. There is a need to develop an occupational safety and health information and data system in DOE which is capable of incorporating demonstrated and documented performance benchmarks prior to, or concurrent with, the development of hardware and software. A key to the success of this systems approach is rigorous development and demonstration of performance benchmark equivalents to users of such data before system hardware and software commitments are institutionalized.

  1. Clinically meaningful performance benchmarks in MS

    PubMed Central

    Motl, Robert W.; Scagnelli, John; Pula, John H.; Sosnoff, Jacob J.; Cadavid, Diego

    2013-01-01

    Objective: Identify and validate clinically meaningful Timed 25-Foot Walk (T25FW) performance benchmarks in individuals living with multiple sclerosis (MS). Methods: Cross-sectional study of 159 MS patients first identified candidate T25FW benchmarks. To characterize the clinical meaningfulness of T25FW benchmarks, we ascertained their relationships to real-life anchors, functional independence, and physiologic measurements of gait and disease progression. Candidate T25FW benchmarks were then prospectively validated in 95 subjects using 13 measures of ambulation and cognition, patient-reported outcomes, and optical coherence tomography. Results: T25FW of 6 to 7.99 seconds was associated with a change in occupation due to MS, occupational disability, walking with a cane, and needing “some help” with instrumental activities of daily living; T25FW ≥8 seconds was associated with collecting Supplemental Security Income and government health care, walking with a walker, and inability to do instrumental activities of daily living. During prospective benchmark validation, we trichotomized data by T25FW benchmarks (<6 seconds, 6–7.99 seconds, and ≥8 seconds) and found group main effects on 12 of 13 objective and subjective measures (p < 0.05). Conclusions: Using a cross-sectional design, we identified 2 clinically meaningful T25FW benchmarks of ≥6 seconds (6–7.99) and ≥8 seconds. Longitudinal and larger studies are needed to confirm the clinical utility and relevance of these proposed T25FW benchmarks and to parse out whether there are additional benchmarks in the lower (<6 seconds) and higher (>10 seconds) ranges of performance. PMID:24174581
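
    A minimal sketch of how the identified benchmarks could be applied in practice, using hypothetical walk times: each T25FW measurement is assigned to one of the three performance strata.

```python
# Hedged sketch: trichotomize Timed 25-Foot Walk (T25FW) times using the
# benchmarks identified in the study (<6 s, 6-7.99 s, >=8 s). The example
# times below are hypothetical.
def t25fw_category(seconds):
    if seconds < 6:
        return "<6 s"
    elif seconds < 8:
        return "6-7.99 s"
    return ">=8 s"

for t in (4.8, 6.9, 9.3):
    print(t, "->", t25fw_category(t))
```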

  2. Parallelization of NAS Benchmarks for Shared Memory Multiprocessors

    NASA Technical Reports Server (NTRS)

    Waheed, Abdul; Yan, Jerry C.; Saini, Subhash (Technical Monitor)

    1998-01-01

    This paper presents our experiences of parallelizing the sequential implementation of NAS benchmarks using compiler directives on SGI Origin2000 distributed shared memory (DSM) system. Porting existing applications to new high performance parallel and distributed computing platforms is a challenging task. Ideally, a user develops a sequential version of the application, leaving the task of porting to new generations of high performance computing systems to parallelization tools and compilers. Due to the simplicity of programming shared-memory multiprocessors, compiler developers have provided various facilities to allow the users to exploit parallelism. Native compilers on SGI Origin2000 support multiprocessing directives to allow users to exploit loop-level parallelism in their programs. Additionally, supporting tools can accomplish this process automatically and present the results of parallelization to the users. We experimented with these compiler directives and supporting tools by parallelizing sequential implementation of NAS benchmarks. Results reported in this paper indicate that with minimal effort, the performance gain is comparable with the hand-parallelized, carefully optimized, message-passing implementations of the same benchmarks.

  3. Newbery Medal Acceptance.

    ERIC Educational Resources Information Center

    Freedman, Russell

    1988-01-01

    Presents the Newbery Medal acceptance speech of Russell Freedman, writer of children's nonfiction. Discusses the place of nonfiction in the world of children's literature, the evolution of children's biographies, and the author's work on "Lincoln." (ARH)

  4. The IAEA Coordinated Research Program on HTGR Reactor Physics, Thermal-hydraulics and Depletion Uncertainty Analysis: Description of the Benchmark Test Cases and Phases

    SciTech Connect

    Frederik Reitsma; Gerhard Strydom; Bismark Tyobeka; Kostadin Ivanov

    2012-10-01

    The continued development of High Temperature Gas Cooled Reactors (HTGRs) requires verification of design and safety features with reliable high fidelity physics models and robust, efficient, and accurate codes. The uncertainties in the HTR analysis tools are today typically assessed with sensitivity analysis, and then a few important input uncertainties (typically based on a PIRT process) are varied in the analysis to find a spread in the parameter of importance. However, one wishes to apply a more fundamental approach to determine the predictive capability and accuracies of coupled neutronics/thermal-hydraulics and depletion simulations used for reactor design and safety assessment. Today there is a broader acceptance of the use of uncertainty analysis even in safety studies, and it has been accepted by regulators in some cases to replace the traditional conservative analysis. Finally, there is also a renewed focus on supplying reliable covariance data (nuclear data uncertainties) that can then be used in uncertainty methods. Uncertainty and sensitivity studies are therefore becoming an essential component of any significant effort in data and simulation improvement. In order to address uncertainty in analysis and methods in the HTGR community, the IAEA launched a Coordinated Research Project (CRP) on HTGR Uncertainty Analysis in Modelling early in 2012. The project is built on the experience of the OECD/NEA Light Water Reactor (LWR) Uncertainty Analysis in Best-Estimate Modelling (UAM) benchmark activity, but focuses specifically on the peculiarities of HTGR designs and their simulation requirements. Two benchmark problems were defined, with the prismatic type design represented by the MHTGR-350 design from General Atomics (GA), while a 250 MW modular pebble bed design, similar to the INET (China) and indirect-cycle PBMR (South Africa) designs, is also included. In the paper more detail on the benchmark cases, the different specific phases and tasks and the latest

  5. Toxicological benchmarks for screening potential contaminants of concern for effects on terrestrial plants: 1994 revision

    SciTech Connect

    Will, M.E.; Suter, G.W. II

    1994-09-01

    One of the initial stages in ecological risk assessment for hazardous waste sites is screening contaminants to determine which of them are worthy of further consideration as contaminants of potential concern. This process is termed contaminant screening. It is performed by comparing measured ambient concentrations of chemicals to benchmark concentrations. Currently, no standard benchmark concentrations exist for assessing contaminants in soil with respect to their toxicity to plants. This report presents a standard method for deriving benchmarks for this purpose (phytotoxicity benchmarks), a set of data concerning effects of chemicals in soil or soil solution on plants, and a set of phytotoxicity benchmarks for 38 chemicals potentially associated with United States Department of Energy (DOE) sites. In addition, background information on the phytotoxicity and occurrence of the chemicals in soils is presented, and literature describing the experiments from which data were drawn for benchmark derivation is reviewed. Chemicals that are found in soil at concentrations exceeding both the phytotoxicity benchmark and the background concentration for the soil type should be considered contaminants of potential concern.
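
    A minimal sketch of the screening rule described above, with hypothetical soil concentrations: a chemical is flagged as a contaminant of potential concern only if its measured concentration exceeds both the phytotoxicity benchmark and the background concentration.

```python
# Hedged sketch of the screening rule described in the report: a chemical is a
# contaminant of potential concern when its measured soil concentration exceeds
# both the phytotoxicity benchmark and the background concentration. All
# concentrations below (mg/kg) are hypothetical.
measured   = {"zinc": 310.0, "arsenic": 9.0,  "copper": 140.0}
benchmark  = {"zinc": 50.0,  "arsenic": 10.0, "copper": 100.0}
background = {"zinc": 80.0,  "arsenic": 12.0, "copper": 30.0}

concern = [chem for chem, conc in measured.items()
           if conc > benchmark[chem] and conc > background[chem]]
print("contaminants of potential concern:", concern)  # ['zinc', 'copper']
```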

  6. Toxicological Benchmarks for Screening Potential Contaminants of Concern for Effects on Terrestrial Plants

    SciTech Connect

    Suter, G.W. II

    1993-01-01

    One of the initial stages in ecological risk assessment for hazardous waste sites is screening contaminants to determine which of them are worthy of further consideration as contaminants of potential concern. This process is termed contaminant screening. It is performed by comparing measured ambient concentrations of chemicals to benchmark concentrations. Currently, no standard benchmark concentrations exist for assessing contaminants in soil with respect to their toxicity to plants. This report presents a standard method for deriving benchmarks for this purpose (phytotoxicity benchmarks), a set of data concerning effects of chemicals in soil or soil solution on plants, and a set of phytotoxicity benchmarks for 38 chemicals potentially associated with United States Department of Energy (DOE) sites. In addition, background information on the phytotoxicity and occurrence of the chemicals in soils is presented, and literature describing the experiments from which data were drawn for benchmark derivation is reviewed. Chemicals that are found in soil at concentrations exceeding both the phytotoxicity benchmark and the background concentration for the soil type should be considered contaminants of potential concern.

  7. Geant4 Computing Performance Benchmarking and Monitoring

    DOE PAGES

    Dotti, Andrea; Elvira, V. Daniel; Folger, Gunter; ...

    2015-12-23

    Performance evaluation and analysis of large scale computing applications is essential for optimal use of resources. As detector simulation is one of the most compute intensive tasks and Geant4 is the simulation toolkit most widely used in contemporary high energy physics (HEP) experiments, it is important to monitor Geant4 through its development cycle for changes in computing performance and to identify problems and opportunities for code improvements. All Geant4 development and public releases are being profiled with a set of applications that utilize different input event samples, physics parameters, and detector configurations. Results from multiple benchmarking runs are compared to previous public and development reference releases to monitor CPU and memory usage. Observed changes are evaluated and correlated with code modifications. Besides the full summary of call stack and memory footprint, a detailed call graph analysis is available to Geant4 developers for further analysis. The set of software tools used in the performance evaluation procedure, both in sequential and multi-threaded modes, include FAST, IgProf and Open|Speedshop. In conclusion, the scalability of the CPU time and memory performance in multi-threaded application is evaluated by measuring event throughput and memory gain as a function of the number of threads for selected event samples.

  8. Geant4 Computing Performance Benchmarking and Monitoring

    SciTech Connect

    Dotti, Andrea; Elvira, V. Daniel; Folger, Gunter; Genser, Krzysztof; Jun, Soon Yung; Kowalkowski, James B.; Paterno, Marc

    2015-12-23

    Performance evaluation and analysis of large scale computing applications is essential for optimal use of resources. As detector simulation is one of the most compute intensive tasks and Geant4 is the simulation toolkit most widely used in contemporary high energy physics (HEP) experiments, it is important to monitor Geant4 through its development cycle for changes in computing performance and to identify problems and opportunities for code improvements. All Geant4 development and public releases are being profiled with a set of applications that utilize different input event samples, physics parameters, and detector configurations. Results from multiple benchmarking runs are compared to previous public and development reference releases to monitor CPU and memory usage. Observed changes are evaluated and correlated with code modifications. Besides the full summary of call stack and memory footprint, a detailed call graph analysis is available to Geant4 developers for further analysis. The set of software tools used in the performance evaluation procedure, both in sequential and multi-threaded modes, include FAST, IgProf and Open|Speedshop. In conclusion, the scalability of the CPU time and memory performance in multi-threaded application is evaluated by measuring event throughput and memory gain as a function of the number of threads for selected event samples.
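
    As a hedged illustration of the multi-threaded scalability metrics mentioned (the timing and memory figures, and the particular definition of memory gain, are assumptions, not Geant4 results), one might tabulate throughput and memory gain against thread count:

```python
# Hedged sketch of the scalability metrics mentioned in the abstract: event
# throughput (events/s) and memory gain versus running N independent
# single-threaded processes. All numbers are hypothetical.
runs = [
    # (threads, events processed, wall time [s], resident memory [MB])
    (1,  1000,  820.0,  950.0),
    (4,  4000,  845.0, 1300.0),
    (16, 16000, 900.0, 2400.0),
]

base_mem = runs[0][3]  # memory of the single-threaded run
for threads, events, wall, mem in runs:
    throughput = events / wall
    memory_gain = threads * base_mem / mem   # assumed definition: memory saved vs N processes
    print(f"{threads:2d} threads: {throughput:6.1f} events/s, "
          f"memory gain = {memory_gain:.1f}x")
```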

  9. Updates to the integrated protein-protein interaction benchmarks: Docking benchmark version 5 and affinity benchmark version 2

    PubMed Central

    Vreven, Thom; Moal, Iain H.; Vangone, Anna; Pierce, Brian G.; Kastritis, Panagiotis L.; Torchala, Mieczyslaw; Chaleil, Raphael; Jiménez-García, Brian; Bates, Paul A.; Fernandez-Recio, Juan; Bonvin, Alexandre M.J.J.; Weng, Zhiping

    2015-01-01

    We present an updated and integrated version of our widely used protein-protein docking and binding affinity benchmarks. The benchmarks consist of non-redundant, high quality structures of protein-protein complexes along with the unbound structures of their components. Fifty-five new complexes were added to the docking benchmark, 35 of which have experimentally-measured binding affinities. These updated docking and affinity benchmarks now contain 230 and 179 entries, respectively. In particular, the number of antibody-antigen complexes has increased significantly, by 67% and 74% in the docking and affinity benchmarks, respectively. We tested previously developed docking and affinity prediction algorithms on the new cases. Considering only the top ten docking predictions per benchmark case, a prediction accuracy of 38% is achieved on all 55 cases, and up to 50% for the 32 rigid-body cases only. Predicted affinity scores are found to correlate with experimental binding energies up to r=0.52 overall, and r=0.72 for the rigid complexes. PMID:26231283

  10. Updates to the Integrated Protein-Protein Interaction Benchmarks: Docking Benchmark Version 5 and Affinity Benchmark Version 2.

    PubMed

    Vreven, Thom; Moal, Iain H; Vangone, Anna; Pierce, Brian G; Kastritis, Panagiotis L; Torchala, Mieczyslaw; Chaleil, Raphael; Jiménez-García, Brian; Bates, Paul A; Fernandez-Recio, Juan; Bonvin, Alexandre M J J; Weng, Zhiping

    2015-09-25

    We present an updated and integrated version of our widely used protein-protein docking and binding affinity benchmarks. The benchmarks consist of non-redundant, high-quality structures of protein-protein complexes along with the unbound structures of their components. Fifty-five new complexes were added to the docking benchmark, 35 of which have experimentally measured binding affinities. These updated docking and affinity benchmarks now contain 230 and 179 entries, respectively. In particular, the number of antibody-antigen complexes has increased significantly, by 67% and 74% in the docking and affinity benchmarks, respectively. We tested previously developed docking and affinity prediction algorithms on the new cases. Considering only the top 10 docking predictions per benchmark case, a prediction accuracy of 38% is achieved on all 55 cases and up to 50% for the 32 rigid-body cases only. Predicted affinity scores are found to correlate with experimental binding energies up to r=0.52 overall and r=0.72 for the rigid complexes.
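
    A minimal sketch of the affinity-prediction evaluation, assuming hypothetical paired values: the Pearson correlation r between predicted scores and experimental binding energies is the statistic quoted in the abstract.

```python
# Hedged sketch: Pearson correlation between predicted affinity scores and
# experimental binding energies, the statistic (r) quoted in the abstract.
# The paired values below are hypothetical.
import math

def pearson_r(xs, ys):
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = math.sqrt(sum((x - mx) ** 2 for x in xs))
    sy = math.sqrt(sum((y - my) ** 2 for y in ys))
    return cov / (sx * sy)

predicted_scores = [-12.1, -9.4, -15.3, -8.2, -11.0]
experimental_dG  = [-10.5, -8.9, -13.8, -7.6, -9.9]   # kcal/mol, hypothetical
print(f"r = {pearson_r(predicted_scores, experimental_dG):.2f}")
```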

  11. NAS Grid Benchmarks. 1.0

    NASA Technical Reports Server (NTRS)

    VanderWijngaart, Rob; Frumkin, Michael; Biegel, Bryan A. (Technical Monitor)

    2002-01-01

    We provide a paper-and-pencil specification of a benchmark suite for computational grids. It is based on the NAS (NASA Advanced Supercomputing) Parallel Benchmarks (NPB) and is called the NAS Grid Benchmarks (NGB). NGB problems are presented as data flow graphs encapsulating an instance of a slightly modified NPB task in each graph node, which communicates with other nodes by sending/receiving initialization data. Like NPB, NGB specifies several different classes (problem sizes). In this report we describe classes S, W, and A, and provide verification values for each. The implementor has the freedom to choose any language, grid environment, security model, fault tolerance/error correction mechanism, etc., as long as the resulting implementation passes the verification test and reports the turnaround time of the benchmark.

  12. Social benchmarking to improve river ecosystems.

    PubMed

    Cary, John; Pisarski, Anne

    2011-01-01

    To complement physical measures or indices of river health a social benchmarking instrument has been developed to measure community dispositions and behaviour regarding river health. This instrument seeks to achieve three outcomes. First, to provide a benchmark of the social condition of communities' attitudes, values, understanding and behaviours in relation to river health; second, to provide information for developing management and educational priorities; and third, to provide an assessment of the long-term effectiveness of community education and engagement activities in achieving changes in attitudes, understanding and behaviours in relation to river health. In this paper the development of the social benchmarking instrument is described and results are presented from the first state-wide benchmark study in Victoria, Australia, in which the social dimensions of river health, community behaviours related to rivers, and community understanding of human impacts on rivers were assessed.

  13. Public Relations in Accounting: A Benchmark Study.

    ERIC Educational Resources Information Center

    Pincus, J. David; Pincus, Karen V.

    1987-01-01

    Reports on a national study of one segment of the professional services market: the accounting profession. Benchmark data on CPA firms' attitudes toward and uses of public relations are presented and practical and theoretical/research issues are discussed. (JC)

  14. DOE Commercial Building Benchmark Models: Preprint

    SciTech Connect

    Torcelini, P.; Deru, M.; Griffith, B.; Benne, K.; Halverson, M.; Winiarski, D.; Crawley, D. B.

    2008-07-01

    To provide a consistent baseline of comparison and save time conducting such simulations, the U.S. Department of Energy (DOE) has developed a set of standard benchmark building models. This paper will provide an executive summary overview of these benchmark buildings, and how they can save building analysts valuable time. Fully documented and implemented to use with the EnergyPlus energy simulation program, the benchmark models are publicly available and new versions will be created to maintain compatibility with new releases of EnergyPlus. The benchmark buildings will form the basis for research on specific building technologies, energy code development, appliance standards, and measurement of progress toward DOE energy goals. Having a common starting point allows us to better share and compare research results and move forward to make more energy efficient buildings.

  15. Simple Benchmark Specifications for Space Radiation Protection

    NASA Technical Reports Server (NTRS)

    Singleterry, Robert C. Jr.; Aghara, Sukesh K.

    2013-01-01

    This report defines space radiation benchmark specifications. This specification starts with simple, monoenergetic, mono-directional particles on slabs and progresses to human models in spacecraft. This report specifies the models and sources needed, and what the team performing the benchmark needs to produce in a report. Also included are brief descriptions of how OLTARIS, the NASA Langley website for space radiation analysis, performs its analysis.

  16. Implementation of NAS Parallel Benchmarks in Java

    NASA Technical Reports Server (NTRS)

    Frumkin, Michael; Schultz, Matthew; Jin, Hao-Qiang; Yan, Jerry

    2000-01-01

    A number of features make Java an attractive but debatable choice for High Performance Computing (HPC). In order to gauge the applicability of Java to Computational Fluid Dynamics (CFD), we have implemented the NAS Parallel Benchmarks in Java. The performance and scalability of the benchmarks point out the areas where improvement in Java compiler technology and in Java thread implementation would move Java closer to Fortran in the competition for CFD applications.

  17. Benchmarking Attosecond Physics with Atomic Hydrogen

    DTIC Science & Technology

    2015-05-25

    Final report, covering 12 Mar 2012 to 11 Mar 2015 under contract FA2386-12-1-4025, on benchmarking attosecond physics with atomic hydrogen. From the report abstract: the research team obtained uniquely reliable reference data on atomic interactions with intense few-cycle laser pulses…

  18. Benchmarking for Cost Improvement. Final report

    SciTech Connect

    Not Available

    1993-09-01

    The US Department of Energy's (DOE) Office of Environmental Restoration and Waste Management (EM) conducted the Benchmarking for Cost Improvement initiative with three objectives: Pilot test benchmarking as an EM cost improvement tool; identify areas for cost improvement and recommend actions to address these areas; provide a framework for future cost improvement. The benchmarking initiative featured the use of four principal methods (program classification, nationwide cost improvement survey, paired cost comparison and component benchmarking). Interested parties contributed during both the design and execution phases. The benchmarking initiative was conducted on an accelerated basis. Of necessity, it considered only a limited set of data that may not be fully representative of the diverse and complex conditions found at the many DOE installations. The initiative generated preliminary data about cost differences and it found a high degree of convergence on several issues. Based on this convergence, the report recommends cost improvement strategies and actions. This report describes the steps taken as part of the benchmarking initiative and discusses the findings and recommended actions for achieving cost improvement. The results and summary recommendations, reported below, are organized by the study objectives.

  19. Machine characterization and benchmark performance prediction

    NASA Technical Reports Server (NTRS)

    Saavedra-Barrera, Rafael H.

    1988-01-01

    From runs of standard benchmarks or benchmark suites, it is not possible to characterize the machine nor to predict the run time of other benchmarks which have not been run. A new approach to benchmarking and machine characterization is reported. The creation and use of a machine analyzer is described, which measures the performance of a given machine on FORTRAN source language constructs. The machine analyzer yields a set of parameters which characterize the machine and spotlight its strong and weak points. Also described is a program analyzer, which analyzes FORTRAN programs and determines the frequency of execution of each of the same set of source language operations. It is then shown that by combining a machine characterization and a program characterization, we are able to predict with good accuracy the run time of a given benchmark on a given machine. Characterizations are provided for the Cray X-MP/48, Cyber 205, IBM 3090/200, Amdahl 5840, Convex C-1, VAX 8600, VAX 11/785, VAX 11/780, SUN 3/50, and IBM RT-PC/125, and for the following benchmark programs or suites: Los Alamos (BMK8A1), Baskett, Linpack, Livermore Loops, Mandelbrot Set, NAS Kernels, Shell Sort, Smith, Whetstone and Sieve of Eratosthenes.
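
    A minimal sketch of the prediction step, assuming hypothetical operation timings and counts: the run time estimate is the dot product of a machine characterization (seconds per source-language operation) and a program characterization (operation counts).

```python
# Hedged sketch of the prediction step: combine a machine characterization
# (seconds per source-language operation) with a program characterization
# (operation counts) to estimate run time. The operation names, timings, and
# counts are hypothetical placeholders for the paper's FORTRAN construct data.
machine_params = {          # seconds per operation on a hypothetical machine
    "flop_add": 2.0e-8,
    "flop_mul": 3.0e-8,
    "mem_load": 5.0e-8,
    "branch":   1.0e-8,
}

program_profile = {         # operation counts measured for a benchmark program
    "flop_add": 4.0e8,
    "flop_mul": 2.5e8,
    "mem_load": 6.0e8,
    "branch":   1.0e8,
}

predicted_runtime = sum(program_profile[op] * machine_params[op]
                        for op in program_profile)
print(f"predicted run time: {predicted_runtime:.2f} s")
```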

  20. Benchmarking of Graphite Reflected Critical Assemblies of UO2

    SciTech Connect

    Margaret A. Marshall; John D. Bess

    2011-11-01

    A series of experiments were carried out in 1963 at the Oak Ridge National Laboratory Critical Experiments Facility (ORCEF) for use in space reactor research programs. A core containing 93.2% enriched UO2 fuel rods was used in these experiments. The first part of the experimental series consisted of 253 tightly-packed fuel rods (1.27 cm triangular pitch) with graphite reflectors [1], the second part used 253 graphite-reflected fuel rods organized in a 1.506 cm triangular pitch [2], and the final part of the experimental series consisted of 253 beryllium-reflected fuel rods with a 1.506 cm triangular pitch. [3] Fission rate distribution and cadmium ratio measurements were taken for all three parts of the experimental series. Reactivity coefficient measurements were taken for various materials placed in the beryllium reflected core. The first part of this experimental series has been evaluated for inclusion in the International Reactor Physics Experiment Evaluation Project (IRPhEP) [4] and the International Criticality Safety Benchmark Evaluation Project (ICSBEP) Handbooks, [5] and is discussed below. These experiments are of interest as benchmarks because they support the validation of compact reactor designs with similar characteristics to the design parameters for a space nuclear fission surface power systems. [6

  1. Benchmarking ENDF/B-VII.1, JENDL-4.0 and JEFF-3.1

    SciTech Connect

    Van Der Marck, S. C.

    2012-07-01

    Three nuclear data libraries have been tested extensively using criticality safety benchmark calculations. The three libraries are the new release of the US library ENDF/B-VII.1 (2011), the new release of the Japanese library JENDL-4.0 (2011), and the OECD/NEA library JEFF-3.1 (2006). All calculations were performed with the continuous-energy Monte Carlo code MCNP (version 4C3, as well as version 6-beta1). Around 2000 benchmark cases from the International Handbook of Criticality Safety Benchmark Experiments (ICSBEP) were used. The results were analyzed per ICSBEP category, and per element. Overall, the three libraries show similar performance on most criticality safety benchmarks. The largest differences are probably caused by elements such as Be, C, Fe, Zr, W. (authors)

  2. Thermo-hydro-mechanical-chemical processes in fractured-porous media: Benchmarks and examples

    NASA Astrophysics Data System (ADS)

    Kolditz, O.; Shao, H.; Görke, U.; Kalbacher, T.; Bauer, S.; McDermott, C. I.; Wang, W.

    2012-12-01

    The book comprises an assembly of benchmarks and examples for porous media mechanics collected over the last twenty years. Analysis of thermo-hydro-mechanical-chemical (THMC) processes is essential to many applications in environmental engineering, such as geological waste deposition, geothermal energy utilisation, carbon capture and storage, water resources management, hydrology, and even climate change. In order to assess the feasibility as well as the safety of geotechnical applications, process-based modelling is the only tool able to put numbers to, i.e. to quantify, future scenarios. This places a heavy responsibility on the reliability of computational tools. Benchmarking is an appropriate methodology to verify the quality of modelling tools based on best practices. Moreover, benchmarking and code comparison foster community efforts. The benchmark book is part of the OpenGeoSys initiative - an open source project to share knowledge and experience in environmental analysis and scientific computation.

  3. Accepting space radiation risks.

    PubMed

    Schimmerling, Walter

    2010-08-01

    The human exploration of space inevitably involves exposure to radiation. Associated with this exposure are multiple risks, i.e., probabilities that certain aspects of an astronaut's health or performance will be degraded. The management of these risks requires that such probabilities be accurately predicted, that the actual exposures be verified, and that comprehensive records be maintained. Implicit in these actions is the fact that, at some point, a decision has been made to accept a certain level of risk. This paper examines ethical and practical considerations involved in arriving at a determination that risks are acceptable, roles that the parties involved may play, and obligations arising out of reliance on the informed consent paradigm seen as the basis for ethical radiation risk acceptance in space.

  4. Criticality Benchmark Analysis of the HTTR Annular Startup Core Configurations

    SciTech Connect

    John D. Bess

    2009-11-01

    One of the high priority benchmarking activities for corroborating the Next Generation Nuclear Plant (NGNP) Project and Very High Temperature Reactor (VHTR) Program is evaluation of Japan's existing High Temperature Engineering Test Reactor (HTTR). The HTTR is a 30 MWt engineering test reactor utilizing graphite moderation, helium coolant, and prismatic TRISO fuel. A large amount of critical reactor physics data is available for validation efforts of High Temperature Gas-cooled Reactors (HTGRs). Previous international reactor physics benchmarking activities provided a collation of mixed results that inaccurately predicted actual experimental performance.1 Reevaluations were performed by the Japanese to reduce the discrepancy between actual and computationally-determined critical configurations.2-3 Current efforts at the Idaho National Laboratory (INL) involve development of reactor physics benchmark models in conjunction with the International Reactor Physics Experiment Evaluation Project (IRPhEP) for use with verification and validation methods in the VHTR Program. Annular cores demonstrate inherent safety characteristics that are of interest in developing future HTGRs.

  5. Performance Evaluation and Benchmarking of Next Intelligent Systems

    SciTech Connect

    del Pobil, Angel; Madhavan, Raj; Bonsignorio, Fabio

    2009-10-01

    Performance Evaluation and Benchmarking of Intelligent Systems presents research dedicated to the subject of performance evaluation and benchmarking of intelligent systems by drawing from the experiences and insights of leading experts gained both through theoretical development and practical implementation of intelligent systems in a variety of diverse application domains. This contributed volume offers a detailed and coherent picture of state-of-the-art, recent developments, and further research areas in intelligent systems. The chapters cover a broad range of applications, such as assistive robotics, planetary surveying, urban search and rescue, and line tracking for automotive assembly. Subsystems or components described in this book include human-robot interaction, multi-robot coordination, communications, perception, and mapping. Chapters are also devoted to simulation support and open source software for cognitive platforms, providing examples of the type of enabling underlying technologies that can help intelligent systems to propagate and increase in capabilities. Performance Evaluation and Benchmarking of Intelligent Systems serves as a professional reference for researchers and practitioners in the field. This book is also applicable to advanced courses for graduate level students and robotics professionals in a wide range of engineering and related disciplines including computer science, automotive, healthcare, manufacturing, and service robotics.

  6. Benchmarking Gas Path Diagnostic Methods: A Public Approach

    NASA Technical Reports Server (NTRS)

    Simon, Donald L.; Bird, Jeff; Davison, Craig; Volponi, Al; Iverson, R. Eugene

    2008-01-01

    Recent technology reviews have identified the need for objective assessments of engine health management (EHM) technology. The need is two-fold: technology developers require relevant data and problems to design and validate new algorithms and techniques while engine system integrators and operators need practical tools to direct development and then evaluate the effectiveness of proposed solutions. This paper presents a publicly available gas path diagnostic benchmark problem that has been developed by the Propulsion and Power Systems Panel of The Technical Cooperation Program (TTCP) to help address these needs. The problem is coded in MATLAB (The MathWorks, Inc.) and coupled with a non-linear turbofan engine simulation to produce "snap-shot" measurements, with relevant noise levels, as if collected from a fleet of engines over their lifetime of use. Each engine within the fleet will experience unique operating and deterioration profiles, and may encounter randomly occurring relevant gas path faults including sensor, actuator and component faults. The challenge to the EHM community is to develop gas path diagnostic algorithms to reliably perform fault detection and isolation. An example solution to the benchmark problem is provided along with associated evaluation metrics. A plan is presented to disseminate this benchmark problem to the engine health management technical community and invite technology solutions.

  7. 42 CFR 440.330 - Benchmark health benefits coverage.

    Code of Federal Regulations, 2010 CFR

    2010-10-01

    ... 42 Public Health 4 2010-10-01 2010-10-01 false Benchmark health benefits coverage. 440.330 Section 440.330 Public Health CENTERS FOR MEDICARE & MEDICAID SERVICES, DEPARTMENT OF HEALTH AND HUMAN... Benchmark-Equivalent Coverage § 440.330 Benchmark health benefits coverage. Benchmark coverage is...

  8. 42 CFR 440.330 - Benchmark health benefits coverage.

    Code of Federal Regulations, 2013 CFR

    2013-10-01

    ... 42 Public Health 4 2013-10-01 2013-10-01 false Benchmark health benefits coverage. 440.330 Section 440.330 Public Health CENTERS FOR MEDICARE & MEDICAID SERVICES, DEPARTMENT OF HEALTH AND HUMAN... Benchmark-Equivalent Coverage § 440.330 Benchmark health benefits coverage. Benchmark coverage is...

  9. 42 CFR 440.330 - Benchmark health benefits coverage.

    Code of Federal Regulations, 2014 CFR

    2014-10-01

    ... 42 Public Health 4 2014-10-01 2014-10-01 false Benchmark health benefits coverage. 440.330 Section 440.330 Public Health CENTERS FOR MEDICARE & MEDICAID SERVICES, DEPARTMENT OF HEALTH AND HUMAN... Benchmark-Equivalent Coverage § 440.330 Benchmark health benefits coverage. Benchmark coverage is...

  10. 42 CFR 440.330 - Benchmark health benefits coverage.

    Code of Federal Regulations, 2012 CFR

    2012-10-01

    ... 42 Public Health 4 2012-10-01 2012-10-01 false Benchmark health benefits coverage. 440.330 Section 440.330 Public Health CENTERS FOR MEDICARE & MEDICAID SERVICES, DEPARTMENT OF HEALTH AND HUMAN... Benchmark-Equivalent Coverage § 440.330 Benchmark health benefits coverage. Benchmark coverage is...

  11. 42 CFR 440.330 - Benchmark health benefits coverage.

    Code of Federal Regulations, 2011 CFR

    2011-10-01

    ... 42 Public Health 4 2011-10-01 2011-10-01 false Benchmark health benefits coverage. 440.330 Section 440.330 Public Health CENTERS FOR MEDICARE & MEDICAID SERVICES, DEPARTMENT OF HEALTH AND HUMAN... Benchmark-Equivalent Coverage § 440.330 Benchmark health benefits coverage. Benchmark coverage is...

  12. Benchmarking predictions of allostery in liver pyruvate kinase in CAGI4.

    PubMed

    Xu, Qifang; Tang, Qingling; Katsonis, Panagiotis; Lichtarge, Olivier; Jones, David; Bovo, Samuele; Babbi, Giulia; Martelli, Pier L; Casadio, Rita; Lee, Gyu Rie; Seok, Chaok; Fenton, Aron W; Dunbrack, Roland L

    2017-03-29

    Critical Assessment of Genome Interpretation "CAGI" is a global community experiment to objectively assess computational methods for predicting phenotypic impacts of genomic variation. One of the 2015-2016 competitions focused on predicting the influence of mutations on the allosteric regulation of human liver pyruvate kinase. More than 30 different researchers accessed the challenge data. However, only four groups accepted the challenge. Features used for predictions ranged from evolutionary constraints and mutant site locations relative to active and effector binding sites to computational docking outputs. Despite the range of expertise and strategies used by predictors, the best predictions were marginally greater than random for modified allostery resulting from mutations. In contrast, several groups successfully predicted which mutations severely reduced enzymatic activity. Nonetheless, the poor predictions of allostery stand in stark contrast to the impression left by more than 700 PubMed entries identified using the identifiers "computational + allosteric". This contrast highlights a specialized need for new computational tools and utilization of benchmarks that focus on allosteric regulation. This article is protected by copyright. All rights reserved.

  13. Why was Relativity Accepted?

    NASA Astrophysics Data System (ADS)

    Brush, S. G.

    Historians of science have published many studies of the reception of Einstein's special and general theories of relativity. Based on a review of these studies, and my own research on the role of the light-bending prediction in the reception of general relativity, I discuss the role of three kinds of reasons for accepting relativity: (1) empirical predictions and explanations; (2) social-psychological factors; and (3) aesthetic-mathematical factors. According to the historical studies, acceptance was a three-stage process. First, a few leading scientists adopted the special theory for aesthetic-mathematical reasons. In the second stage, their enthusiastic advocacy persuaded other scientists to work on the theory and apply it to problems currently of interest in atomic physics. The special theory was accepted by many German physicists by 1910 and had begun to attract some interest in other countries. In the third stage, the confirmation of Einstein's light-bending prediction attracted much public attention and forced all physicists to take the general theory of relativity seriously. In addition to light-bending, the explanation of the advance of Mercury's perihelion was considered strong evidence by theoretical physicists. The American astronomers who conducted successful tests of general relativity became defenders of the theory. There is little evidence that relativity was 'socially constructed' but its initial acceptance was facilitated by the prestige and resources of its advocates.

  14. UGV acceptance testing

    NASA Astrophysics Data System (ADS)

    Kramer, Jeffrey A.; Murphy, Robin R.

    2006-05-01

    With over 100 models of unmanned vehicles now available for military and civilian safety, security or rescue applications, it is important for agencies to establish acceptance testing. However, there appear to be no general guidelines for what constitutes a reasonable acceptance test. This paper describes i) a preliminary method for acceptance testing by a customer of the mechanical and electrical components of an unmanned ground vehicle system, ii) how it has been applied to a man-packable micro-robot, and iii) the value of testing both to ensure that the customer has a workable system and to improve design. The test method automated the operation of the robot to repeatedly exercise all aspects and combinations of components on the robot for 6 hours. The acceptance testing process uncovered many failures consistent with those shown to occur in the field, showing that testing by the user does predict failures. The process also demonstrated that testing by the manufacturer can provide important design data that can be used to identify, diagnose, and prevent long-term problems. Also, the structured testing environment showed that sensor systems can be used to predict errors and changes in performance, as well as uncovering unmodeled behavior in subsystems.

  15. Approaches to acceptable risk

    SciTech Connect

    Whipple, C.

    1997-04-30

    Several alternative approaches to address the question "How safe is safe enough?" are reviewed and an attempt is made to apply the reasoning behind these approaches to the issue of acceptability of radiation exposures received in space. The approaches to the issue of the acceptability of technological risk described here are primarily analytical, and are drawn from examples in the management of environmental health risks. These include risk-based approaches, in which specific quantitative risk targets determine the acceptability of an activity, and cost-benefit and decision analysis, which generally focus on the estimation and evaluation of risks, benefits and costs, in a framework that balances these factors against each other. These analytical methods tend by their quantitative nature to emphasize the magnitude of risks, costs and alternatives, and to downplay other factors, especially those that are not easily expressed in quantitative terms, that affect acceptance or rejection of risk. Such other factors include the issues of risk perceptions and how and by whom risk decisions are made.

  16. Benchmarking local healthcare-associated infections: available benchmarks and interpretation challenges.

    PubMed

    El-Saed, Aiman; Balkhy, Hanan H; Weber, David J

    2013-10-01

    Growing numbers of healthcare facilities are routinely collecting standardized data on healthcare-associated infection (HAI), which can be used not only to track internal performance but also to compare local data to national and international benchmarks. Benchmarking overall (crude) HAI surveillance metrics without accounting or adjusting for potential confounders can result in misleading conclusions. Methods commonly used to provide risk-adjusted metrics include multivariate logistic regression analysis, stratification, indirect standardization, and restrictions. The characteristics of recognized benchmarks worldwide, including the advantages and limitations are described. The choice of the right benchmark for the data from the Gulf Cooperation Council (GCC) states is challenging. The chosen benchmark should have similar data collection and presentation methods. Additionally, differences in surveillance environments including regulations should be taken into consideration when considering such a benchmark. The GCC center for infection control took some steps to unify HAI surveillance systems in the region. GCC hospitals still need to overcome legislative and logistic difficulties in sharing data to create their own benchmark. The availability of a regional GCC benchmark may better enable health care workers and researchers to obtain more accurate and realistic comparisons.
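
    A minimal sketch of indirect standardization, one of the risk-adjustment methods listed, with hypothetical benchmark rates and local denominators: the standardized infection ratio compares observed infections with the number expected under the benchmark rates.

```python
# Hedged sketch of indirect standardization, one of the risk-adjustment methods
# mentioned: compare observed local infections with the number expected if
# benchmark (e.g., national) rates applied to the local denominators. The rates
# and device-day counts below are hypothetical.
benchmark_rate_per_1000 = {"ICU": 2.1, "ward": 0.9}      # infections / 1000 device-days
local_device_days       = {"ICU": 4200, "ward": 9100}
observed_infections     = 13

expected = sum(benchmark_rate_per_1000[u] * local_device_days[u] / 1000
               for u in local_device_days)
sir = observed_infections / expected
print(f"expected = {expected:.1f}, standardized infection ratio = {sir:.2f}")
```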

  17. The Concepts "Benchmarks and Benchmarking" Used in Education Planning: Teacher Education as Example

    ERIC Educational Resources Information Center

    Steyn, H. J.

    2015-01-01

    Planning in education is a structured activity that includes several phases and steps that take into account several kinds of information (Steyn, Steyn, De Waal & Wolhuter, 2002: 146). One of the sets of information that are usually considered is the (so-called) "benchmarks" and "benchmarking" regarding the focus of a…

  18. Toxicological benchmarks for wildlife: 1994 Revision

    SciTech Connect

    Opresko, D.M.; Sample, B.E.; Suter, G.W. II

    1994-09-01

    The process by which ecological risks of environmental contaminants are evaluated is two-tiered. The first tier is a screening assessment where concentrations of contaminants in the environment are compared to toxicological benchmarks which represent concentrations of chemicals in environmental media (water, sediment, soil, food, etc.) that are presumed to be nonhazardous to the surrounding biota. The second tier is a baseline ecological risk assessment where toxicological benchmarks are one of several lines of evidence used to support or refute the presence of ecological effects. The report presents toxicological benchmarks for assessment of effects of 76 chemicals on 8 representative mammalian wildlife species and 31 chemicals on 9 avian wildlife species. The chemicals are some of those that occur at United States Department of Energy waste sites; the wildlife species were chosen because they are widely distributed and provide a representative range of body sizes and diets. Further descriptions of the chosen wildlife species and chemicals are provided in the report. The benchmarks presented in this report represent values believed to be nonhazardous for the listed wildlife species. These benchmarks only consider contaminant exposure through oral ingestion of contaminated media; exposure through inhalation or direct dermal contact is not considered in this report.

  19. ACT in Context: An Exploration of Experiential Acceptance

    ERIC Educational Resources Information Center

    Block-Lerner, Jennifer; Wulfert, Edelgard; Moses, Erica

    2009-01-01

    Experiential acceptance, which involves "having," or "allowing" private experiences, has recently gained much attention in the cognitive-behavioral literature. Acceptance, however, may be considered a common factor among psychotherapeutic traditions. The purposes of this paper are to examine the historical roots of acceptance and to discuss the…

  20. Social Acceptance of Wind: A Brief Overview (Presentation)

    SciTech Connect

    Lantz, E.

    2015-01-01

    This presentation discusses concepts and trends in social acceptance of wind energy, profiles recent research findings, and discusses mitigation strategies intended to resolve wind power social acceptance challenges, as informed by published research and the experiences of individuals participating in the International Energy Agency's Working Group on Social Acceptance of Wind Energy.

  1. Trinity Acceptance Tests Performance Summary.

    SciTech Connect

    Rajan, Mahesh

    2015-12-01

    Ensuring that real applications perform well on Trinity is key to success. The acceptance tests comprise four components: ASC applications, Sustained System Performance (SSP), extra-large mini-application problems, and micro-benchmarks.

  2. Benchmarking ENDF/B-VII.1, JENDL-4.0 and JEFF-3.1.1 with MCNP6

    NASA Astrophysics Data System (ADS)

    van der Marck, Steven C.

    2012-12-01

    Recent releases of three major world nuclear reaction data libraries, ENDF/B-VII.1, JENDL-4.0, and JEFF-3.1.1, have been tested extensively using benchmark calculations. The calculations were performed with the latest release of the continuous energy Monte Carlo neutronics code MCNP, i.e. MCNP6. Three types of benchmarks were used, viz. criticality safety benchmarks, (fusion) shielding benchmarks, and reference systems for which the effective delayed neutron fraction is reported. For criticality safety, more than 2000 benchmarks from the International Handbook of Criticality Safety Benchmark Experiments were used. Benchmarks from all categories were used, ranging from low-enriched uranium, compound fuel, thermal spectrum ones (LEU-COMP-THERM), to mixed uranium-plutonium, metallic fuel, fast spectrum ones (MIX-MET-FAST). For fusion shielding many benchmarks were based on IAEA specifications for the Oktavian experiments (for Al, Co, Cr, Cu, LiF, Mn, Mo, Si, Ti, W, Zr), Fusion Neutronics Source in Japan (for Be, C, N, O, Fe, Pb), and Pulsed Sphere experiments at Lawrence Livermore National Laboratory (for 6Li, 7Li, Be, C, N, O, Mg, Al, Ti, Fe, Pb, D2O, H2O, concrete, polyethylene and teflon). The new functionality in MCNP6 to calculate the effective delayed neutron fraction was tested by comparison with more than thirty measurements in widely varying systems. Among these were measurements in the Tank Critical Assembly (TCA in Japan) and IPEN/MB-01 (Brazil), both with a thermal spectrum, two cores in Masurca (France) and three cores in the Fast Critical Assembly (FCA, Japan), all with fast spectra. The performance of the three libraries, in combination with MCNP6, is shown to be good. The results for the LEU-COMP-THERM category are on average very close to the benchmark value. Also for most other categories the results are satisfactory. Deviations from the benchmark values do occur in certain benchmark series, or in isolated cases within benchmark series. Such

  3. Quantum benchmark for teleportation and storage of squeezed states.

    PubMed

    Adesso, Gerardo; Chiribella, Giulio

    2008-05-02

    We provide a quantum benchmark for teleportation and storage of single-mode squeezed states with zero displacement and a completely unknown degree of squeezing along a given direction. For pure squeezed input states, a fidelity higher than 81.5% has to be attained in order to outperform any classical strategy based on an estimation of the unknown squeezing and repreparation of squeezed states. For squeezed thermal input states, we derive an upper and a lower bound on the classical average fidelity, which tighten for moderate degrees of mixedness. These results enable a critical discussion of recent experiments with squeezed light.

  4. Benchmarking Image Matching for Surface Description

    NASA Astrophysics Data System (ADS)

    Haala, Norbert; Stößel, Wolfgang; Gruber, Michael; Pfeifer, Norbert; Fritsch, Dieter

    2013-04-01

    Semi-Global Matching algorithms have prompted a renaissance in processing stereoscopic data sets for surface reconstruction. This method is capable of providing very dense point clouds with sampling distances close to the Ground Sampling Distance (GSD) of aerial images. EuroSDR, the pan-European organisation for Spatial Data Research, has initiated a benchmark for dense image matching. The expected outcomes of this benchmark are assessments of suitability, quality measures for dense surface reconstructions, and run-time aspects. In particular, aerial image blocks of two sites covering two types of landscapes (urban and rural) are analysed. The benchmark's participants provide their results with respect to several criteria. As a follow-up, an overall evaluation is given. Finally, point clouds of rural and urban surfaces delivered by very dense image matching algorithms and software packages are presented and the results are compared.

  5. Standardized benchmarking in the quest for orthologs.

    PubMed

    Altenhoff, Adrian M; Boeckmann, Brigitte; Capella-Gutierrez, Salvador; Dalquen, Daniel A; DeLuca, Todd; Forslund, Kristoffer; Huerta-Cepas, Jaime; Linard, Benjamin; Pereira, Cécile; Pryszcz, Leszek P; Schreiber, Fabian; da Silva, Alan Sousa; Szklarczyk, Damian; Train, Clément-Marie; Bork, Peer; Lecompte, Odile; von Mering, Christian; Xenarios, Ioannis; Sjölander, Kimmen; Jensen, Lars Juhl; Martin, Maria J; Muffato, Matthieu; Gabaldón, Toni; Lewis, Suzanna E; Thomas, Paul D; Sonnhammer, Erik; Dessimoz, Christophe

    2016-05-01

    Achieving high accuracy in orthology inference is essential for many comparative, evolutionary and functional genomic analyses, yet the true evolutionary history of genes is generally unknown and orthologs are used for very different applications across phyla, requiring different precision-recall trade-offs. As a result, it is difficult to assess the performance of orthology inference methods. Here, we present a community effort to establish standards and an automated web-based service to facilitate orthology benchmarking. Using this service, we characterize 15 well-established inference methods and resources on a battery of 20 different benchmarks. Standardized benchmarking provides a way for users to identify the most effective methods for the problem at hand, sets a minimum requirement for new tools and resources, and guides the development of more accurate orthology inference methods.

  6. OpenSHMEM Implementation of HPCG Benchmark

    SciTech Connect

    Powers, Sarah S; Imam, Neena

    2016-01-01

    We describe the effort to implement the HPCG benchmark using OpenSHMEM and MPI one-sided communication. Unlike the High Performance LINPACK (HPL) benchmark that places emphasis on large dense matrix computations, the HPCG benchmark is dominated by sparse operations such as sparse matrix-vector product, sparse matrix triangular solve, and long vector operations. The MPI one-sided implementation is developed using the one-sided OpenSHMEM implementation. Preliminary results comparing the original MPI, OpenSHMEM, and MPI one-sided implementations on an SGI cluster, Cray XK7 and Cray XC30 are presented. The results suggest the MPI, OpenSHMEM, and MPI one-sided implementations all obtain similar overall performance but the MPI one-sided implementation seems to slightly increase the run time for multigrid preconditioning in HPCG on the Cray XK7 and Cray XC30.
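
    The sparse matrix-vector product that dominates HPCG is commonly expressed in compressed sparse row (CSR) form; the plain serial Python sketch below illustrates that kernel only, and is not the OpenSHMEM or MPI one-sided implementation described above.

    ```python
    # Serial CSR sparse matrix-vector product, the kind of kernel that dominates HPCG.
    # values/col_idx/row_ptr encode a sparse matrix A; the loop computes y = A @ x.

    def csr_spmv(values, col_idx, row_ptr, x):
        n_rows = len(row_ptr) - 1
        y = [0.0] * n_rows
        for i in range(n_rows):
            acc = 0.0
            for k in range(row_ptr[i], row_ptr[i + 1]):
                acc += values[k] * x[col_idx[k]]
            y[i] = acc
        return y

    # 2x2 example: A = [[4, 1], [0, 3]], x = [1, 2]
    print(csr_spmv([4.0, 1.0, 3.0], [0, 1, 1], [0, 2, 3], [1.0, 2.0]))  # -> [6.0, 6.0]
    ```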

  7. NAS Parallel Benchmarks, Multi-Zone Versions

    NASA Technical Reports Server (NTRS)

    vanderWijngaart, Rob F.; Jin, Haoqiang

    2003-01-01

    We describe an extension of the NAS Parallel Benchmarks (NPB) suite that involves solving the application benchmarks LU, BT and SP on collections of loosely coupled discretization meshes. The solutions on the meshes are updated independently, but after each time step they exchange boundary value information. This strategy, which is common among structured-mesh production flow solver codes in use at NASA Ames and elsewhere, provides relatively easily exploitable coarse-grain parallelism between meshes. Since the individual application benchmarks also allow fine-grain parallelism themselves, this NPB extension, named NPB Multi-Zone (NPB-MZ), is a good candidate for testing hybrid and multi-level parallelization tools and strategies.

  8. Coral benchmarks in the center of biodiversity.

    PubMed

    Licuanan, W Y; Robles, R; Dygico, M; Songco, A; van Woesik, R

    2017-01-30

    There is an urgent need to quantify coral reef benchmarks that assess changes and recovery rates through time and serve as goals for management. Yet, few studies have identified benchmarks for hard coral cover and diversity in the center of marine diversity. In this study, we estimated coral cover and generic diversity benchmarks on the Tubbataha reefs, the largest and best-enforced no-take marine protected area in the Philippines. The shallow (2-6 m) reef slopes of Tubbataha were monitored annually, from 2012 to 2015, using hierarchical sampling. Mean coral cover was 34% (σ±1.7) and generic diversity was 18 (σ±0.9) per 75 m by 25 m station. The southeastern leeward slopes supported on average 56% coral cover, whereas the northeastern windward slopes supported 30%, and the western slopes supported 18% coral cover. Generic diversity was more spatially homogeneous than coral cover.

  9. Benchmark field study of deep neutron penetration

    SciTech Connect

    Morgan, J.F.; Sale, K. ); Gold, R.; Roberts, J.H.; Preston, C.C. )

    1991-06-10

    A unique benchmark neutron field has been established at the Lawrence Livermore National Laboratory (LLNL) to study deep penetration neutron transport. At LLNL, a tandem accelerator is used to generate a monoenergetic neutron source that permits investigation of deep neutron penetration under conditions that are virtually ideal to model, namely the transport of mono-energetic neutrons through a single material in a simple geometry. General features of the Lawrence Tandem (LATAN) benchmark field are described with emphasis on neutron source characteristics and room return background. The single material chosen for the first benchmark, LATAN-1, is a steel representative of Light Water Reactor (LWR) Pressure Vessels (PV). Also included is a brief description of the Little Boy replica, a critical reactor assembly designed to mimic the radiation doses from the atomic bomb dropped on Hiroshima, and its use in neutron spectrometry. 18 refs.

  10. Outlier Benchmark Systems With Gaia Primaries

    NASA Astrophysics Data System (ADS)

    Marocco, Federico; Pinfield, David J.; Montes, David; Zapatero Osorio, Maria Rosa; Smart, Richard L.; Cook, Neil J.; Caballero, José A.; Jones, Hugh R. A.; Lucas, Phil W.

    2016-07-01

    Benchmark systems are critical for testing sub-stellar physics. While the known population of benchmarks has increased significantly in recent years, large portions of the age-metallicity parameter space remain unexplored. Gaia will expand enormously the pool of well-characterized primary stars, and our simulations show that we could potentially have access to more than 6000 benchmark systems out to 300 pc, allowing us to whittle down these systems into a large sample with outlier properties that will reveal the nature of ultra-cool dwarfs in rare parameter space. In this contribution we present the preliminary results from our effort to identify and characterize ultra-cool companions to Gaia-imaged stars with unusual values of metallicity. Since these systems are intrinsically rare, we expand the volume probed by targeting faint, low-proper-motion systems.

  11. Preliminary Benchmark Evaluation of Japan’s High Temperature Engineering Test Reactor

    SciTech Connect

    John Darrell Bess

    2009-05-01

    A benchmark model of the initial fully-loaded start-up core critical of Japan’s High Temperature Engineering Test Reactor (HTTR) was developed to provide data in support of ongoing validation efforts of the Very High Temperature Reactor Program using publicly available resources. The HTTR is a 30 MWt test reactor utilizing graphite moderation, helium coolant, and prismatic TRISO fuel. The benchmark was modeled using MCNP5 with various neutron cross-section libraries. An uncertainty evaluation was performed by perturbing the benchmark model and comparing the resultant eigenvalues. The calculated eigenvalues are approximately 2-3% greater than expected with an uncertainty of ±0.70%. The primary sources of uncertainty are the impurities in the core and reflector graphite. The release of additional HTTR data could effectively reduce the benchmark model uncertainties and bias. Sensitivity of the results to the graphite impurity content might imply that further evaluation of the graphite content could significantly improve calculated results. Proper characterization of graphite for future Next Generation Nuclear Power reactor designs will improve computational modeling capabilities. Current benchmarking activities include evaluation of the annular HTTR cores and assessment of the remaining start-up core physics experiments, including reactivity effects, reactivity coefficient, and reaction-rate distribution measurements. Long term benchmarking goals might include analyses of the hot zero-power critical, rise-to-power tests, and other irradiation, safety, and technical evaluations performed with the HTTR.
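
    The uncertainty evaluation described above, which perturbs individual benchmark-model parameters and compares the resulting eigenvalues, is typically summarized by combining the individual eigenvalue shifts in quadrature; the sketch below shows that bookkeeping with hypothetical perturbation results (the parameter names and values are illustrative, not taken from the HTTR evaluation).

    ```python
    # Combine eigenvalue shifts from independent model perturbations in quadrature
    # to estimate an overall benchmark-model uncertainty (hypothetical values).
    import math

    nominal_keff = 1.0025
    delta_keff = {                      # |k(perturbed) - k(nominal)| for each perturbed parameter
        "graphite boron impurity": 0.0055,
        "fuel enrichment": 0.0012,
        "kernel diameter": 0.0008,
    }
    total = math.sqrt(sum(d * d for d in delta_keff.values()))
    print(f"Combined model uncertainty: +/- {total:.4f} (about {100 * total / nominal_keff:.2f}% of k-eff)")
    ```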

  12. Applications of Monte Carlo methods for the analysis of MHTGR case of the VHTRC benchmark

    SciTech Connect

    Difilippo, F.C.

    1994-03-01

    Monte Carlo methods, as implemented in the MCNP code, have been used to analyze the neutronics characteristics of benchmarks related to Modular High Temperature Gas-Cooled Reactors. The benchmarks are idealized versions of the Japanese (VHTRC) and Swiss (PROTEUS) facilities and an actual configuration of the PROTEUS Configuration 1 experiment. The purpose of the unit cell benchmarks is to compare multiplication constants, critical bucklings, migration lengths, reaction rates and spectral indices. The purpose of the full reactor benchmarks is to compare multiplication constants, reaction rates, spectral indices, neutron balances, reaction rate profiles, temperature coefficients of reactivity and effective delayed neutron fractions. All of these parameters can be calculated by MCNP, which can provide a very detailed model of the geometry of the configurations, from fuel particles to entire fuel assemblies, while using a continuous energy model. These characteristics make MCNP a very useful tool to analyze these MHTGR benchmarks. The author used the latest MCNP version at the time, 4.x (load date 01/12/93), with an ENDF/B-V cross-section library. This library does not yet contain temperature-dependent resonance materials, so all calculations correspond to room temperature, T = 300 K. Two separate reports were made -- one for the VHTRC, the other for the PROTEUS benchmark.

  13. Acceptability of human risk.

    PubMed

    Kasperson, R E

    1983-10-01

    This paper has three objectives: to explore the nature of the problem implicit in the term "risk acceptability," to examine the possible contributions of scientific information to risk standard-setting, and to argue that societal response is best guided by considerations of process rather than formal methods of analysis. Most technological risks are not accepted but are imposed. There is also little reason to expect consensus among individuals on their tolerance of risk. Moreover, debates about risk levels are often at base debates over the adequacy of the institutions which manage the risks. Scientific information can contribute three broad types of analyses to risk-setting deliberations: contextual analysis, equity assessment, and public preference analysis. More effective risk-setting decisions will involve attention to the process used, particularly in regard to the requirements of procedural justice and democratic responsibility.

  14. Acceptability of human risk.

    PubMed Central

    Kasperson, R E

    1983-01-01

    This paper has three objectives: to explore the nature of the problem implicit in the term "risk acceptability," to examine the possible contributions of scientific information to risk standard-setting, and to argue that societal response is best guided by considerations of process rather than formal methods of analysis. Most technological risks are not accepted but are imposed. There is also little reason to expect consensus among individuals on their tolerance of risk. Moreover, debates about risk levels are often at base debates over the adequacy of the institutions which manage the risks. Scientific information can contribute three broad types of analyses to risk-setting deliberations: contextual analysis, equity assessment, and public preference analysis. More effective risk-setting decisions will involve attention to the process used, particularly in regard to the requirements of procedural justice and democratic responsibility. PMID:6418541

  15. Acceptance Test Plan.

    DTIC Science & Technology

    2014-09-26

    Acceptance test plan for special reliability tests for a broadband microwave amplifier panel. David C. Kraus, Reliability Engineer, Westinghouse Defense and Electronics Center (Development and Operations Division), Baltimore, MD; monitoring organization: Naval Research Laboratory. Only the title block of the scanned DTIC record is legible.

  16. Benchmarking computational fluid dynamics models for lava flow simulation

    NASA Astrophysics Data System (ADS)

    Dietterich, Hannah; Lev, Einat; Chen, Jiangzhi

    2016-04-01

    Numerical simulations of lava flow emplacement are valuable for assessing lava flow hazards, forecasting active flows, interpreting past eruptions, and understanding the controls on lava flow behavior. Existing lava flow models vary in simplifying assumptions, physics, dimensionality, and the degree to which they have been validated against analytical solutions, experiments, and natural observations. In order to assess existing models and guide the development of new codes, we conduct a benchmarking study of computational fluid dynamics models for lava flow emplacement, including VolcFlow, OpenFOAM, FLOW-3D, and COMSOL. Using the new benchmark scenarios defined in Cordonnier et al. (Geol Soc SP, 2015) as a guide, we model viscous, cooling, and solidifying flows over horizontal and sloping surfaces, topographic obstacles, and digital elevation models of natural topography. We compare model results to analytical theory, analogue and molten basalt experiments, and measurements from natural lava flows. Overall, the models accurately simulate viscous flow with some variability in flow thickness where flows intersect obstacles. OpenFOAM, COMSOL, and FLOW-3D can each reproduce experimental measurements of cooling viscous flows, and FLOW-3D simulations with temperature-dependent rheology match results from molten basalt experiments. We can apply these models to reconstruct past lava flows in Hawai'i and Saudi Arabia using parameters assembled from morphology, textural analysis, and eruption observations as natural test cases. Our study highlights the strengths and weaknesses of each code, including accuracy and computational costs, and provides insights regarding code selection.
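
    One of the simplest analytical references used when benchmarking such codes is the steady laminar film (Nusselt) solution for viscous flow down an incline, h = (3*mu*q / (rho*g*sin(theta)))^(1/3) for volumetric flux per unit width q; the sketch below evaluates it for assumed, order-of-magnitude lava-like parameters so that a simulated flow thickness could be compared against it. This is a generic illustration, not one of the Cordonnier et al. benchmark scenarios.

    ```python
    # Analytical steady laminar film thickness on an incline (Nusselt solution),
    # a simple reference that a lava-flow simulation can be compared against.
    # Parameter values below are hypothetical, order-of-magnitude lava properties.
    import math

    mu = 1.0e3                   # dynamic viscosity, Pa*s (assumed)
    rho = 2600.0                 # density, kg/m^3 (assumed)
    g = 9.81                     # gravity, m/s^2
    theta = math.radians(5.0)    # slope angle (assumed)
    q = 0.5                      # volumetric flux per unit width, m^2/s (assumed)

    h = (3.0 * mu * q / (rho * g * math.sin(theta))) ** (1.0 / 3.0)
    print(f"Analytical film thickness: {h:.2f} m")
    ```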

  17. Los Alamos National Laboratory computer benchmarking 1982

    SciTech Connect

    Martin, J.L.

    1983-06-01

    Evaluating the performance of computing machinery is a continual effort of the Computer Research and Applications Group of the Los Alamos National Laboratory. This report summarizes the results of the group's benchmarking activities performed between October 1981 and September 1982, presenting compilation and execution times as well as megaflop rates for a set of benchmark codes. Tests were performed on the following computers: Cray Research, Inc. (CRI) Cray-1S; Control Data Corporation (CDC) 7600, 6600, Cyber 73, Cyber 825, Cyber 835, Cyber 855, and Cyber 205; Digital Equipment Corporation (DEC) VAX 11/780 and VAX 11/782; and Apollo Computer, Inc., Apollo.

  18. Toxicological benchmarks for wildlife: 1996 Revision

    SciTech Connect

    Sample, B.E.; Opresko, D.M.; Suter, G.W., II

    1996-06-01

    The purpose of this report is to present toxicological benchmarks for assessment of effects of certain chemicals on mammalian and avian wildlife species. Publication of this document meets a milestone for the Environmental Restoration (ER) Risk Assessment Program. This document provides the ER Program with toxicological benchmarks that may be used as comparative tools in screening assessments as well as lines of evidence to support or refute the presence of ecological effects in ecological risk assessments. The chemicals considered in this report are some that occur at US DOE waste sites, and the wildlife species evaluated herein were chosen because they represent a range of body sizes and diets.

  19. Age and Acceptance of Euthanasia.

    ERIC Educational Resources Information Center

    Ward, Russell A.

    1980-01-01

    Study explores relationship between age (and sex and race) and acceptance of euthanasia. Women and non-Whites were less accepting because of religiosity. Among older people less acceptance was attributable to their lesser education and greater religiosity. Results suggest that quality of life in old age affects acceptability of euthanasia. (Author)

  20. FDNS CFD Code Benchmark for RBCC Ejector Mode Operation

    NASA Technical Reports Server (NTRS)

    Holt, James B.; Ruf, Joe

    1999-01-01

    Computational Fluid Dynamics (CFD) analysis results are compared with benchmark quality test data from the Propulsion Engineering Research Center's (PERC) Rocket Based Combined Cycle (RBCC) experiments to verify fluid dynamic code and application procedures. RBCC engine flowpath development will rely on CFD applications to capture the multi-dimensional fluid dynamic interactions and to quantify their effect on the RBCC system performance. Therefore, the accuracy of these CFD codes must be determined through detailed comparisons with test data. The PERC experiments build upon the well-known 1968 rocket-ejector experiments of Odegaard and Stroup by employing advanced optical and laser based diagnostics to evaluate mixing and secondary combustion. The Finite Difference Navier Stokes (FDNS) code was used to model the fluid dynamics of the PERC RBCC ejector mode configuration. Analyses were performed for both Diffusion and Afterburning (DAB) and Simultaneous Mixing and Combustion (SMC) test conditions. Results from both the 2D and the 3D models are presented.

  1. Overview of TPC Benchmark E: The Next Generation of OLTP Benchmarks

    NASA Astrophysics Data System (ADS)

    Hogan, Trish

    Set to replace the aging TPC-C, the TPC Benchmark E is the next generation OLTP benchmark, which more accurately models client database usage. TPC-E addresses the shortcomings of TPC-C. It has a much more complex workload, requires the use of RAID-protected storage, generates much less I/O, and is much cheaper and easier to set up, run, and audit. After a period of overlap, it is expected that TPC-E will become the de facto OLTP benchmark.

  2. Object-Oriented Implementation of the NAS Parallel Benchmarks using Charm++

    NASA Technical Reports Server (NTRS)

    Krishnan, Sanjeev; Bhandarkar, Milind; Kale, Laxmikant V.

    1996-01-01

    This report describes experiences with implementing the NAS Computational Fluid Dynamics benchmarks using a parallel object-oriented language, Charm++. Our main objective in implementing the NAS CFD kernel benchmarks was to develop a code that could be used to easily experiment with different domain decomposition strategies and dynamic load balancing. We also wished to leverage the object-orientation provided by the Charm++ parallel object-oriented language, to develop reusable abstractions that would simplify the process of developing parallel applications. We first describe the Charm++ parallel programming model and the parallel object array abstraction, then go into detail about each of the Scalar Pentadiagonal (SP) and Lower/Upper Triangular (LU) benchmarks, along with performance results. Finally we conclude with an evaluation of the methodology used.

  3. Benchmarking 2011: Trends in Education Philanthropy

    ERIC Educational Resources Information Center

    Grantmakers for Education, 2011

    2011-01-01

    The analysis in "Benchmarking 2011" is based on data from an unduplicated sample of 184 education grantmaking organizations--approximately two-thirds of Grantmakers for Education's (GFE's) network of grantmakers--who responded to an online survey consisting of fixed-choice and open-ended questions. Because a different subset of funders elects to…

  4. What Is the Impact of Subject Benchmarking?

    ERIC Educational Resources Information Center

    Pidcock, Steve

    2006-01-01

    The introduction of subject benchmarking led to fears of increased external intervention in the activities of universities and a more restrictive view of institutional autonomy, accompanied by an undermining of the academic profession, particularly through the perceived threat of the introduction of a national curriculum for higher education. For…

  5. MHEC Survey Establishes Midwest Property Insurance Benchmarks.

    ERIC Educational Resources Information Center

    Midwestern Higher Education Commission Risk Management Institute Research Bulletin, 1994

    1994-01-01

    This publication presents the results of a survey of over 200 midwestern colleges and universities on their property insurance programs and establishes benchmarks to help these institutions evaluate their insurance programs. Findings included the following: (1) 51 percent of respondents currently purchase their property insurance as part of a…

  6. Benchmark Generation and Simulation at Extreme Scale

    SciTech Connect

    Lagadapati, Mahesh; Mueller, Frank; Engelmann, Christian

    2016-01-01

    The path to extreme scale high-performance computing (HPC) poses several challenges related to power, performance, resilience, productivity, programmability, data movement, and data management. Investigating the performance of parallel applications at scale on future architectures and the performance impact of different architectural choices is an important component of HPC hardware/software co-design. Simulations using models of future HPC systems and communication traces from applications running on existing HPC systems can offer an insight into the performance of future architectures. This work targets technology developed for scalable application tracing of communication events. It focuses on extreme-scale simulation of HPC applications and their communication behavior via lightweight parallel discrete event simulation for performance estimation and evaluation. Instead of simply replaying a trace within a simulator, this work promotes the generation of a benchmark from traces. This benchmark is subsequently exposed to simulation using models to reflect the performance characteristics of future-generation HPC systems. This technique provides a number of benefits, such as eliminating the data intensive trace replay and enabling simulations at different scales. The presented work features novel software co-design aspects, combining the ScalaTrace tool to generate scalable trace files, the ScalaBenchGen tool to generate the benchmark, and the xSim tool to assess the benchmark characteristics within a simulator.

  7. Standardised Benchmarking in the Quest for Orthologs

    PubMed Central

    Altenhoff, Adrian M.; Boeckmann, Brigitte; Capella-Gutierrez, Salvador; Dalquen, Daniel A.; DeLuca, Todd; Forslund, Kristoffer; Huerta-Cepas, Jaime; Linard, Benjamin; Pereira, Cécile; Pryszcz, Leszek P.; Schreiber, Fabian; Sousa da Silva, Alan; Szklarczyk, Damian; Train, Clément-Marie; Bork, Peer; Lecompte, Odile; von Mering, Christian; Xenarios, Ioannis; Sjölander, Kimmen; Juhl Jensen, Lars; Martin, Maria J.; Muffato, Matthieu; Gabaldón, Toni; Lewis, Suzanna E.; Thomas, Paul D.; Sonnhammer, Erik; Dessimoz, Christophe

    2016-01-01

    The identification of evolutionarily related genes across different species—orthologs in particular—forms the backbone of many comparative, evolutionary, and functional genomic analyses. Achieving high accuracy in orthology inference is thus essential. Yet the true evolutionary history of genes, required to ascertain orthology, is generally unknown. Furthermore, orthologs are used for very different applications across different phyla, with different requirements in terms of the precision-recall trade-off. As a result, assessing the performance of orthology inference methods remains difficult for both users and method developers. Here, we present a community effort to establish standards in orthology benchmarking and facilitate orthology benchmarking through an automated web-based service (http://orthology.benchmarkservice.org). Using this new service, we characterise the performance of 15 well-established orthology inference methods and resources on a battery of 20 different benchmarks. Standardised benchmarking provides a way for users to identify the most effective methods for the problem at hand, sets a minimal requirement for new tools and resources, and guides the development of more accurate orthology inference methods. PMID:27043882

  8. Benchmark Problems for Spacecraft Formation Flying Missions

    NASA Technical Reports Server (NTRS)

    Carpenter, J. Russell; Leitner, Jesse A.; Burns, Richard D.; Folta, David C.

    2003-01-01

    To provide high-level focus to distributed space system flight dynamics and control research, several benchmark problems are suggested. These problems are not specific to any current or proposed mission, but instead are intended to capture high-level features that would be generic to many similar missions.

  9. Benchmarking Year Five Students' Reading Abilities

    ERIC Educational Resources Information Center

    Lim, Chang Kuan; Eng, Lin Siew; Mohamed, Abdul Rashid

    2014-01-01

    Reading and understanding a written text is one of the most important skills in English learning. This study attempts to benchmark the reading abilities of Year Five students in fifteen rural schools in a district in Malaysia. The objectives of this study are to develop a set of standardised written reading comprehension tests and a set of indicators to inform…

  10. 2010 Recruiting Benchmarks Survey. Research Brief

    ERIC Educational Resources Information Center

    National Association of Colleges and Employers (NJ1), 2010

    2010-01-01

    The National Association of Colleges and Employers conducted its annual survey of employer members from June 15, 2010 to August 15, 2010, to benchmark data relevant to college recruiting. From a base of 861 employers holding organizational membership, there were 268 responses for a response rate of 31 percent. Following are some of the major…

  11. A MULTIMODEL APPROACH FOR CALCULATING BENCHMARK DOSE

    EPA Science Inventory


    A Multimodel Approach for Calculating Benchmark Dose
    Ramon I. Garcia and R. Woodrow Setzer

    In the assessment of dose response, a number of plausible dose-response models may give fits that are consistent with the data. If no dose response formulation had been speci...
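
    A common way to realize a multimodel approach of this kind is to fit several candidate dose-response models, convert their AIC values into weights, and average the resulting benchmark dose estimates; the sketch below shows only that weighting step, with hypothetical AIC values and benchmark doses, and is not the authors' procedure.

    ```python
    # AIC-weighted averaging across candidate dose-response models (hypothetical fits).
    # w_i = exp(-0.5 * (AIC_i - AIC_min)) / sum_j exp(-0.5 * (AIC_j - AIC_min))
    import math

    fits = {                 # model name -> (AIC, benchmark dose estimate), all hypothetical
        "logistic": (152.3, 4.1),
        "probit":   (151.8, 3.8),
        "Weibull":  (154.0, 5.0),
    }
    aic_min = min(aic for aic, _ in fits.values())
    raw = {m: math.exp(-0.5 * (aic - aic_min)) for m, (aic, _) in fits.items()}
    total = sum(raw.values())
    weights = {m: r / total for m, r in raw.items()}
    bmd_avg = sum(weights[m] * bmd for m, (_, bmd) in fits.items())
    print(weights, f"model-averaged BMD ~ {bmd_avg:.2f}")
    ```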

  12. Cleanroom Energy Efficiency: Metrics and Benchmarks

    SciTech Connect

    International SEMATECH Manufacturing Initiative; Mathew, Paul A.; Tschudi, William; Sartor, Dale; Beasley, James

    2010-07-07

    Cleanrooms are among the most energy-intensive types of facilities. This is primarily due to the cleanliness requirements that result in high airflow rates and system static pressures, as well as process requirements that result in high cooling loads. Various studies have shown that there is a wide range of cleanroom energy efficiencies and that facility managers may not be aware of how energy efficient their cleanroom facility can be relative to other cleanroom facilities with the same cleanliness requirements. Metrics and benchmarks are an effective way to compare one facility to another and to track the performance of a given facility over time. This article presents the key metrics and benchmarks that facility managers can use to assess, track, and manage their cleanroom energy efficiency or to set energy efficiency targets for new construction. These include system-level metrics such as air change rates, air handling W/cfm, and filter pressure drops. Operational data are presented from over 20 different cleanrooms that were benchmarked with these metrics and that are part of the cleanroom benchmark dataset maintained by Lawrence Berkeley National Laboratory (LBNL). Overall production efficiency metrics for cleanrooms in 28 semiconductor manufacturing facilities in the United States and recorded in the Fabs21 database are also presented.
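
    The system-level metrics mentioned above are simple ratios; the sketch below computes two of them, the air change rate and the air-handling power intensity in W/cfm, from assumed facility data.

    ```python
    # Two cleanroom efficiency metrics from hypothetical facility data:
    # air change rate (per hour) and air-handling power intensity (W/cfm).

    supply_airflow_cfm = 120000.0     # recirculation airflow (assumed)
    cleanroom_volume_ft3 = 90000.0    # room volume (assumed)
    fan_power_w = 150000.0            # total fan power (assumed)

    air_changes_per_hour = supply_airflow_cfm * 60.0 / cleanroom_volume_ft3
    w_per_cfm = fan_power_w / supply_airflow_cfm
    print(f"ACH = {air_changes_per_hour:.0f}, air handling = {w_per_cfm:.2f} W/cfm")
    ```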

  13. Challenges and Benchmarks in Bioimage Analysis.

    PubMed

    Kozubek, Michal

    2016-01-01

    Similar to the medical imaging community, the bioimaging community has recently realized the need to benchmark various image analysis methods to compare their performance and assess their suitability for specific applications. Challenges sponsored by prestigious conferences have proven to be an effective means of encouraging benchmarking and new algorithm development for a particular type of image data. Bioimage analysis challenges have recently complemented medical image analysis challenges, especially in the case of the International Symposium on Biomedical Imaging (ISBI). This review summarizes recent progress in this respect and describes the general process of designing a bioimage analysis benchmark or challenge, including the proper selection of datasets and evaluation metrics. It also presents examples of specific target applications and biological research tasks that have benefited from these challenges with respect to the performance of automatic image analysis methods that are crucial for the given task. Finally, available benchmarks and challenges in terms of common features, possible classification and implications drawn from the results are analysed.

  14. Benchmarking in Universities: League Tables Revisited

    ERIC Educational Resources Information Center

    Turner, David

    2005-01-01

    This paper examines the practice of benchmarking universities using a "league table" approach. Taking the example of the "Sunday Times University League Table", the author reanalyses the descriptive data on UK universities. Using a linear programming technique, data envelope analysis (DEA), the author uses the re-analysis to…
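
    Data envelopment analysis of the kind referred to above can be posed as one small linear program per institution (the input-oriented CCR multiplier form: maximize the weighted outputs of the unit subject to its weighted inputs equalling one and no unit exceeding an efficiency of one); the sketch below solves that program with scipy for made-up university inputs and outputs, and is not the analysis performed in the paper.

    ```python
    # Input-oriented CCR efficiency for each decision-making unit (DMU) via linear programming.
    # Data are made up: each row is a university with inputs (spend, staff) and outputs (graduates, papers).
    import numpy as np
    from scipy.optimize import linprog

    X = np.array([[10.0, 5.0], [8.0, 6.0], [12.0, 4.0]])          # inputs per DMU (hypothetical)
    Y = np.array([[300.0, 40.0], [280.0, 55.0], [350.0, 30.0]])   # outputs per DMU (hypothetical)

    def ccr_efficiency(j0):
        n_in, n_out = X.shape[1], Y.shape[1]
        c = np.concatenate([-Y[j0], np.zeros(n_in)])           # maximize u . y_j0
        A_ub = np.hstack([Y, -X])                               # u . y_j - v . x_j <= 0 for every DMU j
        b_ub = np.zeros(X.shape[0])
        A_eq = np.concatenate([np.zeros(n_out), X[j0]])[None]   # v . x_j0 = 1
        res = linprog(c, A_ub=A_ub, b_ub=b_ub, A_eq=A_eq, b_eq=[1.0], bounds=(0, None))
        return -res.fun

    for j in range(len(X)):
        print(f"DMU {j}: efficiency = {ccr_efficiency(j):.3f}")
    ```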

  15. Algorithm and Architecture Independent Benchmarking with SEAK

    SciTech Connect

    Tallent, Nathan R.; Manzano Franco, Joseph B.; Gawande, Nitin A.; Kang, Seung-Hwa; Kerbyson, Darren J.; Hoisie, Adolfy; Cross, Joseph

    2016-05-23

    Many applications of high performance embedded computing are limited by performance or power bottlenecks. We have designed the Suite for Embedded Applications & Kernels (SEAK), a new benchmark suite, (a) to capture these bottlenecks in a way that encourages creative solutions; and (b) to facilitate rigorous, objective, end-user evaluation for their solutions. To avoid biasing solutions toward existing algorithms, SEAK benchmarks use a mission-centric (abstracted from a particular algorithm) and goal-oriented (functional) specification. To encourage solutions that are any combination of software or hardware, we use an end-user black-box evaluation that can capture tradeoffs between performance, power, accuracy, size, and weight. The tradeoffs are especially informative for procurement decisions. We call our benchmarks future proof because each mission-centric interface and evaluation remains useful despite shifting algorithmic preferences. It is challenging to create both concise and precise goal-oriented specifications for mission-centric problems. This paper describes the SEAK benchmark suite and presents an evaluation of sample solutions that highlights power and performance tradeoffs.

  16. Benchmarking 2010: Trends in Education Philanthropy

    ERIC Educational Resources Information Center

    Bearman, Jessica

    2010-01-01

    "Benchmarking 2010" offers insights into the current priorities, practices and concerns of education grantmakers. The report is divided into five sections: (1) Mapping the Education Grantmaking Landscape; (2) 2010 Funding Priorities; (3) Strategies for Leveraging Greater Impact; (4) Identifying Significant Trends in Education Funding; and (5)…

  17. Seven Benchmarks for Information Technology Investment.

    ERIC Educational Resources Information Center

    Smallen, David; Leach, Karen

    2002-01-01

    Offers benchmarks to help campuses evaluate their efforts in supplying information technology (IT) services. The first three help understand the IT budget, the next three provide insight into staffing levels and emphases, and the seventh relates to the pervasiveness of institutional infrastructure. (EV)

  18. Benchmarking Peer Production Mechanisms, Processes & Practices

    ERIC Educational Resources Information Center

    Fischer, Thomas; Kretschmer, Thomas

    2008-01-01

    This deliverable identifies key approaches for quality management in peer production by benchmarking peer production practices and processes in other areas. (Contains 29 footnotes, 13 figures and 2 tables.)[This report has been authored with contributions of: Kaisa Honkonen-Ratinen, Matti Auvinen, David Riley, Jose Pinzon, Thomas Fischer, Thomas…

  19. Baby-Crying Acceptance

    NASA Astrophysics Data System (ADS)

    Martins, Tiago; de Magalhães, Sérgio Tenreiro

    The baby's crying is his most important means of communication. The cry monitoring performed by devices developed so far does not ensure the complete safety of the child. It is necessary to complement these technological resources with means of communicating the results to the person responsible, which would involve digital processing of the information available in the crying. The survey carried out made it possible to understand the level of adoption, in the continental territory of Portugal, of a technology able to perform such digital processing. The Technology Acceptance Model (TAM) was used as the theoretical framework. The statistical analysis showed that there is a good probability of acceptance of such a system.

  20. VENUS-2 Experimental Benchmark Analysis

    SciTech Connect

    Pavlovichev, A.M.

    2001-09-28

    The VENUS critical facility is a zero-power reactor located at SCK-CEN, Mol, Belgium, which for the VENUS-2 experiment utilized a mixed-oxide core with near-weapons-grade plutonium. In addition to the VENUS-2 core, computational variants based on each fuel type in the VENUS-2 core (3.3 wt.% UO2, 4.0 wt.% UO2, and 2.0/2.7 wt.% MOX) were also calculated. The VENUS-2 critical configuration and cell variants have been calculated with MCU-REA, a continuous-energy Monte Carlo code system developed at the Russian Research Center "Kurchatov Institute" and used extensively in the Fissile Materials Disposition Program. The calculations resulted in a keff of 0.99652 ± 0.00025 and relative pin powers within 2% of the experimental values for UO2 pins and within 3% for MOX pins.

  1. Benchmark Evaluation of the Neutron Radiography (NRAD) Reactor Upgraded LEU-Fueled Core

    SciTech Connect

    John D. Bess

    2001-09-01

    Benchmark models were developed to evaluate the cold-critical start-up measurements performed during the fresh core reload of the Neutron Radiography (NRAD) reactor with Low Enriched Uranium (LEU) fuel. The final upgraded core configuration with 64 fuel elements has been completed. Evaluated benchmark measurement data include criticality, control-rod worth measurements, shutdown margin, and excess reactivity. Dominant uncertainties in keff include the manganese content and impurities contained within the stainless steel cladding of the fuel and the 236U and erbium poison content in the fuel matrix. Calculations with MCNP5 and ENDF/B-VII.0 nuclear data are approximately 1.4% greater than the benchmark model eigenvalue, supporting contemporary research regarding errors in the cross section data necessary to simulate TRIGA-type reactors. Uncertainties in reactivity effects measurements are estimated to be ~10%, with calculations in agreement with benchmark experiment values within 2σ. The completed benchmark evaluation details are available in the 2014 edition of the International Handbook of Evaluated Reactor Physics Experiments (IRPhEP Handbook). Evaluations of the NRAD LEU cores containing 56, 60, and 62 fuel elements have also been completed, including analysis of their respective reactivity effects measurements; they are also available in the IRPhEP Handbook but will not be included in this summary paper.

  2. Benchmarking the ATLAS software through the Kit Validation engine

    NASA Astrophysics Data System (ADS)

    De Salvo, Alessandro; Brasolin, Franco

    2010-04-01

    The measurement of the experiment software performance is a very important metric in order to choose the most effective resources to be used and to discover the bottlenecks of the code implementation. In this work we present the benchmark techniques used to measure the ATLAS software performance through the ATLAS offline testing engine Kit Validation and the online portal Global Kit Validation. The performance measurements, the data collection, the online analysis and display of the results will be presented. The results of the measurement on different platforms and architectures will be shown, giving a full report on the CPU power and memory consumption of the Monte Carlo generation, simulation, digitization and reconstruction of the most CPU-intensive channels. The impact of the multi-core computing on the ATLAS software performance will also be presented, comparing the behavior of different architectures when increasing the number of concurrent processes. The benchmark techniques described in this paper have been used in the HEPiX group since the beginning of 2008 to help defining the performance metrics for the High Energy Physics applications, based on the real experiment software.

  3. Finding a benchmark for monitoring hospital cleanliness.

    PubMed

    Mulvey, D; Redding, P; Robertson, C; Woodall, C; Kingsmore, P; Bedwell, D; Dancer, S J

    2011-01-01

    This study evaluated three methods for monitoring hospital cleanliness. The aim was to find a benchmark that could indicate risk to patients from a contaminated environment. We performed visual monitoring, ATP bioluminescence and microbiological screening of five clinical surfaces before and after detergent-based cleaning on two wards over a four-week period. Five additional sites that were not featured in the routine domestic specification were also sampled. Measurements from all three methods were integrated and compared in order to choose appropriate levels for routine monitoring. We found that visual assessment did not reflect ATP values nor environmental contamination with microbial flora including Staphylococcus aureus and meticillin-resistant S. aureus (MRSA). There was a relationship between microbial growth categories and the proportion of ATP values exceeding a chosen benchmark but neither reliably predicted the presence of S. aureus or MRSA. ATP values were occasionally diverse. Detergent-based cleaning reduced levels of organic soil by 32% (95% confidence interval: 16-44%; P<0.001) but did not necessarily eliminate indicator staphylococci, some of which survived the cleaning process. An ATP benchmark value of 100 relative light units offered the closest correlation with microbial growth levels <2.5 cfu/cm(2) (receiver operating characteristic ROC curve sensitivity: 57%; specificity: 57%). In conclusion, microbiological and ATP monitoring confirmed environmental contamination, persistence of hospital pathogens and measured the effect on the environment from current cleaning practices. This study has provided provisional benchmarks to assist with future assessment of hospital cleanliness. Further work is required to refine practical sampling strategy and choice of benchmarks.
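
    The sensitivity and specificity quoted above come from cross-tabulating the ATP benchmark (here 100 relative light units) against the microbial growth threshold (2.5 cfu/cm2); the sketch below reproduces that kind of calculation on hypothetical paired surface readings, not on the study's data.

    ```python
    # Sensitivity/specificity of an ATP benchmark (e.g. 100 RLU) for predicting microbial
    # growth above 2.5 cfu/cm^2, computed from hypothetical paired surface readings.

    ATP_BENCHMARK_RLU = 100.0
    GROWTH_THRESHOLD_CFU = 2.5

    readings = [  # (ATP in relative light units, aerobic colony count in cfu/cm^2), all hypothetical
        (250.0, 4.0), (80.0, 3.1), (120.0, 1.0), (40.0, 0.5), (310.0, 6.2), (95.0, 2.0),
    ]
    tp = sum(1 for atp, cfu in readings if atp >= ATP_BENCHMARK_RLU and cfu >= GROWTH_THRESHOLD_CFU)
    fn = sum(1 for atp, cfu in readings if atp < ATP_BENCHMARK_RLU and cfu >= GROWTH_THRESHOLD_CFU)
    tn = sum(1 for atp, cfu in readings if atp < ATP_BENCHMARK_RLU and cfu < GROWTH_THRESHOLD_CFU)
    fp = sum(1 for atp, cfu in readings if atp >= ATP_BENCHMARK_RLU and cfu < GROWTH_THRESHOLD_CFU)
    print(f"sensitivity = {tp / (tp + fn):.2f}, specificity = {tn / (tn + fp):.2f}")
    ```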

  4. A chemical EOR benchmark study of different reservoir simulators

    NASA Astrophysics Data System (ADS)

    Goudarzi, Ali; Delshad, Mojdeh; Sepehrnoori, Kamy

    2016-09-01

    Interest in chemical EOR processes has intensified in recent years due to the advancements in chemical formulations and injection techniques. Injecting polymer (P), surfactant/polymer (SP), or alkaline/surfactant/polymer (ASP) solutions is a technique for improving sweep and displacement efficiencies with the aim of improving oil production in both secondary and tertiary floods. There has been great interest in chemical flooding recently for different challenging situations. These include high temperature reservoirs, formations with extreme salinity and hardness, naturally fractured carbonates, and sandstone reservoirs with heavy and viscous crude oils. More oil reservoirs are reaching maturity where secondary polymer floods and tertiary surfactant methods have become increasingly important. This significance has added to the industry's interest in using reservoir simulators as tools for reservoir evaluation and management to minimize costs and increase the process efficiency. Reservoir simulators with special features are needed to represent coupled chemical and physical processes present in chemical EOR processes. The simulators need to be first validated against well controlled lab and pilot scale experiments to reliably predict the full field implementations. The available data from laboratory scale include 1) phase behavior and rheological data; and 2) results of secondary and tertiary coreflood experiments for P, SP, and ASP floods under reservoir conditions, i.e. chemical retentions, pressure drop, and oil recovery. Data collected from corefloods are used as benchmark tests comparing numerical reservoir simulators with chemical EOR modeling capabilities such as STARS of CMG, ECLIPSE-100 of Schlumberger, and REVEAL of Petroleum Experts. The research UTCHEM simulator from The University of Texas at Austin is also included since it has been the benchmark for chemical flooding simulation for over 25 years. The results of this benchmark comparison will be utilized to improve

  5. RESULTS FOR THE INTERMEDIATE-SPECTRUM ZEUS BENCHMARK OBTAINED WITH NEW 63,65Cu CROSS-SECTION EVALUATIONS

    SciTech Connect

    Sobes, Vladimir; Leal, Luiz C

    2014-01-01

    The four HEU, intermediate-spectrum, copper-reflected Zeus experiments have shown discrepant results between measurement and calculation for the last several major releases of the ENDF library. The four benchmarks show a trend in reported C/E values with increasing energy of average lethargy causing fission. Recently, ORNL has made improvements to the evaluations of three key isotopes involved in the benchmark cases in question. Namely, an updated evaluation for 235U and evaluations of 63,65Cu. This paper presents the benchmarking results of the four intermediate-spectrum Zeus cases using the three updated evaluations.
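
    The trend described above is usually examined by plotting C/E (calculated over benchmark k-eff) against the energy of average lethargy causing fission (EALF); the sketch below computes C/E values and a simple linear trend for hypothetical results, not the actual Zeus cases.

    ```python
    # C/E values and a simple linear trend versus energy of average lethargy causing fission (EALF),
    # using hypothetical calculated/benchmark k-eff values for four intermediate-spectrum cases.
    import numpy as np

    ealf_ev = np.array([0.5, 2.0, 40.0, 900.0])           # hypothetical EALF values (eV)
    k_calc = np.array([0.9981, 0.9975, 0.9952, 0.9930])    # hypothetical calculated k-eff
    k_bench = np.array([1.0000, 1.0000, 1.0000, 1.0000])   # benchmark k-eff

    c_over_e = k_calc / k_bench
    slope, intercept = np.polyfit(np.log10(ealf_ev), c_over_e, 1)
    print("C/E:", np.round(c_over_e, 4), f"trend slope vs log10(EALF): {slope:.4f}")
    ```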

  6. Toxicological Benchmarks for Screening Potential Contaminants of Concern for Effects on Soil and Litter Invertebrates and Heterotrophic Process

    SciTech Connect

    Will, M.E.

    1994-01-01

    This report presents a standard method for deriving benchmarks for the purpose of "contaminant screening," performed by comparing measured ambient concentrations of chemicals in soil with the benchmark values. The work was performed under Work Breakdown Structure 1.4.12.2.3.04.07.02 (Activity Data Sheet 8304). In addition, this report presents sets of data concerning the effects of chemicals in soil on invertebrates and soil microbial processes, benchmarks for chemicals potentially associated with United States Department of Energy sites, and literature describing the experiments from which data were drawn for benchmark derivation.
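
    Contaminant screening as described above reduces to comparing a measured soil concentration against the corresponding benchmark and flagging exceedances; the sketch below illustrates that comparison with hypothetical concentrations and benchmark values.

    ```python
    # Screening comparison: flag chemicals whose measured soil concentration exceeds
    # its invertebrate toxicity benchmark (all values hypothetical, mg/kg dry soil).

    benchmarks = {"arsenic": 60.0, "copper": 50.0, "zinc": 200.0}
    measured = {"arsenic": 12.0, "copper": 140.0, "zinc": 310.0}

    for chem, conc in measured.items():
        ratio = conc / benchmarks[chem]
        flag = "potential contaminant of concern" if ratio > 1.0 else "screened out"
        print(f"{chem}: {conc} mg/kg vs benchmark {benchmarks[chem]} -> ratio {ratio:.1f}, {flag}")
    ```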

  7. 42 CFR 425.602 - Establishing the benchmark.

    Code of Federal Regulations, 2013 CFR

    2013-10-01

    ... computing an ACO's fixed historical benchmark that is adjusted for historical growth and beneficiary... program. (2) Makes separate expenditure calculations for each of the following populations of... making up the historical benchmark, determines national growth rates and trends expenditures for...

  8. 42 CFR 425.602 - Establishing the benchmark.

    Code of Federal Regulations, 2014 CFR

    2014-10-01

    ... computing an ACO's fixed historical benchmark that is adjusted for historical growth and beneficiary... program. (2) Makes separate expenditure calculations for each of the following populations of... making up the historical benchmark, determines national growth rates and trends expenditures for...

  9. 42 CFR 425.602 - Establishing the benchmark.

    Code of Federal Regulations, 2012 CFR

    2012-10-01

    ... computing an ACO's fixed historical benchmark that is adjusted for historical growth and beneficiary... program. (2) Makes separate expenditure calculations for each of the following populations of... making up the historical benchmark, determines national growth rates and trends expenditures for...

  10. Using Benchmarking To Influence Tuition and Fee Decisions.

    ERIC Educational Resources Information Center

    Hubbell, Loren W. Loomis; Massa, Robert J.; Lapovsky, Lucie

    2002-01-01

    Discusses the use of benchmarking in managing enrollment. Using a case study, illustrates how benchmarking can help administrators develop strategies for planning and implementing admissions and pricing practices. (EV)

  11. Results Oriented Benchmarking: The Evolution of Benchmarking at NASA from Competitive Comparisons to World Class Space Partnerships

    NASA Technical Reports Server (NTRS)

    Bell, Michael A.

    1999-01-01

    Informal benchmarking using personal or professional networks has taken place for many years at the Kennedy Space Center (KSC). The National Aeronautics and Space Administration (NASA) recognized early on, the need to formalize the benchmarking process for better utilization of resources and improved benchmarking performance. The need to compete in a faster, better, cheaper environment has been the catalyst for formalizing these efforts. A pioneering benchmarking consortium was chartered at KSC in January 1994. The consortium known as the Kennedy Benchmarking Clearinghouse (KBC), is a collaborative effort of NASA and all major KSC contractors. The charter of this consortium is to facilitate effective benchmarking, and leverage the resulting quality improvements across KSC. The KBC acts as a resource with experienced facilitators and a proven process. One of the initial actions of the KBC was to develop a holistic methodology for Center-wide benchmarking. This approach to Benchmarking integrates the best features of proven benchmarking models (i.e., Camp, Spendolini, Watson, and Balm). This cost-effective alternative to conventional Benchmarking approaches has provided a foundation for consistent benchmarking at KSC through the development of common terminology, tools, and techniques. Through these efforts a foundation and infrastructure has been built which allows short duration benchmarking studies yielding results gleaned from world class partners that can be readily implemented. The KBC has been recognized with the Silver Medal Award (in the applied research category) from the International Benchmarking Clearinghouse.

  12. Middleware Evaluation and Benchmarking for Use in Mission Operations Centers

    NASA Technical Reports Server (NTRS)

    Antonucci, Rob; Waktola, Waka

    2005-01-01

    Middleware technologies have been promoted as timesaving, cost-cutting alternatives to the point-to-point communication used in traditional mission operations systems. However, missions have been slow to adopt the new technology. The lack of existing middleware-based missions has given rise to uncertainty about middleware's ability to perform in an operational setting. Most mission architects are also unfamiliar with the technology and do not know the benefits and detriments to architectural choices - or even what choices are available. We will present the findings of a study that evaluated several middleware options specifically for use in a mission operations system. We will address some common misconceptions regarding the applicability of middleware-based architectures, and we will identify the design decisions and tradeoffs that must be made when choosing a middleware solution. The Middleware Comparison and Benchmark Study was conducted at NASA Goddard Space Flight Center to comprehensively evaluate candidate middleware products, compare and contrast the performance of middleware solutions with the traditional point- to-point socket approach, and assess data delivery and reliability strategies. The study focused on requirements of the Global Precipitation Measurement (GPM) mission, validating the potential use of middleware in the GPM mission ground system. The study was jointly funded by GPM and the Goddard Mission Services Evolution Center (GMSEC), a virtual organization for providing mission enabling solutions and promoting the use of appropriate new technologies for mission support. The study was broken into two phases. To perform the generic middleware benchmarking and performance analysis, a network was created with data producers and consumers passing data between themselves. The benchmark monitored the delay, throughput, and reliability of the data as the characteristics were changed. Measurements were taken under a variety of topologies, data demands

  13. Fissile sample worths in the Uranium/Iron Benchmark

    SciTech Connect

    Schaefer, R.W.; Bucher, R.G.

    1983-01-01

    One of the long-standing problems from LMFBR critical experiments is the central worth discrepancy, the consistent overprediction of the reactivity associated with introducing a small material sample near the center of an assembly. Reactivity (sample worth) experiments in ZPR-9, assembly 34, the Uranium/Iron Benchmark (U/Fe), were aimed at investigating this discrepancy. U/Fe had a large, single-region core whose neutronics was governed almost entirely by /sup 235/U and iron. The essentially one-dimensional plate unit cell had one 1.6 mm-wide column of 93% enriched uranium (U(93)) near the center, imbedded in about 50 mm of iron and stainless steel. The neutron spectrum was roughly comparable to that of an LMFBR, but the adjoint spectrum was much flatter than an LMFBR's. The worths of four different fissile materials were measured and the worth of U(93) was measured using several different experimental techniques.

  14. Unstructured Adaptive (UA) NAS Parallel Benchmark. Version 1.0

    NASA Technical Reports Server (NTRS)

    Feng, Huiyu; VanderWijngaart, Rob; Biswas, Rupak; Mavriplis, Catherine

    2004-01-01

    We present a complete specification of a new benchmark for measuring the performance of modern computer systems when solving scientific problems featuring irregular, dynamic memory accesses. It complements the existing NAS Parallel Benchmark suite. The benchmark involves the solution of a stylized heat transfer problem in a cubic domain, discretized on an adaptively refined, unstructured mesh.

  15. Taking Stock of Corporate Benchmarking Practices: Panacea or Pandora's Box?

    ERIC Educational Resources Information Center

    Fleisher, Craig S.; Burton, Sara

    1995-01-01

    Discusses why corporate communications/public relations (cc/pr) should be benchmarked (an approach used by cc/pr managers to demonstrate the value of their activities to skeptical organizational executives). Discusses myths about cc/pr benchmarking; types, targets, and focus of cc/pr benchmarking; a process model; and critical decisions about…

  16. 42 CFR 422.258 - Calculation of benchmarks.

    Code of Federal Regulations, 2010 CFR

    2010-10-01

    ... 42 Public Health 3 2010-10-01 2010-10-01 false Calculation of benchmarks. 422.258 Section 422.258... and Plan Approval § 422.258 Calculation of benchmarks. (a) The term “MA area-specific non-drug monthly... the plan bids. (c) Calculation of MA regional non-drug benchmark amount. CMS calculates the...

  17. Discovering and Implementing Best Practices to Strengthen SEAs: Collaborative Benchmarking

    ERIC Educational Resources Information Center

    Building State Capacity and Productivity Center, 2013

    2013-01-01

    This paper is written for state educational agency (SEA) leaders who are considering the benefits of collaborative benchmarking, and it addresses the following questions: (1) What does benchmarking of best practices entail?; (2) How does "collaborative benchmarking" enhance the process?; (3) How do SEAs control the process so that "their" needs…

  18. Winning Strategy: Set Benchmarks of Early Success to Build Momentum for the Long Term

    ERIC Educational Resources Information Center

    Spiro, Jody

    2012-01-01

    Change is a highly personal experience. Everyone participating in the effort has different reactions to change, different concerns, and different motivations for being involved. The smart change leader sets benchmarks along the way so there are guideposts and pause points instead of an endless change process. "Early wins"--a term used to describe…

  19. Prospective Elementary Teacher Understandings of Pest-Related Science and Agricultural Education Benchmarks.

    ERIC Educational Resources Information Center

    Trexler, Cary J.; Heinze, Kirk L.

    2001-01-01

    Clinical interviews with eight preservice elementary teachers elicited their understanding of pest-related benchmarks. Those with out-of-school experience were better able to articulate their understanding. Many were unable to make connections between scientific, societal and technological concepts. (Contains 39 references.) (SK)

  20. Benchmarking and Threshold Standards in Higher Education. Staff and Educational Development Series.

    ERIC Educational Resources Information Center

    Smith, Helen, Ed.; Armstrong, Michael, Ed.; Brown, Sally, Ed.

    This book explores the issues involved in developing standards in higher education, examining the practical issues involved in benchmarking and offering a critical analysis of the problems associated with this developmental tool. The book focuses primarily on experience in the United Kingdom (UK), but looks also at international activity in this…

  1. International E-Benchmarking: Flexible Peer Development of Authentic Learning Principles in Higher Education

    ERIC Educational Resources Information Center

    Leppisaari, Irja; Vainio, Leena; Herrington, Jan; Im, Yeonwook

    2011-01-01

    More and more, social technologies and virtual work methods are facilitating new ways of crossing boundaries in professional development and international collaborations. This paper examines the peer development of higher education teachers through the experiences of the IVBM project (International Virtual Benchmarking, 2009-2010). The…

  2. Reactor Physics and Criticality Benchmark Evaluations for Advanced Nuclear Fuel - Final Technical Report

    SciTech Connect

    William Anderson; James Tulenko; Bradley Rearden; Gary Harms

    2008-09-11

    The nuclear industry interest in advanced fuel and reactor design often drives towards fuel with uranium enrichments greater than 5 wt% 235U. Unfortunately, little data exists, in the form of reactor physics and criticality benchmarks, for uranium enrichments ranging between 5 and 10 wt% 235U. The primary purpose of this project is to provide benchmarks for fuel similar to what may be required for advanced light water reactors (LWRs). These experiments will ultimately provide additional information for application to the criticality-safety bases for commercial fuel facilities handling greater than 5 wt% 235U fuel.

  3. CFD validation in OECD/NEA t-junction benchmark.

    SciTech Connect

    Obabko, A. V.; Fischer, P. F.; Tautges, T. J.; Karabasov, S.; Goloviznin, V. M.; Zaytsev, M. A.; Chudanov, V. V.; Pervichko, V. A.; Aksenova, A. E.

    2011-08-23

    When streams of rapidly moving flow merge in a T-junction, the potential arises for large oscillations at the scale of the diameter, D, with a period scaling as O(D/U), where U is the characteristic flow velocity. If the streams are of different temperatures, the oscillations result in thermal fluctuations (thermal striping) at the pipe wall in the outlet branch that can accelerate thermal-mechanical fatigue and ultimately cause pipe failure. The importance of this phenomenon has prompted the nuclear energy modeling and simulation community to establish a benchmark to test the ability of computational fluid dynamics (CFD) codes to predict thermal striping. The benchmark is based on thermal and velocity data measured in an experiment designed specifically for this purpose. Thermal striping is intrinsically unsteady and hence not accessible to steady state simulation approaches such as steady state Reynolds-averaged Navier-Stokes (RANS) models. Consequently, one must consider either unsteady RANS or large eddy simulation (LES). This report compares the results for three LES codes: Nek5000, developed at Argonne National Laboratory (USA), and Cabaret and Conv3D, developed at the Moscow Institute of Nuclear Energy Safety (IBRAE) in Russia. Nek5000 is based on the spectral element method (SEM), which is a high-order weighted residual technique that combines the geometric flexibility of the finite element method (FEM) with the tensor-product efficiencies of spectral methods. Cabaret is a 'compact accurately boundary-adjusting high-resolution technique' for fluid dynamics simulation. The method is second-order accurate on nonuniform grids in space and time, and has a small dispersion error and a computational stencil defined within one space-time cell. The scheme is equipped with a conservative nonlinear correction procedure based on the maximum principle. CONV3D is based on the immersed boundary method and is validated on a wide set of the experimental and

  4. ASBench: benchmarking sets for allosteric discovery.

    PubMed

    Huang, Wenkang; Wang, Guanqiao; Shen, Qiancheng; Liu, Xinyi; Lu, Shaoyong; Geng, Lv; Huang, Zhimin; Zhang, Jian

    2015-08-01

    Allostery allows for the fine-tuning of protein function. Targeting allosteric sites is gaining increasing recognition as a novel strategy in drug design. The key challenge in the discovery of allosteric sites has strongly motivated the development of computational methods and thus high-quality, publicly accessible standard data have become indispensable. Here, we report benchmarking data for experimentally determined allosteric sites through a complex process, including a 'Core set' with 235 unique allosteric sites and a 'Core-Diversity set' with 147 structurally diverse allosteric sites. These benchmarking sets can be exploited to develop efficient computational methods to predict unknown allosteric sites in proteins and reveal unique allosteric ligand-protein interactions to guide allosteric drug design.

  5. Recommendations for Benchmarking Preclinical Studies of Nanomedicines.

    PubMed

    Dawidczyk, Charlene M; Russell, Luisa M; Searson, Peter C

    2015-10-01

    Nanoparticle-based delivery systems provide new opportunities to overcome the limitations associated with traditional small-molecule drug therapy for cancer and to achieve both therapeutic and diagnostic functions in the same platform. Preclinical trials are generally designed to assess therapeutic potential and not to optimize the design of the delivery platform. Consequently, progress in developing design rules for cancer nanomedicines has been slow, hindering progress in the field. Despite the large number of preclinical trials, several factors restrict comparison and benchmarking of different platforms, including variability in experimental design, reporting of results, and the lack of quantitative data. To solve this problem, we review the variables involved in the design of preclinical trials and propose a protocol for benchmarking that we recommend be included in in vivo preclinical studies of drug-delivery platforms for cancer therapy. This strategy will contribute to building the scientific knowledge base that enables development of design rules and accelerates the translation of new technologies.

  6. A new tool for benchmarking cardiovascular fluoroscopes.

    PubMed

    Balter, S; Heupler, F A; Lin, P J; Wondrow, M H

    2001-01-01

    This article reports the status of a new cardiovascular fluoroscopy benchmarking phantom. A joint working group of the Society for Cardiac Angiography and Interventions (SCA&I) and the National Electrical Manufacturers Association (NEMA) developed the phantom. The device was adopted as NEMA standard XR 21-2000, "Characteristics of and Test Procedures for a Phantom to Benchmark Cardiac Fluoroscopic and Photographic Performance," in August 2000. The test ensemble includes imaging field geometry, spatial resolution, low-contrast iodine detectability, working thickness range, visibility of moving targets, and phantom entrance dose. The phantom tests systems under conditions simulating normal clinical use for fluoroscopically guided invasive and interventional procedures. Test procedures rely on trained human observers.

  7. Toxicological benchmarks for wildlife. Environmental Restoration Program

    SciTech Connect

    Opresko, D.M.; Sample, B.E.; Suter, G.W.

    1993-09-01

    This report presents toxicological benchmarks for assessment of effects of 55 chemicals on six representative mammalian wildlife species (short-tailed shrew, white-footed mouse, cottontail rabbit, mink, red fox, and whitetail deer) and eight avian wildlife species (American robin, woodcock, wild turkey, belted kingfisher, great blue heron, barred owl, Cooper's hawk, and red-tailed hawk) (scientific names are presented in Appendix C). These species were chosen because they are widely distributed and provide a representative range of body sizes and diets. The chemicals are some of those that occur at United States Department of Energy (DOE) waste sites. The benchmarks presented in this report are values believed to be nonhazardous for the listed wildlife species.

  8. Parton distribution benchmarking with LHC data

    NASA Astrophysics Data System (ADS)

    Ball, Richard D.; Carrazza, Stefano; Del Debbio, Luigi; Forte, Stefano; Gao, Jun; Hartland, Nathan; Huston, Joey; Nadolsky, Pavel; Rojo, Juan; Stump, Daniel; Thorne, Robert S.; Yuan, C.-P.

    2013-04-01

    We present a detailed comparison of the most recent sets of NNLO PDFs from the ABM, CT, HERAPDF, MSTW and NNPDF collaborations. We compare parton distributions at low and high scales and parton luminosities relevant for LHC phenomenology. We study the PDF dependence of LHC benchmark inclusive cross sections and differential distributions for electroweak boson and jet production in the cases in which the experimental covariance matrix is available. We quantify the agreement between data and theory by computing the χ2 for each data set with all the various PDFs. PDF comparisons are performed consistently for common values of the strong coupling. We also present a benchmark comparison of jet production at the LHC, comparing the results from various available codes and scale settings. Finally, we discuss the implications of the updated NNLO PDF sets for the combined PDF+αs uncertainty in the gluon fusion Higgs production cross section.
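
    For reference, the data-theory agreement mentioned above is conventionally quantified with the covariance-matrix chi-squared; in LaTeX notation, with D_i the measured points, T_i the theory predictions obtained with a given PDF set, and cov_ij the experimental covariance matrix (the exact treatment of correlated systematics varies between groups):

      \chi^2 = \sum_{i,j=1}^{N_{\mathrm{dat}}} \left( D_i - T_i \right) \, (\mathrm{cov}^{-1})_{ij} \, \left( D_j - T_j \right)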

  9. Specification for the VERA Depletion Benchmark Suite

    SciTech Connect

    Kim, Kang Seog

    2015-12-17

    The CASL neutronics simulator MPACT is under development for neutronics and T-H coupled simulation of pressurized water reactors. MPACT includes the ORIGEN-API and an internal depletion module to perform depletion calculations based upon neutron-material reactions and radioactive decay. It is a challenge to validate the depletion capability because of insufficient measured data. One alternative approach is to perform a code-to-code comparison for benchmark problems. In this study a depletion benchmark suite has been developed and a detailed guideline has been provided to obtain meaningful computational outcomes which can be used in the validation of the MPACT depletion capability.

  10. Benchmark On Sensitivity Calculation (Phase III)

    SciTech Connect

    Ivanova, Tatiana; Laville, Cedric; Dyrda, James; Mennerdahl, Dennis; Golovko, Yury; Raskach, Kirill; Tsiboulia, Anatoly; Lee, Gil Soo; Woo, Sweng-Woong; Bidaud, Adrien; Patel, Amrit; Bledsoe, Keith C; Rearden, Bradley T; Gulliford, J.

    2012-01-01

    The sensitivities of the keff eigenvalue to neutron cross sections have become commonly used in similarity studies and as part of the validation algorithm for criticality safety assessments. To test calculations of the sensitivity coefficients, a benchmark study (Phase III) has been established by the OECD-NEA/WPNCS/EG UACSA (Expert Group on Uncertainty Analysis for Criticality Safety Assessment). This paper presents some sensitivity results generated by the benchmark participants using various computational tools based upon different computational methods: SCALE/TSUNAMI-3D and -1D, MONK, APOLLO2-MORET 5, DRAGON-SUSD3D and MMKKENO. The study demonstrates the performance of the tools. It also illustrates how model simplifications impact the sensitivity results and demonstrates the importance of 'implicit' (self-shielding) sensitivities. This work has been a useful step towards verification of the existing and developed sensitivity analysis methods.

  11. Assessing and benchmarking multiphoton microscopes for biologists

    PubMed Central

    Corbin, Kaitlin; Pinkard, Henry; Peck, Sebastian; Beemiller, Peter; Krummel, Matthew F.

    2017-01-01

    Multiphoton microscopy has become a staple tool for tracking cells within tissues and organs due to superior depth of penetration, low excitation volumes, and reduced phototoxicity. Many factors, ranging from laser pulse width to relay optics to detectors and electronics, contribute to the overall ability of these microscopes to excite and detect fluorescence deep within tissues. However, we have found that there are few standard ways already described in the literature to distinguish between microscopes or to benchmark existing microscopes to measure the overall quality and efficiency of these instruments. Here, we discuss some simple parameters and methods that can either be used within a multiphoton facility or by a prospective purchaser to benchmark performance. This can both assist in identifying decay in microscope performance and in choosing features of a scope that are suited to experimental needs. PMID:24974026

  12. Assessing and benchmarking multiphoton microscopes for biologists.

    PubMed

    Corbin, Kaitlin; Pinkard, Henry; Peck, Sebastian; Beemiller, Peter; Krummel, Matthew F

    2014-01-01

    Multiphoton microscopy has become a staple tool for tracking cells within tissues and organs due to superior depth of penetration, low excitation volumes, and reduced phototoxicity. Many factors, ranging from laser pulse width to relay optics to detectors and electronics, contribute to the overall ability of these microscopes to excite and detect fluorescence deep within tissues. However, we have found that there are few standard ways already described in the literature to distinguish between microscopes or to benchmark existing microscopes to measure the overall quality and efficiency of these instruments. Here, we discuss some simple parameters and methods that can either be used within a multiphoton facility or by a prospective purchaser to benchmark performance. This can both assist in identifying decay in microscope performance and in choosing features of a scope that are suited to experimental needs.

  13. BN-600 full MOX core benchmark analysis.

    SciTech Connect

    Kim, Y. I.; Hill, R. N.; Grimm, K.; Rimpault, G.; Newton, T.; Li, Z. H.; Rineiski, A.; Mohanakrishan, P.; Ishikawa, M.; Lee, K. B.; Danilytchev, A.; Stogov, V.; Nuclear Engineering Division; International Atomic Energy Agency; CEA; SERCO Assurance; China Inst. of Atomic Energy; Forschungszentrum Karlsruhe; Indira Gandhi Centre for Atomic Research; Japan Nuclear Cycle Development Inst.; Korea Atomic Energy Research Inst.; Inst. of Physics and Power Engineering

    2004-01-01

    As a follow-up of the BN-600 hybrid core benchmark, a full MOX core benchmark was performed within the framework of the IAEA co-ordinated research project. Discrepancies between the values of main reactivity coefficients obtained by the participants for the BN-600 full MOX core benchmark appear to be larger than those in the previous hybrid core benchmarks on traditional core configurations. This arises due to uncertainties in the proper modelling of the axial sodium plenum above the core. It was recognized that the sodium density coefficient strongly depends on the core model configuration of interest (hybrid core vs. fully MOX fuelled core with sodium plenum above the core) in conjunction with the calculation method (diffusion vs. transport theory). The effects of the discrepancies revealed between the participants' results on the ULOF and UTOP transient behaviours of the BN-600 full MOX core were investigated in simplified transient analyses. Generally the diffusion approximation predicts more benign consequences for the ULOF accident but more hazardous ones for the UTOP accident when compared with the transport theory results. The heterogeneity effect does not have any significant effect on the simulation of the transient. The comparison of the transient analyses results concluded that the fuel Doppler coefficient and the sodium density coefficient are the two most important coefficients in understanding the ULOF transient behaviour. In particular, the uncertainty in evaluating the sodium density coefficient distribution has the largest impact on the description of reactor dynamics. This is because the maximum sodium temperature rise takes place at the top of the core and in the sodium plenum.

  14. Experimental Benchmarking of the Magnetized Friction Force

    SciTech Connect

    Fedotov, A. V.; Litvinenko, V. N.; Galnander, B.; Lofnes, T.; Ziemann, V.; Sidorin, A. O.; Smirnov, A. V.

    2006-03-20

    High-energy electron cooling, presently considered an essential tool for several applications in high-energy and nuclear physics, requires an accurate description of the friction force. A series of measurements was performed at CELSIUS with the goal of providing accurate data needed for the benchmarking of theories and simulations. Some results of an accurate comparison of experimental data with the friction force formulas are presented.

  15. EXPERIMENTAL BENCHMARKING OF THE MAGNETIZED FRICTION FORCE.

    SciTech Connect

    FEDOTOV, A.V.; GALNANDER, B.; LITVINENKO, V.N.; LOFNES, T.; SIDORIN, A.O.; SMIRNOV, A.V.; ZIEMANN, V.

    2005-09-18

    High-energy electron cooling, presently considered an essential tool for several applications in high-energy and nuclear physics, requires an accurate description of the friction force. A series of measurements was performed at CELSIUS with the goal of providing accurate data needed for the benchmarking of theories and simulations. Some results of an accurate comparison of experimental data with the friction force formulas are presented.

  16. Data Intensive Systems (DIS) Benchmark Performance Summary

    DTIC Science & Technology

    2003-08-01

    calculated. These give a rough measure of the texture of each ROI. A gray-level co-occurrence matrix (GLCM) contains information about the spatial...sum and difference histograms. The descriptors chosen as features for this benchmark are GLCM entropy and GLCM energy, and are defined in terms of...stressmark, the relationships of pairs of pixels within a randomly generated image are measured. These features quantify the texture of the image

  17. Optimal Quantum Control Using Randomized Benchmarking

    NASA Astrophysics Data System (ADS)

    Kelly, J.; Barends, R.; Campbell, B.; Chen, Y.; Chen, Z.; Chiaro, B.; Dunsworth, A.; Fowler, A. G.; Hoi, I.-C.; Jeffrey, E.; Megrant, A.; Mutus, J.; Neill, C.; O'Malley, P. J. J.; Quintana, C.; Roushan, P.; Sank, D.; Vainsencher, A.; Wenner, J.; White, T. C.; Cleland, A. N.; Martinis, John M.

    2014-06-01

    We present a method for optimizing quantum control in experimental systems, using a subset of randomized benchmarking measurements to rapidly infer error. This is demonstrated to improve single- and two-qubit gates, minimize gate bleedthrough, where a gate mechanism can cause errors on subsequent gates, and identify control crosstalk in superconducting qubits. This method is able to correct parameters so that control errors no longer dominate and is suitable for automated and closed-loop optimization of experimental systems.

  18. 42 CFR 440.385 - Delivery of benchmark and benchmark-equivalent coverage through managed care entities.

    Code of Federal Regulations, 2010 CFR

    2010-10-01

    ... 42 Public Health 4 2010-10-01 2010-10-01 false Delivery of benchmark and benchmark-equivalent coverage through managed care entities. 440.385 Section 440.385 Public Health CENTERS FOR MEDICARE & MEDICAID SERVICES, DEPARTMENT OF HEALTH AND HUMAN SERVICES (CONTINUED) MEDICAL ASSISTANCE PROGRAMS SERVICES: GENERAL PROVISIONS Benchmark Benefit...

  19. Energy benchmarking of commercial buildings: a low-cost pathway toward urban sustainability

    NASA Astrophysics Data System (ADS)

    Cox, Matt; Brown, Marilyn A.; Sun, Xiaojing

    2013-09-01

    US cities are beginning to experiment with a regulatory approach to address information failures in the real estate market by mandating the energy benchmarking of commercial buildings. Understanding how a commercial building uses energy has many benefits; for example, it helps building owners and tenants identify poor-performing buildings and subsystems and it enables high-performing buildings to achieve greater occupancy rates, rents, and property values. This paper estimates the possible impacts of a national energy benchmarking mandate through analysis chiefly utilizing the Georgia Tech version of the National Energy Modeling System (GT-NEMS). Correcting input discount rates results in a 4.0% reduction in projected energy consumption for seven major classes of equipment relative to the reference case forecast in 2020, rising to 8.7% in 2035. Thus, the official US energy forecasts appear to overestimate future energy consumption by underestimating investments in energy-efficient equipment. Further discount rate reductions spurred by benchmarking policies yield another 1.3-1.4% in energy savings in 2020, increasing to 2.2-2.4% in 2035. Benchmarking would increase the purchase of energy-efficient equipment, reducing energy bills, CO2 emissions, and conventional air pollution. Achieving comparable CO2 savings would require more than tripling existing US solar capacity. Our analysis suggests that nearly 90% of the energy saved by a national benchmarking policy would benefit metropolitan areas, and the policy’s benefits would outweigh its costs, both to the private sector and society broadly.

  20. 'Wasteaware' benchmark indicators for integrated sustainable waste management in cities.

    PubMed

    Wilson, David C; Rodic, Ljiljana; Cowing, Michael J; Velis, Costas A; Whiteman, Andrew D; Scheinberg, Anne; Vilches, Recaredo; Masterson, Darragh; Stretz, Joachim; Oelz, Barbara

    2015-01-01

    This paper addresses a major problem in international solid waste management, which is twofold: a lack of data, and a lack of consistent data to allow comparison between cities. The paper presents an indicator set for integrated sustainable waste management (ISWM) in cities both North and South, to allow benchmarking of a city's performance, comparing cities and monitoring developments over time. It builds on pioneering work for UN-Habitat's solid waste management in the World's cities. The comprehensive analytical framework of a city's solid waste management system is divided into two overlapping 'triangles' - one comprising the three physical components, i.e. collection, recycling, and disposal, and the other comprising three governance aspects, i.e. inclusivity; financial sustainability; and sound institutions and proactive policies. The indicator set includes essential quantitative indicators as well as qualitative composite indicators. This updated and revised 'Wasteaware' set of ISWM benchmark indicators is the cumulative result of testing various prototypes in more than 50 cities around the world. This experience confirms the utility of indicators in allowing comprehensive performance measurement and comparison of both 'hard' physical components and 'soft' governance aspects; and in prioritising 'next steps' in developing a city's solid waste management system, by identifying both local strengths that can be built on and weak points to be addressed. The Wasteaware ISWM indicators are applicable to a broad range of cities with very different levels of income and solid waste management practices. Their wide application as a standard methodology will help to fill the historical data gap.

  1. Benchmarking and audit of breast units improves quality of care

    PubMed Central

    van Dam, P.A.; Verkinderen, L.; Hauspy, J.; Vermeulen, P.; Dirix, L.; Huizing, M.; Altintas, S.; Papadimitriou, K.; Peeters, M.; Tjalma, W.

    2013-01-01

    Quality Indicators (QIs) are measures of health care quality that make use of readily available hospital inpatient administrative data. Assessment of quality of care can be performed on different levels: national, regional, on a hospital basis or on an individual basis. It can be a mandatory or voluntary system. In all cases development of an adequate database for data extraction, and feedback of the findings is of paramount importance. In the present paper we performed a Medline search on “QIs and breast cancer” and “benchmarking and breast cancer care”, and we have added some data from personal experience. The current data clearly show that the use of QIs for breast cancer care, regular internal and external audit of performance of breast units, and benchmarking are effective in improving quality of care. Adherence to guidelines improves markedly (particularly regarding adjuvant treatment) and there are data emerging showing that this results in a better outcome. As quality assurance benefits patients, it will be a challenge for the medical and hospital community to develop affordable quality control systems that do not lead to excessive workload. PMID:24753926

  2. MODEL BENCHMARK WITH EXPERIMENT AT THE SNS LINAC

    SciTech Connect

    Shishlo, Andrei P; Aleksandrov, Alexander V; Liu, Yun; Plum, Michael A

    2016-01-01

    The history of attempts to perform a transverse matching in the Spallation Neutron Source (SNS) superconducting linac (SCL) is discussed. The SCL has 9 laser wire (LW) stations to perform non-destructive measurements of the transverse beam profiles. Any matching starts with the measurement of the initial Twiss parameters, which in the SNS case was done by using the first four LW stations at the beginning of the superconducting linac. For years the consistency between data from all LW stations could not be achieved. This problem was resolved only after significant improvements in accuracy of the phase scans of the SCL cavities, more precise analysis of all available scan data, better optics planning, and the initial longitudinal Twiss parameter measurements. The presented paper discusses in detail these developed procedures.

  3. A Cross-Benchmarking and Validation Initiative for Tokamak 3D Equilibrium Calculations

    NASA Astrophysics Data System (ADS)

    Reiman, A.; Turnbull, A.; Evans, T.; Ferraro, N.; Lazarus, E.; Breslau, J.; Cerfon, A.; Chang, C. S.; Hager, R.; King, J.; Lanctot, M.; Lazerson, S.; Liu, Y.; McFadden, G.; Monticello, D.; Nazikian, R.; Park, J. K.; Sovinec, C.; Suzuki, Y.; Zhu, P.

    2014-10-01

    We are pursuing a cross-benchmarking and validation initiative for tokamak 3D equilibrium calculations, with 11 codes participating: the linearized tokamak equilibrium codes IPEC and MARS-F, the time-dependent extended MHD codes M3D-C1, M3D, and NIMROD, the gyrokinetic code XGC, as well as the stellarator codes VMEC, NSTAB, PIES, HINT and SPEC. Dedicated experiments for the purpose of generating data for validation have been done on the DIII-D tokamak. The data will allow us to do validation simultaneously with cross-benchmarking. Initial cross-benchmarking calculations are finding a disagreement between stellarator and tokamak 3D equilibrium codes. Work supported in part by U.S. DOE under Contracts DE-AC02-09CH11466, DE-FC02-04ER54698, DE-FG02-95ER54309 and DE-AC05-00OR22725.

  4. Benchmarking a Visual-Basic based multi-component one-dimensional reactive transport modeling tool

    NASA Astrophysics Data System (ADS)

    Torlapati, Jagadish; Prabhakar Clement, T.

    2013-01-01

    We present the details of a comprehensive numerical modeling tool, RT1D, which can be used for simulating biochemical and geochemical reactive transport problems. The code can be run within the standard Microsoft EXCEL Visual Basic platform, and it does not require any additional software tools. The code can be easily adapted by others for simulating different types of laboratory-scale reactive transport experiments. We illustrate the capabilities of the tool by solving five benchmark problems with varying levels of reaction complexity. These literature-derived benchmarks are used to highlight the versatility of the code for solving a variety of practical reactive transport problems. The benchmarks are described in detail to provide a comprehensive database, which can be used by model developers to test other numerical codes. The VBA code presented in the study is a practical tool that can be used by laboratory researchers for analyzing both batch and column datasets within an EXCEL platform.
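
    As a hedged illustration of the class of problem such a tool solves (this is not the RT1D code, and it is written in Python rather than VBA), the sketch below integrates one-dimensional advection-dispersion with first-order decay using a simple explicit, operator-split finite-difference scheme; the column length, velocity, dispersion coefficient, and decay rate are arbitrary assumptions.

      import numpy as np

      L, nx = 1.0, 101                    # column length (m), grid points
      v, D, k = 0.5, 1.0e-3, 0.05         # velocity (m/d), dispersion (m2/d), decay (1/d)
      dx = L / (nx - 1)
      dt = 0.4 * min(dx / v, dx**2 / (2.0 * D))   # respect advective and diffusive limits

      c = np.zeros(nx)
      c[0] = 1.0                          # constant-concentration inlet

      t, t_end = 0.0, 1.0                 # simulate one day
      while t < t_end:
          adv = -v * (c[1:-1] - c[:-2]) / dx                   # upwind advection
          disp = D * (c[2:] - 2.0 * c[1:-1] + c[:-2]) / dx**2  # central dispersion
          c[1:-1] += dt * (adv + disp - k * c[1:-1])           # first-order decay reaction
          c[-1] = c[-2]                   # zero-gradient outlet
          c[0] = 1.0
          t += dt

      print("mid-column concentration:", round(float(c[nx // 2]), 4))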

  5. Introduction to the HPC Challenge Benchmark Suite

    SciTech Connect

    Luszczek, Piotr; Dongarra, Jack J.; Koester, David; Rabenseifner, Rolf; Lucas, Bob; Kepner, Jeremy; McCalpin, John; Bailey, David; Takahashi, Daisuke

    2005-04-25

    The HPC Challenge benchmark suite has been released by the DARPA HPCS program to help define the performance boundaries of future Petascale computing systems. HPC Challenge is a suite of tests that examine the performance of HPC architectures using kernels with memory access patterns more challenging than those of the High Performance Linpack (HPL) benchmark used in the Top500 list. Thus, the suite is designed to augment the Top500 list, providing benchmarks that bound the performance of many real applications as a function of memory access characteristics, e.g., spatial and temporal locality, and providing a framework for including additional tests. In particular, the suite is composed of several well-known computational kernels (STREAM, HPL, matrix multiply--DGEMM, parallel matrix transpose--PTRANS, FFT, RandomAccess, and bandwidth/latency tests--b_eff) that attempt to span high and low spatial and temporal locality space. By design, the HPC Challenge tests are scalable with the size of data sets being a function of the largest HPL matrix for the tested system.
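
    As a concrete picture of what one of these kernels measures, the fragment below is a NumPy rendition of the STREAM triad, the memory-bandwidth kernel in the suite. It is only an illustrative sketch - the official benchmark is a compiled, carefully timed code - and the array length is an assumption chosen to exceed typical cache sizes.

      import time
      import numpy as np

      n = 20_000_000                    # array length (assumption; exceeds cache)
      a = np.zeros(n)
      b = np.ones(n)
      c = np.full(n, 2.0)
      scalar = 3.0

      t0 = time.perf_counter()
      a[:] = b + scalar * c             # STREAM triad: a = b + s*c
      elapsed = time.perf_counter() - t0

      bytes_moved = 3 * n * a.itemsize  # read b, read c, write a
      print(f"triad bandwidth: {bytes_moved / elapsed / 1e9:.2f} GB/s")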

  6. Equilibrium Partitioning Sediment Benchmarks (ESBs) for the ...

    EPA Pesticide Factsheets

    This document describes procedures to determine the concentrations of nonionic organic chemicals in sediment interstitial waters. In previous ESB documents, the general equilibrium partitioning (EqP) approach was chosen for the derivation of sediment benchmarks because it accounts for the varying bioavailability of chemicals in different sediments and allows for the incorporation of the appropriate biological effects concentration. This provides for the derivation of benchmarks that are causally linked to the specific chemical, applicable across sediments, and appropriately protective of benthic organisms.  This equilibrium partitioning sediment benchmark (ESB) document was prepared by scientists from the Atlantic Ecology Division, Mid-Continent Ecology Division, and Western Ecology Division, the Office of Water, and private consultants. The document describes procedures to determine the interstitial water concentrations of nonionic organic chemicals in contaminated sediments. Based on these concentrations, guidance is provided on the derivation of toxic units to assess whether the sediments are likely to cause adverse effects to benthic organisms. The equilibrium partitioning (EqP) approach was chosen because it is based on the concentrations of chemical(s) that are known to be harmful and bioavailable in the environment.  This document, and five others published over the last nine years, will be useful for the Program Offices, including Superfund, a

  7. Benchmarking the Multidimensional Stellar Implicit Code MUSIC

    NASA Astrophysics Data System (ADS)

    Goffrey, T.; Pratt, J.; Viallet, M.; Baraffe, I.; Popov, M. V.; Walder, R.; Folini, D.; Geroux, C.; Constantino, T.

    2017-03-01

    We present the results of a numerical benchmark study for the MUltidimensional Stellar Implicit Code (MUSIC) based on widely applicable two- and three-dimensional compressible hydrodynamics problems relevant to stellar interiors. MUSIC is an implicit large eddy simulation code that uses implicit time integration, implemented as a Jacobian-free Newton Krylov method. A physics based preconditioning technique which can be adjusted to target varying physics is used to improve the performance of the solver. The problems used for this benchmark study include the Rayleigh-Taylor and Kelvin-Helmholtz instabilities, and the decay of the Taylor-Green vortex. Additionally we show a test of hydrostatic equilibrium, in a stellar environment which is dominated by radiative effects. In this setting the flexibility of the preconditioning technique is demonstrated. This work aims to bridge the gap between the hydrodynamic test problems typically used during development of numerical methods and the complex flows of stellar interiors. A series of multidimensional tests were performed and analysed. Each of these test cases was analysed with a simple, scalar diagnostic, with the aim of enabling direct code comparisons. As the tests performed do not have analytic solutions, we verify MUSIC by comparing it to established codes including ATHENA and the PENCIL code. MUSIC is able to both reproduce behaviour from established and widely-used codes as well as results expected from theoretical predictions. This benchmarking study concludes a series of papers describing the development of the MUSIC code and provides confidence in future applications.
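
    For reference, the Taylor-Green vortex test mentioned above is usually initialized (in its standard incompressible form, written here in LaTeX; a compressible setup such as MUSIC's adds a consistent pressure and density field) on a periodic cube of side 2\pi as:

      u = U_0 \sin x \cos y \cos z, \quad v = -U_0 \cos x \sin y \cos z, \quad w = 0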

  8. A PWR Thorium Pin Cell Burnup Benchmark

    SciTech Connect

    Weaver, Kevan Dean; Zhao, X.; Pilat, E. E; Hejzlar, P.

    2000-05-01

    As part of work to evaluate the potential benefits of using thorium in LWR fuel, a thorium fueled benchmark comparison was made in this study between state-of-the-art codes, MOCUP (MCNP4B + ORIGEN2), and CASMO-4 for burnup calculations. The MOCUP runs were done individually at MIT and INEEL, using the same model but with some differences in techniques and cross section libraries. Eigenvalue and isotope concentrations were compared on a PWR pin cell model up to high burnup. The eigenvalue comparison as a function of burnup is good: the maximum difference is within 2% and the average absolute difference less than 1%. The isotope concentration comparisons are better than a set of MOX fuel benchmarks and comparable to a set of uranium fuel benchmarks reported in the literature. The actinide and fission product data sources used in the MOCUP burnup calculations for a typical thorium fuel are documented. Reasons for code vs code differences are analyzed and discussed.

  9. Sonic boom acceptability studies

    NASA Technical Reports Server (NTRS)

    Shepherd, Kevin P.; Sullivan, Brenda M.; Leatherwood, Jack D.; Mccurdy, David A.

    1992-01-01

    The determination of the magnitude of sonic boom exposure which would be acceptable to the general population requires, as a starting point, a method to assess and compare individual sonic booms. There is no consensus within the scientific and regulatory communities regarding an appropriate sonic boom assessment metric. Loudness, being a fundamental and well-understood attribute of human hearing was chosen as a means of comparing sonic booms of differing shapes and amplitudes. The figure illustrates the basic steps which yield a calculated value of loudness. Based upon the aircraft configuration and its operating conditions, the sonic boom pressure signature which reaches the ground is calculated. This pressure-time history is transformed to the frequency domain and converted into a one-third octave band spectrum. The essence of the loudness method is to account for the frequency response and integration characteristics of the auditory system. The result of the calculation procedure is a numerical description (perceived level, dB) which represents the loudness of the sonic boom waveform.
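
    A hedged sketch of the first steps in that chain - transforming a sampled pressure signature to the frequency domain and summing its power into one-third-octave bands - is given below. The idealized N-wave, sample rate, and band range are illustrative assumptions, and the auditory weighting that converts band levels into perceived level (dB) is not shown.

      import numpy as np

      fs, dur = 8000.0, 0.4                      # sample rate (Hz) and record length (s)
      t = np.arange(0.0, dur, 1.0 / fs)
      p = np.where(t < 0.3, 50.0 * (1.0 - 2.0 * t / 0.3), 0.0)  # idealized N-wave (Pa)

      spec = np.fft.rfft(p) / len(p)             # one-sided spectrum
      freqs = np.fft.rfftfreq(len(p), 1.0 / fs)
      power = 2.0 * np.abs(spec) ** 2            # band power (Pa^2); DC/Nyquist factor ignored

      p_ref = 20.0e-6                            # reference pressure (Pa)
      for band in range(10, 34):                 # nominal bands from ~10 Hz to 2 kHz
          fc = 1000.0 * 2.0 ** ((band - 30) / 3.0)          # band centre frequency
          lo, hi = fc * 2.0 ** (-1.0 / 6.0), fc * 2.0 ** (1.0 / 6.0)
          bp = power[(freqs >= lo) & (freqs < hi)].sum()
          if bp > 0.0:
              print(f"{fc:8.1f} Hz  {10.0 * np.log10(bp / p_ref**2):6.1f} dB")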

  10. A Benchmarking Initiative for Reactive Transport Modeling Applied to Subsurface Environmental Applications

    NASA Astrophysics Data System (ADS)

    Steefel, C. I.

    2015-12-01

    Over the last 20 years, we have seen the evolution of multicomponent reactive transport modeling and the expanding range and increasing complexity of subsurface environmental applications it is being used to address. Reactive transport modeling is being asked to provide accurate assessments of engineering performance and risk for important issues with far-reaching consequences. As a result, the complexity and detail of subsurface processes, properties, and conditions that can be simulated have significantly expanded. Closed form solutions are necessary and useful, but limited to situations that are far simpler than typical applications that combine many physical and chemical processes, in many cases in coupled form. In the absence of closed form and yet realistic solutions for complex applications, numerical benchmark problems with an accepted set of results will be indispensable to qualifying codes for various environmental applications. The intent of this benchmarking exercise, now underway for more than five years, is to develop and publish a set of well-described benchmark problems that can be used to demonstrate simulator conformance with norms established by the subsurface science and engineering community. The objective is not to verify this or that specific code--the reactive transport codes play a supporting role in this regard—but rather to use the codes to verify that a common solution of the problem can be achieved. Thus, the objective of each of the manuscripts is to present an environmentally-relevant benchmark problem that tests the conceptual model capabilities, numerical implementation, process coupling, and accuracy. The benchmark problems developed to date include 1) microbially-mediated reactions, 2) isotopes, 3) multi-component diffusion, 4) uranium fate and transport, 5) metal mobility in mining affected systems, and 6) waste repositories and related aspects.

  11. Protein-Protein Docking Benchmark Version 3.0

    PubMed Central

    Hwang, Howook; Pierce, Brian; Mintseris, Julian; Janin, Joël; Weng, Zhiping

    2009-01-01

    We present version 3.0 of our publicly available protein-protein docking benchmark. This update includes 40 new test cases, representing a 48% increase from Benchmark 2.0. For all of the new cases, the crystal structures of both binding partners are available. As with Benchmark 2.0, SCOP (Structural Classification of Proteins) was used to remove redundant test cases. The 124 unbound-unbound test cases in Benchmark 3.0 are classified into 88 rigid-body cases, 19 medium difficulty cases, and 17 difficult cases, based on the degree of conformational change at the interface upon complex formation. In addition to providing the community with more test cases for evaluating docking methods, the expansion of Benchmark 3.0 will facilitate the development of new algorithms that require a large number of training examples. Benchmark 3.0 is available to the public at http://zlab.bu.edu/benchmark. PMID:18491384

  12. Experience With the SCALE Criticality Safety Cross Section Libraries

    SciTech Connect

    Bowman, S.M.

    2000-08-21

    This report provides detailed information on the SCALE criticality safety cross-section libraries. Areas covered include the origins of the libraries, the data on which they are based, how they were generated, past experience and validations, and performance comparisons with measured critical experiments and numerical benchmarks. The performance of the SCALE criticality safety cross-section libraries on various types of fissile systems is examined in detail. Most of the performance areas are demonstrated by examining the performance of the libraries vs critical experiments to show general trends and weaknesses. In areas where directly applicable critical experiments do not exist, performance is examined based on the general knowledge of the strengths and weaknesses of the cross sections. In this case, the experience in the use of the cross sections and comparisons with the results of other libraries on the same systems are relied on for establishing acceptability of application of a particular SCALE library to a particular fissile system. This report should aid in establishing when a SCALE cross-section library would be expected to perform acceptably and where there are known or suspected deficiencies that would cause the calculations to be less reliable. To determine the acceptability of a library for a particular application, the calculational bias of the library should be established by directly applicable critical experiments.

  13. Death Acceptance through Ritual

    ERIC Educational Resources Information Center

    Reeves, Nancy C.

    2011-01-01

    This article summarizes the author's original research, which sought to discover the elements necessary for using death-related ritual as a psychotherapeutic technique for grieving people who experience their grief as "stuck," "unending," "maladaptive," and so on. A "death-related ritual" is defined as a ceremony, directly involving at least 1…

  14. Physiologic correlates to background noise acceptance

    NASA Astrophysics Data System (ADS)

    Tampas, Joanna; Harkrider, Ashley; Nabelek, Anna

    2001-05-01

    Acceptance of background noise can be evaluated by having listeners indicate the highest background noise level (BNL) they are willing to accept while following the words of a story presented at their most comfortable listening level (MCL). The difference between the selected MCL and BNL is termed the acceptable noise level (ANL). One of the consistent findings in previous studies of ANL is large intersubject variability in acceptance of background noise. This variability is not related to age, gender, hearing sensitivity, personality, type of background noise, or speech perception in noise performance. The purpose of the current experiment was to determine if individual differences in physiological activity measured from the peripheral and central auditory systems of young female adults with normal hearing can account for the variability observed in ANL. Correlations between ANL and various physiological responses, including spontaneous, click-evoked, and distortion-product otoacoustic emissions, auditory brainstem and middle latency evoked potentials, and electroencephalography will be presented. Results may increase understanding of the regions of the auditory system that contribute to individual noise acceptance.
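
    Written as a formula (all levels in dB), the quantity defined above is simply ANL = MCL - BNL, so listeners who accept relatively high background noise levels have small ANL values.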

  15. Evaluation of the Aleph PIC Code on Benchmark Simulations

    NASA Astrophysics Data System (ADS)

    Boerner, Jeremiah; Pacheco, Jose; Grillet, Anne

    2016-09-01

    Aleph is a massively parallel, 3D unstructured mesh, Particle-in-Cell (PIC) code, developed to model low temperature plasma applications. In order to verify and validate performance, Aleph is benchmarked against a series of canonical problems to demonstrate statistical indistinguishability in the results. Here, a series of four problems is studied: Couette flows over a range of Knudsen number, sheath formation in an undriven plasma, the two-stream instability, and a capacitive discharge. These problems respectively exercise collisional processes, particle motion in electrostatic fields, electrostatic field solves coupled to particle motion, and a fully coupled reacting plasma. Favorable comparison with accepted results establishes confidence in Aleph's capability and accuracy as a general purpose PIC code. Finally, Aleph is used to investigate the sensitivity of a triggered vacuum gap switch to the particle injection conditions associated with arc breakdown at the trigger. Sandia National Laboratories is a multi-program laboratory managed and operated by Sandia Corporation, a wholly owned subsidiary of Lockheed Martin Corporation, for the U.S. Department of Energy's National Nuclear Security Administration under contract DE-AC04-94AL85000.

  16. Engineering Codes' Numerical Benchmarks for RBMK-1000 Nuclear Reactor

    SciTech Connect

    Davidov, V.K.; Rozhdestvenskij, M.I.; Druzhinin, V.E.; Kashevarov, F.Yu.; Gorodkov, S.S.; Kvator, V.M.

    2002-07-01

    Solutions to a number of neutron transport problems typical for RBMK-1000 reactor calculations were developed with the MCU-REA code, which is based on a Monte-Carlo method. Approximately 400 relatively simple problem variants - infinite-height cylindrical and square cells and some poly-cells - were considered. At that stage, the sole simplification of the complex internal structure of the RBMK cells was the homogenization of the fuel rod spacers. The burnup calculations were carried out with a detailed accounting of the history of circa 50 isotopes. The results are intended to be used as numerical benchmarks for the verification of engineering code complexes and their separate modules for RBMK-1000 calculations. They can also be treated as results of numerical experiments and analyzed to achieve a better understanding of the complex fine features of neutron physics processes in the reactor. (authors)

  17. Benchmarking and performance analysis of the CM-2. [SIMD computer

    NASA Technical Reports Server (NTRS)

    Myers, David W.; Adams, George B., II

    1988-01-01

    A suite of benchmarking routines testing communication, basic arithmetic operations, and selected kernel algorithms written in LISP and PARIS was developed for the CM-2. Experiment runs are automated via a software framework that sequences individual tests, allowing for unattended overnight operation. Multiple measurements are made and treated statistically to generate well-characterized results from the noisy values given by cm:time. The results obtained provide a comparison with similar, but less extensive, testing done on a CM-1. Tests were chosen to aid the algorithmist in constructing fast, efficient, and correct code on the CM-2, as well as gain insight into what performance criteria are needed when evaluating parallel processing machines.

  18. A Benchmark Problem for Development of Autonomous Structural Modal Identification

    NASA Technical Reports Server (NTRS)

    Pappa, Richard S.; Woodard, Stanley E.; Juang, Jer-Nan

    1996-01-01

    This paper summarizes modal identification results obtained using an autonomous version of the Eigensystem Realization Algorithm on a dynamically complex, laboratory structure. The benchmark problem uses 48 of 768 free-decay responses measured in a complete modal survey test. The true modal parameters of the structure are well known from two previous, independent investigations. Without user involvement, the autonomous data analysis identified 24 to 33 structural modes with good to excellent accuracy in 62 seconds of CPU time (on a DEC Alpha 4000 computer). The modal identification technique described in the paper is the baseline algorithm for NASA's Autonomous Dynamics Determination (ADD) experiment scheduled to fly on International Space Station assembly flights in 1997-1999.
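
    For readers unfamiliar with the algorithm, the fragment below is a minimal, single-channel sketch of the Eigensystem Realization Algorithm applied to a synthetic free-decay signal. The two modes, sample rate, Hankel size, and model order are assumptions made for illustration; the autonomous flight version processes many response channels, handles noise, and selects modes without user involvement.

      import numpy as np

      dt = 0.01                                  # sample interval (s)
      t = np.arange(0.0, 5.0, dt)
      # synthetic free-decay response: two damped modes near 1.5 Hz and 4.0 Hz
      y = (np.exp(-0.2 * t) * np.sin(2 * np.pi * 1.5 * t)
           + 0.5 * np.exp(-0.5 * t) * np.sin(2 * np.pi * 4.0 * t))

      r = 60                                     # block Hankel dimension
      H0 = np.array([y[i:i + r] for i in range(r)])          # Hankel matrix H(0)
      H1 = np.array([y[i + 1:i + 1 + r] for i in range(r)])  # time-shifted H(1)

      U, s, Vt = np.linalg.svd(H0)
      n = 4                                      # retained order (2 modes = 4 states)
      S = np.diag(1.0 / np.sqrt(s[:n]))
      A = S @ U[:, :n].T @ H1 @ Vt[:n].T @ S     # realized discrete-time state matrix

      poles = np.log(np.linalg.eigvals(A)) / dt  # continuous-time poles
      freqs = np.abs(poles) / (2.0 * np.pi)      # natural frequencies (Hz)
      zeta = -poles.real / np.abs(poles)         # damping ratios
      order = np.argsort(freqs)
      print("frequencies (Hz):", np.round(freqs[order], 3))
      print("damping ratios: ", np.round(zeta[order], 4))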

  19. NAS Parallel Benchmarks. 2.4

    NASA Technical Reports Server (NTRS)

    VanderWijngaart, Rob; Biegel, Bryan A. (Technical Monitor)

    2002-01-01

    We describe a new problem size, called Class D, for the NAS Parallel Benchmarks (NPB), whose MPI source code implementation is being released as NPB 2.4. A brief rationale is given for how the new class is derived. We also describe the modifications made to the MPI (Message Passing Interface) implementation to allow the new class to be run on systems with 32-bit integers, and with moderate amounts of memory. Finally, we give the verification values for the new problem size.

  20. Benchmarks of Global Clean Energy Manufacturing

    SciTech Connect

    Sandor, Debra; Chung, Donald; Keyser, David; Mann, Margaret; Engel-Cox, Jill

    2017-01-01

    The Clean Energy Manufacturing Analysis Center (CEMAC), sponsored by the U.S. Department of Energy (DOE) Office of Energy Efficiency and Renewable Energy (EERE), provides objective analysis and up-to-date data on global supply chains and manufacturing of clean energy technologies. Benchmarks of Global Clean Energy Manufacturing sheds light on several fundamental questions about the global clean technology manufacturing enterprise: How does clean energy technology manufacturing impact national economies? What are the economic opportunities across the manufacturing supply chain? What are the global dynamics of clean energy technology manufacturing?

  1. Benchmarking East Tennessee's economic capacity

    SciTech Connect

    1995-04-20

    This presentation comprises viewgraphs delineating major economic factors operating in 15 counties in East Tennessee. The purpose of the information presented is to provide a benchmark analysis of economic conditions for use in guiding economic growth in the region. The emphasis of the presentation is economic infrastructure, which is classified into six categories: human resources, technology, financial resources, physical infrastructure, quality of life, and tax and regulation. Data for analysis of key indicators in each of the categories are presented. Preliminary analyses, in the form of strengths and weaknesses and comparison to reference groups, are given.

  2. Benchmark Evaluation of the HTR-PROTEUS Absorber Rod Worths (Core 4)

    SciTech Connect

    John D. Bess; Leland M. Montierth

    2014-06-01

    PROTEUS was a zero-power research reactor at the Paul Scherrer Institute (PSI) in Switzerland. The critical assembly was constructed from a large graphite annulus surrounding a central cylindrical cavity. Various experimental programs were investigated in PROTEUS; during the years 1992 through 1996, it was configured as a pebble-bed reactor and designated HTR-PROTEUS. Various critical configurations were assembled, each accompanied by an assortment of reactor physics experiments including differential and integral absorber rod measurements, kinetics, reaction rate distributions, water ingress effects, and small sample reactivity effects [1]. Four benchmark reports were previously prepared and included in the March 2013 edition of the International Handbook of Evaluated Reactor Physics Benchmark Experiments (IRPhEP Handbook) [2], evaluating eleven critical configurations. A summary of that effort was previously provided [3], and an analysis of absorber rod worth measurements for Cores 9 and 10 was performed prior to this analysis and included in PROTEUS-GCR-EXP-004 [4]. In the current benchmark effort, absorber rod worths measured for Core Configuration 4, which was the only core with a randomly-packed pebble loading, have been evaluated for inclusion as a revision to the HTR-PROTEUS benchmark report PROTEUS-GCR-EXP-002.

  3. Interactome mapping for analysis of complex phenotypes: insights from benchmarking binary interaction assays.

    PubMed

    Braun, Pascal

    2012-05-01

    Protein interactions mediate essentially all biological processes and analysis of protein-protein interactions using both large-scale and small-scale approaches has contributed fundamental insights to the understanding of biological systems. In recent years, interactome network maps have emerged as an important tool for analyzing and interpreting genetic data of complex phenotypes. Complementary experimental approaches to test for binary, direct interactions, and for membership in protein complexes are used to explore the interactome. The two approaches are not redundant but yield orthogonal perspectives onto the complex network of physical interactions by which proteins mediate biological processes. In recent years, several publications have demonstrated that interactions from high-throughput experiments can be as reliable as the high-quality subset of interactions identified in small-scale studies. Critical for this insight was the introduction of standardized experimental benchmarking of interaction and validation assays using reference sets. The data obtained in these benchmarking experiments have resulted in greater appreciation of the limitations and the complementary strengths of different assays. Moreover, benchmarking is a central element of a conceptual framework to estimate interactome sizes and thereby measure progress toward near complete network maps. These estimates have revealed that current large-scale data sets, although often of high quality, cover only a small fraction of a given interactome. Here, I review the findings of assay benchmarking and discuss implications for quality control, and for strategies toward obtaining a near-complete map of the interactome of an organism.

  4. Acceptance of tinnitus: validation of the tinnitus acceptance questionnaire.

    PubMed

    Weise, Cornelia; Kleinstäuber, Maria; Hesser, Hugo; Westin, Vendela Zetterqvist; Andersson, Gerhard

    2013-01-01

    The concept of acceptance has recently received growing attention within tinnitus research due to the fact that tinnitus acceptance is one of the major targets of psychotherapeutic treatments. Accordingly, acceptance-based treatments will most likely be increasingly offered to tinnitus patients and assessments of acceptance-related behaviours will thus be needed. The current study investigated the factorial structure of the Tinnitus Acceptance Questionnaire (TAQ) and the role of tinnitus acceptance as mediating link between sound perception (i.e. subjective loudness of tinnitus) and tinnitus distress. In total, 424 patients with chronic tinnitus completed the TAQ and validated measures of tinnitus distress, anxiety, and depression online. Confirmatory factor analysis provided support to a good fit of the data to the hypothesised bifactor model (root-mean-square-error of approximation = .065; Comparative Fit Index = .974; Tucker-Lewis Index = .958; standardised root mean square residual = .032). In addition, mediation analysis, using a non-parametric joint coefficient approach, revealed that tinnitus-specific acceptance partially mediated the relation between subjective tinnitus loudness and tinnitus distress (path ab = 5.96; 95% CI: 4.49, 7.69). In a multiple mediator model, tinnitus acceptance had a significantly stronger indirect effect than anxiety. The results confirm the factorial structure of the TAQ and suggest the importance of a general acceptance factor that contributes important unique variance beyond that of the first-order factors activity engagement and tinnitus suppression. Tinnitus acceptance as measured with the TAQ is proposed to be a key construct in tinnitus research and should be further implemented into treatment concepts to reduce tinnitus distress.

  5. Employee Acceptance of BOS and BES Performance Appraisals.

    ERIC Educational Resources Information Center

    Dossett, Dennis L.; Gier, Joseph A.

    Previous research on performance evaluation systems has failed to take into account user acceptance. Employee acceptance of a behaviorally-based performance appraisal system was assessed in a field experiment contrasting user preference for Behavioral Expectations Scales (BES) versus Behavioral Observation Scales (BOS). Non-union sales associates…

  6. Evaluation of the HTR-10 Reactor as a Benchmark for Physics Code QA

    SciTech Connect

    William K. Terry; Soon Sam Kim; Leland M. Montierth; Joshua J. Cogliati; Abderrafi M. Ougouag

    2006-09-01

    The HTR-10 is a small (10 MWt) pebble-bed research reactor intended to develop pebble-bed reactor (PBR) technology in China. It will be used to test and develop fuel, verify PBR safety features, demonstrate combined electricity production and co-generation of heat, and provide experience in PBR design, operation, and construction. As the only currently operating PBR in the world, the HTR-10 can provide data of great interest to everyone involved in PBR technology. In particular, if it yields data of sufficient quality, it can be used as a benchmark for assessing the accuracy of computer codes proposed for use in PBR analysis. This paper summarizes the evaluation for the International Reactor Physics Experiment Evaluation Project (IRPhEP) of data obtained in measurements of the HTR-10’s initial criticality experiment for use as benchmarks for reactor physics codes.

  7. Learning Through Benchmarking: Developing a Relational, Prospective Approach to Benchmarking ICT in Learning and Teaching

    ERIC Educational Resources Information Center

    Ellis, Robert A.; Moore, Roger R.

    2006-01-01

    This study discusses benchmarking the use of information and communication technologies (ICT) in teaching and learning between two universities with different missions: one an Australian campus-based metropolitan university and the other a British distance-education provider. It argues that the differences notwithstanding, it is possible to…

  8. Development of a California commercial building benchmarking database

    SciTech Connect

    Kinney, Satkartar; Piette, Mary Ann

    2002-05-17

    Building energy benchmarking is a useful starting point for commercial building owners and operators to target energy savings opportunities. There are a number of tools and methods for benchmarking energy use. Benchmarking based on regional data can provide more relevant information for California buildings than national tools such as Energy Star. This paper discusses issues related to benchmarking commercial building energy use and the development of Cal-Arch, a building energy benchmarking database for California. Currently Cal-Arch uses existing survey data from California's Commercial End Use Survey (CEUS), a largely underutilized wealth of information collected by California's major utilities. DOE's Commercial Building Energy Consumption Survey (CBECS) is used by a similar tool, Arch, and by a number of other benchmarking tools. Future versions of Arch/Cal-Arch will utilize additional data sources, including modeled data and individual buildings, to expand the database.
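
    In practice, survey-based benchmarking of this kind often reduces to locating a building's energy use intensity (EUI) within the distribution of comparable buildings in the survey. The Python sketch below shows that percentile comparison; the survey values and the building EUI are hypothetical placeholders, not CEUS or CBECS data.

      # Percentile-based energy benchmarking against a peer-group survey sample.
      # All numbers are hypothetical placeholders, not CEUS/CBECS data.
      import numpy as np

      survey_eui = np.array([45.0, 60.0, 72.0, 80.0, 95.0, 110.0, 130.0, 150.0])  # kBtu/ft2-yr
      building_eui = 88.0

      # Fraction of surveyed peers using as much or less energy per floor area;
      # a lower percentile indicates better-than-peer energy performance.
      percentile = 100.0 * np.mean(survey_eui <= building_eui)
      print(f"Building EUI {building_eui} kBtu/ft2-yr falls at the {percentile:.0f}th percentile of peers")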

  9. Cone penetrometer acceptance test report

    SciTech Connect

    Boechler, G.N.

    1996-09-19

    This Acceptance Test Report (ATR) documents the results of acceptance test procedure WHC-SD-WM-ATR-151. Included in this report are a summary of the tests, the results and issues, the signature and sign-off ATP pages, and a summary table mapping each specification to the ATP section that satisfied it.

  10. MARS code developments, benchmarking and applications

    SciTech Connect

    Mokhov, N.V.

    2000-03-24

    Recent developments of the MARS Monte Carlo code system for simulation of hadronic and electromagnetic cascades in shielding, accelerator and detector components in the energy range from a fraction of an electron volt up to 100 TeV are described. The physical model of hadron and lepton interactions with nuclei and atoms has undergone substantial improvements. These include a new nuclear cross section library, a model for soft pion production, a cascade-exciton model, a dual parton model, deuteron-nucleus and neutrino-nucleus interaction models, a detailed description of negative hadron and muon absorption, and a unified treatment of muon and charged hadron electromagnetic interactions with matter. New algorithms have been implemented into the code and benchmarked against experimental data. A new graphical user interface has been developed. The code capabilities to simulate cascades and generate a variety of results in complex systems have been enhanced. The MARS system includes links to the MCNP code for neutron and photon transport below 20 MeV, to the ANSYS code for thermal and stress analyses, and to the STRUCT code for multi-turn particle tracking in large synchrotrons and collider rings. Results of recent benchmarking of the MARS code are presented. Examples of non-trivial code applications are given for the Fermilab Booster and Main Injector, for a 1.5 MW target station and a muon storage ring.

  11. Parallel Ada benchmarks for the SVMS

    NASA Technical Reports Server (NTRS)

    Collard, Philippe E.

    1990-01-01

    The use of the parallel processing paradigm to design and develop faster and more reliable computers appears to mark the future of information processing. NASA started the development of such an architecture: the Spaceborne VHSIC Multi-processor System (SVMS). Ada will be one of the languages used to program the SVMS. One of the unique characteristics of Ada is that it supports parallel processing at the language level through its tasking constructs. It is important for the SVMS project team to assess how efficiently the SVMS architecture will be implemented, as well as how efficiently the Ada environment will be ported to the SVMS. AUTOCLASS II, a Bayesian classifier written in Common Lisp, was selected as one of the benchmarks for SVMS configurations. The purpose of the R and D effort was to provide the SVMS project team with a version of AUTOCLASS II, written in Ada, that would make use of Ada tasking constructs as much as possible so as to constitute a suitable benchmark. Additionally, a set of programs was developed to measure Ada tasking efficiency on parallel architectures and to determine the critical parameters influencing tasking efficiency. All this was designed to provide the SVMS project team with a set of suitable tools for the development of the SVMS architecture.

  12. Benchmarking database performance for genomic data.

    PubMed

    Khushi, Matloob

    2015-06-01

    Genomic regions represent features such as gene annotations, transcription factor binding sites and epigenetic modifications. Performing various genomic operations, such as identifying overlapping/non-overlapping regions or nearest gene annotations, is a common research need. The data can be saved in a database system for easy management; however, no comprehensive built-in database algorithm currently exists to identify overlapping regions. Therefore I have developed a novel region-mapping (RegMap) SQL-based algorithm to perform genomic operations and have benchmarked the performance of different databases. Benchmarking identified that PostgreSQL extracts overlapping regions much faster than MySQL. Insertion and data uploads in PostgreSQL were also better, although the general searching capability of both databases was almost equivalent. In addition, using the algorithm, pair-wise overlaps of >1000 datasets of transcription factor binding sites and histone marks, collected from previous publications, were computed, and HNF4G was found to co-locate significantly with the cohesin subunit STAG1 (SA1).
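
    The abstract does not spell out the RegMap algorithm itself, but the core operation it benchmarks, an interval-overlap join, can be expressed directly in SQL. The sketch below shows a generic overlap query through Python's sqlite3 module; the table layout, column names and example regions are hypothetical and are not taken from the paper.

      # Generic interval-overlap join for genomic regions, illustrated with sqlite3.
      # Not the RegMap algorithm; schema and example rows are hypothetical.
      import sqlite3

      con = sqlite3.connect(":memory:")
      con.executescript("""
          CREATE TABLE tf_sites (chrom TEXT, region_start INTEGER, region_end INTEGER, name TEXT);
          CREATE TABLE histone_marks (chrom TEXT, region_start INTEGER, region_end INTEGER, name TEXT);
          CREATE INDEX idx_tf ON tf_sites (chrom, region_start, region_end);
          CREATE INDEX idx_hm ON histone_marks (chrom, region_start, region_end);
      """)
      con.executemany("INSERT INTO tf_sites VALUES (?,?,?,?)",
                      [("chr1", 100, 200, "HNF4G"), ("chr1", 500, 600, "HNF4G")])
      con.executemany("INSERT INTO histone_marks VALUES (?,?,?,?)",
                      [("chr1", 150, 400, "H3K27ac"), ("chr1", 700, 800, "H3K27ac")])

      # Two regions overlap when they share a chromosome and neither ends before the other starts.
      rows = con.execute("""
          SELECT a.name, b.name, a.chrom,
                 MAX(a.region_start, b.region_start) AS overlap_start,
                 MIN(a.region_end, b.region_end) AS overlap_end
          FROM tf_sites a
          JOIN histone_marks b
            ON a.chrom = b.chrom
           AND a.region_start <= b.region_end
           AND b.region_start <= a.region_end
      """).fetchall()
      print(rows)  # [('HNF4G', 'H3K27ac', 'chr1', 150, 200)]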

  13. Simple mathematical law benchmarks human confrontations

    NASA Astrophysics Data System (ADS)

    Johnson, Neil F.; Medina, Pablo; Zhao, Guannan; Messinger, Daniel S.; Horgan, John; Gill, Paul; Bohorquez, Juan Camilo; Mattson, Whitney; Gangi, Devon; Qi, Hong; Manrique, Pedro; Velasquez, Nicolas; Morgenstern, Ana; Restrepo, Elvira; Johnson, Nicholas; Spagat, Michael; Zarama, Roberto

    2013-12-01

    Many high-profile societal problems involve an individual or group repeatedly attacking another - from child-parent disputes, sexual violence against women, civil unrest, violent conflicts and acts of terror, to current cyber-attacks on national infrastructure and ultrafast cyber-trades attacking stockholders. There is an urgent need to quantify the likely severity and timing of such future acts, shed light on likely perpetrators, and identify intervention strategies. Here we present a combined analysis of multiple datasets across all these domains which account for >100,000 events, and show that a simple mathematical law can benchmark them all. We derive this benchmark and interpret it, using a minimal mechanistic model grounded by state-of-the-art fieldwork. Our findings provide quantitative predictions concerning future attacks; a tool to help detect common perpetrators and abnormal behaviors; insight into the trajectory of a 'lone wolf'; identification of a critical threshold for spreading a message or idea among perpetrators; an intervention strategy to erode the most lethal clusters; and more broadly, a quantitative starting point for cross-disciplinary theorizing about human aggression at the individual and group level, in both real and online worlds.
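
    The abstract does not state the law explicitly; in the published study it takes the form of an approximate power-law "progress curve" for the timing of successive attacks within a campaign, tau_n ~ tau_1 * n^(-b). The Python sketch below shows one simple way to estimate the escalation exponent b by a least-squares fit in log-log space; the inter-event times are synthetic and purely illustrative.

      # Fit the escalation exponent b of a power-law progress curve tau_n = tau_1 * n**(-b)
      # by ordinary least squares in log-log space. Inter-event times are synthetic.
      import numpy as np

      rng = np.random.default_rng(1)
      n = np.arange(1, 11)                                              # attack index within one campaign
      tau = 100.0 * n ** -0.7 * np.exp(0.05 * rng.normal(size=n.size))  # days between attacks

      slope, intercept = np.polyfit(np.log(n), np.log(tau), 1)
      b_hat, tau1_hat = -slope, np.exp(intercept)
      print(f"escalation exponent b ~ {b_hat:.2f}, tau_1 ~ {tau1_hat:.1f} days")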

  14. Simple mathematical law benchmarks human confrontations.

    PubMed

    Johnson, Neil F; Medina, Pablo; Zhao, Guannan; Messinger, Daniel S; Horgan, John; Gill, Paul; Bohorquez, Juan Camilo; Mattson, Whitney; Gangi, Devon; Qi, Hong; Manrique, Pedro; Velasquez, Nicolas; Morgenstern, Ana; Restrepo, Elvira; Johnson, Nicholas; Spagat, Michael; Zarama, Roberto

    2013-12-10

    Many high-profile societal problems involve an individual or group repeatedly attacking another - from child-parent disputes, sexual violence against women, civil unrest, violent conflicts and acts of terror, to current cyber-attacks on national infrastructure and ultrafast cyber-trades attacking stockholders. There is an urgent need to quantify the likely severity and timing of such future acts, shed light on likely perpetrators, and identify intervention strategies. Here we present a combined analysis of multiple datasets across all these domains which account for >100,000 events, and show that a simple mathematical law can benchmark them all. We derive this benchmark and interpret it, using a minimal mechanistic model grounded by state-of-the-art fieldwork. Our findings provide quantitative predictions concerning future attacks; a tool to help detect common perpetrators and abnormal behaviors; insight into the trajectory of a 'lone wolf'; identification of a critical threshold for spreading a message or idea among perpetrators; an intervention strategy to erode the most lethal clusters; and more broadly, a quantitative starting point for cross-disciplinary theorizing about human aggression at the individual and group level, in both real and online worlds.

  15. Transparency benchmarking on audio watermarks and steganography

    NASA Astrophysics Data System (ADS)

    Kraetzer, Christian; Dittmann, Jana; Lang, Andreas

    2006-02-01

    The evaluation of transparency plays an important role in the context of watermarking and steganography algorithms. This paper introduces a general definition of the term transparency in the context of steganography, digital watermarking and attack-based evaluation of digital watermarking algorithms. For this purpose the term transparency is first considered individually for each of the three application fields (steganography, digital watermarking and watermarking algorithm evaluation). From the three results a general definition for the overall context is derived in a second step. The relevance and applicability of the definition given is evaluated in practice using existing audio watermarking and steganography algorithms (which work in time, frequency and wavelet domain) as well as an attack-based evaluation suite for audio watermarking benchmarking - StirMark for Audio (SMBA). For this purpose selected attacks from the SMBA suite are modified by adding transparency-enhancing measures using a psychoacoustic model. The transparency and robustness of the evaluated audio watermarking algorithms under the original and modified attacks are compared. The results of this paper show that transparency benchmarking will lead to new information regarding the algorithms under observation and their usage. This information can result in concrete recommendations for modification, like the ones resulting from the tests performed here.
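
    Before any psychoacoustic or listening-test evaluation, a plain signal-to-noise ratio between the original and the marked signal is sometimes used as a first, crude objective check. The Python sketch below computes such an SNR on synthetic audio; it is only a rough proxy and not the psychoacoustic transparency evaluation applied in the paper.

      # Crude objective proxy only: SNR between original and watermarked audio.
      # Synthetic signals; not the psychoacoustic transparency evaluation used above.
      import numpy as np

      rng = np.random.default_rng(0)
      original = rng.normal(size=48000)                        # 1 s of placeholder audio at 48 kHz
      watermarked = original + 0.01 * rng.normal(size=48000)   # hypothetical embedded watermark

      noise = watermarked - original
      snr_db = 10.0 * np.log10(np.sum(original**2) / np.sum(noise**2))
      print(f"SNR of watermarked vs. original: {snr_db:.1f} dB")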

  16. Computational Thermochemistry and Benchmarking of Reliable Methods

    SciTech Connect

    Feller, David F.; Dixon, David A.; Dunning, Thom H.; Dupuis, Michel; McClemore, Doug; Peterson, Kirk A.; Xantheas, Sotiris S.; Bernholdt, David E.; Windus, Theresa L.; Chalasinski, Grzegorz; Fosada, Rubicelia; Olguim, Jorge; Dobbs, Kerwin D.; Frurip, Donald; Stevens, Walter J.; Rondan, Nelson; Chase, Jared M.; Nichols, Jeffrey A.

    2006-06-20

    During the first and second years of the Computational Thermochemistry and Benchmarking of Reliable Methods project, we completed several studies using the parallel computing capabilities of the NWChem software and Molecular Science Computing Facility (MSCF), including large-scale density functional theory (DFT), second-order Moeller-Plesset (MP2) perturbation theory, and CCSD(T) calculations. During the third year, we continued to pursue the computational thermodynamic and benchmarking studies outlined in our proposal. With the issues affecting the robustness of the coupled cluster part of NWChem resolved, we pursued studies of the heats of formation of compounds containing 5 to 7 first- and/or second-row elements and approximately 10 to 14 hydrogens. The size of these systems, when combined with the large basis sets (cc-pVQZ and aug-cc-pVQZ) that are necessary for extrapolating to the complete basis set limit, creates a formidable computational challenge, for which NWChem on NWMPP1 is well suited.
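
    For readers unfamiliar with the extrapolation step, one commonly used scheme fits the correlation energy to E(l) = E_CBS + A / l^3 and solves for the complete-basis-set limit from two consecutive cardinal numbers. The Python sketch below implements that two-point formula; the input energies are illustrative numbers, not results from this project.

      # Two-point 1/l**3 extrapolation of the correlation energy to the CBS limit.
      # E(l) = E_CBS + A / l**3  =>  E_CBS = (E_x * x**3 - E_y * y**3) / (x**3 - y**3)
      def cbs_two_point(e_corr_x, e_corr_y, x, y):
          """Extrapolate from basis-set cardinal numbers x and y (e.g. 4 and 5)."""
          return (e_corr_x * x**3 - e_corr_y * y**3) / (x**3 - y**3)

      # Illustrative correlation energies (hartree) from cc-pVQZ (l=4) and cc-pV5Z (l=5).
      e_cbs = cbs_two_point(-0.3021, -0.3075, 4, 5)
      print(f"Estimated CBS-limit correlation energy: {e_cbs:.4f} Eh")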

  17. EVA Health and Human Performance Benchmarking Study

    NASA Technical Reports Server (NTRS)

    Abercromby, A. F.; Norcross, J.; Jarvis, S. L.

    2016-01-01

    Multiple HRP Risks and Gaps require detailed characterization of human health and performance during exploration extravehicular activity (EVA) tasks; however, a rigorous and comprehensive methodology for characterizing and comparing the health and human performance implications of current and future EVA spacesuit designs does not exist. This study will identify and implement functional tasks and metrics, both objective and subjective, that are relevant to health and human performance, such as metabolic expenditure, suit fit, discomfort, suited postural stability, cognitive performance, and potentially biochemical responses for humans working inside different EVA suits doing functional tasks under the appropriate simulated reduced gravity environments. This study will provide health and human performance benchmark data for humans working in current EVA suits (EMU, Mark III, and Z2) as well as shirtsleeves using a standard set of tasks and metrics with quantified reliability. Results and methodologies developed during this test will provide benchmark data against which future EVA suits, and different suit configurations (e.g., varied pressure, mass, CG) may be reliably compared in subsequent tests. Results will also inform fitness-for-duty standards as well as design requirements and operations concepts for future EVA suits and other exploration systems.

  18. Simple mathematical law benchmarks human confrontations

    PubMed Central

    Johnson, Neil F.; Medina, Pablo; Zhao, Guannan; Messinger, Daniel S.; Horgan, John; Gill, Paul; Bohorquez, Juan Camilo; Mattson, Whitney; Gangi, Devon; Qi, Hong; Manrique, Pedro; Velasquez, Nicolas; Morgenstern, Ana; Restrepo, Elvira; Johnson, Nicholas; Spagat, Michael; Zarama, Roberto

    2013-01-01

    Many high-profile societal problems involve an individual or group repeatedly attacking another – from child-parent disputes, sexual violence against women, civil unrest, violent conflicts and acts of terror, to current cyber-attacks on national infrastructure and ultrafast cyber-trades attacking stockholders. There is an urgent need to quantify the likely severity and timing of such future acts, shed light on likely perpetrators, and identify intervention strategies. Here we present a combined analysis of multiple datasets across all these domains which account for >100,000 events, and show that a simple mathematical law can benchmark them all. We derive this benchmark and interpret it, using a minimal mechanistic model grounded by state-of-the-art fieldwork. Our findings provide quantitative predictions concerning future attacks; a tool to help detect common perpetrators and abnormal behaviors; insight into the trajectory of a ‘lone wolf’; identification of a critical threshold for spreading a message or idea among perpetrators; an intervention strategy to erode the most lethal clusters; and more broadly, a quantitative starting point for cross-disciplinary theorizing about human aggression at the individual and group level, in both real and online worlds. PMID:24322528

  19. Multisensor benchmark data for riot control

    NASA Astrophysics Data System (ADS)

    Jäger, Uwe; Höpken, Marc; Dürr, Bernhard; Metzler, Jürgen; Willersinn, Dieter

    2008-10-01

    Quick and precise response is essential for riot squads when coping with escalating violence in crowds. Often it is just a single person, known as the leader of the gang, who instigates other people and thus is responsible for excesses. Putting this single person out of action in most cases leads to a de-escalation of the situation. Fostering de-escalation is one of the main tasks of crowd and riot control. To do so, extensive situation awareness is mandatory for the squads and can be promoted by technical means such as video surveillance using sensor networks. To develop software tools for situation awareness, appropriate input data of well-known quality are needed. Furthermore, the developer must be able to measure algorithm performance and ongoing improvements. Last but not least, after algorithm development has finished and marketing aspects emerge, compliance with specifications must be demonstrated. This paper describes a multisensor benchmark which serves exactly this purpose. We first define the underlying algorithm task. Then we explain details about data acquisition and sensor setup, and finally we give some insight into quality measures of multisensor data. Currently, the multisensor benchmark described in this paper is applied to the development of basic algorithms for situational awareness, e.g. tracking of individuals in a crowd.
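
    Although the abstract does not detail its quality measures, benchmarks of this kind typically score detections against annotated ground truth. The Python sketch below shows one common frame-wise scoring, precision and recall with an intersection-over-union threshold; the bounding boxes are hypothetical and not drawn from the benchmark data.

      # Frame-wise precision/recall of detections against ground truth with an IoU threshold.
      # Bounding boxes (x1, y1, x2, y2) are hypothetical examples.
      def iou(a, b):
          """Intersection-over-union of two axis-aligned boxes."""
          ix1, iy1 = max(a[0], b[0]), max(a[1], b[1])
          ix2, iy2 = min(a[2], b[2]), min(a[3], b[3])
          inter = max(0, ix2 - ix1) * max(0, iy2 - iy1)
          area = lambda r: (r[2] - r[0]) * (r[3] - r[1])
          return inter / (area(a) + area(b) - inter) if inter else 0.0

      ground_truth = [(10, 10, 50, 80), (100, 40, 140, 120)]
      detections = [(12, 14, 52, 78), (300, 300, 340, 360)]

      true_pos = sum(any(iou(d, g) >= 0.5 for g in ground_truth) for d in detections)
      matched_gt = sum(any(iou(d, g) >= 0.5 for d in detections) for g in ground_truth)
      precision = true_pos / len(detections)
      recall = matched_gt / len(ground_truth)
      print(f"precision = {precision:.2f}, recall = {recall:.2f}")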

  20. Generation IV benchmarking of TRISO fuel performance models under accident conditions: Modeling input data

    SciTech Connect

    Collin, Blaise P.

    2014-09-01

    This document presents the benchmark plan for the calculation of particle fuel performance on safety testing experiments that are representative of operational accidental transients. The benchmark is dedicated to the modeling of fission product release under accident conditions by fuel performance codes from around the world, and the subsequent comparison to post-irradiation examination (PIE) data from the modeled heating tests. The accident condition benchmark is divided into three parts: the modeling of a simplified benchmark problem to assess potential numerical calculation issues at low fission product release; the modeling of the AGR-1 and HFR-EU1bis safety testing experiments; and the comparison of the AGR-1 and HFR-EU1bis modeling results with PIE data. The simplified benchmark case, hereafter named NCC (Numerical Calculation Case), is derived from "Case 5" of the International Atomic Energy Agency (IAEA) Coordinated Research Program (CRP) on coated particle fuel technology [IAEA 2012]. It is included so participants can evaluate their codes at low fission product release. "Case 5" of the IAEA CRP-6 showed large code-to-code discrepancies in the release of fission products, which were attributed to "effects of the numerical calculation method rather than the physical model" [IAEA 2012]. The NCC is therefore intended to check whether these numerical effects persist. The first two steps imply the involvement of the benchmark participants with a modeling effort following the guidelines and recommendations provided by this document. The third step involves the collection of the modeling results by Idaho National Laboratory (INL) and the comparison of these results with the available PIE data. The objective of this document is to provide all necessary input data to model the benchmark cases, and to give some methodology guidelines and recommendations in order to make all results suitable for comparison with each other. The participants should read this document

  1. The NAS Parallel Benchmarks 2.1 Results

    NASA Technical Reports Server (NTRS)

    Saphir, William; Woo, Alex; Yarrow, Maurice

    1996-01-01

    We present performance results for version 2.1 of the NAS Parallel Benchmarks (NPB) on the following architectures: IBM SP2/66 MHz; SGI Power Challenge Array/90 MHz; Cray Research T3D; and Intel Paragon. The NAS Parallel Benchmarks are a widely-recognized suite of benchmarks originally designed to compare the performance of highly parallel computers with that of traditional supercomputers.

  2. Benchmark Problems of the Geothermal Technologies Office Code Comparison Study

    SciTech Connect

    White, Mark D.; Podgorney, Robert; Kelkar, Sharad M.; McClure, Mark W.; Danko, George; Ghassemi, Ahmad; Fu, Pengcheng; Bahrami, Davood; Barbier, Charlotte; Cheng, Qinglu; Chiu, Kit-Kwan; Detournay, Christine; Elsworth, Derek; Fang, Yi; Furtney, Jason K.; Gan, Quan; Gao, Qian; Guo, Bin; Hao, Yue; Horne, Roland N.; Huang, Kai; Im, Kyungjae; Norbeck, Jack; Rutqvist, Jonny; Safari, M. R.; Sesetty, Varahanaresh; Sonnenthal, Eric; Tao, Qingfeng; White, Signe K.; Wong, Yang; Xia, Yidong

    2016-12-02

    involved two phases of research, stimulation, development, and circulation in two separate reservoirs. The challenge problems had specific questions to be answered via numerical simulation in three topical areas: 1) reservoir creation/stimulation, 2) reactive and passive transport, and 3) thermal recovery. Whereas the benchmark class of problems were designed to test capabilities for modeling coupled processes under strictly specified conditions, the stated objective for the challenge class of problems was to demonstrate what new understanding of the Fenton Hill experiments could be realized via the application of modern numerical simulation tools by recognized expert practitioners.

  3. The Influence of Acceptance Goals on Relational Perceptions.

    PubMed

    Tyler, James M; Branch, Sara E

    2015-01-01

    We examined whether relational perceptions (social involvement, relational value, interaction experience) differ depending on interaction acceptance goals (establish, maintain, or repair). Results indicated that relational perceptions were more positive in the maintain condition than in the establish condition, which in turn was more positive than the repair condition. The data also supported a moderated mediation model: the indirect effects of social involvement and relational value on the relationship between acceptance goals and participants' interaction experience were contingent on self-esteem. These findings identify boundary conditions that influence the impact of acceptance goals on how positively people experience an interaction. The findings provide an integrated framework outlining the potential relationship between acceptance goals, relational perceptions, interaction experience, and self-esteem.

  4. Hospital Energy Benchmarking Guidance - Version 1.0

    SciTech Connect

    Singer, Brett C.

    2009-09-08

    This document describes an energy benchmarking framework for hospitals. The document is organized as follows. The introduction provides a brief primer on benchmarking and its application to hospitals. The next two sections discuss special considerations including the identification of normalizing factors. The presentation of metrics is preceded by a description of the overall framework and the rationale for the grouping of metrics. Following the presentation of metrics, a high-level protocol is provided. The next section presents draft benchmarks for some metrics; benchmarks are not available for many metrics owing to a lack of data. This document ends with a list of research needs for further development.

  5. Using benchmarks for radiation testing of microprocessors and FPGAs

    SciTech Connect

    Quinn, Heather; Robinson, William H.; Rech, Paolo; Aguirre, Miguel; Barnard, Arno; Desogus, Marco; Entrena, Luis; Garcia-Valderas, Mario; Guertin, Steven M.; Kaeli, David; Kastensmidt, Fernanda Lima; Kiddie, Bradley T.; Sanchez-Clemente, Antonio; Reorda, Matteo Sonza; Sterpone, Luca; Wirthlin, Michael

    2015-12-01

    Performance benchmarks have been used over the years to compare different systems. These benchmarks can be useful for researchers trying to determine how changes to the technology, architecture, or compiler affect the system's performance. No such standard exists for systems deployed into high radiation environments, making it difficult to assess whether changes in the fabrication process, circuitry, architecture, or software affect reliability or radiation sensitivity. In this paper, we propose a benchmark suite for high-reliability systems that is designed for field-programmable gate arrays and microprocessors. As a result, we describe the development process and report neutron test data for the hardware and software benchmarks.

  6. Performance Evaluation of Supercomputers using HPCC and IMB Benchmarks

    NASA Technical Reports Server (NTRS)

    Saini, Subhash; Ciotti, Robert; Gunney, Brian T. N.; Spelce, Thomas E.; Koniges, Alice; Dossa, Don; Adamidis, Panagiotis; Rabenseifner, Rolf; Tiyyagura, Sunil R.; Mueller, Matthias; Fatoohi, Rod

    2006-01-01

    The HPC Challenge (HPCC) benchmark suite and the Intel MPI Benchmark (IMB) are used to compare and evaluate the combined performance of processor, memory subsystem and interconnect fabric of five leading supercomputers - SGI Altix BX2, Cray X1, Cray Opteron Cluster, Dell Xeon cluster, and NEC SX-8. These five systems use five different networks (SGI NUMALINK4, Cray network, Myrinet, InfiniBand, and NEC IXS). The complete set of HPCC benchmarks is run on each of these systems. Additionally, we present Intel MPI Benchmark (IMB) results to study the performance of 11 MPI communication functions on these systems.
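
    The IMB PingPong test underlying part of this comparison measures point-to-point latency and bandwidth by bouncing a message between two ranks. The Python sketch below reproduces that idea with mpi4py (run with two ranks, e.g. mpirun -np 2); it is a minimal stand-in, not the IMB code itself.

      # Minimal ping-pong latency/bandwidth measurement in the spirit of IMB PingPong.
      # Run with two MPI ranks, e.g.: mpirun -np 2 python pingpong.py
      from mpi4py import MPI
      import numpy as np

      comm = MPI.COMM_WORLD
      rank = comm.Get_rank()
      nbytes = 1 << 20                          # 1 MiB message
      buf = np.zeros(nbytes, dtype=np.uint8)
      reps = 100

      comm.Barrier()
      t0 = MPI.Wtime()
      for _ in range(reps):
          if rank == 0:
              comm.Send(buf, dest=1, tag=0)
              comm.Recv(buf, source=1, tag=0)
          elif rank == 1:
              comm.Recv(buf, source=0, tag=0)
              comm.Send(buf, dest=0, tag=0)
      elapsed = MPI.Wtime() - t0

      if rank == 0:
          one_way = elapsed / (2 * reps)        # average one-way time per message
          print(f"latency ~ {one_way * 1e6:.1f} us, bandwidth ~ {nbytes / one_way / 1e6:.1f} MB/s")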

  7. A Field-Based Aquatic Life Benchmark for Conductivity in ...

    EPA Pesticide Factsheets

    EPA announced the availability of the final report, A Field-Based Aquatic Life Benchmark for Conductivity in Central Appalachian Streams. This report describes a method to characterize the relationship between the extirpation (the effective extinction) of invertebrate genera and salinity (measured as conductivity) and, from that relationship, to derive a freshwater aquatic life benchmark. This benchmark of 300 µS/cm may be applied to waters in Appalachian streams that are dominated by calcium and magnesium salts of sulfate and bicarbonate at circum-neutral to mildly alkaline pH. This report provides scientific evidence for a conductivity benchmark in a specific region rather than for the entire United States.
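
    Field-based benchmarks of this kind are typically set at a low percentile of the distribution of genus-level extirpation concentrations, analogous to a hazardous-concentration HC05. The Python sketch below illustrates taking the 5th percentile of a set of extirpation conductivities; the values are hypothetical placeholders and do not reproduce the 300 µS/cm derivation.

      # Illustrative benchmark as the 5th percentile of genus-level extirpation conductivities.
      # XC95-style values below are hypothetical, not the data behind the 300 uS/cm benchmark.
      import numpy as np

      xc95_uS_cm = np.array([210, 260, 320, 400, 480, 610, 750, 900, 1200, 1500])
      benchmark = np.percentile(xc95_uS_cm, 5)
      print(f"field-based conductivity benchmark ~ {benchmark:.0f} uS/cm")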

  8. Toxicological benchmarks for screening potential contaminants of concern for effects on soil and litter invertebrates and heterotrophic process

    SciTech Connect

    Will, M.E.; Suter, G.W. II

    1994-09-01

    One of the initial stages in ecological risk assessments for hazardous waste sites is the screening of contaminants to determine which of them are worthy of further consideration as "contaminants of potential concern." This process is termed "contaminant screening." It is performed by comparing measured ambient concentrations of chemicals to benchmark concentrations. Currently, no standard benchmark concentrations exist for assessing contaminants in soil with respect to their toxicity to soil- and litter-dwelling invertebrates, including earthworms, other micro- and macroinvertebrates, or heterotrophic bacteria and fungi. This report presents a standard method for deriving benchmarks for this purpose, sets of data concerning effects of chemicals in soil on invertebrates and soil microbial processes, and benchmarks for chemicals potentially associated with United States Department of Energy sites. In addition, it reviews the literature describing the experiments from which data were drawn for benchmark derivation. Chemicals that are found in soil at concentrations exceeding both the benchmarks and the background concentration for the soil type should be considered contaminants of potential concern.

  9. Benchmarking kinetic calculations of resistive wall mode stability

    NASA Astrophysics Data System (ADS)

    Berkery, J. W.; Liu, Y. Q.; Wang, Z. R.; Sabbagh, S. A.; Logan, N. C.; Park, J.-K.; Manickam, J.; Betti, R.

    2014-05-01

    Validating the calculations of kinetic resistive wall mode (RWM) stability is important for confidently predicting RWM stable operating regions in ITER and other high performance tokamaks for disruption avoidance. Benchmarking the calculations of the Magnetohydrodynamic Resistive Spectrum—Kinetic (MARS-K) [Y. Liu et al., Phys. Plasmas 15, 112503 (2008)], Modification to Ideal Stability by Kinetic effects (MISK) [B. Hu et al., Phys. Plasmas 12, 057301 (2005)], and Perturbed Equilibrium Nonambipolar Transport (PENT) [N. Logan et al., Phys. Plasmas 20, 122507 (2013)] codes for two Solov'ev analytical equilibria and a projected ITER equilibrium has demonstrated good agreement between the codes. The important particle frequencies, the frequency resonance energy integral in which they are used, the marginally stable eigenfunctions, perturbed Lagrangians, and fluid growth rates are all generally consistent between the codes. The most important kinetic effect at low rotation is the resonance between the mode rotation and the trapped thermal particle's precession drift, and MARS-K, MISK, and PENT show good agreement in this term. The different ways the rational surface contribution was treated historically in the codes is identified as a source of disagreement in the bounce and transit resonance terms at higher plasma rotation. Calculations from all of the codes support the present understanding that RWM stability can be increased by kinetic effects at low rotation through precession drift resonance and at high rotation by bounce and transit resonances, while intermediate rotation can remain susceptible to instability. The applicability of benchmarked kinetic stability calculations to experimental results is demonstrated by the prediction of MISK calculations of near marginal growth rates for experimental marginal stability points from the National Spherical Torus Experiment (NSTX) [M. Ono et al., Nucl. Fusion 40, 557 (2000)].
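
    For orientation, the perturbative kinetic dispersion relation often quoted in the MISK literature relates the RWM growth rate to fluid and kinetic energy terms, gamma * tau_w = -(dW_inf + dW_K) / (dW_b + dW_K). The Python sketch below evaluates that expression for illustrative delta-W values; the numbers are not from the benchmarked equilibria.

      # Perturbative kinetic RWM dispersion relation as commonly quoted in the MISK literature:
      # gamma * tau_w = -(dW_inf + dW_K) / (dW_b + dW_K). Values below are illustrative only.
      def rwm_growth_rate(dW_inf, dW_b, dW_K, tau_w=1.0):
          """Return the complex growth rate; Re(gamma) > 0 indicates an unstable RWM."""
          return -(dW_inf + dW_K) / ((dW_b + dW_K) * tau_w)

      gamma = rwm_growth_rate(dW_inf=-0.10, dW_b=0.50, dW_K=0.02 + 0.05j)
      label = "unstable" if gamma.real > 0 else "stable"
      print(f"gamma*tau_w = {gamma:.3f} ({label})")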

  10. Benchmarking kinetic calculations of resistive wall mode stability

    SciTech Connect

    Berkery, J. W.; Sabbagh, S. A.; Liu, Y. Q.; Betti, R.

    2014-05-15

    Validating the calculations of kinetic resistive wall mode (RWM) stability is important for confidently predicting RWM stable operating regions in ITER and other high performance tokamaks for disruption avoidance. Benchmarking the calculations of the Magnetohydrodynamic Resistive Spectrum—Kinetic (MARS-K) [Y. Liu et al., Phys. Plasmas 15, 112503 (2008)], Modification to Ideal Stability by Kinetic effects (MISK) [B. Hu et al., Phys. Plasmas 12, 057301 (2005)], and Perturbed Equilibrium Nonambipolar Transport (PENT) [N. Logan et al., Phys. Plasmas 20, 122507 (2013)] codes for two Solov'ev analytical equilibria and a projected ITER equilibrium has demonstrated good agreement between the codes. The important particle frequencies, the frequency resonance energy integral in which they are used, the marginally stable eigenfunctions, perturbed Lagrangians, and fluid growth rates are all generally consistent between the codes. The most important kinetic effect at low rotation is the resonance between the mode rotation and the trapped thermal particle's precession drift, and MARS-K, MISK, and PENT show good agreement in this term. The different ways the rational surface contribution was treated historically in the codes is identified as a source of disagreement in the bounce and transit resonance terms at higher plasma rotation. Calculations from all of the codes support the present understanding that RWM stability can be increased by kinetic effects at low rotation through precession drift resonance and at high rotation by bounce and transit resonances, while intermediate rotation can remain susceptible to instability. The applicability of benchmarked kinetic stability calculations to experimental results is demonstrated by the prediction of MISK calculations of near marginal growth rates for experimental marginal stability points from the National Spherical Torus Experiment (NSTX) [M. Ono et al., Nucl. Fusion 40, 557 (2000)].

  11. TREAT Transient Analysis Benchmarking for the HEU Core

    SciTech Connect

    Kontogeorgakos, D. C.; Connaway, H. M.; Wright, A. E.

    2014-05-01

    This work was performed to support the feasibility study on the potential conversion of the Transient Reactor Test Facility (TREAT) at Idaho National Laboratory from the use of high enriched uranium (HEU) fuel to the use of low enriched uranium (LEU) fuel. The analyses were performed by the GTRI Reactor Conversion staff at the Argonne National Laboratory (ANL). The objective of this study was to benchmark the transient calculations against temperature-limited transients performed in the final operating HEU TREAT core configuration. The MCNP code was used to evaluate steady-state neutronics behavior, and the point kinetics code TREKIN was used to determine core power and energy during transients. The first part of the benchmarking process was to calculate with MCNP all the neutronic parameters required by TREKIN to simulate the transients: the transient rod-bank worth, the prompt neutron generation lifetime, the temperature reactivity feedback as a function of total core energy, and the core-average temperature and peak temperature as functions of total core energy. The results of these calculations were compared against measurements or against reported values as documented in the available TREAT reports. The heating of the fuel was simulated as an adiabatic process. The reported values were extracted from ANL reports, intra-laboratory memos and experiment logsheets, and in some cases it was not clear if the values were based on measurements, on calculations or a combination of both. Therefore, it was decided to use the term “reported” values when referring to such data. The methods and results from the HEU core transient analyses will be used for the potential LEU core configurations to predict the converted (LEU) core’s performance.
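
    To make the transient calculation concrete, the sketch below integrates the one-delayed-group point kinetics equations with an adiabatic, energy-dependent reactivity feedback, in the spirit of the TREKIN-style calculation described above. All parameter values are illustrative assumptions, not TREAT data.

      # One-delayed-group point kinetics with a step reactivity insertion and an
      # energy-based (adiabatic) temperature feedback. Illustrative parameters only.
      beta, lam, Lambda = 0.0073, 0.08, 9.0e-4   # delayed fraction, decay const (1/s), generation time (s)
      alpha = -2.0e-5                            # assumed reactivity change per MJ of core energy
      rho_insert = 0.015                         # assumed step reactivity from the transient rod bank

      dt, t_end = 1.0e-4, 5.0
      P = 1.0e-3                                 # power (MW)
      C = P * beta / (lam * Lambda)              # precursors initialized at equilibrium
      E = 0.0                                    # deposited core energy (MJ)

      for _ in range(int(t_end / dt)):
          rho = rho_insert + alpha * E           # net reactivity including feedback
          dP = ((rho - beta) / Lambda * P + lam * C) * dt
          dC = (beta / Lambda * P - lam * C) * dt
          P, C = P + dP, C + dC
          E += P * dt                            # adiabatic: all fission energy stays in the fuel

      print(f"total core energy deposited ~ {E:.0f} MJ")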

  12. Extending the Technology Acceptance Model: Policy Acceptance Model (PAM)

    NASA Astrophysics Data System (ADS)

    Pierce, Tamra

    There has been extensive research on how new ideas and technologies are accepted in society. This has resulted in the creation of many models that are used to discover and assess the contributing factors. The Technology Acceptance Model (TAM) is one widely accepted example. This model examines people's acceptance of new technologies based on variables that directly correlate to how the end user views the product. This paper introduces the Policy Acceptance Model (PAM), an expansion of TAM, which is designed for the analysis and evaluation of acceptance of new policy implementation. PAM includes the traditional constructs of TAM and adds the variables of age, ethnicity, and family. The model is demonstrated using a survey of 72 respondents' attitudes toward the upcoming healthcare reform in the United States (US). The aim is that the theory behind this model can be used as a framework applicable to studies looking at the introduction of any new or modified policies.

  13. Evaluation of the potential of benchmarking to facilitate the measurement of chemical persistence in lakes.

    PubMed

    Zou, Hongyan; MacLeod, Matthew; McLachlan, Michael S

    2014-01-01

    The persistence of chemicals in the environment is rarely measured in the field due to a paucity of suitable methods. Here we explore the potential of chemical benchmarking to facilitate the measurement of persistence in lake systems using a multimedia chemical fate model. The model results show that persistence in a lake can be assessed by quantifying the ratio of test chemical and benchmark chemical at as few as two locations: the point of emission and the outlet of the lake. Appropriate selection of benchmark chemicals also allows pseudo-first-order rate constants for physical removal processes such as volatilization and sediment burial to be quantified. We use the model to explore how the maximum persistence that can be measured in a particular lake depends on the partitioning properties of the test chemical of interest and the characteristics of the lake. Our model experiments demonstrate that combining benchmarking techniques with good experimental design and sensitive environmental analytical chemistry may open new opportunities for quantifying chemical persistence, particularly for relatively slowly degradable chemicals for which current methods do not perform well.
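
    Under the simplest reading of this approach, if the benchmark chemical is not degraded and both chemicals share the same transport, the drop in the test-to-benchmark concentration ratio between the emission point and the outlet yields a pseudo-first-order degradation rate constant. The Python sketch below applies that relation; the ratios and the transit time are hypothetical numbers, not output of the multimedia fate model used in the paper.

      # Pseudo-first-order degradation rate from the change in the test/benchmark ratio,
      # assuming a conserved benchmark chemical and identical transport. Hypothetical numbers.
      import math

      ratio_at_source = 1.00    # (test / benchmark) concentration ratio at the emission point
      ratio_at_outlet = 0.62    # same ratio measured at the lake outlet
      tau_days = 40.0           # assumed transit time from emission point to outlet (days)

      k_deg = math.log(ratio_at_source / ratio_at_outlet) / tau_days   # 1/day
      half_life = math.log(2.0) / k_deg
      print(f"degradation rate ~ {k_deg:.4f} 1/day, half-life ~ {half_life:.0f} days")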

  14. Benchmark study on glyphosate-resistant crop systems in the United States. Part 2: Perspectives.

    PubMed

    Owen, Micheal D K; Young, Bryan G; Shaw, David R; Wilson, Robert G; Jordan, David L; Dixon, Philip M; Weller, Stephen C

    2011-07-01

    A six-state, five-year field project was initiated in 2006 to study weed management methods that foster the sustainability of genetically engineered (GE) glyphosate-resistant (GR) crop systems. The benchmark study field-scale experiments were initiated following a survey, conducted in the winter of 2005-2006, of farmer opinions on weed management practices and their views on GR weeds and management tactics. The main survey findings supported the premise that growers were generally less aware of the significance of evolved herbicide resistance and did not fully recognize the strong selection pressure that herbicides place on the evolution of herbicide-resistant (HR) weeds. The results of the benchmark study survey indicated that there are educational challenges to implementing sustainable GR-based crop systems and helped guide the development of the field-scale benchmark study. Paramount is the need to develop consistent and clearly articulated science-based management recommendations that enable farmers to reduce the potential for HR weeds. This paper provides background perspectives on the use of GR crops, the impact of these crops on agriculture and society, and an overview of differing opinions about the use of GR crops, as well as defining how the benchmark study will address these issues.

  15. TRIPOLI-4® - MCNP5 ITER A-lite neutronic model benchmarking

    NASA Astrophysics Data System (ADS)

    Jaboulay, J.-C.; Cayla, P.-Y.; Fausser, C.; Lee, Y.-K.; Trama, J.-C.; Li-Puma, A.

    2014-06-01

    The aim of this paper is to present the capability of TRIPOLI-4®, the CEA Monte Carlo code, to model a large-scale fusion reactor with a complex neutron source and geometry. In the past, numerous benchmarks were conducted for TRIPOLI-4® assessment on fusion applications. Analyses of experiments (KANT, OKTAVIAN, FNG) and numerical benchmarks (between TRIPOLI-4® and MCNP5) on the HCLL DEMO2007 and ITER models were carried out successively. In this previous ITER benchmark, nevertheless, only the neutron wall loading was analyzed; its main purpose was to present the MCAM (the FDS Team CAD import tool) extension for TRIPOLI-4®. Starting from this work, a more extensive benchmark has been performed on the estimation of neutron flux, nuclear heating in the shielding blankets and tritium production rate in the European TBMs (HCLL and HCPB), and it is presented in this paper. The methodology to build the TRIPOLI-4® A-lite model is based on MCAM and the MCNP A-lite model (version 4.1). Simplified TBMs (from KIT) have been integrated in the equatorial port. Comparisons of neutron wall loading, flux, nuclear heating and tritium production rate show good agreement between the two codes. Discrepancies are mostly within the Monte Carlo codes' statistical errors.
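
    A simple way to express such a code-to-code comparison is to divide the difference between two tallies by their combined statistical uncertainty. The Python sketch below shows that check; the tally values are illustrative and are not TRIPOLI-4® or MCNP5 results.

      # Code-to-code agreement check: difference between two Monte Carlo tallies in units
      # of their combined 1-sigma statistical uncertainty. Values are illustrative only.
      import math

      def agreement_z(value_a, sigma_a, value_b, sigma_b):
          """Return the discrepancy expressed in combined standard deviations."""
          return (value_a - value_b) / math.sqrt(sigma_a**2 + sigma_b**2)

      # e.g. a tritium production tally (arbitrary units) with 1-sigma errors from each code
      z = agreement_z(3.42e-2, 0.04e-2, 3.39e-2, 0.05e-2)
      print(f"discrepancy = {z:.2f} combined sigma")   # |z| < ~2 is consistent within statistics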

  16. The high-precision videometrics methods to determining absolute vertical benchmark

    NASA Astrophysics Data System (ADS)

    Liu, Jinbo; Zhu, Zhaokun

    2013-01-01

    Mobile measurement equipment plays an important role in engineering measurement tasks, with the measuring device fixed to the vehicle platform. A basic problem is therefore how to correct, in a timely way, the measurement errors caused by the swaying platform. Videometrics has inherent advantages in solving this problem. First, videometrics is a non-contact measurement technique, which has no effect on the target's structural or motion characteristics. Second, videometrics offers high precision, especially for surface targets and linear targets in the field of view. Third, videometrics can be automatic, real-time and dynamic. This paper addresses mobile theodolites and similar equipment that operate against an absolute vertical benchmark, and it proposes two high-precision methods to determine the vertical benchmark: Direct-Extracting, which is based on the intersection of plats with the help of two cameras; and Benchmark-Transformation, which obtains the vertical benchmark by reconstructing the level-plat. Digital simulations and physical experiments show that both methods achieve a precision better than 10 arcseconds. The proposed methods are significant for both theory and application.

  17. Application of QC_DR software for acceptance testing and routine quality control of direct digital radiography systems: initial experiences using the Italian Association of Physicist in Medicine quality control protocol.

    PubMed

    Nitrosi, Andrea; Bertolini, Marco; Borasi, Giovanni; Botti, Andrea; Barani, Adriana; Rivetti, Stefano; Pierotti, Luisa

    2009-12-01

    Ideally, medical x-ray imaging systems should be designed to deliver maximum image quality at an acceptable radiation risk to the patient. Quality assurance procedures are employed to ensure that these standards are maintained. A quality control protocol for direct digital radiography (DDR) systems is described and discussed. Software to automatically process and analyze the required images was developed. In this paper, the initial results obtained on equipment from different DDR manufacturers are reported. The protocol was developed to highlight even small discrepancies in standard operating performance.

  18. NASA Indexing Benchmarks: Evaluating Text Search Engines

    NASA Technical Reports Server (NTRS)

    Esler, Sandra L.; Nelson, Michael L.

    1997-01-01

    The current proliferation of on-line information resources underscores the requirement for the ability to index collections of information and to search and retrieve them in a convenient manner. This study develops criteria for analytically comparing indexing and search engines and presents results for a number of freely available search engines. A product of this research is a toolkit capable of automatically indexing, searching, and extracting performance statistics from each of the focused search engines. This toolkit is highly configurable and has the ability to run these benchmark tests against other engines as well. Results demonstrate that the tested search engines can be grouped into two levels. Level one engines are efficient on small- to medium-sized data collections, but show weaknesses when used for collections of 100 MB or larger. Level two search engines are recommended for data collections up to and beyond 100 MB.

  19. Benchmark cyclic plastic notch strain measurements

    NASA Technical Reports Server (NTRS)

    Sharpe, W. N., Jr.; Ward, M.

    1983-01-01

    Plastic strains at the roots of notched specimens of Inconel 718 subjected to tension-compression cycling at 650 C are reported. These strains were measured with a laser-based technique over a gage length of 0.1 mm and are intended to serve as 'benchmark' data for further development of experimental, analytical, and computational approaches. The specimens were 250 mm by 2.5 mm in the test section with double notches of 4.9 mm radius subjected to axial loading sufficient to cause yielding at the notch root on the tensile portion of the first cycle. The tests were run for 1000 cycles at 10 cpm or until cracks initiated at the notch root. The experimental techniques are described, and then representative data for the various load spectra are presented. All the data for each cycle of every test are available on floppy disks from NASA.

  20. Benchmarking finite-β ITG gyrokinetic simulations

    NASA Astrophysics Data System (ADS)

    Nevins, W. M.; Dimits, A. M.; Candy, J.; Holland, C.; Howard, N.

    2016-10-01

    We report the results of an electromagnetic gyrokinetic-simulation benchmarking study based on a well-diagnosed ion-temperature-gradient (ITG)-turbulence dominated experimental plasma. We compare the 4x3 matrix of transport/transfer quantities for each plasma species, namely the (a) particle flux, Γa, (b) momentum flux, Πa, (c) energy flux, Qa, and (d) anomalous heat exchange, Sa, with each transport coefficient broken down into (1) electrostatic (δφ), (2) transverse electromagnetic (δA∥), and (3) compressional electromagnetic (δB∥) contributions. We compare realization-independent quantities (correlation functions, spectral densities, etc.), which characterize the fluctuating fields, from various gyrokinetic simulation codes. Prepared for US DOE by LLNL under Contract DE-AC52-07NA27344 and by GA under Contract DE-FG03-95ER54309. This work was supported by the U.S. DOE, Office of Science, Fusion Energy Sciences.