Science.gov

Sample records for benchmark evaluation project

  1. Criticality safety benchmark evaluation project: Recovering the past

    SciTech Connect

    Trumble, E.F.

    1997-06-01

    A very brief summary of the Criticality Safety Benchmark Evaluation Project of the Westinghouse Savannah River Company is provided in this paper. The purpose of the project is to provide a source of evaluated criticality safety experiments in an easily usable format. Another project goal is to search for any experiments that may have been lost or contain discrepancies, and to determine if they can be used. Results of evaluated experiments are being published as US DOE handbooks.

  2. INTEGRAL BENCHMARKS AVAILABLE THROUGH THE INTERNATIONAL REACTOR PHYSICS EXPERIMENT EVALUATION PROJECT AND THE INTERNATIONAL CRITICALITY SAFETY BENCHMARK EVALUATION PROJECT

    SciTech Connect

    J. Blair Briggs; Lori Scott; Enrico Sartori; Yolanda Rugama

    2008-09-01

    Interest in high-quality integral benchmark data is increasing as efforts to quantify and reduce calculational uncertainties accelerate to meet the demands of next generation reactor and advanced fuel cycle concepts. The International Reactor Physics Experiment Evaluation Project (IRPhEP) and the International Criticality Safety Benchmark Evaluation Project (ICSBEP) continue to expand their efforts and broaden their scope to identify, evaluate, and provide integral benchmark data for method and data validation. Benchmark model specifications provided by these two projects are used heavily by the international reactor physics, nuclear data, and criticality safety communities. Thus far, 14 countries have contributed to the IRPhEP, and 20 have contributed to the ICSBEP. This paper discusses the status of the IRPhEP and ICSBEP, outlines the future of the two projects, and highlights selected benchmarks that have been added to the IRPhEP and ICSBEP handbooks since PHYSOR’06.

  3. Benchmark Data Through The International Reactor Physics Experiment Evaluation Project (IRPHEP)

    SciTech Connect

    J. Blair Briggs; Dr. Enrico Sartori

    2005-09-01

    The International Reactor Physics Experiments Evaluation Project (IRPhEP) was initiated by the Organization for Economic Cooperation and Development (OECD) Nuclear Energy Agency’s (NEA) Nuclear Science Committee (NSC) in June of 2002. The IRPhEP focuses on the derivation of internationally peer-reviewed benchmark models for several types of integral measurements, in addition to the critical configuration. While the benchmarks produced by the IRPhEP are of primary interest to the Reactor Physics Community, many of the benchmarks can be of significant value to the Criticality Safety and Nuclear Data Communities. Benchmarks that support the Next Generation Nuclear Plant (NGNP), for example, also support fuel manufacture, handling, transportation, and storage activities and could challenge current analytical methods. The IRPhEP is patterned after the International Criticality Safety Benchmark Evaluation Project (ICSBEP) and is closely coordinated with it. This paper highlights the benchmarks currently being prepared by the IRPhEP that are also of interest to the Criticality Safety Community. The different types of measurements and associated benchmarks that can be expected in the first publication and beyond are described. The protocol for inclusion of IRPhEP benchmarks as ICSBEP benchmarks, and vice versa, is detailed. The format for IRPhEP benchmark evaluations is described as an extension of the ICSBEP format. Benchmarks produced by the IRPhEP add a new dimension to criticality safety benchmarking efforts and expand the collection of available integral benchmarks for nuclear data testing. The first publication of the "International Handbook of Evaluated Reactor Physics Benchmark Experiments" is scheduled for January of 2006.

  4. The Activities of the International Criticality Safety Benchmark Evaluation Project (ICSBEP)

    SciTech Connect

    Briggs, Joseph Blair

    2001-10-01

    The International Criticality Safety Benchmark Evaluation Project (ICSBEP) was initiated in 1992 by the United States Department of Energy. The ICSBEP became an official activity of the Organization for Economic Cooperation and Development (OECD) – Nuclear Energy Agency (NEA) in 1995. Representatives from the United States, United Kingdom, France, Japan, the Russian Federation, Hungary, Republic of Korea, Slovenia, Yugoslavia, Kazakhstan, Spain, and Israel are now participating. The purpose of the ICSBEP is to identify, evaluate, verify, and formally document a comprehensive and internationally peer-reviewed set of criticality safety benchmark data. The work of the ICSBEP is published as an OECD handbook entitled “International Handbook of Evaluated Criticality Safety Benchmark Experiments”. The 2001 Edition of the Handbook contains benchmark specifications for 2642 critical or subcritical configurations that are intended for use in validation efforts and for testing basic nuclear data.

  5. Integral Reactor Physics Benchmarks - the International Criticality Safety Benchmark Evaluation Project (icsbep) and the International Reactor Physics Experiment Evaluation Project (irphep)

    NASA Astrophysics Data System (ADS)

    Briggs, J. Blair; Nigg, David W.; Sartori, Enrico

    2006-04-01

    Since the beginning of the nuclear industry, thousands of integral experiments related to reactor physics and criticality safety have been performed. Many of these experiments can be used as benchmarks for validation of calculational techniques and improvements to nuclear data. However, many were performed in direct support of operations and thus were not performed with a high degree of quality assurance and were not well documented. For years, common validation practice included the tedious process of researching integral experiment data scattered throughout journals, transactions, reports, and logbooks. Two projects have been established to help streamline the validation process and preserve valuable integral data: the International Criticality Safety Benchmark Evaluation Project (ICSBEP) and the International Reactor Physics Experiment Evaluation Project (IRPhEP). The two projects are closely coordinated to avoid duplication of effort and to leverage limited resources to achieve a common goal. A short history of these two projects and their common purpose are discussed in this paper. Accomplishments of the ICSBEP are highlighted and the future of the two projects outlined.

  6. Growth and Expansion of the International Criticality Safety Benchmark Evaluation Project and the Newly Organized International Reactor Physics Experiment Evaluation Project

    SciTech Connect

    J. Blair Briggs; Lori Scott; Yolanda Rugama; Enrico Sartori

    2007-05-01

    Since ICNC 2003, the International Criticality Safety Benchmark Evaluation Project (ICSBEP) has continued to expand its efforts and broaden its scope. Criticality-alarm/shielding-type benchmarks and fundamental physics measurements that are relevant to criticality safety applications are not only included in the scope of the project, but benchmark data are also included in the latest version of the handbook. A considerable number of improvements have been made to the searchable database, DICE, and the criticality-alarm/shielding benchmarks and fundamental physics measurements have been added to the database. There were 12 countries participating in the ICSBEP in 2003. That number has increased to 18 with recent contributions of data and/or resources from Brazil, the Czech Republic, Poland, India, Canada, and China. South Africa, Germany, Argentina, and Australia have been invited to participate. Since ICNC 2003, the contents of the “International Handbook of Evaluated Criticality Safety Benchmark Experiments” have increased from 350 evaluations (28,000 pages) containing benchmark specifications for 3070 critical or subcritical configurations to 442 evaluations (over 38,000 pages) containing benchmark specifications for 3957 critical or subcritical configurations, 23 criticality-alarm-placement/shielding configurations with multiple dose points for each, and 20 configurations categorized as fundamental physics measurements relevant to criticality safety applications in the 2006 Edition of the ICSBEP Handbook. Approximately 30 new evaluations and 250 additional configurations are expected to be added to the 2007 Edition of the Handbook. Since ICNC 2003, a reactor physics counterpart to the ICSBEP, the International Reactor Physics Experiment Evaluation Project (IRPhEP), was initiated. Beginning in 1999, the IRPhEP was conducted as a pilot activity by the Organization for Economic Cooperation and Development (OECD) Nuclear Energy Agency (NEA).

  7. Evaluating soil carbon in global climate models: benchmarking, future projections, and model drivers

    NASA Astrophysics Data System (ADS)

    Todd-Brown, K. E.; Randerson, J. T.; Post, W. M.; Allison, S. D.

    2012-12-01

    The carbon cycle plays a critical role in how the climate responds to anthropogenic carbon dioxide. To evaluate how well Earth system models (ESMs) from the Coupled Model Intercomparison Project (CMIP5) represent the carbon cycle, we examined predictions of current soil carbon stocks from the historical simulation. We compared the soil and litter carbon pools from 17 ESMs with data on soil carbon stocks from the Harmonized World Soil Database (HWSD). We also examined soil carbon predictions for 2100 from 16 ESMs from the rcp85 (highest radiative forcing) simulation to investigate the effects of climate change on soil carbon stocks. In both analyses, we used a reduced complexity model to separate the effects of variation in model drivers from the effects of model parameters on soil carbon predictions. Drivers included NPP, soil temperature, and soil moisture, and the reduced complexity model represented one pool of soil carbon as a function of these drivers. The ESMs predicted global soil carbon totals of 500 to 2980 Pg-C, compared to 1260 Pg-C in the HWSD. This 5-fold variation in predicted soil stocks was a consequence of a 3.4-fold variation in NPP inputs and 3.8-fold variability in mean global turnover times. None of the ESMs correlated well with the global distribution of soil carbon in the HWSD (Pearson's correlation <0.40, RMSE 9-22 kg m-2). On a biome level there was a broad range of agreement between the ESMs and the HWSD. Some models predicted HWSD biome totals well (R2=0.91) while others did not (R2=0.23). All of the ESM terrestrial decomposition models are structurally similar with outputs that were well described by a reduced complexity model that included NPP and soil temperature (R2 of 0.73-0.93). However, MPI-ESM-LR outputs showed only a moderate fit to this model (R2=0.51), and CanESM2 outputs were better described by a reduced model that included soil moisture (R2=0.74). We also found a broad range in soil carbon responses to climate change
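
    The reduced complexity model described above lends itself to a compact illustration. A minimal sketch, assuming a first-order one-pool model with a Q10 temperature response; the functional form and all parameter values below are illustrative assumptions, not the paper's fitted values:

    ```python
    import numpy as np

    def steady_state_soil_carbon(npp, tsoil, tau_ref=25.0, q10=1.5, t_ref=15.0):
        """Steady state of a one-pool model dC/dt = NPP - k(T) * C, where
        k(T) = (1 / tau_ref) * q10 ** ((T - t_ref) / 10); returns kg C m^-2."""
        k = (1.0 / tau_ref) * q10 ** ((np.asarray(tsoil) - t_ref) / 10.0)
        return np.asarray(npp) / k

    # Illustrative grid cells: boreal, temperate, tropical.
    npp = np.array([0.2, 0.6, 1.1])       # kg C m^-2 yr^-1
    tsoil = np.array([-2.0, 10.0, 25.0])  # deg C
    print(steady_state_soil_carbon(npp, tsoil))
    ```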

  8. Providing Nuclear Criticality Safety Analysis Education through Benchmark Experiment Evaluation

    SciTech Connect

    John D. Bess; J. Blair Briggs; David W. Nigg

    2009-11-01

    One of the challenges that today's new workforce of nuclear criticality safety engineers face is the opportunity to provide assessment of nuclear systems and establish safety guidelines without having received significant experience or hands-on training prior to graduation. Participation in the International Criticality Safety Benchmark Evaluation Project (ICSBEP) and/or the International Reactor Physics Experiment Evaluation Project (IRPhEP) provides students and young professionals the opportunity to gain experience and enhance critical engineering skills.

  9. Benchmark Evaluation of Plutonium Nitrate Solution Arrays

    SciTech Connect

    M. A. Marshall; J. D. Bess

    2011-09-01

    In October and November of 1981, thirteen approach-to-critical experiments were performed on a remote split table machine (RSTM) in the Critical Mass Laboratory of Pacific Northwest Laboratory (PNL) in Richland, Washington, using planar arrays of polyethylene bottles filled with plutonium (Pu) nitrate solution. Arrays of up to sixteen bottles were used to measure the critical number of bottles and critical array spacing with a tight-fitting Plexiglas® reflector on all sides of the arrays except the top. Some experiments used Plexiglas shells fitted around each bottle to determine the effect of moderation on criticality. Each bottle contained approximately 2.4 L of Pu(NO3)4 solution with a Pu content of 105 g Pu/L and a free acid molarity (H+) of 5.1. The plutonium was of low 240Pu content (2.9 wt.%). These experiments were performed to fill a gap in experimental data regarding criticality limits for storing and handling arrays of Pu solution in reprocessing facilities. Of the thirteen approach-to-critical experiments, eleven resulted in extrapolations to critical configurations. Four of the approaches were extrapolated to the critical number of bottles; these were not evaluated further due to the large uncertainty associated with modeling a fraction of a bottle. The remaining seven approaches were extrapolated to the critical array spacing of 3×4 and 4×4 arrays; these seven critical configurations were evaluated for inclusion as acceptable benchmark experiments in the International Criticality Safety Benchmark Evaluation Project (ICSBEP) Handbook. Detailed and simple models of these configurations were created, and the associated bias of these simplifications was determined to range from 0.00116 to 0.00162 ± 0.00006 Δkeff. Monte Carlo analysis of all models was completed using MCNP5 with ENDF/B-VII.0 neutron cross section libraries. A thorough uncertainty analysis of all critical, geometric, and material parameters was performed using parameter
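
    Approach-to-critical extrapolations of this kind are commonly done by plotting inverse neutron multiplication (1/M) against the varied parameter and extrapolating linearly to 1/M = 0. A minimal sketch of that standard technique, with invented count data rather than the PNL measurements:

    ```python
    import numpy as np

    # Hypothetical approach-to-critical data: detector count rates measured
    # as the bottle spacing is reduced; c0 is the source-only reference rate.
    spacing_cm = np.array([12.0, 10.0, 8.0, 6.0])   # illustrative values
    counts = np.array([410.0, 560.0, 900.0, 2100.0])
    c0 = 200.0

    inv_m = c0 / counts                             # 1/M = c0 / c

    # Linear fit of 1/M vs spacing; the critical spacing is where 1/M -> 0.
    slope, intercept = np.polyfit(spacing_cm, inv_m, 1)
    critical_spacing = -intercept / slope
    print(f"extrapolated critical spacing ~ {critical_spacing:.2f} cm")
    ```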

  10. Benchmark testing of ²³³U evaluations

    SciTech Connect

    Wright, R.Q.; Leal, L.C.

    1997-07-01

    In this paper we investigate the adequacy of available ²³³U cross-section data (ENDF/B-VI and JENDL-3) for calculation of critical experiments. An ad hoc revised ²³³U evaluation is also tested and appears to give results which are improved relative to those obtained with either ENDF/B-VI or JENDL-3 cross sections. Calculations of keff were performed for ten fast benchmarks and six thermal benchmarks using the three cross-section sets. Central reaction-rate-ratio calculations were also performed.
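
    Benchmark testing of this kind is typically summarized as calculated-to-experimental (C/E) ratios of keff per benchmark and library. A minimal sketch with hypothetical keff values; the paper's actual results are not reproduced here:

    ```python
    # Hypothetical k_eff results for three benchmarks under each cross-section
    # set; the experimental value for a critical benchmark is k_eff = 1.
    results = {
        "ENDF/B-VI": [0.9921, 1.0034, 0.9968],
        "JENDL-3":   [1.0058, 1.0102, 0.9990],
        "revised":   [0.9995, 1.0008, 1.0002],
    }
    k_exp = 1.0

    for library, keffs in results.items():
        ce = " ".join(f"{k / k_exp:.4f}" for k in keffs)
        bias_pcm = 1e5 * (sum(keffs) / len(keffs) - k_exp)  # mean bias in pcm
        print(f"{library:>9}: C/E = {ce}   mean bias = {bias_pcm:+.0f} pcm")
    ```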

  11. IT-benchmarking of clinical workflows: concept, implementation, and evaluation.

    PubMed

    Thye, Johannes; Straede, Matthias-Christopher; Liebe, Jan-David; Hübner, Ursula

    2014-01-01

    Due to the emerging evidence of health IT as an opportunity and a risk for clinical workflows, health IT must undergo a continuous measurement of its efficacy and efficiency. IT-benchmarks are a proven means for providing this information. The aim of this study was to enhance the methodology of an existing benchmarking procedure by including, in particular, new indicators of clinical workflows and by proposing new types of visualisation. Drawing on the concept of information logistics, we propose four workflow descriptors that were applied to four clinical processes. General and specific indicators were derived from these descriptors and processes. 199 chief information officers (CIOs) took part in the benchmarking. Their hospitals, drawn from a total of 259, were assigned to reference groups of similar size and ownership. Stepwise and comprehensive feedback was given to the CIOs. Most participants who evaluated the benchmark rated the procedure as very good, good, or rather good (98.4%). Benchmark information was used by CIOs for getting a general overview, advancing IT, preparing negotiations with board members, and arguing for a new IT project. PMID:24825693

  12. Performance Evaluation and Benchmarking of Intelligent Systems

    SciTech Connect

    Madhavan, Raj; Messina, Elena; Tunstel, Edward

    2009-09-01

    To design and develop capable, dependable, and affordable intelligent systems, their performance must be measurable. Scientific methodologies for standardization and benchmarking are crucial for quantitatively evaluating the performance of emerging robotic and intelligent systems technologies. There is currently no accepted standard for quantitatively measuring the performance of these systems against user-defined requirements; and furthermore, there is no consensus on what objective evaluation procedures need to be followed to understand the performance of these systems. The lack of reproducible and repeatable test methods has precluded researchers working towards a common goal from exchanging and communicating results, inter-comparing system performance, and leveraging previous work that could otherwise avoid duplication and expedite technology transfer. Currently, this lack of cohesion in the community hinders progress in many domains, such as manufacturing, service, healthcare, and security. By providing the research community with access to standardized tools, reference data sets, and open source libraries of solutions, researchers and consumers will be able to evaluate the cost and benefits associated with intelligent systems and associated technologies. In this vein, the edited book volume addresses performance evaluation and metrics for intelligent systems, in general, while emphasizing the need and solutions for standardized methods. To the knowledge of the editors, there is not a single book on the market that is solely dedicated to the subject of performance evaluation and benchmarking of intelligent systems. Even books that address this topic do so only marginally or are out of date. The research work presented in this volume fills this void by drawing from the experiences and insights of experts gained both through theoretical development and practical implementation of intelligent systems in a variety of diverse application domains. The book presents

  13. Performance Evaluation of Supercomputers using HPCC and IMB Benchmarks

    NASA Technical Reports Server (NTRS)

    Saini, Subhash; Ciotti, Robert; Gunney, Brian T. N.; Spelce, Thomas E.; Koniges, Alice; Dossa, Don; Adamidis, Panagiotis; Rabenseifner, Rolf; Tiyyagura, Sunil R.; Mueller, Matthias; Fatoohi, Rod

    2006-01-01

    The HPC Challenge (HPCC) benchmark suite and the Intel MPI Benchmark (IMB) are used to compare and evaluate the combined performance of processor, memory subsystem, and interconnect fabric of five leading supercomputers - SGI Altix BX2, Cray X1, Cray Opteron Cluster, Dell Xeon cluster, and NEC SX-8. These five systems use five different networks (SGI NUMALINK4, Cray network, Myrinet, InfiniBand, and NEC IXS). The complete set of HPCC benchmarks is run on each of these systems. Additionally, we present Intel MPI Benchmarks (IMB) results to study the performance of 11 MPI communication functions on these systems.
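
    IMB-style measurements are built from timed message exchanges. A minimal ping-pong sketch in the same spirit, assuming mpi4py is available (run with, e.g., `mpiexec -n 2 python pingpong.py`); the message size and repetition count are arbitrary choices:

    ```python
    import numpy as np
    from mpi4py import MPI

    comm = MPI.COMM_WORLD
    rank = comm.Get_rank()
    nbytes = 1 << 20                    # 1 MiB message
    buf = np.zeros(nbytes, dtype="u1")
    reps = 100

    comm.Barrier()
    t0 = MPI.Wtime()
    for _ in range(reps):
        if rank == 0:
            comm.Send(buf, dest=1, tag=0)
            comm.Recv(buf, source=1, tag=1)
        elif rank == 1:
            comm.Recv(buf, source=0, tag=0)
            comm.Send(buf, dest=0, tag=1)
    elapsed = MPI.Wtime() - t0

    if rank == 0:
        # One round trip moves 2 * nbytes; report one-way latency too.
        bw = 2 * nbytes * reps / elapsed / 1e6
        print(f"latency ~ {elapsed / (2 * reps) * 1e6:.1f} us, "
              f"bandwidth ~ {bw:.0f} MB/s")
    ```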

  14. Benchmark Evaluation of HTR-PROTEUS Pebble Bed Experimental Program

    SciTech Connect

    Bess, John D.; Montierth, Leland; Köberl, Oliver; Snoj, Luka

    2014-10-09

    Benchmark models were developed to evaluate 11 critical core configurations of the HTR-PROTEUS pebble bed experimental program. Various additional reactor physics measurements were performed as part of this program; currently only 37 absorber rod worth measurements, for Cores 4, 9, and 10, have been evaluated as acceptable benchmark experiments. Dominant uncertainties in the experimental keff for all core configurations come from uncertainties in the ²³⁵U enrichment of the fuel, impurities in the moderator pebbles, and the density and impurity content of the radial reflector. Calculations of keff with MCNP5 and ENDF/B-VII.0 neutron nuclear data are greater than the benchmark values but within 1% and also within the 3σ uncertainty, except for Core 4, which is the only randomly packed pebble configuration. Repeated calculations of keff with MCNP6.1 and ENDF/B-VII.1 are lower than the benchmark values and within 1% (~3σ), except for Cores 5 and 9, which are lower than the benchmark eigenvalues but within 4σ. The primary difference between the two nuclear data libraries is the adjustment of the absorption cross section of graphite. Simulations of the absorber rod worth measurements are within 3σ of the benchmark experiment values. The complete benchmark evaluation details are available in the 2014 edition of the International Handbook of Evaluated Reactor Physics Benchmark Experiments.

  15. Benchmark Evaluation of the NRAD Reactor LEU Core Startup Measurements

    SciTech Connect

    J. D. Bess; T. L. Maddock; M. A. Marshall

    2011-09-01

    The Neutron Radiography (NRAD) reactor is a 250-kW TRIGA (Training, Research, Isotopes, General Atomics) conversion-type reactor at the Idaho National Laboratory; it is primarily used for neutron radiography analysis of irradiated and unirradiated fuels and materials. The NRAD reactor was converted from HEU to LEU fuel with 60 fuel elements and brought critical on March 31, 2010. This configuration of the NRAD reactor has been evaluated as an acceptable benchmark experiment and is available in the 2011 editions of the International Handbook of Evaluated Criticality Safety Benchmark Experiments (ICSBEP Handbook) and the International Handbook of Evaluated Reactor Physics Benchmark Experiments (IRPhEP Handbook). Significant effort went into precisely characterizing all aspects of the reactor core dimensions and material properties; detailed analyses of reactor parameters minimized experimental uncertainties. The largest contributors to the total benchmark uncertainty were the 234U, 236U, Er, and Hf content in the fuel; the manganese content in the stainless steel cladding; and the unknown level of water saturation in the graphite reflector blocks. A simplified benchmark model of the NRAD reactor was prepared with a keff of 1.0012 ± 0.0029 (1σ). Monte Carlo calculations with MCNP5 and KENO-VI and various neutron cross section libraries were performed and compared with the benchmark eigenvalue for the 60-fuel-element core configuration; all calculated eigenvalues are between 0.3 and 0.8% greater than the benchmark value. Benchmark evaluations of the NRAD reactor are beneficial in understanding biases and uncertainties affecting criticality safety analyses of storage, handling, or transportation applications with LEU-Er-Zr-H fuel.
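
    Given the benchmark value keff = 1.0012 ± 0.0029 quoted above, a calculated eigenvalue is conventionally reported as a bias in percent and in standard deviations. A small sketch; the calculated values and code/library labels are invented for illustration:

    ```python
    k_bench, sigma = 1.0012, 0.0029

    # Hypothetical calculated eigenvalues from different code/library pairs.
    calculated = {"MCNP5/LIB-A": 1.0047, "KENO-VI/LIB-B": 1.0091}

    for case, k_calc in calculated.items():
        diff = k_calc - k_bench
        print(f"{case}: C-E = {diff:+.4f} "
              f"({100 * diff / k_bench:+.2f}%, {diff / sigma:+.1f} sigma)")
    ```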

  16. NASA Indexing Benchmarks: Evaluating Text Search Engines

    NASA Technical Reports Server (NTRS)

    Esler, Sandra L.; Nelson, Michael L.

    1997-01-01

    The current proliferation of on-line information resources underscores the need to index collections of information and to search and retrieve them in a convenient manner. This study develops criteria for analytically comparing indexing and search engines and presents results for a number of freely available search engines. A product of this research is a toolkit capable of automatically indexing, searching, and extracting performance statistics from each of the search engines studied. This toolkit is highly configurable and can run these benchmark tests against other engines as well. Results demonstrate that the tested search engines can be grouped into two levels. Level one engines are efficient on small to medium sized data collections, but show weaknesses when used for collections of 100MB or larger. Level two search engines are recommended for data collections up to and beyond 100MB.
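
    A benchmarking toolkit of this kind reduces to timing index and query operations across engines and collection sizes. A minimal harness sketch; the engine interface and the toy engine below are hypothetical stand-ins, not the toolkit from the study:

    ```python
    import time

    def benchmark(engine, documents, queries):
        """Time indexing and querying for one engine; returns seconds."""
        t0 = time.perf_counter()
        engine.index(documents)
        index_time = time.perf_counter() - t0

        t0 = time.perf_counter()
        for q in queries:
            engine.search(q)
        query_time = time.perf_counter() - t0
        return index_time, query_time

    class NaiveEngine:
        """Toy substring-scan engine standing in for a real search engine."""
        def index(self, documents):
            self.docs = list(documents)
        def search(self, query):
            return [d for d in self.docs if query in d]

    docs = [f"document number {i} about benchmarks" for i in range(10_000)]
    print(benchmark(NaiveEngine(), docs, ["benchmarks", "number 42"]))
    ```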

  17. Benchmarking Evaluation Results for Prototype Extravehicular Activity Gloves

    NASA Technical Reports Server (NTRS)

    Aitchison, Lindsay; McFarland, Shane

    2012-01-01

    The Space Suit Assembly (SSA) Development Team at NASA Johnson Space Center has invested heavily in the advancement of rear-entry planetary exploration suit design but largely deferred development of extravehicular activity (EVA) glove designs, and accepted the risk of using the current flight gloves, Phase VI, for unique mission scenarios outside the Space Shuttle and International Space Station (ISS) Program realm of experience. However, as design reference missions mature, the risks of using heritage hardware have highlighted the need for developing robust new glove technologies. To address the technology gap, the NASA Game-Changing Technology group provided start-up funding for the High Performance EVA Glove (HPEG) Project in the spring of 2012. The overarching goal of the HPEG Project is to develop a robust glove design that increases human performance during EVA and creates a pathway for future implementation of emergent technologies, with specific aims of increasing pressurized mobility to 60% of barehanded capability, increasing durability by 100%, and decreasing the potential of gloves to cause injury during use. The HPEG Project focused initial efforts on identifying potential new technologies and benchmarking the performance of current state-of-the-art gloves to identify trends in design and fit, leading to standards and metrics against which emerging technologies can be assessed at both the component and assembly levels. The first of the benchmarking tests evaluated the quantitative mobility performance and subjective fit of four prototype gloves developed by Flagsuit LLC, Final Frontier Designs, ILC Dover, and David Clark Company as compared to the Phase VI. All of the companies were asked to design and fabricate gloves to the same set of NASA-provided hand measurements (which corresponded to a single size of Phase VI glove) and focus their efforts on improving mobility in the metacarpal phalangeal and carpometacarpal joints. Four test

  18. Quality in E-Learning--A Conceptual Framework Based on Experiences from Three International Benchmarking Projects

    ERIC Educational Resources Information Center

    Ossiannilsson, E.; Landgren, L.

    2012-01-01

    Between 2008 and 2010, Lund University took part in three international benchmarking projects, "E-xcellence+," the "eLearning Benchmarking Exercise 2009," and the "First Dual-Mode Distance Learning Benchmarking Club." A comparison of these models revealed a rather high level of correspondence. From this finding and from desktop studies of the…

  19. Benchmarking for the Effective Use of Student Evaluation Data

    ERIC Educational Resources Information Center

    Smithson, John; Birks, Melanie; Harrison, Glenn; Nair, Chenicheri Sid; Hitchins, Marnie

    2015-01-01

    Purpose: The purpose of this paper is to examine current approaches to interpretation of student evaluation data and present an innovative approach to developing benchmark targets for the effective and efficient use of these data. Design/Methodology/Approach: This article discusses traditional approaches to gathering and using student feedback…

  1. A benchmark for fault tolerant flight control evaluation

    NASA Astrophysics Data System (ADS)

    Smaili, H.; Breeman, J.; Lombaerts, T.; Stroosma, O.

    2013-12-01

    A large transport aircraft simulation benchmark (REconfigurable COntrol for Vehicle Emergency Return - RECOVER) has been developed within the GARTEUR (Group for Aeronautical Research and Technology in Europe) Flight Mechanics Action Group 16 (FM-AG(16)) on Fault Tolerant Control (2004-2008) for the integrated evaluation of fault detection and identification (FDI) and reconfigurable flight control strategies. The benchmark includes a suitable set of assessment criteria and failure cases, based on reconstructed accident scenarios, to assess the potential of new adaptive control strategies to improve aircraft survivability. The application of reconstruction and modeling techniques, based on accident flight data, has resulted in high-fidelity nonlinear aircraft and fault models to evaluate new Fault Tolerant Flight Control (FTFC) concepts and their real-time performance to accommodate in-flight failures.

  2. Benchmarks and performance indicators: two tools for evaluating organizational results and continuous quality improvement efforts.

    PubMed

    McKeon, T

    1996-04-01

    Benchmarks are tools that can be compared across companies and industries to measure process output. The key to benchmarking is understanding the composition of the benchmark and whether the benchmarks consist of homogeneous groupings. Performance measures expand the concept of benchmarking and cross organizational boundaries to include factors that are strategically important to organizational success. Incorporating performance measures into a balanced scorecard will provide a comprehensive tool to evaluate organizational results. PMID:8634466

  3. Performance Evaluation and Benchmarking of Intelligent Systems

    SciTech Connect

    del Pobil, Angel; Madhavan, Raj; Bonsignorio, Fabio

    2009-10-01

    Performance Evaluation and Benchmarking of Intelligent Systems presents research dedicated to the subject of performance evaluation and benchmarking of intelligent systems by drawing from the experiences and insights of leading experts gained both through theoretical development and practical implementation of intelligent systems in a variety of diverse application domains. This contributed volume offers a detailed and coherent picture of state-of-the-art, recent developments, and further research areas in intelligent systems. The chapters cover a broad range of applications, such as assistive robotics, planetary surveying, urban search and rescue, and line tracking for automotive assembly. Subsystems or components described in this book include human-robot interaction, multi-robot coordination, communications, perception, and mapping. Chapters are also devoted to simulation support and open source software for cognitive platforms, providing examples of the type of enabling underlying technologies that can help intelligent systems to propagate and increase in capabilities. Performance Evaluation and Benchmarking of Intelligent Systems serves as a professional reference for researchers and practitioners in the field. This book is also applicable to advanced courses for graduate level students and robotics professionals in a wide range of engineering and related disciplines including computer science, automotive, healthcare, manufacturing, and service robotics.

  4. 239Pu Resonance Evaluation for Thermal Benchmark System Calculations

    NASA Astrophysics Data System (ADS)

    Leal, L. C.; Noguere, G.; de Saint Jean, C.; Kahler, A. C.

    2014-04-01

    Analyses of thermal plutonium solution critical benchmark systems have indicated a deficiency in the 239Pu resonance evaluation. To investigate possible solutions to this issue, the Organisation for Economic Co-operation and Development (OECD) Nuclear Energy Agency (NEA) Working Party for Evaluation Cooperation (WPEC) established Subgroup 34 to focus on the reevaluation of the 239Pu resolved resonance parameters. In addition, the impacts of the prompt neutron multiplicity (ν̄) and the prompt fission neutron spectrum (PFNS) have been investigated. The objective of this paper is to present the results of the 239Pu resolved resonance evaluation effort.
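
    For context, resolved resonance parameters enter the cross section through a resonance formalism; modern 239Pu evaluations typically use the Reich-Moore formalism, but the textbook single-level Breit-Wigner form conveys the role of the parameters. For an isolated resonance at energy E0, the capture cross section is approximately:

    ```latex
    % Single-level Breit-Wigner capture cross section (illustrative form);
    % k: neutron wave number, g_J: spin statistical factor,
    % Gamma_n, Gamma_gamma, Gamma: neutron, radiation, and total widths.
    \sigma_\gamma(E) \approx \frac{\pi g_J}{k^2}
      \frac{\Gamma_n \Gamma_\gamma}{(E - E_0)^2 + (\Gamma/2)^2}
    ```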

  5. COVE 2A Benchmarking calculations using NORIA; Yucca Mountain Site Characterization Project

    SciTech Connect

    Carrigan, C.R.; Bixler, N.E.; Hopkins, P.L.; Eaton, R.R.

    1991-10-01

    Six steady-state and six transient benchmarking calculations have been performed, using the finite element code NORIA, to simulate one-dimensional infiltration into Yucca Mountain. These calculations were made to support the code verification (COVE 2A) activity for the Yucca Mountain Site Characterization Project. COVE 2A evaluates the usefulness of numerical codes for analyzing the hydrology of the potential Yucca Mountain site. Numerical solutions for all cases were found to be stable. As expected, the difficulties and computer-time requirements associated with obtaining solutions increased with infiltration rate. 10 refs., 128 figs., 5 tabs.
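
    The unsaturated-flow problem such codes solve for one-dimensional infiltration is typically governed by Richards' equation; a standard form is shown below (the benchmark's specific constitutive relations and boundary conditions are not reproduced here):

    ```latex
    % Richards' equation for one-dimensional vertical unsaturated flow;
    % theta: moisture content, psi: pressure head, K: hydraulic conductivity,
    % z: elevation, positive upward.
    \frac{\partial \theta(\psi)}{\partial t}
      = \frac{\partial}{\partial z}\left[ K(\psi)\left(
          \frac{\partial \psi}{\partial z} + 1 \right)\right]
    ```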

  6. Evaluation of the HTR-10 Reactor as a Benchmark for Physics Code QA

    SciTech Connect

    William K. Terry; Soon Sam Kim; Leland M. Montierth; Joshua J. Cogliati; Abderrafi M. Ougouag

    2006-09-01

    The HTR-10 is a small (10 MWt) pebble-bed research reactor intended to develop pebble-bed reactor (PBR) technology in China. It will be used to test and develop fuel, verify PBR safety features, demonstrate combined electricity production and co-generation of heat, and provide experience in PBR design, operation, and construction. As the only currently operating PBR in the world, the HTR-10 can provide data of great interest to everyone involved in PBR technology. In particular, if it yields data of sufficient quality, it can be used as a benchmark for assessing the accuracy of computer codes proposed for use in PBR analysis. This paper summarizes the evaluation for the International Reactor Physics Experiment Evaluation Project (IRPhEP) of data obtained in measurements of the HTR-10’s initial criticality experiment for use as benchmarks for reactor physics codes.

  7. Limitations of Community College Benchmarking and Benchmarks

    ERIC Educational Resources Information Center

    Bers, Trudy H.

    2006-01-01

    This chapter distinguishes between benchmarks and benchmarking, describes a number of data and cultural limitations to benchmarking projects, and suggests that external demands for accountability are the dominant reason for growing interest in benchmarking among community colleges.

  8. Evaluation of matching cost on the ISPRS stereo matching benchmark

    NASA Astrophysics Data System (ADS)

    Yue, Qingxing; Tang, Xinming; Gao, Xiaoming

    2015-12-01

    In this paper we evaluated several typical matching costs, including CENSUS, mutual information (MI), and normalized cross correlation, using the ISPRS Stereo Matching Benchmark datasets for DSM generation by stereo matching. Two kinds of global optimization algorithms, semi-global matching (SGM) and graph cuts (GC), were used as the optimization method. We used a sub-pixel method to obtain a more accurate MI lookup table, and a sub-pixel method was also used when computing costs from the MI lookup table. MI itself is sensitive to partial radiometric differences, so we also used a cost that combines MI and CENSUS. After DSM generation, the deviations between the generated DSMs and the lidar reference were analyzed to compute the mean deviation (Mean), the median deviation (Med), the standard deviation (Stdev), the normalized median absolute deviation (NMAD), the percentage of deviations within tolerance, etc., which were used to evaluate the accuracy of the DSMs generated from the different costs.
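
    The robust accuracy statistics named above (Mean, Med, Stdev, NMAD, percentage within tolerance) are straightforward to compute from a grid of DSM-minus-lidar deviations. A minimal sketch; the tolerance value and the synthetic deviation sample are illustrative assumptions:

    ```python
    import numpy as np

    def dsm_accuracy(deviations, tolerance=1.0):
        """Accuracy statistics for DSM-minus-lidar height deviations (metres)."""
        d = np.asarray(deviations, dtype=float)
        d = d[np.isfinite(d)]                        # drop no-data cells
        med = np.median(d)
        nmad = 1.4826 * np.median(np.abs(d - med))   # robust sigma estimate
        return {
            "mean": d.mean(),
            "median": med,
            "stdev": d.std(ddof=1),
            "nmad": nmad,
            "pct_in_tol": 100.0 * np.mean(np.abs(d) <= tolerance),
        }

    # Synthetic sample: mostly well-matched terrain plus a few blunders.
    rng = np.random.default_rng(0)
    sample = np.concatenate([rng.normal(0.1, 0.4, 9000),
                             rng.normal(0.0, 5.0, 1000)])
    print(dsm_accuracy(sample))
    ```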

  9. Thermal Performance Benchmarking (Presentation)

    SciTech Connect

    Moreno, G.

    2014-11-01

    This project will benchmark the thermal characteristics of automotive power electronics and electric motor thermal management systems. Recent vehicle systems will be benchmarked to establish baseline metrics, evaluate advantages and disadvantages of different thermal management systems, and identify areas of improvement to advance the state-of-the-art.

  10. Middleware Evaluation and Benchmarking for Use in Mission Operations Centers

    NASA Technical Reports Server (NTRS)

    Antonucci, Rob; Waktola, Waka

    2005-01-01

    Middleware technologies have been promoted as timesaving, cost-cutting alternatives to the point-to-point communication used in traditional mission operations systems. However, missions have been slow to adopt the new technology. The lack of existing middleware-based missions has given rise to uncertainty about middleware's ability to perform in an operational setting. Most mission architects are also unfamiliar with the technology and do not know the benefits and detriments to architectural choices - or even what choices are available. We will present the findings of a study that evaluated several middleware options specifically for use in a mission operations system. We will address some common misconceptions regarding the applicability of middleware-based architectures, and we will identify the design decisions and tradeoffs that must be made when choosing a middleware solution. The Middleware Comparison and Benchmark Study was conducted at NASA Goddard Space Flight Center to comprehensively evaluate candidate middleware products, compare and contrast the performance of middleware solutions with the traditional point-to-point socket approach, and assess data delivery and reliability strategies. The study focused on requirements of the Global Precipitation Measurement (GPM) mission, validating the potential use of middleware in the GPM mission ground system. The study was jointly funded by GPM and the Goddard Mission Services Evolution Center (GMSEC), a virtual organization for providing mission enabling solutions and promoting the use of appropriate new technologies for mission support. The study was broken into two phases. To perform the generic middleware benchmarking and performance analysis, a network was created with data producers and consumers passing data between themselves. The benchmark monitored the delay, throughput, and reliability of the data as the characteristics were changed. Measurements were taken under a variety of topologies, data demands

  11. Model evaluation using a community benchmarking system for land surface models

    NASA Astrophysics Data System (ADS)

    Mu, M.; Hoffman, F. M.; Lawrence, D. M.; Riley, W. J.; Keppel-Aleks, G.; Kluzek, E. B.; Koven, C. D.; Randerson, J. T.

    2014-12-01

    Evaluation of atmosphere, ocean, sea ice, and land surface models is an important step in identifying deficiencies in Earth system models and developing improved estimates of future change. For the land surface and carbon cycle, the design of an open-source system has been an important objective of the International Land Model Benchmarking (ILAMB) project. Here we evaluated CMIP5 and CLM models using a benchmarking system that enables users to specify models, data sets, and scoring systems so that results can be tailored to specific model intercomparison projects. Our scoring system used information from four different aspects of global datasets, including climatological mean spatial patterns, seasonal cycle dynamics, interannual variability, and long-term trends. Variable-to-variable comparisons enable investigation of the mechanistic underpinnings of model behavior, and allow for some control of biases in model drivers. Graphics modules allow users to evaluate model performance at local, regional, and global scales. Use of modular structures makes it relatively easy for users to add new variables, diagnostic metrics, benchmarking datasets, or model simulations. Diagnostic results are automatically organized into HTML files, so users can conveniently share results with colleagues. We used this system to evaluate atmospheric carbon dioxide, burned area, global biomass and soil carbon stocks, net ecosystem exchange, gross primary production, ecosystem respiration, terrestrial water storage, evapotranspiration, and surface radiation from CMIP5 historical and ESM historical simulations. We found that the multi-model mean often performed better than many of the individual models for most variables. We plan to publicly release a stable version of the software during fall of 2014 that has land surface, carbon cycle, hydrology, radiation and energy cycle components.
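
    One common ingredient of such scoring systems is mapping a mismatch statistic onto a bounded score. A minimal sketch of a spatial-pattern score that maps RMSE onto (0, 1]; the exponential form and the synthetic fields are illustrative assumptions, not the ILAMB package's exact scoring functions:

    ```python
    import numpy as np

    def pattern_score(model, obs):
        """Map spatial RMSE onto (0, 1] via exp(-RMSE / sigma_obs), so a
        perfect match scores 1 and large errors decay toward 0."""
        model, obs = np.asarray(model), np.asarray(obs)
        rmse = np.sqrt(np.mean((model - obs) ** 2))
        return np.exp(-rmse / obs.std())

    # Synthetic "observed" field and two imperfect model fields.
    rng = np.random.default_rng(1)
    obs = rng.gamma(2.0, 5.0, size=(90, 180))
    biased = obs * 1.3                                   # 30% high bias
    noisy = obs + rng.normal(0.0, 5.0, size=obs.shape)   # random error
    print(pattern_score(biased, obs), pattern_score(noisy, obs))
    ```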

  12. Using Web-Based Peer Benchmarking to Manage the Client-Based Project

    ERIC Educational Resources Information Center

    Raska, David; Keller, Eileen Weisenbach; Shaw, Doris

    2013-01-01

    The complexities of integrating client-based projects into marketing courses provide challenges for the instructor but produce richness of context and active learning for the student. This paper explains the integration of Web-based peer benchmarking as a means of improving student performance on client-based projects within a single semester in…

  13. A Better Benchmark Assessment: Multiple-Choice versus Project-Based

    ERIC Educational Resources Information Center

    Peariso, Jamon F.

    2006-01-01

    The purpose of this literature review and Ex Post Facto descriptive study was to determine which type of benchmark assessment, multiple-choice or project-based, provides the best indication of general success on the history portion of the CST (California Standards Tests). The result of the study indicates that although the project-based benchmark…

  14. Improving HEI Productivity and Performance through Project Management: Implications from a Benchmarking Case Study

    ERIC Educational Resources Information Center

    Bryde, David; Leighton, Diana

    2009-01-01

    As higher education institutions (HEIs) look to be more commercial in their outlook they are likely to become more dependent on the successful implementation of projects. This article reports a benchmarking survey of PM maturity in a HEI, with the purpose of assessing its capability to implement projects. Data were collected via questionnaires…

  15. BENCHMARK EVALUATION OF THE START-UP CORE REACTOR PHYSICS MEASUREMENTS OF THE HIGH TEMPERATURE ENGINEERING TEST REACTOR

    SciTech Connect

    John Darrell Bess

    2010-05-01

    The benchmark evaluation of the start-up core reactor physics measurements performed with Japan’s High Temperature Engineering Test Reactor, in support of the Next Generation Nuclear Plant Project and Very High Temperature Reactor Program activities at the Idaho National Laboratory, has been completed. The evaluation was performed using MCNP5 with ENDF/B-VII.0 nuclear data libraries and according to guidelines provided for inclusion in the International Reactor Physics Experiment Evaluation Project Handbook. Results provided include an updated evaluation of the initial six critical core configurations (five annular and one fully-loaded). The calculated keff eigenvalues agree within 1σ of the benchmark values. Reactor physics measurements that were evaluated include reactivity effects measurements such as excess reactivity during the core loading process and shutdown margins for the fully-loaded core, four isothermal temperature reactivity coefficient measurements for the fully-loaded core, and axial reaction rate measurements in the instrumentation columns of three core configurations. The calculated values agree well with the benchmark experiment measurements. Fully subcritical and warm critical configurations of the fully-loaded core were also assessed. The calculated keff eigenvalues for these two configurations also agree within 1σ of the benchmark values. The reactor physics measurement data can be used in the validation and design development of future High Temperature Gas-cooled Reactor systems.

  16. Benchmark Evaluation of Plutonium Hemispheres Reflected by Steel and Oil

    SciTech Connect

    John Darrell Bess

    2008-06-01

    During the period from June 1967 through September 1969 a series of critical experiments was performed at the Rocky Flats Critical Mass Laboratory with spherical and hemispherical plutonium assemblies as nested hemishells, as part of a Nuclear Safety Facility Experimental Program to evaluate operational safety margins for the Rocky Flats Plant. These assemblies were both bare and fully or partially oil-reflected. Many of these experiments were subcritical with an extrapolation to critical configurations or critical at a particular oil height. Existing records reveal that 167 experiments were performed over the course of 28 months. Unfortunately, much of the data was not recorded. A reevaluation of the experiments has been summarized in a report to support future experimental and computational analyses. This report examines only fifteen partially oil-reflected hemispherical assemblies. Fourteen of these assemblies also had close-fitting stainless-steel hemishell reflectors, used to determine the effective critical reflector height of oil with varying steel-reflector thickness. The experiments and the uncertainty in their keff values were evaluated to determine their potential as valid plutonium criticality benchmark experiments.

  17. BENCHMARK EVALUATION OF THE INITIAL ISOTHERMAL PHYSICS MEASUREMENTS AT THE FAST FLUX TEST FACILITY

    SciTech Connect

    John Darrell Bess

    2010-05-01

    The benchmark evaluation of the initial isothermal physics tests performed at the Fast Flux Test Facility, in support of Fuel Cycle Research and Development and Generation-IV activities at the Idaho National Laboratory, has been completed. The evaluation was performed using MCNP5 with ENDF/B-VII.0 nuclear data libraries and according to guidelines provided for inclusion in the International Reactor Physics Experiment Evaluation Project Handbook. Results provided include evaluation of the initial fully-loaded core critical, two neutron spectra measurements near the axial core center, 32 reactivity effects measurements (21 control rod worths, two control rod bank worths, six differential control rod worths, two shutdown margins, and one excess reactivity), an isothermal temperature coefficient, and low-energy electron and gamma spectra measurements at the core center. All measurements were performed at 400 °F. There was good agreement between the calculated and benchmark values for the fully-loaded core critical eigenvalue, reactivity effects measurements, and isothermal temperature coefficient. General agreement between benchmark experiment measurements and calculated spectra for neutrons and low-energy gammas at the core midplane exists, but calculations of the neutron spectra below the core and the low-energy gamma spectra at core midplane did not agree well. Homogenization of core components may have had a significant impact upon computational assessment of these effects. Future work includes development of a fully-heterogeneous model for comprehensive evaluation. The reactor physics measurement data can be used in nuclear data adjustment and validation of computational methods for advanced fuel cycle and nuclear reactor systems using Liquid Metal Fast Reactor technology.

  18. Proposal of an innovative benchmark for accuracy evaluation of dental crown manufacturing.

    PubMed

    Atzeni, Eleonora; Iuliano, Luca; Minetola, Paolo; Salmi, Alessandro

    2012-05-01

    An innovative benchmark representing a dental arch with classic features corresponding to different kinds of prepared teeth is proposed. Dental anatomy and general rules for tooth preparation are taken into account. This benchmark includes tooth orientation and provides oblique surfaces similar to those of real prepared teeth. The benchmark is produced by additive manufacturing (AM) and subjected to digitization by a dental three-dimensional scanner. The evaluation procedure shows that the scan data can be used as a reference model for crown restoration design. This benchmark therefore forms the basis for comparative studies of different CAD/CAM and AM techniques for dental crowns. PMID:22364825

  19. TPC-V: A Benchmark for Evaluating the Performance of Database Applications in Virtual Environments

    NASA Astrophysics Data System (ADS)

    Sethuraman, Priya; Reza Taheri, H.

    For two decades, TPC benchmarks have been the gold standards for evaluating the performance of database servers. An area that TPC benchmarks had not addressed until now was virtualization. Virtualization is now a major technology in use in data centers, and is the number one technology on Gartner Group's Top Technologies List. In 2009, the TPC formed a Working Group to develop a benchmark specifically intended for virtual environments that run database applications. We will describe the characteristics of this benchmark, and provide a status update on its development.

  20. Learning from Follow Up Surveys of Graduates: The Austin Teacher Program and the Benchmark Project. A Discussion Paper.

    ERIC Educational Resources Information Center

    Baker, Thomas E.

    This paper describes Austin College's (Texas) participation in the Benchmark Project, a collaborative followup study of teacher education graduates and their principals, focusing on the second round of data collection. The Benchmark Project was a collaboration of 11 teacher preparation programs that gathered and analyzed data comparing graduates…

  1. NRC-BNL BENCHMARK PROGRAM ON EVALUATION OF METHODS FOR SEISMIC ANALYSIS OF COUPLED SYSTEMS.

    SciTech Connect

    Xu, J.

    1999-08-15

    An NRC-BNL benchmark program for evaluation of state-of-the-art analysis methods and computer programs for seismic analysis of coupled structures with non-classical damping is described. The program includes a series of benchmarking problems designed to investigate various aspects of the complexities, applications, and limitations associated with methods for analysis of non-classically damped structures. Discussions are provided on the benchmarking process, benchmark structural models, and the evaluation approach, as well as benchmarking ground rules. It is expected that the findings and insights, as well as recommendations, from this program will be useful in developing new acceptance criteria and providing guidance for future regulatory activities involving licensing applications of these alternate methods to coupled systems.
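
    For context, "non-classically damped" means the coupled system's damping matrix is not diagonalized by the undamped mode shapes, so real-mode superposition no longer uncouples the equations of motion. In a standard formulation (not specific to the benchmark problems):

    ```latex
    % Coupled structure under ground acceleration; Phi collects the undamped
    % mode shapes. Because the primary structure and secondary system damp
    % differently, Phi^T C Phi is not diagonal (non-classical damping).
    M\,\ddot{u} + C\,\dot{u} + K\,u = -M\,r\,\ddot{u}_g(t),
    \qquad \Phi^{\mathsf{T}} C\,\Phi \ \text{not diagonal}
    ```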

  2. Thermal and mechanical codes first benchmark exercise: Part 1, Thermal analysis; Yucca Mountain Project

    SciTech Connect

    Costin, L.S.; Bauer, S.J.

    1990-06-01

    Thermal and mechanical models for intact and jointed rock mass behavior are being developed, verified, and validated at Sandia National Laboratories for the Yucca Mountain Project. Benchmarking is an essential part of this effort and is the primary tool used to demonstrate verification of engineering software used to solve thermomechanical problems. This report presents the results of the first phase of the first thermomechanical benchmark exercise. In the first phase of this exercise, three finite element codes for nonlinear heat conduction and one coupled thermoelastic boundary element code were used to solve the thermal portion of the benchmark problem. The codes used by the participants in this study were DOT, COYOTE, SPECTROM-41, and HEFF. The problem solved by each code was a two-dimensional idealization of a series of drifts whose dimensions approximate those of the underground layout in the conceptual design of a prospective repository for high-level radioactive waste at Yucca Mountain. 20 refs., 50 figs., 3 tabs.
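
    The thermal portion solved by the four codes is transient nonlinear heat conduction, which in standard form reads as below (the benchmark's specific boundary conditions and property models are not reproduced here):

    ```latex
    % Transient nonlinear heat conduction; the temperature-dependent
    % conductivity k(T) supplies the nonlinearity, q is the decay-heat source.
    \rho\, c_p(T)\, \frac{\partial T}{\partial t}
      = \nabla \cdot \left( k(T)\, \nabla T \right) + q
    ```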

  3. Monitoring Based Commissioning: Benchmarking Analysis of 24 UC/CSU/IOU Projects

    SciTech Connect

    Mills, Evan; Mathew, Paul

    2009-04-01

    Buildings rarely perform as intended, resulting in energy use that is higher than anticipated. Building commissioning has emerged as a strategy for remedying this problem in non-residential buildings. Complementing traditional hardware-based energy savings strategies, commissioning is a 'soft' process of verifying performance and design intent and correcting deficiencies. Through an evaluation of a series of field projects, this report explores the efficacy of an emerging refinement of this practice, known as monitoring-based commissioning (MBCx). MBCx can also be thought of as monitoring-enhanced building operation that incorporates three components: (1) Permanent energy information systems (EIS) and diagnostic tools at the whole-building and sub-system level; (2) Retro-commissioning based on the information from these tools and savings accounting emphasizing measurement as opposed to estimation or assumptions; and (3) On-going commissioning to ensure efficient building operations and measurement-based savings accounting. MBCx is thus a measurement-based paradigm which affords improved risk-management by identifying problems and opportunities that are missed with periodic commissioning. The analysis presented in this report is based on in-depth benchmarking of a portfolio of MBCx energy savings for 24 buildings located throughout the University of California and California State University systems. In the course of the analysis, we developed a quality-control/quality-assurance process for gathering and evaluating raw data from project sites and then selected a number of metrics to use for project benchmarking and evaluation, including appropriate normalizations for weather and climate, accounting for variations in central plant performance, and consideration of differences in building types. We performed a cost-benefit analysis of the resulting dataset, and provided comparisons to projects from a larger commissioning 'Meta-analysis' database. A total of 1120

  4. Applications of Integral Benchmark Data

    SciTech Connect

    Giuseppe Palmiotti; Teruhiko Kugo; Fitz Trumble; Albert C. Kahler; Dale Lancaster

    2014-10-09

    The International Reactor Physics Experiment Evaluation Project (IRPhEP) and the International Criticality Safety Benchmark Evaluation Project (ICSBEP) provide evaluated integral benchmark data that may be used for validation of reactor physics / nuclear criticality safety analytical methods and data, nuclear data testing, advanced modeling and simulation, and safety analysis licensing activities. The handbooks produced by these programs are used in over 30 countries. Five example applications are presented in this paper: (1) Use of IRPhEP Data in Uncertainty Analyses and Cross Section Adjustment, (2) Uncertainty Evaluation Methods for Reactor Core Design at JAEA Using Reactor Physics Experimental Data, (3) Application of Benchmarking Data to a Broad Range of Criticality Safety Problems, (4) Cross Section Data Testing with ICSBEP Benchmarks, and (5) Use of the International Handbook of Evaluated Reactor Physics Benchmark Experiments to Support the Power Industry.

  5. Putting Data to Work: Interim Recommendations from The Benchmarking Project

    ERIC Educational Resources Information Center

    Miles, Marty; Maguire, Sheila; Woodruff-Bolte, Stacy; Clymer, Carol

    2010-01-01

    As public and private funders have focused on evaluating the effectiveness of workforce development programs, a myriad of data collection systems and reporting processes have taken shape. Navigating these systems takes significant time and energy and often saps frontline providers' capacity to use data internally for program improvement.…

  6. Evaluating the Information Power Grid using the NAS Grid Benchmarks

    NASA Technical Reports Server (NTRS)

    VanderWijngaart, Rob F.; Frumkin, Michael A.

    2004-01-01

    The NAS Grid Benchmarks (NGB) are a collection of synthetic distributed applications designed to rate the performance and functionality of computational grids. We compare several implementations of the NGB to determine the programmability and efficiency of NASA's Information Power Grid (IPG), whose services are mostly based on the Globus Toolkit. We report on the overheads involved in porting existing NGB reference implementations to the IPG. No changes were made to the component tasks of the NGB, though the implementations can still be improved.

  7. State Education Agency Communications Process: Benchmark and Best Practices Project. Benchmark and Best Practices Project. Issue No. 01

    ERIC Educational Resources Information Center

    Zavadsky, Heather

    2014-01-01

    The role of state education agencies (SEAs) has shifted significantly from low-profile, compliance activities like managing federal grants to engaging in more complex and politically charged tasks like setting curriculum standards, developing accountability systems, and creating new teacher evaluation systems. The move from compliance-monitoring…

  8. An Overview of the International Reactor Physics Experiment Evaluation Project

    SciTech Connect

    Briggs, J. Blair; Gulliford, Jim

    2014-10-09

    Interest in high-quality integral benchmark data is increasing as efforts to quantify and reduce calculational uncertainties associated with advanced modeling and simulation accelerate to meet the demands of next generation reactor and advanced fuel cycle concepts. Two Organization for Economic Cooperation and Development (OECD) Nuclear Energy Agency (NEA) activities, the International Criticality Safety Benchmark Evaluation Project (ICSBEP), initiated in 1992, and the International Reactor Physics Experiment Evaluation Project (IRPhEP), initiated in 2003, have been identifying existing integral experiment data, evaluating those data, and providing integral benchmark specifications for methods and data validation for nearly two decades. Data provided by those two projects will be of use to the international reactor physics, criticality safety, and nuclear data communities for future decades. An overview of the IRPhEP and a brief update of the ICSBEP are provided in this paper.

  9. Benchmarking on the evaluation of major accident-related risk assessment.

    PubMed

    Fabbri, Luciano; Contini, Sergio

    2009-03-15

    This paper summarises the main results of a European project BEQUAR (Benchmarking Exercise in Quantitative Area Risk Assessment in Central and Eastern European Countries). This project is among the first attempts to explore how independent evaluations of the same risk study associated with a certain chemical establishment could differ from each other, and what the consequent effects on the resulting area risk estimate would be. The exercise specifically aimed at exploring the manner and degree to which independent experts may disagree on the interpretation of quantitative risk assessments for the same entity. The project first compared the results of a number of independent expert evaluations of a quantitative risk assessment study for the same reference chemical establishment. This effort was then followed by a study of the impact of the different interpretations on the estimate of the overall risk on the area concerned. In order to improve the inter-comparability of the results, this exercise was conducted using a single tool for area risk assessment based on the ARIPAR methodology. The results of this study are expected to contribute to an improved understanding of the inspection criteria and practices used by the different national authorities responsible for the implementation of the Seveso II Directive in their countries. The activity was funded under the Enlargement and Integration Action of the Joint Research Centre (JRC), which aims at providing scientific and technological support for promoting integration of the New Member States and assisting the Candidate Countries on their way towards accession to the European Union. PMID:18657363

  10. Preliminary Benchmark Evaluation of Japan’s High Temperature Engineering Test Reactor

    SciTech Connect

    John Darrell Bess

    2009-05-01

    A benchmark model of the initial fully-loaded start-up core critical configuration of Japan’s High Temperature Engineering Test Reactor (HTTR) was developed to provide data in support of ongoing validation efforts of the Very High Temperature Reactor Program using publicly available resources. The HTTR is a 30 MWt test reactor utilizing graphite moderation, helium coolant, and prismatic TRISO fuel. The benchmark was modeled using MCNP5 with various neutron cross-section libraries. An uncertainty evaluation was performed by perturbing the benchmark model and comparing the resultant eigenvalues. The calculated eigenvalues are approximately 2-3% greater than expected with an uncertainty of ±0.70%. The primary sources of uncertainty are the impurities in the core and reflector graphite. The release of additional HTTR data could effectively reduce the benchmark model uncertainties and bias. Sensitivity of the results to the graphite impurity content might imply that further evaluation of the graphite content could significantly improve calculated results. Proper characterization of graphite for future Next Generation Nuclear Plant reactor designs will improve computational modeling capabilities. Current benchmarking activities include evaluation of the annular HTTR cores and assessment of the remaining start-up core physics experiments, including reactivity effects, reactivity coefficient, and reaction-rate distribution measurements. Long term benchmarking goals might include analyses of the hot zero-power critical, rise-to-power tests, and other irradiation, safety, and technical evaluations performed with the HTTR.
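
    As an aside on how such an uncertainty evaluation rolls up, independent 1-sigma components combine in quadrature. The component values in the sketch below are hypothetical, chosen only so the total reproduces the quoted ±0.70%; they are not the evaluation's actual breakdown.

      import math

      # Hypothetical 1-sigma uncertainty components (in % of k-eff); values are
      # illustrative only, picked so the quadrature total matches the quoted 0.70%.
      components = {
          "core graphite impurities": 0.60,
          "reflector graphite impurities": 0.32,
          "geometry tolerances": 0.15,
          "fuel enrichment": 0.10,
      }

      # Independent components combine in quadrature to the total uncertainty.
      total = math.sqrt(sum(u**2 for u in components.values()))
      print(f"Combined 1-sigma benchmark uncertainty: +/-{total:.2f}%")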

  11. Evaluation of microfinance projects.

    PubMed

    Johnson, S

    1999-08-01

    This paper criticizes the quick system proposed by Henk Moll for evaluating microfinance projects in the article "How to Pre-Evaluate Credit Projects in Ten Minutes". The author contends that there is a need to emphasize the objectives of the project. The procedure used by Moll, he argues, is applicable only to projects that have just two key objectives: credit operations and the provision of services. Arguments are presented on the three specific questions proposed by Moll, ranging from the availability of externally audited financial reports, to the performance of the interest rate on loans vis-a-vis the inflation rate, to the provision of loans according to the individual requirements of the borrowers. Lastly, the author emphasizes that the overall approach is not useful and suggests that careful consideration should be given to the use or abuse of a simple scoring system or checklist such as the one proposed by Moll. PMID:12349295

  12. Concept of using a benchmark part to evaluate rapid prototype processes

    NASA Technical Reports Server (NTRS)

    Cariapa, Vikram

    1994-01-01

    A conceptual benchmark part for guiding manufacturers and users of rapid prototyping technologies is proposed. This is based on a need to have some tool to evaluate the development of this technology and to assist the user in judiciously selecting a process. The benchmark part is designed to have unique product details and features. The extent to which a rapid prototyping process can reproduce these features becomes a measure of the capability of the process. Since rapid prototyping is a dynamic technology, this benchmark part should be used to continuously monitor process capability of existing and developing technologies. Development of this benchmark part is, therefore, based on an understanding of the properties required from prototypes and characteristics of various rapid prototyping processes and measuring equipment that is used for evaluation.

  13. Benchmark Evaluation of Uranium Metal Annuli and Cylinders with Beryllium Reflectors

    SciTech Connect

    John D. Bess

    2010-06-01

    An extensive series of delayed-critical experiments was performed at the Oak Ridge Critical Experiments Facility using enriched uranium metal during the 1960s and 1970s in support of criticality safety operations at the Y-12 Plant. These experiments were designed to evaluate the storage, casting, and handling limits of the Y-12 Plant and to provide data for the verification of cross sections and calculation methods utilized in nuclear criticality safety applications. Many of these experiments have already been evaluated and included in the International Criticality Safety Benchmark Evaluation Project (ICSBEP) Handbook: unreflected (HEU-MET-FAST-051), graphite-reflected (HEU-MET-FAST-071), and polyethylene-reflected (HEU-MET-FAST-076). Three of the experiments consisted of highly-enriched uranium (HEU, ~93.2% 235U) metal parts reflected by beryllium metal discs. The first evaluated experiment was constructed from a 7-in.-diameter, 4-1/8-in.-high stack of HEU discs top-reflected by a 7-in.-diameter, 5-9/16-in.-high stack of beryllium discs. The other two experiments were formed from stacks of concentric HEU metal annular rings surrounding a 7-in.-diameter beryllium core. The nominal outer diameters were 13 and 15 in., with nominal stack heights of 5 and 4 in., respectively. These experiments have been evaluated for inclusion in the ICSBEP Handbook.

  14. Towards a benchmark simulation model for plant-wide control strategy performance evaluation of WWTPs.

    PubMed

    Jeppsson, U; Rosen, C; Alex, J; Copp, J; Gernaey, K V; Pons, M N; Vanrolleghem, P A

    2006-01-01

    The COST/IWA benchmark simulation model has been available for seven years. Its primary purpose has been to create a platform for control strategy benchmarking of activated sludge processes. The fact that the benchmark has resulted in more than 100 publications, not only in Europe but also worldwide, demonstrates the interest in such a tool within the research community. In this paper, an extension of the benchmark simulation model no. 1 (BSM1) is proposed. This extension aims at facilitating control strategy development and performance evaluation at a plant-wide level and, consequently, includes both pre-treatment of wastewater as well as the processes describing sludge treatment. The motivation for the extension is the increasing interest and need to operate and control wastewater treatment systems not only at an individual process level but also on a plant-wide basis. To facilitate the changes, the evaluation period has been extended to one year. A prolonged evaluation period allows for long-term control strategies to be assessed and enables the use of control handles that cannot be evaluated in a realistic fashion in the one-week BSM1 evaluation period. In the paper, the extended plant layout is proposed and the new suggested process models are described briefly. Models for influent file design, the benchmarking procedure and the evaluation criteria are also discussed. Finally, some important remaining topics, for which consensus is required, are identified. PMID:16532759

  15. Benchmark Evaluation of the Neutron Radiography (NRAD) Reactor Upgraded LEU-Fueled Core

    SciTech Connect

    John D. Bess

    2001-09-01

    Benchmark models were developed to evaluate the cold-critical start-up measurements performed during the fresh core reload of the Neutron Radiography (NRAD) reactor with Low Enriched Uranium (LEU) fuel. The final upgraded core configuration with 64 fuel elements has been completed. Evaluated benchmark measurement data include criticality, control-rod worth measurements, shutdown margin, and excess reactivity. Dominant uncertainties in keff include the manganese content and impurities contained within the stainless steel cladding of the fuel and the 236U and erbium poison content in the fuel matrix. Calculations with MCNP5 and ENDF/B-VII.0 nuclear data are approximately 1.4% greater than the benchmark model eigenvalue, supporting contemporary research regarding errors in the cross section data necessary to simulate TRIGA-type reactors. Uncertainties in reactivity effects measurements are estimated to be ~10%, with calculations in agreement with benchmark experiment values within 2σ. The completed benchmark evaluation details are available in the 2014 edition of the International Handbook of Evaluated Reactor Physics Benchmark Experiments (IRPhEP Handbook). Evaluations of the NRAD LEU cores containing 56, 60, and 62 fuel elements have also been completed, including analysis of their respective reactivity effects measurements; they are also available in the IRPhEP Handbook but will not be included in this summary paper.

  16. Overview of the 2014 Edition of the International Handbook of Evaluated Reactor Physics Benchmark Experiments (IRPhEP Handbook)

    SciTech Connect

    John D. Bess; J. Blair Briggs; Jim Gulliford; Ian Hill

    2014-10-01

    The International Reactor Physics Experiment Evaluation Project (IRPhEP) is a widely recognized world-class program. The work of the IRPhEP is documented in the International Handbook of Evaluated Reactor Physics Benchmark Experiments (IRPhEP Handbook). Integral data from the IRPhEP Handbook are used by reactor safety and design, nuclear data, criticality safety, and analytical methods development specialists, worldwide, to perform necessary validations of their calculational techniques. The IRPhEP Handbook is among the most frequently quoted references in the nuclear industry and is expected to be a valuable resource for future decades.

  17. Key findings of the US Cystic Fibrosis Foundation's clinical practice benchmarking project.

    PubMed

    Boyle, Michael P; Sabadosa, Kathryn A; Quinton, Hebe B; Marshall, Bruce C; Schechter, Michael S

    2014-04-01

    Benchmarking is the process of using outcome data to identify high-performing centres and determine practices associated with their outstanding performance. The US Cystic Fibrosis Foundation (CFF) Patient Registry contains centre-specific outcomes data for all CFF-certified paediatric and adult cystic fibrosis (CF) care programmes in the USA. The CFF benchmarking project analysed these registry data, adjusting for differences in patient case mix known to influence outcomes, and identified the top-performing US paediatric and adult CF care programmes for pulmonary and nutritional outcomes. Separate multidisciplinary paediatric and adult benchmarking teams each visited 10 CF care programmes, five in the top quintile for pulmonary outcomes and five in the top quintile for nutritional outcomes. Key practice patterns and approaches present in both paediatric and adult programmes with outstanding clinical outcomes were identified and could be summarised as systems, attitudes, practices, patient/family empowerment and projects. These included: (1) the presence of strong leadership and a well-functioning care team working with a systematic approach to providing consistent care; (2) high expectations for outcomes among providers and families; (3) early and aggressive management of clinical declines, avoiding reliance on 'rescues'; and (4) patients/families that were engaged, empowered and well informed on disease management and its rationale. In summary, assessment of practice patterns at CF care centres with top-quintile pulmonary and nutritional outcomes provides insight into characteristic practices that may aid in optimising patient outcomes. PMID:24608546

  18. The ORSphere Benchmark Evaluation and Its Potential Impact on Nuclear Criticality Safety

    SciTech Connect

    John D. Bess; Margaret A. Marshall; J. Blair Briggs

    2013-10-01

    In the early 1970s, critical experiments using an unreflected metal sphere of highly enriched uranium (HEU) were performed with the aim of providing a “very accurate description…as an ideal benchmark for calculational methods and cross-section data files.” Two near-critical configurations of the Oak Ridge Sphere (ORSphere) were evaluated as acceptable benchmark experiments for inclusion in the International Handbook of Evaluated Criticality Safety Benchmark Experiments (ICSBEP Handbook). The results from those benchmark experiments were then compared with additional unmoderated and unreflected HEU metal benchmark experiment configurations currently found in the ICSBEP Handbook. For basic geometries (spheres, cylinders, and slabs) the eigenvalues calculated using MCNP5 and ENDF/B-VII.0 were within 3σ of their respective benchmark values. There appears to be generally good agreement between calculated and benchmark values for spherical and slab geometry systems. Cylindrical geometry configurations tended to calculate low, including more complex bare HEU metal systems containing cylinders. The ORSphere experiments do not calculate within their 1σ uncertainty, and there is a possibility that the effect of the measured uncertainties for the GODIVA I benchmark may need to be reevaluated. There is significant scatter in the calculations for the highly-correlated ORCEF cylinder experiments, which are constructed from close-fitting HEU discs and annuli. Selection of a nuclear data library can have a larger impact on calculated eigenvalue results than the variation found within calculations of a given experimental series, such as the ORCEF cylinders, using a single nuclear data set.

  19. Benchmark Testing of a New 56Fe Evaluation for Criticality Safety Applications

    SciTech Connect

    Leal, Luiz C; Ivanov, E.

    2015-01-01

    The SAMMY code was used to evaluate resonance parameters of the 56Fe cross section in the resolved resonance energy range of 0–2 MeV using transmission, capture, elastic, inelastic, and double-differential elastic cross-section data. SAMMY fits R-matrix resonance parameters using the generalized least-squares technique (Bayes’ theory). The evaluation yielded a set of resonance parameters that reproduced the experimental data very well, along with a resonance-parameter covariance matrix for data uncertainty calculations. Benchmark tests were conducted to assess the performance of the evaluation in benchmark calculations.
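
    The generalized least-squares (Bayes) step that such an evaluation applies can be sketched generically. The following is a standard linearized update, not SAMMY's implementation; the names P0, V_P, D, V_D, T, and G for the prior parameters, covariances, data, theory values, and sensitivities are assumptions for illustration.

      import numpy as np

      def bayes_gls_update(P0, V_P, D, V_D, T, G):
          """One linearized generalized-least-squares (Bayes) update.

          P0 : prior parameters                 (n,)
          V_P: prior parameter covariance       (n, n)
          D  : measured data points             (m,)
          V_D: data covariance                  (m, m)
          T  : theory values T(P0) at the prior (m,)
          G  : sensitivities dT/dP at P0        (m, n)
          """
          S = G @ V_P @ G.T + V_D              # innovation covariance
          K = V_P @ G.T @ np.linalg.inv(S)     # gain
          P1 = P0 + K @ (D - T)                # updated parameters
          V1 = V_P - K @ G @ V_P               # updated (reduced) covariance
          return P1, V1

      # Toy demonstration with two parameters and three data points.
      rng = np.random.default_rng(1)
      G = rng.normal(size=(3, 2))
      P0, V_P = np.zeros(2), np.eye(2)
      D, V_D = G @ np.array([0.5, -0.2]), 0.01 * np.eye(3)
      P1, V1 = bayes_gls_update(P0, V_P, D, V_D, T=G @ P0, G=G)
      print(P1)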

  20. DICE: Database for the International Criticality Safety Benchmark Evaluation Program Handbook

    SciTech Connect

    Nouri, Ali; Nagel, Pierre; Briggs, J. Blair; Ivanova, Tatiana

    2003-09-15

    The 2002 edition of the 'International Handbook of Evaluated Criticality Safety Benchmark Experiments' (ICSBEP Handbook) spans more than 26 000 pages and contains 330 evaluations with benchmark specifications for 2881 critical or near-critical configurations. With such a large content, it became evident that the users needed more than a broad and qualitative classification of experiments to make efficient use of the ICSBEP Handbook. This paper describes the features of Database for the International Handbook of Evaluated Criticality Safety Benchmark Experiments (DICE), which is a database for the ICSBEP Handbook. The DICE program contains a relational database loaded with selected information from each configuration and a users' interface that enables one to query the database and to extract specific parameters. Summary descriptions of each experimental configuration can also be obtained. In addition, plotting capabilities provide the means of comparing neutron spectra and sensitivity coefficients for a set of configurations.

  1. Reactor Physics and Criticality Benchmark Evaluations for Advanced Nuclear Fuel - Final Technical Report

    SciTech Connect

    William Anderson; James Tulenko; Bradley Rearden; Gary Harms

    2008-09-11

    The nuclear industry interest in advanced fuel and reactor design often drives towards fuel with uranium enrichments greater than 5 wt% 235U. Unfortunately, little data exists, in the form of reactor physics and criticality benchmarks, for uranium enrichments ranging between 5 and 10 wt% 235U. The primary purpose of this project is to provide benchmarks for fuel similar to what may be required for advanced light water reactors (LWRs). These experiments will ultimately provide additional information for application to the criticality-safety bases for commercial fuel facilities handling greater than 5 wt% 235U fuel.

  2. Associations between CMS's Clinical Performance Measures project benchmarks, profit structure, and mortality in dialysis units.

    PubMed

    Szczech, L A; Klassen, P S; Chua, B; Hedayati, S S; Flanigan, M; McClellan, W M; Reddan, D N; Rettig, R A; Frankenfield, D L; Owen, W F

    2006-06-01

    Prior studies observing greater mortality in for-profit dialysis units have not captured information about benchmarks of care. This study was undertaken to examine the association between profit status and mortality while achieving benchmarks. Utilizing data from the US Renal Data System and the Centers for Medicare & Medicaid Services' end-stage renal disease (ESRD) Clinical Performance Measures project, hemodialysis units were categorized as for-profit or not-for-profit. Associations with mortality at 1 year were estimated using Cox regression. Two thousand six hundred and eighty-five dialysis units (31,515 patients) were designated as for-profit and 1018 (15,085 patients) as not-for-profit. Patients in for-profit facilities were more likely to be older, black, female, diabetic, and have higher urea reduction ratio (URR), hematocrit, serum albumin, and transferrin saturation. Of the patients in for-profit and not-for-profit units, 19.4% and 18.6% died, respectively. In unadjusted analyses, profit status was not associated with mortality (hazard ratio (HR)=1.04, P=0.09). When URR, hematocrit, albumin, and ESRD Network were added to models with profit status, the association between profit status (for-profit vs not-for-profit) and increasing mortality risk became significant. In adjusted models, patients in for-profit facilities had a greater death risk (HR 1.09, P=0.004). More patients in for-profit units met clinical benchmarks, and survival among patients in for-profit units was similar to that in not-for-profit units. This suggests that in the contemporary era, interventions in for-profit dialysis units have not impaired their ability to deliver performance benchmarks and do not affect survival. PMID:16732194
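
    A Cox proportional-hazards model of the kind described can be sketched with the lifelines Python library. The data below are simulated and all column names are assumptions; this is not the study's code or data.

      import numpy as np
      import pandas as pd
      from lifelines import CoxPHFitter

      # Simulated patient-level data; effect sizes are illustrative only.
      rng = np.random.default_rng(7)
      n = 200
      for_profit = rng.integers(0, 2, n)
      albumin = rng.normal(3.7, 0.4, n)
      hazard = 0.002 * np.exp(0.09 * for_profit - 0.8 * (albumin - 3.7))
      time = rng.exponential(1.0 / hazard)
      df = pd.DataFrame({
          "time_days": np.minimum(time, 365.0),   # administrative censoring at 1 year
          "died": (time <= 365).astype(int),
          "for_profit": for_profit,
          "albumin": albumin,
      })

      cph = CoxPHFitter()
      cph.fit(df, duration_col="time_days", event_col="died")
      cph.print_summary()   # the exp(coef) column gives the hazard ratios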

  3. TOSPAC calculations in support of the COVE 2A benchmarking activity; Yucca Mountain Site Characterization Project

    SciTech Connect

    Gauthier, J.H.; Zieman, N.B.; Miller, W.B.

    1991-10-01

    The purpose of the Code Verification (COVE) 2A benchmarking activity is to assess the numerical accuracy of several computer programs for the Yucca Mountain Site Characterization Project of the Department of Energy. This paper presents a brief description of the computer program TOSPAC and a discussion of the calculational effort and results generated by TOSPAC for the COVE 2A problem set. The calculations were performed twice. The initial calculations provided preliminary results for comparison with the results from other COVE 2A participants. TOSPAC was modified in response to the comparison and the final calculations included a correction and several enhancements to improve efficiency. 8 refs.

  4. Benchmark Evaluation of Start-Up and Zero-Power Measurements at the High-Temperature Engineering Test Reactor

    SciTech Connect

    Bess, John D.; Fujimoto, Nozomu

    2014-10-09

    Benchmark models were developed to evaluate six cold-critical and two warm-critical, zero-power measurements of the HTTR. Additional measurements of a fully-loaded subcritical configuration, core excess reactivity, shutdown margins, six isothermal temperature coefficients, and axial reaction-rate distributions were also evaluated as acceptable benchmark experiments. Insufficient information is publicly available to develop finely-detailed models of the HTTR as much of the design information is still proprietary. However, the uncertainties in the benchmark models are judged to be of sufficient magnitude to encompass any biases and bias uncertainties incurred through the simplification process used to develop the benchmark models. Dominant uncertainties in the experimental keff for all core configurations come from uncertainties in the impurity content of the various graphite blocks that comprise the HTTR. Monte Carlo calculations of keff are between approximately 0.9 % and 2.7 % greater than the benchmark values. Reevaluation of the HTTR models as additional information becomes available could improve the quality of this benchmark and possibly reduce the computational biases. High-quality characterization of graphite impurities would significantly improve the quality of the HTTR benchmark assessment. Simulations of the other reactor physics measurements are in good agreement with the benchmark experiment values. The complete benchmark evaluation details are available in the 2014 edition of the International Handbook of Evaluated Reactor Physics Benchmark Experiments.

  5. How Can the eCampus Be Organized and Run To Address Traditional Concerns, but Maintain an Innovative Approach to Providing Educational Access? Project Eagle Evaluation Question #3. Benchmarking St. Petersburg College: A Report to Leadership.

    ERIC Educational Resources Information Center

    Burkhart, Joyce

    This paper discusses the findings of St. Petersburg College's (SPC) (Florida) evaluation question: "How can the eCampus be organized and run to address traditional faculty concerns, but maintain an innovative approach to providing educational access?" In order to evaluate this question, a list was compiled of faculty issues identified by…

  6. Benchmarks for evaluation and comparison of udder health status using monthly individual somatic cell count

    PubMed Central

    Fauteux, Véronique; Roy, Jean-Philippe; Scholl, Daniel T.; Bouchard, Émile

    2014-01-01

    The objectives of this study were to propose benchmarks for the interpretation of herd udder health using monthly individual somatic cell counts (SCC) from dairy herds in Quebec, Canada and to evaluate the association of risk factors with intramammary infection (IMI) dynamics relative to these benchmarks. The mean and percentiles of indices related to udder infection status [e.g., proportion of healthy or chronically infected cows, cows cured and new IMI (NIMI) rate] during lactation and over the dry period were calculated using a threshold of ≥ 200 000 cells/mL at test day. Mean NIMI proportion and proportion of cows cured during lactation were 0.11 and 0.27. Benchmarks of 0.70 and 0.03 for healthy and chronically infected cows over the dry period were proposed. Season and herd mean SCC were risk factors influencing IMI dynamics during lactation and over the dry period. PMID:25082989
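
    The infection-status indices described can be computed directly from two consecutive monthly SCC tests. The sketch below is a plausible reading of the index definitions using the study's 200 000 cells/mL threshold; the exact definitions are paraphrased, not taken from the paper's methods, and the data are invented.

      import numpy as np

      SCC_THRESHOLD = 200_000  # cells/mL, infection threshold used in the study

      def udder_health_indices(scc_prev, scc_curr):
          """Herd-level IMI dynamics between two consecutive monthly tests.

          scc_prev, scc_curr: individual SCC (cells/mL) for the same cows.
          """
          prev_inf = np.asarray(scc_prev) >= SCC_THRESHOLD
          curr_inf = np.asarray(scc_curr) >= SCC_THRESHOLD
          return {
              "healthy": np.mean(~prev_inf & ~curr_inf),  # below threshold both months
              "chronic": np.mean(prev_inf & curr_inf),    # infected at both tests
              "new_imi": np.mean(curr_inf[~prev_inf]),    # new IMI among healthy cows
              "cured":   np.mean(~curr_inf[prev_inf]),    # cures among infected cows
          }

      print(udder_health_indices([150e3, 400e3, 90e3, 700e3],
                                 [120e3, 350e3, 260e3, 80e3]))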

  7. Evaluation of mobile phone camera benchmarking using objective camera speed and image quality metrics

    NASA Astrophysics Data System (ADS)

    Peltoketo, Veli-Tapani

    2014-11-01

    When a mobile phone camera is tested and benchmarked, the significance of image quality metrics is widely acknowledged. There are also existing methods to evaluate camera speed. However, speed or rapidity metrics of the mobile phone's camera system have not been used together with the quality metrics, even though camera speed has become an increasingly important camera performance feature. This work comprises several tasks. First, the most important image quality and speed-related metrics of a mobile phone's camera system are collected from standards and papers, and novel speed metrics are identified. Second, combinations of the quality and speed metrics are validated using mobile phones on the market. The measurements are made against the application programming interfaces of the different operating systems. Finally, the results are evaluated and conclusions are drawn. The paper defines a solution for combining different image quality and speed metrics into a single benchmarking score. A proposal for the combined benchmarking metric is evaluated using measurements of 25 mobile phone cameras on the market. The paper continues previous benchmarking work, expanded with visual noise measurement and updates for the latest mobile phone versions.
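
    The paper's exact combination rule is not reproduced in this abstract, so the sketch below shows one generic way to fold normalized quality and speed metrics into a single score; the metric names and the equal weighting are assumptions, not the paper's formula.

      import numpy as np

      def combined_score(quality, speed, w_quality=0.5):
          """Combine quality and speed metrics, each normalized to [0, 1]
          (1 = best), into a single benchmarking score."""
          q = np.mean(list(quality.values()))
          s = np.mean(list(speed.values()))
          return w_quality * q + (1.0 - w_quality) * s

      phone = combined_score(
          quality={"resolution": 0.82, "visual_noise": 0.74, "color": 0.69},
          speed={"shot_to_shot": 0.55, "autofocus": 0.61, "shutter_lag": 0.48},
      )
      print(f"benchmark score: {phone:.2f}")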

  8. Benchmarking and Performance Measurement.

    ERIC Educational Resources Information Center

    Town, J. Stephen

    This paper defines benchmarking and its relationship to quality management, describes a project which applied the technique in a library context, and explores the relationship between performance measurement and benchmarking. Numerous benchmarking methods contain similar elements: deciding what to benchmark; identifying partners; gathering…

  9. What Are the Appropriate Models for St. Petersburg College and the University Partnership Center To Expand Access to Bachelor's and Master's Degrees? Project Eagle Evaluation Question #5. Benchmarking St. Petersburg College: A Report to Leadership.

    ERIC Educational Resources Information Center

    Burkhart, Joyce

    St. Petersburg College (SPC) (Florida), formerly a two-year community college, now offers four-year degrees. This paper discusses the findings of SPC's evaluation question focusing on what the appropriate models are for St. Petersburg College and the University Partnership Center (UPC) to increase access to bachelor's and master's programs.…

  10. How Can St. Petersburg College Leverage Technology To Increase Access to Courses and Programs for an Expanded Pool of Learners? Project Eagle Evaluation Question #4. Benchmarking St. Petersburg College: A Report to Leadership.

    ERIC Educational Resources Information Center

    Burkhart, Joyce

    This report discusses St. Petersburg College's (SPC) (Florida) evaluation question, "How can St. Petersburg College leverage technology to increase access to courses and programs for an expanded pool of learners?" The report summarizes both nationwide/worldwide best practices and current SPC efforts related to four strategies: (1) an E-learning…

  11. Benchmarking the Remote-Handled Waste Facility at the West Valley Demonstration Project

    SciTech Connect

    O. P. Mendiratta; D. K. Ploetz

    2000-02-29

    Facility decontamination activities at the West Valley Demonstration Project (WVDP), the site of a former commercial nuclear spent fuel reprocessing facility near Buffalo, New York, have resulted in the removal of radioactive waste. Due to high dose and/or high contamination levels of this waste, it needs to be handled remotely for processing and repackaging into transport/disposal-ready containers. An initial conceptual design for a Remote-Handled Waste Facility (RHWF), completed in June 1998, was estimated to cost $55 million and take 11 years to process the waste. Benchmarking the RHWF with other facilities around the world, completed in November 1998, identified unique facility design features and innovative waste processing methods. Incorporation of the benchmarking effort has led to a smaller yet fully functional, $31 million facility. To distinguish it from the June 1998 version, the revised design is called the Rescoped Remote-Handled Waste Facility (RRHWF) in this topical report. The conceptual design for the RRHWF was completed in June 1999. A design-build contract was approved by the Department of Energy in September 1999.

  12. Evaluation of HEU-Beryllium Benchmark Experiments to Improve Computational Analysis of Space Reactors

    SciTech Connect

    Bess, John; Bledsoe, Keith C; Rearden, Bradley T

    2011-01-01

    An assessment was previously performed to evaluate modeling capabilities and quantify preliminary biases and uncertainties associated with the modeling methods and data utilized in designing a nuclear reactor such as a beryllium-reflected, highly-enriched-uranium (HEU)-O2 fission surface power (FSP) system for space nuclear power. The conclusion of the previous study was that current capabilities could preclude the necessity of a cold critical test of the FSP; however, additional testing would reduce uncertainties in the beryllium and uranium cross-section data and the overall uncertainty in the computational models. A series of critical experiments using HEU metal were performed in the 1960s and 1970s in support of criticality safety operations at the Y-12 Plant. Of the hundreds of experiments, three were identified as fast-fission configurations reflected by beryllium metal. These experiments have been evaluated as benchmarks for inclusion in the International Handbook of Evaluated Criticality Safety Benchmark Experiments (IHECSBE). Further evaluation of the benchmark experiments was performed using the sensitivity and uncertainty analysis capabilities of SCALE 6. The data adjustment methods of SCALE 6 have been employed in the validation of an example FSP design model to reduce the uncertainty due to the beryllium cross section data.

  13. Identification of Integral Benchmarks for Nuclear Data Testing Using DICE (Database for the International Handbook of Evaluated Criticality Safety Benchmark Experiments)

    SciTech Connect

    J. Blair Briggs; A. Nichole Ellis; Yolanda Rugama; Nicolas Soppera; Manuel Bossant

    2011-08-01

    Typical users of the International Criticality Safety Benchmark Evaluation Project (ICSBEP) Handbook have specific criteria to which they desire to find matching experiments. Depending on the application, those criteria may consist of any combination of physical or chemical characteristics and/or various neutronic parameters. The ICSBEP Handbook contains a structured format helping the user narrow the search for experiments of interest. However, with nearly 4300 different experimental configurations and the ever-increasing addition of experimental data, the necessity to perform multiple-criteria searches has rendered these features insufficient. As a result, a relational database was created with information extracted from the ICSBEP Handbook. A users’ interface was designed by OECD and DOE to allow the interrogation of this database. The database and the corresponding users’ interface are referred to as DICE. DICE currently offers the capability to perform multiple-criteria searches that go beyond simple fuel, physical form, and spectra and include expanded general information, fuel form, moderator/coolant, neutron-absorbing material, cladding, reflector, separator, geometry, benchmark results, spectra, and neutron-balance parameters. DICE also includes the capability to display graphical representations of neutron spectra, detailed neutron balance, sensitivity coefficients for capture, fission, elastic scattering, inelastic scattering, nu-bar and mu-bar, as well as several other features.
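
    A multiple-criteria search of the kind DICE supports can be mimicked with any relational database. The toy schema below is entirely hypothetical (DICE's actual schema is not described here); the evaluation identifiers reuse ones mentioned elsewhere in this listing, and the keff values are illustrative.

      import sqlite3

      # A toy schema in the spirit of DICE; table and column names are assumed.
      con = sqlite3.connect(":memory:")
      con.executescript("""
          CREATE TABLE configuration (
              id INTEGER PRIMARY KEY, evaluation TEXT, fuel_form TEXT,
              moderator TEXT, reflector TEXT, spectrum TEXT, keff REAL, keff_unc REAL);
          INSERT INTO configuration VALUES
              (1, 'HEU-MET-FAST-051',   'metal',    'none',  'none',     'fast',    1.0000, 0.0025),
              (2, 'HEU-MET-FAST-071',   'metal',    'none',  'graphite', 'fast',    0.9998, 0.0028),
              (3, 'LEU-COMP-THERM-008', 'compound', 'water', 'water',    'thermal', 1.0007, 0.0016);
      """)

      # A multiple-criteria search: fast-spectrum metal systems with a reflector.
      rows = con.execute("""
          SELECT evaluation, reflector, keff, keff_unc
          FROM configuration
          WHERE spectrum = 'fast' AND fuel_form = 'metal' AND reflector != 'none'
      """).fetchall()
      for row in rows:
          print(row)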

  14. Benchmark Evaluation of the Medium-Power Reactor Experiment Program Critical Configurations

    SciTech Connect

    Margaret A. Marshall; John D. Bess

    2013-02-01

    A series of small, compact critical assembly (SCCA) experiments were performed in 1962-1965 at the Oak Ridge National Laboratory Critical Experiments Facility (ORCEF) for the Medium-Power Reactor Experiment (MPRE) program. The MPRE was a stainless-steel clad, highly enriched uranium (HEU)-O2 fuelled, BeO-reflected reactor designed to provide electrical power to space vehicles. Cooling and heat transfer were to be achieved by boiling potassium in the reactor core and passing vapor directly through a turbine. Graphite- and beryllium-reflected assemblies were constructed at ORCEF to verify the critical mass, power distribution, and other reactor physics measurements needed to validate reactor calculations and reactor physics methods. The experimental series was broken into three parts, with the third portion of the experiments representing the beryllium-reflected measurements. The latter experiments are of interest for validating current reactor design efforts for a fission surface power reactor. The entire series has been evaluated as acceptable benchmark experiments and submitted for publication in the International Handbook of Evaluated Criticality Safety Benchmark Experiments and in the International Handbook of Evaluated Reactor Physics Benchmark Experiments.

  15. Windows NT Workstation Performance Evaluation Based on Pro/E 2000i BENCHMARK

    SciTech Connect

    Davis, Sean M.

    2000-08-02

    A performance evaluation of several computers was necessary, so an evaluation program, or benchmark, was run on each computer to determine maximum possible performance. The program was used to test the Computer Aided Drafting (CAD) ability of each computer by monitoring the speed with which several functions were executed. The main objective of the benchmarking program was to record assembly loading times and image regeneration times and then compile a composite score that could be compared with the same tests on other computers. The three computers that were tested were the Compaq AP550, the SGI 230, and the Hewlett-Packard P750C. The Compaq and SGI computers each had a Pentium III 733 MHz processor, while the Hewlett-Packard had a Pentium III 750 MHz processor. The size and speed of Random Access Memory (RAM) in each computer varied, as did the type of graphics card. Each computer that was tested was using Windows NT 4.0 and Pro/ENGINEER 2000i CAD benchmark software provided by Standard Performance Evaluation Corporation (SPEC). The benchmarking program came with its own assembly, automatically loaded and ran tests on the assembly, then compiled the time each test took to complete. Due to the automation of the tests, any sort of user error affecting test scores was virtually eliminated. After all the tests were completed, scores were then compiled and compared. The Silicon Graphics 230 was by far the overall winner with a composite score of 8.57. The Compaq AP550 was next with a score of 5.19, while the Hewlett-Packard P750C performed dismally, achieving a score of 3.34. Several factors, including motherboard chipset, graphics card, and the size and speed of RAM, were involved in the differing scores of the three machines. Surprisingly, the Hewlett-Packard, which had the fastest processor, came back with the lowest score. The above factors most likely contributed to the poor performance of the Hewlett-Packard. Based on the results of the benchmark test…
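
    The abstract does not give the exact compositing formula, so the sketch below shows a common convention for such scores: the geometric mean of per-test speedups against a reference machine. The test names and times are hypothetical.

      import math

      # Hypothetical per-test times (seconds); reference times define one ratio per test.
      reference = {"assembly_load": 120.0, "regen_shaded": 45.0, "regen_wireframe": 30.0}
      measured  = {"assembly_load":  80.0, "regen_shaded": 25.0, "regen_wireframe": 21.0}

      # Composite = geometric mean of speedups versus the reference machine.
      ratios = [reference[t] / measured[t] for t in reference]
      composite = math.exp(sum(map(math.log, ratios)) / len(ratios))
      print(f"composite score: {composite:.2f}")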

  16. Benchmarking and Its Relevance to the Library and Information Sector. Interim Findings of "Best Practice Benchmarking in the Library and Information Sector," a British Library Research and Development Department Project.

    ERIC Educational Resources Information Center

    Kinnell, Margaret; Garrod, Penny

    This British Library Research and Development Department study assesses current activities and attitudes toward quality management in library and information services (LIS) in the academic sector as well as the commercial/industrial sector. Definitions and types of benchmarking are described, and the relevance of benchmarking to LIS is evaluated.…

  17. Gifted Science Project: Evaluation Report.

    ERIC Educational Resources Information Center

    Ott, Susan L.; Emanuel, Elizabeth, Ed.

    The document contains the evaluation report on the Gifted Science Project in Montgomery County, Maryland, a program to identify resources for students in grades 3-8 who are motivated in science. The Project's primary product is a Project Resource File (PRF) listing people, places, and published materials that can be used by individual students. An…

  18. Project Change Evaluation Research Brief.

    ERIC Educational Resources Information Center

    Leiderman, Sally A.; Dupree, David M.

    Project Change is a community-driven anti-racism initiative operating in four communities: Albuquerque, New Mexico; El Paso, Texas; Knoxville, Tennessee; and Valdosta, Georgia. The formative evaluation of Project Change began in 1994 when all of the sites were still in planning or early action phases. Findings from the summative evaluation will be…

  1. Team Projects and Peer Evaluations

    ERIC Educational Resources Information Center

    Doyle, John Kevin; Meeker, Ralph D.

    2008-01-01

    The authors assign semester- or quarter-long team-based projects in several Computer Science and Finance courses. This paper reports on our experience in designing, managing, and evaluating such projects. In particular, we discuss the effects of team size and of various peer evaluation schemes on team performance and student learning. We report…

  2. Evaluation of 3D surface scanners for skin documentation in forensic medicine: comparison of benchmark surfaces

    PubMed Central

    Schweitzer, Wolf; Häusler, Martin; Bär, Walter; Schaepman, Michael

    2007-01-01

    Background Two 3D surface scanners using collimated light patterns were evaluated in a new application domain: to document details of surfaces similar to the ones encountered in forensic skin pathology. Since these scanners have not been specifically designed for forensic skin pathology, we tested their performance under practical constraints in an application domain that is to be considered new. Methods Two solid benchmark objects containing relevant features were used to compare two 3D surface scanners: the ATOS-II (GOM, Germany) and the QTSculptor (Polygon Technology, Germany). Both scanners were used to capture and process data within a limited amount of time, whereas point-and-click editing was not allowed. We conducted (a) a qualitative appreciation of setup, handling and resulting 3D data, (b) an experimental subjective evaluation of matching 3D data versus photos of benchmark object regions by a panel of 12 judges who were forced to state their preference for either of the two scanners, and (c) a quantitative characterization of both 3D data sets comparing 220 single surface areas with the real benchmark objects in order to determine the recognition rate's possible dependency on feature size and geometry. Results The QTSculptor generated significantly better 3D data in both qualitative tests (a, b) that we had conducted, possibly because of a higher lateral point resolution; statistical evaluation (c) showed that the QTSculptor-generated data allowed the discrimination of features as small as 0.3 mm, whereas ATOS-II-generated data allowed for discrimination of features sized not smaller than 1.2 mm. Conclusion It is particularly important to conduct specific benchmark tests if devices are brought into new application domains they were not specifically designed for; using a realistic test featuring forensic skin pathology features, QTSculptor-generated data quantitatively exceeded the manufacturer's specifications, whereas ATOS-II-generated data was within…

  3. Project financial evaluation

    SciTech Connect

    None, None

    2009-01-18

    The project financial section of the Renewable Energy Technology Characterizations describes structures and models to support the technical and economic status of emerging renewable energy options for electricity supply.

  4. MPI performance evaluation and characterization using a compact application benchmark code

    SciTech Connect

    Worley, P.H.

    1996-06-01

    In this paper the parallel benchmark code PSTSWM is used to evaluate the performance of the vendor-supplied implementations of the MPI message-passing standard on the Intel Paragon, IBM SP2, and Cray Research T3D. This study is meant to complement the performance evaluation of individual MPI commands by providing information on the practical significance of MPI performance on the execution of a communication-intensive application code. In particular, three performance questions are addressed: how important is the communication protocol in determining performance when using MPI; how does MPI performance compare with that of the native communication library; and how efficient are the collective communication routines?
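
    A minimal timing of a communication-intensive exchange, in the spirit of (but far simpler than) PSTSWM, can be written with mpi4py. This is a generic paired non-blocking exchange, not the paper's benchmark; run it with, e.g., mpiexec -n 2 python exchange.py.

      from mpi4py import MPI
      import numpy as np

      comm = MPI.COMM_WORLD
      rank, size = comm.Get_rank(), comm.Get_size()
      peer = rank ^ 1                    # pair ranks 0-1, 2-3, ...
      buf_out = np.ones(1 << 20)         # ~8 MB message
      buf_in = np.empty_like(buf_out)

      if peer < size:
          # Non-blocking exchange (deadlock-free regardless of buffering protocol).
          t0 = MPI.Wtime()
          reqs = [comm.Isend(buf_out, dest=peer), comm.Irecv(buf_in, source=peer)]
          MPI.Request.Waitall(reqs)
          t1 = MPI.Wtime()
          print(f"rank {rank}: exchange took {t1 - t0:.6f} s")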

  5. Performance evaluation of tile-based Fisher Ratio analysis using a benchmark yeast metabolome dataset.

    PubMed

    Watson, Nathanial E; Parsons, Brendon A; Synovec, Robert E

    2016-08-12

    Performance of tile-based Fisher Ratio (F-ratio) data analysis, recently developed for discovery-based studies using comprehensive two-dimensional gas chromatography coupled with time-of-flight mass spectrometry (GC×GC-TOFMS), is evaluated with a metabolomics dataset that had been previously analyzed in great detail using a brute-force approach. The previously analyzed data (referred to herein as the benchmark dataset) were intracellular extracts from Saccharomyces cerevisiae (yeast), either metabolizing glucose (repressed) or ethanol (derepressed), which define the two classes in the discovery-based analysis to find metabolites that are statistically different in concentration between the two classes. Beneficially, this previously analyzed dataset provides a concrete means to validate the tile-based F-ratio software. Herein, we demonstrate and validate the significant benefits of applying tile-based F-ratio analysis. The yeast metabolomics data are analyzed more rapidly, in about one week versus one year for the prior studies with this dataset. Furthermore, a null distribution analysis is implemented to statistically determine an adequate F-ratio threshold, whereby the variables with F-ratio values below the threshold can be ignored as not class distinguishing, which provides the analyst with confidence when analyzing the hit table. Forty-six of the fifty-four benchmarked changing metabolites were discovered by the new methodology, while all but one of the nineteen benchmarked false-positive metabolites previously identified were consistently excluded. PMID:27393630
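
    The F-ratio and a permutation-style null-distribution threshold can be illustrated per variable. This is a generic two-class sketch, not the cited tile-based software; the data are simulated and the 95th-percentile cutoff is an assumed choice.

      import numpy as np

      def f_ratio(a, b):
          """Between-class over within-class variance for one variable."""
          grand = np.mean(np.concatenate([a, b]))
          between = len(a) * (a.mean() - grand) ** 2 + len(b) * (b.mean() - grand) ** 2
          within = ((a - a.mean()) ** 2).sum() + ((b - b.mean()) ** 2).sum()
          dof_b, dof_w = 1, len(a) + len(b) - 2
          return (between / dof_b) / (within / dof_w)

      rng = np.random.default_rng(0)
      repressed = rng.normal(10.0, 1.0, size=6)      # class 1 (e.g., glucose)
      derepressed = rng.normal(12.0, 1.0, size=6)    # class 2 (e.g., ethanol)

      # Null distribution: recompute the F-ratio under random class relabelings.
      pooled = np.concatenate([repressed, derepressed])
      null = []
      for _ in range(1000):
          rng.shuffle(pooled)
          null.append(f_ratio(pooled[:6], pooled[6:]))
      threshold = np.quantile(null, 0.95)

      print(f"F = {f_ratio(repressed, derepressed):.1f}, null 95% threshold = {threshold:.1f}")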

  6. The DLESE Evaluation Toolkit Project

    NASA Astrophysics Data System (ADS)

    Buhr, S. M.; Barker, L. J.; Marlino, M.

    2002-12-01

    The Evaluation Toolkit and Community project is a new Digital Library for Earth System Education (DLESE) collection designed to raise awareness of project evaluation within the geoscience education community, and to enable principal investigators, teachers, and evaluators to implement project evaluation more readily. This new resource is grounded in the needs of geoscience educators, and will provide a virtual home for a geoscience education evaluation community. The goals of the project are to 1) provide a robust collection of evaluation resources useful for Earth systems educators, 2) establish a forum and community for evaluation dialogue within DLESE, and 3) disseminate the resources through the DLESE infrastructure and through professional society workshops and proceedings. Collaboration and expertise in education, geoscience and evaluation are necessary if we are to conduct the best possible geoscience education. The Toolkit allows users to engage in evaluation at whichever level best suits their needs, get more evaluation professional development if desired, and access the expertise of other segments of the community. To date, a test web site has been built and populated, initial community feedback from the DLESE and broader community is being garnered, and we have begun to heighten awareness of geoscience education evaluation within our community. The web site contains features that allow users to access professional development about evaluation, search and find evaluation resources, submit resources, find or offer evaluation services, sign up for upcoming workshops, take the user survey, and submit calendar items. The evaluation resource matrix currently contains resources that have met our initial review. The resources are currently organized by type; they will become searchable on multiple dimensions of project type, audience, objectives and evaluation resource type as efforts to develop a collection-specific search engine mature. The peer review…

  7. Project Proposals Evaluation

    NASA Astrophysics Data System (ADS)

    Encheva, Sylvia; Tumin, Sharil

    2009-08-01

    Collaboration among various firms has traditionally taken the form of single-project joint ventures for bonding purposes. Even though the work performed is usually beneficial to some extent to all participants, the type of collaboration option to be adopted is strongly influenced by the overall purposes and goals that can be achieved. In order to facilitate the choice of the collaboration option best suited to a firm's needs, a computer-based model is proposed.

  8. Inservice Evaluation Project.

    ERIC Educational Resources Information Center

    Samuels, Marilyn; Price, M. Anne

    The report details information on a study of effective inservice programs in the area of learning disabilities (LD) in Calgary, Alberta, Canada. Section 1 describes the content of the 28 Learning Centre inservice programs which were attended by 739 educators. Compilation of participant evaluations revealed a diverse list of recommendations for…

  9. Schoolwide Project Evaluations: Workshop Guide.

    ERIC Educational Resources Information Center

    RMC Research Corp., Denver, CO.

    This publication is a guide with the materials necessary for leading a workshop session on Chapter 1 schoolwide project evaluations aimed at meeting federal accountability requirements. As the packet points out, elementary school, middle school, and secondary school projects differ from the traditional Chapter 1 delivery models and as a…

  10. GEAR UP Aspirations Project Evaluation

    ERIC Educational Resources Information Center

    Trimble, Brad A.

    2013-01-01

    The purpose of this study was to conduct a formative evaluation of the first two years of the Gaining Early Awareness and Readiness for Undergraduate Programs (GEAR UP) Aspirations Project (Aspirations) using a Context, Input, Process, and Product (CIPP) model so as to gain an in-depth understanding of the project during the middle school…

  11. NASA PC software evaluation project

    NASA Technical Reports Server (NTRS)

    Dominick, Wayne D. (Editor); Kuan, Julie C.

    1986-01-01

    The USL NASA PC software evaluation project is intended to provide a structured framework for facilitating the development of quality NASA PC software products. The project will assist NASA PC development staff to understand the characteristics and functions of NASA PC software products. Based on the results of the project teams' evaluations and recommendations, users can judge the reliability, usability, acceptability, maintainability and customizability of all the PC software products. The objective here is to provide initial, high-level specifications and guidelines for NASA PC software evaluation. The primary tasks to be addressed in this project are as follows: to gain a strong understanding of what software evaluation entails and how to organize a structured software evaluation process; to define a structured methodology for conducting the software evaluation process; to develop a set of PC software evaluation criteria and evaluation rating scales; and to conduct PC software evaluations in accordance with the identified methodology. The software categories covered include Communication Packages, Network System Software, Graphics Support Software, Environment Management Software, and General Utilities. This report represents one of the 72 attachment reports to the University of Southwestern Louisiana's Final Report on NASA Grant NGT-19-010-900. Accordingly, appropriate care should be taken in using this report out of context of the full Final Report.

  12. Conceptual Soundness, Metric Development, Benchmarking, and Targeting for PATH Subprogram Evaluation

    SciTech Connect

    Mosey, G.; Doris, E.; Coggeshall, C.; Antes, M.; Ruch, J.; Mortensen, J.

    2009-01-01

    The objective of this study is to evaluate the conceptual soundness of the U.S. Department of Housing and Urban Development (HUD) Partnership for Advancing Technology in Housing (PATH) program's revised goals and establish and apply a framework to identify and recommend metrics that are the most useful for measuring PATH's progress. This report provides an evaluative review of PATH's revised goals, outlines a structured method for identifying and selecting metrics, proposes metrics and benchmarks for a sampling of individual PATH programs, and discusses other metrics that potentially could be developed that may add value to the evaluation process. The framework and individual program metrics can be used for ongoing management improvement efforts and to inform broader program-level metrics for government reporting requirements.

  13. The NAS parallel benchmarks

    NASA Technical Reports Server (NTRS)

    Bailey, David (Editor); Barton, John (Editor); Lasinski, Thomas (Editor); Simon, Horst (Editor)

    1993-01-01

    A new set of benchmarks was developed for the performance evaluation of highly parallel supercomputers. These benchmarks consist of a set of kernels, the 'Parallel Kernels,' and a simulated application benchmark. Together they mimic the computation and data movement characteristics of large scale computational fluid dynamics (CFD) applications. The principal distinguishing feature of these benchmarks is their 'pencil and paper' specification - all details of these benchmarks are specified only algorithmically. In this way many of the difficulties associated with conventional benchmarking approaches on highly parallel systems are avoided.

  14. DSM Accuracy Evaluation for the ISPRS Commission I Image Matching Benchmark

    NASA Astrophysics Data System (ADS)

    Kuschk, G.; d'Angelo, P.; Qin, R.; Poli, D.; Reinartz, P.; Cremers, D.

    2014-11-01

    To improve the quality of algorithms for automatic generation of Digital Surface Models (DSM) from optical stereo data in the remote sensing community, the Working Group 4 of Commission I: Geometric and Radiometric Modeling of Optical Airborne and Spaceborne Sensors provides on its website (http://www2.isprs.org/commissions/comm1/wg4/benchmark-test.html) a benchmark dataset for measuring and comparing the accuracy of dense stereo algorithms. The data provided consists of several optical spaceborne stereo images together with ground truth data produced by aerial laser scanning. In this paper we present our latest work on this benchmark, based upon previous work. As a first point, we noticed that providing the abovementioned test data as geo-referenced satellite images together with their corresponding RPC camera model seems too high a burden for being used widely by other researchers, as a considerable effort still has to be made to integrate the test data's camera model into the researcher's local stereo reconstruction framework. To bypass this problem, we now also provide additional rectified input images, which enable stereo algorithms to work out of the box without the need for implementing special camera models. Care was taken to minimize the errors resulting from the rectification transformation and the involved image resampling. We further improved the robustness of the evaluation method against errors in the orientation of the satellite images (with respect to the LiDAR ground truth). To this end we implemented a point cloud alignment of the DSM and the LiDAR reference points using an Iterative Closest Point (ICP) algorithm and an estimation of the best fitting transformation. This way, we concentrate on the errors from the stereo reconstruction and make sure that the result is not biased by errors in the absolute orientation of the satellite images. The evaluation of…
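
    The point-cloud alignment step described, ICP with a best-fitting rigid transform, can be sketched with a standard point-to-point formulation (Kabsch/SVD). This is a textbook version for illustration, not the authors' implementation.

      import numpy as np
      from scipy.spatial import cKDTree

      def best_fit_transform(src, dst):
          """Least-squares rigid transform (R, t) mapping src onto dst (Kabsch)."""
          cs, cd = src.mean(0), dst.mean(0)
          H = (src - cs).T @ (dst - cd)
          U, _, Vt = np.linalg.svd(H)
          R = Vt.T @ U.T
          if np.linalg.det(R) < 0:        # avoid reflections
              Vt[-1] *= -1
              R = Vt.T @ U.T
          return R, cd - R @ cs

      def icp(src, dst, iters=20):
          """Align point cloud src (n,3) to reference dst (m,3)."""
          tree = cKDTree(dst)
          cur = src.copy()
          for _ in range(iters):
              _, idx = tree.query(cur)    # closest reference point per DSM point
              R, t = best_fit_transform(cur, dst[idx])
              cur = cur @ R.T + t
          return cur

      # Example: recover a small known shift between two copies of a cloud.
      rng = np.random.default_rng(0)
      dst = rng.uniform(size=(500, 3))
      aligned = icp(dst + np.array([0.05, -0.02, 0.01]), dst)
      print(f"mean residual after ICP: {np.abs(aligned - dst).mean():.4f}")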

  15. GROWTH OF THE INTERNATIONAL CRITICALITY SAFETY AND REACTOR PHYSICS EXPERIMENT EVALUATION PROJECTS

    SciTech Connect

    J. Blair Briggs; John D. Bess; Jim Gulliford

    2011-09-01

    Since the International Conference on Nuclear Criticality Safety (ICNC) 2007, the International Criticality Safety Benchmark Evaluation Project (ICSBEP) and the International Reactor Physics Experiment Evaluation Project (IRPhEP) have continued to expand their efforts and broaden their scope. Eighteen countries participated in the ICSBEP in 2007. Now, there are 20, with recent contributions from Sweden and Argentina. The IRPhEP has also expanded from eight contributing countries in 2007 to 16 in 2011. Since ICNC 2007, the contents of the 'International Handbook of Evaluated Criticality Safety Benchmark Experiments' [1] have increased from 442 evaluations (38000 pages), containing benchmark specifications for 3955 critical or subcritical configurations, to 516 evaluations (nearly 55000 pages), containing benchmark specifications for 4405 critical or subcritical configurations in the 2010 Edition of the ICSBEP Handbook. The contents of the Handbook have also increased from 21 to 24 criticality-alarm-placement/shielding configurations with multiple dose points for each, and from 20 to 200 configurations categorized as fundamental physics measurements relevant to criticality safety applications. Approximately 25 new evaluations and 150 additional configurations are expected to be added to the 2011 edition of the Handbook. Since ICNC 2007, the contents of the 'International Handbook of Evaluated Reactor Physics Benchmark Experiments' [2] have increased from 16 different experimental series that were performed at 12 different reactor facilities to 53 experimental series that were performed at 30 different reactor facilities in the 2011 edition of the Handbook. Considerable effort has also been made to improve the functionality of the searchable database, DICE (Database for the International Criticality Benchmark Evaluation Project), and verify the accuracy of the data contained therein. DICE will be discussed in separate papers at ICNC 2011. The status of the ICSBEP and the IRPh

  16. ComprehensiveBench: a Benchmark for the Extensive Evaluation of Global Scheduling Algorithms

    NASA Astrophysics Data System (ADS)

    Pilla, Laércio L.; Bozzetti, Tiago C.; Castro, Márcio; Navaux, Philippe O. A.; Méhaut, Jean-François

    2015-10-01

    Parallel applications that present tasks with imbalanced loads or complex communication behavior usually do not exploit the underlying resources of parallel platforms to their full potential. In order to mitigate this issue, global scheduling algorithms are employed. As finding the optimal task distribution is an NP-hard problem, identifying the most suitable algorithm for a specific scenario and comparing algorithms are not trivial tasks. In this context, this paper presents ComprehensiveBench, a benchmark for global scheduling algorithms that enables the variation of a vast range of parameters that affect performance. ComprehensiveBench can be used to assist in the development and evaluation of new scheduling algorithms, to help choose a specific algorithm for an arbitrary application, to emulate other applications, and to enable statistical tests. We illustrate its use in this paper with an evaluation of Charm++ periodic load balancers that stresses their characteristics.
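
    Although the abstract fixes no particular algorithm, the flavor of a global scheduler can be sketched with a greedy longest-processing-time mapping (a generic baseline for illustration, not one of the Charm++ balancers evaluated):

        import heapq

        def lpt_schedule(task_loads, n_procs):
            """Greedy longest-processing-time-first mapping of task
            loads to processors; a simple baseline of the kind such
            benchmarks compare."""
            heap = [(0.0, p) for p in range(n_procs)]  # (load, proc id)
            heapq.heapify(heap)
            mapping = {}
            for task, load in sorted(enumerate(task_loads),
                                     key=lambda kv: kv[1], reverse=True):
                proc_load, proc = heapq.heappop(heap)  # least-loaded proc
                mapping[task] = proc
                heapq.heappush(heap, (proc_load + load, proc))
            return mapping

    A benchmark like this one then varies task counts, load distributions, and communication patterns, and reports imbalance (maximum over mean processor load) and end-to-end runtime for each scheduler.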

  17. 2D and 3D turbulent reconnection as a benchmark within the SWIFF project

    NASA Astrophysics Data System (ADS)

    Lapenta, G.; Markidis, S.; Bettarini, L.

    2012-04-01

    The goals of SWIFF (swiff.eu/) are: * Zero in on the physics of all aspects of space weather and design mathematical models that can address them. * Develop specific computational models that are especially suited to handling the great complexity of space weather events, where the range of time evolutions and of spatial variations is so much more challenging than in regular meteorological models. * Develop the software needed to implement such computational models on the modern supercomputers available now in Europe. Within SWIFF, a rigorous benchmarking activity is taking place and will be reported here. A full description is available at: swiff.eu/wiki/index.php?title=Main_Page#Benchmark_Activities

  18. SU-E-J-30: Benchmark Image-Based TCP Calculation for Evaluation of PTV Margins for Lung SBRT Patients

    SciTech Connect

    Li, M; Chetty, I; Zhong, H

    2014-06-01

    Purpose: Tumor control probability (TCP) calculated with accumulated radiation doses may help design appropriate treatment margins. Image registration errors, however, may compromise the calculated TCP. The purpose of this study is to develop benchmark CT images to quantify registration-induced errors in the accumulated doses and their corresponding TCP. Methods: 4DCT images were registered from end-inhale (EI) to end-exhale (EE) using a “demons” algorithm. The demons deformation vector fields (DVFs) were corrected by a finite element method (FEM) model to obtain realistic deformation fields. The FEM DVFs were used to warp the EI images to create the FEM-simulated images. The two images combined with the FEM DVF formed a benchmark model. Maximum intensity projection (MIP) images, created from the EI and simulated images, were used to develop IMRT plans. Two plans, with 3 mm and 5 mm margins, were developed for each patient. With these plans, radiation doses were recalculated on the simulated images and warped back to the EI images using the FEM DVFs to obtain the accumulated doses. The Elastix software was used to register the FEM-simulated images to the EI images. TCPs calculated with the Elastix-accumulated doses were compared with those generated by the FEM to quantify the TCP error of the Elastix registrations. Results: For six lung patients, the mean Elastix registration error ranged from 0.93 to 1.98 mm. Their relative dose errors in the PTV were between 0.28% and 6.8% for 3 mm margin plans, and between 0.29% and 6.3% for 5 mm margin plans. As the PTV margin was reduced from 5 to 3 mm, the mean TCP error of the Elastix-reconstructed doses increased from 2.0% to 2.9%, and the mean normal tissue complication probability (NTCP) error decreased from 1.2% to 1.1%. Conclusion: Patient-specific benchmark images can be used to evaluate the impact of registration errors on the computed TCPs, and may help select appropriate PTV margins for lung SBRT patients.
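
    The abstract does not state which TCP model was used; a common choice (an assumption here, with illustrative parameters) is the Poisson model with linear-quadratic cell kill, applied voxel-wise to the accumulated dose:

        import numpy as np

        def tcp_poisson(dose, n0_per_voxel, alpha=0.35, beta=0.035, n_frac=3):
            """Poisson TCP over voxel doses (Gy). Model form and LQ
            parameters are illustrative assumptions, not values from
            the abstract."""
            d = dose / n_frac                                  # dose per fraction
            sf = np.exp(-(alpha + beta * d) * dose)            # LQ survival
            return float(np.prod(np.exp(-n0_per_voxel * sf)))  # product over voxels

    Under any such model, a registration error that misplaces accumulated dose in even a few voxels perturbs the survival term there and hence the TCP product, which is exactly the error this benchmark quantifies.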

  19. ImQual: a web-service dedicated to image quality evaluation and metrics benchmark

    NASA Astrophysics Data System (ADS)

    Nauge, Michael; Larabi, Mohamed-Chaker; Fernandez-Maloigne, Christine

    2011-01-01

    Quality assessment is becoming an important issue in the framework of image and video processing. Images are generally intended to be viewed by human observers, and thus the consideration of visual perception is an intrinsic aspect of the effective assessment of image quality. This observation has been made for different application domains such as printing, compression, transmission, and so on. Recently, hundreds of research papers have proposed objective quality metrics dedicated to several image and video applications. With this abundance of quality tools, it is more important than ever to have a set of rules/methods for assessing the efficiency of a given metric. In this direction, technical groups such as VQEG (Video Quality Experts Group) and JPEG AIC (Advanced Image Coding) have focused their interest on the definition of test plans to measure the impact of a metric. Following this wave in the image and video community, we propose in this paper a web service or web application dedicated to the benchmarking of quality metrics for image compression, open to all possible extensions. This application is intended to be the reference tool for the JPEG committee in order to ease the evaluation of new compression technologies. It is also seen as a global help for our community, saving researchers time when evaluating their algorithms for watermarking, compression, enhancement, and so on. As an illustration of the web application, we propose a benchmark of many well-known metrics on several image databases to provide a small overview of the possible use.
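
    To make the benchmarking idea concrete: a full-reference metric such as PSNR produces one score per distorted image, and a metric benchmark typically judges the metric by how well its scores rank images against subjective mean opinion scores (MOS). A minimal sketch (generic, not ImQual's internals, which the abstract does not describe):

        import numpy as np
        from scipy.stats import spearmanr

        def psnr(ref, test, peak=255.0):
            """Peak signal-to-noise ratio in dB between two images."""
            diff = ref.astype(np.float64) - test.astype(np.float64)
            mse = np.mean(diff ** 2)
            return float("inf") if mse == 0 else 10.0 * np.log10(peak ** 2 / mse)

        def benchmark_metric(metric_scores, mos):
            """Rank-order correlation between metric scores and MOS
            over a database of distorted images."""
            rho, _ = spearmanr(metric_scores, mos)
            return rho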

  20. Benchmark Data for Evaluation of Aeroacoustic Propagation Codes With Grazing Flow

    NASA Technical Reports Server (NTRS)

    Jones, Michael G.; Watson, Willie R.; Parrott, Tony L.

    2005-01-01

    Increased understanding of the effects of acoustic treatment on the propagation of sound through commercial aircraft engine nacelles is a requirement for more efficient liner design. To this end, one of NASA's goals is to further the development of duct propagation and impedance eduction codes. A number of these codes have been developed over the last three decades. These codes are typically divided into two categories: (1) codes that use the measured complex acoustic pressure field to educe the acoustic impedance of treatment that is positioned along the wall of the duct, and (2) codes that use the acoustic impedance of the treatment as input and compute the sound field throughout the duct. Clearly, the value of these codes is dependent upon the quality of the data used for their validation. Over the past two decades, data acquired in the NASA Langley Research Center Grazing Incidence Tube have been used by a number of researchers for comparison with their propagation codes. Many of these comparisons have been based upon Grazing Incidence Tube tests that were conducted to study specific liner technology components, and were incomplete for general propagation code validation. Thus, the objective of the current investigation is to provide a quality data set that can be used as a benchmark for evaluation of duct propagation and impedance eduction codes. In order to achieve this objective, two parallel efforts have been undertaken. The first of these is the development of an enhanced impedance eduction code that uses data acquired in the Grazing Incidence Tube. This enhancement is intended to place the benchmark data on as firm a foundation as possible. The second key effort is the acquisition of a comprehensive set of data selected to allow propagation code evaluations over a range of test conditions.

  1. A Quantitative Methodology for Determining the Critical Benchmarks for Project 2061 Strand Maps

    ERIC Educational Resources Information Center

    Kuhn, G.

    2008-01-01

    The American Association for the Advancement of Science (AAAS) was tasked with identifying the key science concepts for science literacy in K-12 students in America (AAAS, 1990, 1993). The AAAS Atlas of Science Literacy (2001) has organized roughly half of these science concepts or benchmarks into fifty flow charts. Each flow chart or strand map…

  2. Evaluation of the Bangalore Project.

    ERIC Educational Resources Information Center

    Beretta, Alan; Davies, Alan

    1985-01-01

    Follows up an article by Brumfit on the Bangalore/Madras Communicational Teaching Project (CTP). Discusses the framework, tests, and results of a 1984 evaluation supporting the claim that grammar construction can take place through a focus on meaning alone. (SED)

  3. Block Transfer Agreement Evaluation Project

    ERIC Educational Resources Information Center

    Bastedo, Helena

    2010-01-01

    The objective of this project is to evaluate for the British Columbia Council on Admissions and Transfer (BCCAT) the effectiveness of block transfer agreements (BTAs) in the BC Transfer System and recommend steps to be taken to improve their effectiveness. Findings of this study revealed that institutions want to expand block credit transfer;…

  4. Research Reactor Benchmarks

    SciTech Connect

    Ravnik, Matjaz; Jeraj, Robert

    2003-09-15

    A criticality benchmark experiment performed at the Jozef Stefan Institute TRIGA Mark II research reactor is described. This experiment and its evaluation are given as examples of benchmark experiments at research reactors. For this reason the differences and possible problems compared to other benchmark experiments are particularly emphasized. General guidelines for performing criticality benchmarks in research reactors are given. The criticality benchmark experiment was performed in a normal operating reactor core using commercially available fresh 20% enriched fuel elements containing 12 wt% uranium in uranium-zirconium hydride fuel material. Experimental conditions to minimize experimental errors and to enhance computer modeling accuracy are described. Uncertainties in multiplication factor due to fuel composition and geometry data are analyzed by sensitivity analysis. The simplifications in the benchmark model compared to the actual geometry are evaluated. Sample benchmark calculations with the MCNP and KENO Monte Carlo codes are given.

  5. Evaluation for 4S core nuclear design method through integration of benchmark data

    SciTech Connect

    Nagata, A.; Tsuboi, Y.; Moriki, Y.; Kawashima, M.

    2012-07-01

    The 4S is a sodium-cooled small fast reactor which is reflector-controlled for operation over a core lifetime of about 30 years. The nuclear design method has been selected to treat neutron leakage with high accuracy. It consists of a continuous-energy Monte Carlo code, discrete ordinate transport codes, and JENDL-3.3. These two types of neutronic analysis codes are used for the design in a complementary manner. The accuracy of the codes has been evaluated by analysis of benchmark critical experiments and experimental reactor data. The measured data used for the evaluation are critical experiment data from FCA XXIII (a physics mockup assembly of the 4S core), FCA XVI, FCA XIX, and ZPR, and data from the experimental reactor JOYO MK-1. The evaluated characteristics are criticality, reflector reactivity worth, power distribution, absorber reactivity worth, and sodium void worth. A multi-component bias method was applied, especially to improve the accuracy of the sodium void reactivity worth. As a result, it has been confirmed that the 4S core nuclear design method provides good accuracy, and typical bias factors and their uncertainties are determined. (authors)

  6. Analysis of a benchmark suite to evaluate mixed numeric and symbolic processing

    NASA Technical Reports Server (NTRS)

    Ragharan, Bharathi; Galant, David

    1992-01-01

    The suite of programs that formed the benchmark for a proposed advanced computer is described and analyzed. The features of the processor and its operating system that are tested by the benchmark are discussed. The computer codes and the supporting data for the analysis are given as appendices.

  7. BENCHMARK DOSES FOR CHEMICAL MIXTURES: EVALUATION OF A MIXTURE OF 18 PHAHS.

    EPA Science Inventory

    Benchmark doses (BMDs), defined as doses of a substance that are expected to result in a pre-specified level of "benchmark" response (BMR), have been used for quantifying the risk associated with exposure to environmental hazards. The lower confidence limit of the BMD is used as...
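
    The record truncates before defining the BMD mathematically. In the standard extra-risk formulation (a general convention, not specific to this study), the BMD is the dose at which the extra risk over background equals the benchmark response, and the BMDL is its lower confidence limit:

        % P(d): fitted dose-response model; BMR: pre-specified response level
        \frac{P(\mathrm{BMD}) - P(0)}{1 - P(0)} = \mathrm{BMR}
        % e.g. for the quantal model P(d) = p_0 + (1 - p_0)(1 - e^{-\beta d})
        % and BMR = 0.10, this gives BMD = -\ln(1 - 0.10)/\beta.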

  8. 5 CFR 470.317 - Project evaluation.

    Code of Federal Regulations, 2010 CFR

    2010-01-01

    ... 5 Administrative Personnel 1 2010-01-01 2010-01-01 false Project evaluation. 470.317 Section 470... MANAGEMENT RESEARCH PROGRAMS AND DEMONSTRATIONS PROJECTS Regulatory Requirements Pertaining to Demonstration Projects § 470.317 Project evaluation. (a) Compliance evaluation. OPM will review the operation of...

  9. Evaluation of anode (electro)catalytic materials for the direct borohydride fuel cell: Methods and benchmarks

    NASA Astrophysics Data System (ADS)

    Olu, Pierre-Yves; Job, Nathalie; Chatenet, Marian

    2016-09-01

    In this paper, different methods are discussed for the evaluation of the potential of a given catalyst, in view of an application as a direct borohydride fuel cell DBFC anode material. Characterizations results in DBFC configuration are notably analyzed at the light of important experimental variables which influence the performances of the DBFC. However, in many practical DBFC-oriented studies, these various experimental variables prevent one to isolate the influence of the anode catalyst on the cell performances. Thus, the electrochemical three-electrode cell is a widely-employed and useful tool to isolate the DBFC anode catalyst and to investigate its electrocatalytic activity towards the borohydride oxidation reaction (BOR) in the absence of other limitations. This article reviews selected results for different types of catalysts in electrochemical cell containing a sodium borohydride alkaline electrolyte. In particular, propositions of common experimental conditions and benchmarks are given for practical evaluation of the electrocatalytic activity towards the BOR in three-electrode cell configuration. The major issue of gaseous hydrogen generation and escape upon DBFC operation is also addressed through a comprehensive review of various results depending on the anode composition. At last, preliminary concerns are raised about the stability of potential anode catalysts upon DBFC operation.

  10. Integral Benchmark Data for Nuclear Data Testing Through the ICSBEP & IRPhEP

    SciTech Connect

    J. Blair Briggs; John D. Bess; Jim Gulliford; Ian Hill

    2013-10-01

    The status of the International Criticality Safety Benchmark Evaluation Project (ICSBEP) and International Reactor Physics Experiment Evaluation Project (IRPhEP) was last discussed directly with the nuclear data community at ND2007. Since ND2007, integral benchmark data that are available for nuclear data testing have increased significantly. The status of the ICSBEP and the IRPhEP is discussed and selected benchmark configurations that have been added to the ICSBEP and IRPhEP Handbooks since ND2007 are highlighted.

  11. An evaluation, comparison, and accurate benchmarking of several publicly available MS/MS search algorithms: Sensitivity and Specificity analysis.

    SciTech Connect

    Kapp, Eugene; Schutz, Frederick; Connolly, Lisa M.; Chakel, John A.; Meza, Jose E.; Miller, Christine A.; Fenyo, David; Eng, Jimmy K.; Adkins, Joshua N.; Omenn, Gilbert; Simpson, Richard

    2005-08-01

    MS/MS and associated database search algorithms are essential proteomic tools for identifying peptides. Due to their widespread use, it is now time to perform a systematic analysis of the various algorithms currently in use. Using blood specimens from the HUPO Plasma Proteome Project, we have evaluated five search algorithms with respect to their sensitivity and specificity, and have also accurately benchmarked them based on specified false-positive (FP) rates. Spectrum Mill and SEQUEST performed well in terms of sensitivity, but were inferior to MASCOT, X-Tandem, and Sonar in terms of specificity. Overall, MASCOT, a probabilistic search algorithm, correctly identified most peptides based on a specified FP rate. The rescoring algorithm, Peptide Prophet, enhanced the overall performance of the SEQUEST algorithm, as well as provided predictable FP error rates. Ideally, score thresholds should be calculated for each peptide spectrum or, minimally, derived from a reversed-sequence search, as demonstrated in this study based on a validated data set. The availability of open-source search algorithms, such as X-Tandem, makes it feasible to further improve the validation process (manual or automatic) on the basis of 'consensus scoring', i.e., the use of multiple (at least two) search algorithms to reduce the number of FPs.
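
    Two devices from the abstract are easy to make concrete: estimating the false-positive rate from a reversed-sequence (decoy) search, and consensus scoring across engines. A minimal sketch (peptide identifiers as plain strings; not the study's actual pipeline):

        def fdr_from_decoys(target_hits, decoy_hits):
            """Each reversed-sequence hit implies roughly one false
            target hit, giving a simple false-positive rate estimate."""
            return len(decoy_hits) / max(len(target_hits), 1)

        def consensus(ids_engine_a, ids_engine_b):
            """Keep peptides identified by at least two search engines,
            trading some sensitivity for higher specificity."""
            return set(ids_engine_a) & set(ids_engine_b)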

  12. Public automated web-based evaluation service for watermarking schemes: StirMark benchmark

    NASA Astrophysics Data System (ADS)

    Petitcolas, Fabien A. P.; Steinebach, Martin; Raynal, Frederic; Dittmann, Jana; Fontaine, Caroline; Fates, Nazim

    2001-08-01

    One of the main problems darkening the future of digital watermarking technologies is the lack of detailed evaluation of existing marking schemes. This lack of benchmarking of current algorithms is blatant; it confuses rights holders as well as software and hardware manufacturers and prevents them from using the solution appropriate to their needs. Indeed, basing long-lived protection schemes on badly tested watermarking technology does not make sense. In this paper we present the architecture of a public automated evaluation service we have developed for still images, sound, and video. We detail and justify our choice of evaluation profiles, that is, the series of tests applied to different types of watermarking schemes. These evaluation profiles allow us to measure the reliability of a marking scheme at different levels, from low to very high. Besides the known StirMark transformations, we also detail new tests that will be included in this platform. One of them is intended to measure the real size of the key space. Indeed, if one is not careful, two different watermarking keys may produce interfering watermarks, and as a consequence the actual space of keys is much smaller than it appears. Another set of tests is related to audio data and addresses the usual equalisation and normalisation, but also time stretching and pitch shifting. Finally, we propose a set of tests for fingerprinting applications. This includes: averaging of copies with different fingerprints, random exchange of parts between different copies, and comparison between copies with selection of the most/least frequently used position differences.
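
    The fingerprint-averaging test can be pictured with a toy spread-spectrum model (an illustrative simulation, not part of the StirMark code): averaging N copies carrying different fingerprints divides each fingerprint's correlation response by roughly N.

        import numpy as np

        rng = np.random.default_rng(0)
        n, strength, copies = 100_000, 2.0, 8
        host = rng.normal(0.0, 10.0, n)               # stand-in host signal
        marks = rng.choice([-1.0, 1.0], (copies, n))  # one fingerprint per copy
        pirate = (host + strength * marks).mean(0)    # collusion by averaging

        # Informed detector: correlate the residual with each fingerprint.
        corr = (pirate - host) @ marks.T / (strength * n)
        print(corr)   # each value is near 1/copies instead of 1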

  13. ICSBEP Benchmarks For Nuclear Data Applications

    NASA Astrophysics Data System (ADS)

    Briggs, J. Blair

    2005-05-01

    The International Criticality Safety Benchmark Evaluation Project (ICSBEP) was initiated in 1992 by the United States Department of Energy. The ICSBEP became an official activity of the Organization for Economic Cooperation and Development (OECD) — Nuclear Energy Agency (NEA) in 1995. Representatives from the United States, United Kingdom, France, Japan, the Russian Federation, Hungary, Republic of Korea, Slovenia, Serbia and Montenegro (formerly Yugoslavia), Kazakhstan, Spain, Israel, Brazil, Poland, and the Czech Republic are now participating. South Africa, India, China, and Germany are considering participation. The purpose of the ICSBEP is to identify, evaluate, verify, and formally document a comprehensive and internationally peer-reviewed set of criticality safety benchmark data. The work of the ICSBEP is published as an OECD handbook entitled "International Handbook of Evaluated Criticality Safety Benchmark Experiments." The 2004 Edition of the Handbook contains benchmark specifications for 3331 critical or subcritical configurations that are intended for use in validation efforts and for testing basic nuclear data. New to the 2004 Edition of the Handbook is a draft criticality alarm / shielding type benchmark that should be finalized in 2005 along with two other similar benchmarks. The Handbook is being used extensively for nuclear data testing and is expected to be a valuable resource for code and data validation and improvement efforts for decades to come. Specific benchmarks that are useful for testing structural materials such as iron, chromium, nickel, and manganese; beryllium; lead; thorium; and 238U are highlighted.

  14. A Benchmark Data Set to Evaluate the Illumination Robustness of Image Processing Algorithms for Object Segmentation and Classification.

    PubMed

    Khan, Arif Ul Maula; Mikut, Ralf; Reischl, Markus

    2015-01-01

    Developers of image processing routines rely on benchmark data sets to give qualitative comparisons of new image analysis algorithms and pipelines. Such data sets need to include artifacts in order to occlude and distort the required information to be extracted from an image. Robustness, i.e., the quality of an algorithm in relation to the amount of distortion, is often important. However, with available benchmark data sets, an evaluation of illumination robustness is difficult or even impossible due to missing ground truth data about object margins and classes and missing information about the distortion. We present a new framework for robustness evaluation. The key aspect is an image benchmark containing 9 object classes and the required ground truth for segmentation and classification. Varying levels of shading and background noise are integrated to distort the data set. To quantify the illumination robustness, we provide measures for image quality, segmentation and classification success, and robustness. We set a high value on giving users easy access to the new benchmark; therefore, all routines are provided within a software package, but they can easily be replaced to emphasize other aspects. PMID:26191792

  15. A Benchmark Data Set to Evaluate the Illumination Robustness of Image Processing Algorithms for Object Segmentation and Classification

    PubMed Central

    Khan, Arif ul Maula; Mikut, Ralf; Reischl, Markus

    2015-01-01

    Developers of image processing routines rely on benchmark data sets to give qualitative comparisons of new image analysis algorithms and pipelines. Such data sets need to include artifacts in order to occlude and distort the required information to be extracted from an image. Robustness, i.e., the quality of an algorithm in relation to the amount of distortion, is often important. However, with available benchmark data sets, an evaluation of illumination robustness is difficult or even impossible due to missing ground truth data about object margins and classes and missing information about the distortion. We present a new framework for robustness evaluation. The key aspect is an image benchmark containing 9 object classes and the required ground truth for segmentation and classification. Varying levels of shading and background noise are integrated to distort the data set. To quantify the illumination robustness, we provide measures for image quality, segmentation and classification success, and robustness. We set a high value on giving users easy access to the new benchmark; therefore, all routines are provided within a software package, but they can easily be replaced to emphasize other aspects. PMID:26191792

  16. Managing for Results in America's Great City Schools 2014: Results from Fiscal Year 2012-13. A Report of the Performance Measurement and Benchmarking Project

    ERIC Educational Resources Information Center

    Council of the Great City Schools, 2014

    2014-01-01

    In 2002 the "Council of the Great City Schools" and its members set out to develop performance measures that could be used to improve business operations in urban public school districts. The Council launched the "Performance Measurement and Benchmarking Project" to achieve these objectives. The purposes of the project were to:…

  17. Ada compiler evaluation on the Space Station Freedom Software Support Environment project

    NASA Technical Reports Server (NTRS)

    Badal, D. L.

    1989-01-01

    This paper describes the work in progress to select the Ada compilers for the Space Station Freedom Program (SSFP) Software Support Environment (SSE) project. The purpose of the SSE Ada compiler evaluation team is to establish the criteria, test suites, and benchmarks to be used for evaluating Ada compilers for the mainframes, workstations, and the real-time target for flight- and ground-based computers. The combined efforts and cooperation of the customer, subcontractors, vendors, academia, and SIGAda groups made it possible to acquire the necessary background information, benchmarks, test suites, and criteria used.

  18. Yucca Mountain Project thermal and mechanical codes first benchmark exercise: Part 3, Jointed rock mass analysis; Yucca Mountain Site Characterization Project

    SciTech Connect

    Costin, L.S.; Bauer, S.J.

    1991-10-01

    Thermal and mechanical models for intact and jointed rock mass behavior are being developed, verified, and validated at Sandia National Laboratories for the Yucca Mountain Site Characterization Project. Benchmarking is an essential part of this effort and is one of the tools used to demonstrate verification of engineering software used to solve thermomechanical problems. This report presents the results of the third (and final) phase of the first thermomechanical benchmark exercise. In the first phase of this exercise, nonlinear heat conduction codes were used to solve the thermal portion of the benchmark problem. The results from the thermal analysis were then used as input to the second and third phases of the exercise, which consisted of solving the structural portion of the benchmark problem. In the second phase of the exercise, a linear elastic rock mass model was used. In the third phase of the exercise, two different nonlinear jointed rock mass models were used to solve the thermostructural problem. Both models, the Sandia compliant joint model and the RE/SPEC joint empirical model, explicitly incorporate the effect of the joints on the response of the continuum. Three different structural codes, JAC, SANCHO, and SPECTROM-31, were used with the above models in the third phase of the study. Each model was implemented in two different codes so that direct comparisons of results from each model could be made. The results submitted by the participants showed that the finite element solutions using each model were in reasonable agreement. Some consistent differences between the solutions using the two different models were noted but are not considered important to verification of the codes. 9 refs., 18 figs., 8 tabs.

  19. Evaluation and optimization of virtual screening workflows with DEKOIS 2.0--a public library of challenging docking benchmark sets.

    PubMed

    Bauer, Matthias R; Ibrahim, Tamer M; Vogel, Simon M; Boeckler, Frank M

    2013-06-24

    The application of molecular benchmarking sets helps to assess the actual performance of virtual screening (VS) workflows. To improve the efficiency of structure-based VS approaches, the selection and optimization of various parameters can be guided by benchmarking. With the DEKOIS 2.0 library, we aim to further extend and complement the collection of publicly available decoy sets. Based on BindingDB bioactivity data, we provide 81 new and structurally diverse benchmark sets for a wide variety of different target classes. To ensure a meaningful selection of ligands, we address several issues that can be found in bioactivity data. We have improved our previously introduced DEKOIS methodology with enhanced physicochemical matching, now including the consideration of molecular charges, as well as a more sophisticated elimination of latent actives in the decoy set (LADS). We evaluate the docking performance of Glide, GOLD, and AutoDock Vina with our data sets and highlight existing challenges for VS tools. All DEKOIS 2.0 benchmark sets will be made accessible at http://www.dekois.com. PMID:23705874
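
    The "enhanced physicochemical matching" named above selects decoys that resemble the actives in a simple descriptor space. A minimal sketch of that matching idea (descriptor names are invented placeholders; this is not the DEKOIS procedure):

        def match_decoys(ligand, candidates, n_decoys=30,
                         keys=("mol_weight", "logp", "hbond_donors",
                               "hbond_acceptors", "rotatable_bonds", "charge")):
            """Pick the candidate molecules closest to the ligand in a
            physicochemical property space; molecules are dicts of
            precomputed descriptors."""
            def dist(mol):
                return sum((mol[k] - ligand[k]) ** 2 for k in keys)
            return sorted(candidates, key=dist)[:n_decoys]

    Matching decoys to actives in this way removes trivial property cues, so a docking tool can only separate them by genuine binding-pose scoring.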

  20. BioBenchmark Toyama 2012: an evaluation of the performance of triple stores on biological data

    PubMed Central

    2014-01-01

    Background Biological databases vary enormously in size and data complexity, from small databases that contain a few million Resource Description Framework (RDF) triples to large databases that contain billions of triples. In this paper, we evaluate whether RDF native stores can be used to meet the needs of a biological database provider. Prior evaluations have used synthetic data of limited database size. For example, the largest BSBM benchmark uses 1 billion synthetic e-commerce RDF triples on a single node. However, real-world biological data differs greatly from such simple synthetic data, and it is difficult to determine whether synthetic e-commerce data is representative enough of biological databases. Therefore, for this evaluation, we used five real data sets from biological databases. Results We evaluated five triple stores, 4store, Bigdata, Mulgara, Virtuoso, and OWLIM-SE, with five biological data sets, Cell Cycle Ontology, Allie, PDBj, UniProt, and DDBJ, ranging in size from approximately 10 million to 8 billion triples. For each database, we loaded all the data into our single node and prepared the database for use in a classical data warehouse scenario. Then, we ran a series of SPARQL queries against each endpoint and recorded the execution time and the accuracy of the query response. Conclusions Our paper shows that with appropriate configuration Virtuoso and OWLIM-SE can satisfy the basic requirements to load and query biological data of up to roughly 8 billion triples on a single node, with simultaneous access by 64 clients. OWLIM-SE performs best for databases with approximately 11 million triples; for data sets containing 94 million and 590 million triples, OWLIM-SE and Virtuoso perform best, without an overwhelming advantage over each other; for data over 4 billion triples, Virtuoso works best. 4store performs well on small data sets with limited features when the number of triples is less than 100 million, and our test shows its
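
    The measurement itself is straightforward: load the data, then time SPARQL queries against each store's HTTP endpoint. A hedged sketch of one such timing (endpoint URL and query are placeholders; standard SPARQL protocol over plain HTTP is assumed):

        import json
        import time
        import urllib.parse
        import urllib.request

        def time_sparql(endpoint, query):
            """Run one SPARQL query and return (seconds, result rows)."""
            url = endpoint + "?" + urllib.parse.urlencode({"query": query})
            req = urllib.request.Request(
                url, headers={"Accept": "application/sparql-results+json"})
            start = time.perf_counter()
            with urllib.request.urlopen(req) as resp:
                result = json.load(resp)
            return (time.perf_counter() - start,
                    len(result["results"]["bindings"]))

        q = "SELECT ?s ?p ?o WHERE { ?s ?p ?o } LIMIT 100"
        # time_sparql("http://localhost:8890/sparql", q)  # e.g. a local endpoint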

  1. BENCHMARKING SUSTAINABILITY ENGINEERING EDUCATION

    EPA Science Inventory

    The goals of this project are to develop and apply a methodology for benchmarking curricula in sustainability engineering and to identify individuals active in sustainability engineering education.

  2. Criticality experiments and benchmarks for cross section evaluation: the neptunium case

    NASA Astrophysics Data System (ADS)

    Leong, L. S.; Tassan-Got, L.; Audouin, L.; Paradela, C.; Wilson, J. N.; Tarrio, D.; Berthier, B.; Duran, I.; Le Naour, C.; Stéphan, C.

    2013-03-01

    The 237Np neutron-induced fission cross section has been recently measured over a large energy range (from eV to GeV) at the n_TOF facility at CERN. When compared to previous measurements, the n_TOF fission cross section appears to be higher by 5-7% beyond the fission threshold. To check the relevance of the n_TOF data, we considered a criticality experiment performed at Los Alamos with a 6 kg sphere of 237Np, surrounded by enriched uranium (235U) so as to approach criticality with fast neutrons. The calculated multiplication factor keff is in better agreement with the experiment (the deviation of 750 pcm is reduced to 250 pcm) when we replace the ENDF/B-VII.0 evaluation of the 237Np fission cross section by the n_TOF data. We also explore the hypothesis of deficiencies in the inelastic cross section of 235U, which has been invoked by some authors to explain the deviation of 750 pcm; the large distortion of the inelastic cross section that this would require is incompatible with existing measurements. We also show that the average neutron multiplicity (nu-bar) of 237Np can hardly be incriminated, because of the high accuracy of the existing data. Fission rate ratios or averaged fission cross sections measured in several fast neutron fields seem to give contradictory results on the validation of the 237Np cross section, but at least one of the benchmark experiments, where the active deposits have been well calibrated for the number of atoms, favors the n_TOF data set. These outcomes support the hypothesis of a higher fission cross section of 237Np.

  3. Helical screw expander evaluation project

    NASA Astrophysics Data System (ADS)

    McKay, R.

    1982-03-01

    A one-MW helical rotary screw expander power system for electric power generation from geothermal brine was evaluated. The technology explored in the testing is simple, potentially very efficient, and ideally suited to wellhead installations in moderate- to high-enthalpy, liquid-dominated fields. A functional one-MW geothermal electric power plant featuring a helical screw expander was produced and then tested, with a demonstrated average performance of approximately 45% machine efficiency over a wide range of test conditions in noncondensing operation on two-phase geothermal fluids. The project also produced a computer-equipped data system, an instrumentation and control van, and a 1000 kW variable load bank, all integrated into a test array designed for operation at a variety of remote test sites. Data are presented for the Utah testing and for the noncondensing phases of the testing in Mexico. Test time logged was 437 hours during the Utah tests and 1101 hours during the Mexico tests.

  4. Helical screw expander evaluation project

    NASA Technical Reports Server (NTRS)

    Mckay, R.

    1982-01-01

    A one-MW helical rotary screw expander power system for electric power generation from geothermal brine was evaluated. The technology explored in the testing is simple, potentially very efficient, and ideally suited to wellhead installations in moderate- to high-enthalpy, liquid-dominated fields. A functional one-MW geothermal electric power plant featuring a helical screw expander was produced and then tested, with a demonstrated average performance of approximately 45% machine efficiency over a wide range of test conditions in noncondensing operation on two-phase geothermal fluids. The project also produced a computer-equipped data system, an instrumentation and control van, and a 1000 kW variable load bank, all integrated into a test array designed for operation at a variety of remote test sites. Data are presented for the Utah testing and for the noncondensing phases of the testing in Mexico. Test time logged was 437 hours during the Utah tests and 1101 hours during the Mexico tests.

  5. Benchmarking Clinical Speech Recognition and Information Extraction: New Data, Methods, and Evaluations

    PubMed Central

    Zhou, Liyuan; Hanlen, Leif; Ferraro, Gabriela

    2015-01-01

    Background Over a tenth of preventable adverse events in health care are caused by failures in information flow. These failures are tangible in clinical handover; regardless of good verbal handover, from two-thirds to all of this information is lost after 3-5 shifts if notes are taken by hand, or not at all. Speech recognition and information extraction provide a way to fill out a handover form for clinical proofing and sign-off. Objective The objective of the study was to provide a recorded spoken handover, annotated verbatim transcriptions, and evaluations to support research in spoken and written natural language processing for filling out a clinical handover form. This dataset is based on synthetic patient profiles, thereby avoiding ethical and legal restrictions, while maintaining efficacy for research in speech-to-text conversion and information extraction, based on realistic clinical scenarios. We also introduce a Web app to demonstrate the system design and workflow. Methods We experiment with Dragon Medical 11.0 for speech recognition and CRF++ for information extraction. To compute features for information extraction, we also apply CoreNLP, MetaMap, and Ontoserver. Our evaluation uses cross-validation techniques to measure processing correctness. Results The data provided were a simulation of nursing handover, as recorded using a mobile device, built from simulated patient records and handover scripts, spoken by an Australian registered nurse. Speech recognition recognized 5276 of 7277 words in our 100 test documents correctly. We considered 50 mutually exclusive categories in information extraction and achieved the F1 (ie, the harmonic mean of Precision and Recall) of 0.86 in the category for irrelevant text and the macro-averaged F1 of 0.70 over the remaining 35 nonempty categories of the form in our 101 test documents. Conclusions The significance of this study hinges on opening our data, together with the related performance benchmarks and some
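
    The quoted figures map directly onto the stated measures. A sketch of the macro-averaged F1 over form categories (the harmonic-mean measure named above; the parallel label-list format is a placeholder assumption, not the study's data format):

        def macro_f1(gold, pred, categories):
            """Macro-averaged F1 over handover-form categories; gold and
            pred are parallel lists of per-token labels."""
            f1s = []
            for c in categories:
                tp = sum(g == c and p == c for g, p in zip(gold, pred))
                fp = sum(g != c and p == c for g, p in zip(gold, pred))
                fn = sum(g == c and p != c for g, p in zip(gold, pred))
                if tp + fp + fn == 0:
                    continue                  # skip empty categories
                prec = tp / (tp + fp) if tp + fp else 0.0
                rec = tp / (tp + fn) if tp + fn else 0.0
                f1s.append(2 * prec * rec / (prec + rec) if prec + rec else 0.0)
            return sum(f1s) / len(f1s)

        print(5276 / 7277)   # the quoted word-level recognition rate, about 0.725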

  6. Benchmark simulation Model no 2 in Matlab-simulink: towards plant-wide WWTP control strategy evaluation.

    PubMed

    Vreck, D; Gernaey, K V; Rosen, C; Jeppsson, U

    2006-01-01

    In this paper, the implementation of the Benchmark Simulation Model No 2 (BSM2) within Matlab-Simulink is presented. The BSM2 is developed for plant-wide WWTP control strategy evaluation on a long-term basis. It consists of a pre-treatment process, an activated sludge process, and sludge treatment processes. Extended evaluation criteria are proposed for plant-wide control strategy assessment. Default open-loop and closed-loop strategies are also proposed to be used as references with which to compare other control strategies. Simulations indicate that the BSM2 is an appropriate tool for plant-wide control strategy evaluation. PMID:17163014

  7. Benchmarking HRD.

    ERIC Educational Resources Information Center

    Ford, Donald J.

    1993-01-01

    Discusses benchmarking, the continuous process of measuring one's products, services, and practices against those recognized as leaders in that field to identify areas for improvement. Examines ways in which benchmarking can benefit human resources functions. (JOW)

  8. The PIE Institute Project: Final Evaluation Report

    ERIC Educational Resources Information Center

    St. John, Mark; Carroll, Becky; Helms, Jen; Smith, Anita

    2008-01-01

    The Playful Invention and Exploration (PIE) Institute project was funded in 2005 by the National Science Foundation (NSF). For the past three years, Inverness Research has served as the external evaluator for the PIE project. The authors' evaluation efforts have included extensive observation and documentation of PIE project activities; ongoing…

  9. ARL Physics Web Pages: An Evaluation by Established, Transitional and Emerging Benchmarks.

    ERIC Educational Resources Information Center

    Duffy, Jane C.

    2002-01-01

    Provides an overview of characteristics among Association of Research Libraries (ARL) physics Web pages. Examines current academic Web literature and from that develops six benchmarks to measure physics Web pages: ease of navigation; logic of presentation; representation of all forms of information; engagement of the discipline; interactivity of…

  10. A benchmark system for the evaluation of selected phase retrieval methods

    NASA Astrophysics Data System (ADS)

    Lingel, Christian; Hasler, Malte; Haist, Tobias; Pedrini, Giancarlo; Osten, Wolfgang

    2014-05-01

    In comparison to classical phase measurement methods like interferometry and holography, there are many phase retrieval methods able to recover the phase of a complex-valued object without the need for a reference wave. Due to the large number of different methods, iterative as well as non-iterative, it is hard to find the method that is appropriate for a given application or object. We propose a system based on different criteria, some of which can be calculated by analyzing the phase retrieval result in comparison to the original object. Other criteria, like the complexity of the optical system, are also taken into account. For testing the benchmark system we use software that is suitable, first, to simulate the acquisition process of the intensity measurements; second, to run the phase retrieval algorithm itself; and, third, to calculate the values of the benchmark criteria. Having determined the values of the different criteria, we assign points for every criterion; these are weighted by importance and summed to give an overall benchmark score. This final score can be used to compare different phase retrieval methods, and by looking more closely at the individual criteria it is possible to analyze the strengths and weaknesses of a method. We show the detailed procedure of calculating the benchmark value by means of a selected phase retrieval method and a phase-only object (USAF target). We emphasize that the results strongly depend on the object.
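
    The scoring step can be made concrete with a small sketch (criterion names and weights are invented for illustration, not taken from the paper):

        def benchmark_score(criteria, weights):
            """Combine per-criterion points into one weighted score."""
            return sum(weights[name] * points
                       for name, points in criteria.items())

        score = benchmark_score(
            {"phase_rmse": 8, "n_intensity_images": 6, "setup_complexity": 7},
            {"phase_rmse": 0.5, "n_intensity_images": 0.3, "setup_complexity": 0.2})

    Keeping the per-criterion points visible alongside the aggregate, as the authors suggest, is what lets a user trace a low overall score back to a specific weakness such as setup complexity.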

  11. The Education North Evaluation Project. Final Report.

    ERIC Educational Resources Information Center

    Ingram, E. J.; McIntosh, R. G.

    The Education North Evaluation Project monitored operation of the Education North Project (a 1978-82 project aimed at encouraging parents, teachers, and other community members in small, isolated northern Alberta communities to work together in improving the quality of education for school-aged children), assessed project outcomes, and developed…

  12. ICSBEP Criticality Benchmark Eigenvalues with ENDF/B-VII.1 Cross Sections

    SciTech Connect

    Kahler, Albert C. III; MacFarlane, Robert

    2012-06-28

    We review MCNP eigenvalue calculations from a suite of International Criticality Safety Benchmark Evaluation Project (ICSBEP) Handbook evaluations with the recently distributed ENDF/B-VII.1 cross section library.

  13. Evaluation of Project Symbiosis: An Interdisciplinary Science Education Project.

    ERIC Educational Resources Information Center

    Altschuld, James W.

    1993-01-01

    The goal of this report is to provide a summary of the evaluation of Project Symbiosis which focused on enhancing the teaching of science principles in high school agriculture courses. The project initially involved 15 teams of science and agriculture teachers and was characterized by an extensive evaluation component consisting of six formal…

  14. Neutron Cross Section Processing Methods for Improved Integral Benchmarking of Unresolved Resonance Region Evaluations

    NASA Astrophysics Data System (ADS)

    Walsh, Jonathan A.; Forget, Benoit; Smith, Kord S.; Brown, Forrest B.

    2016-03-01

    In this work we describe the development and application of computational methods for processing neutron cross section data in the unresolved resonance region (URR). These methods are integrated with a continuous-energy Monte Carlo neutron transport code, thereby enabling their use in high-fidelity analyses. Enhanced understanding of the effects of URR evaluation representations on calculated results is then obtained through utilization of the methods in Monte Carlo integral benchmark simulations of fast spectrum critical assemblies. First, we present a so-called on-the-fly (OTF) method for calculating and Doppler broadening URR cross sections. This method proceeds directly from ENDF-6 average unresolved resonance parameters and, thus, eliminates any need for a probability table generation pre-processing step in which tables are constructed at several energies for all desired temperatures. Significant memory reduction may be realized with the OTF method relative to a probability table treatment if many temperatures are needed. Next, we examine the effects of using a multi-level resonance formalism for resonance reconstruction in the URR. A comparison of results obtained by using the same stochastically-generated realization of resonance parameters in both the single-level Breit-Wigner (SLBW) and multi-level Breit-Wigner (MLBW) formalisms allows for the quantification of level-level interference effects on integrated tallies such as keff and energy group reaction rates. Though, as is well-known, cross section values at any given incident energy may differ significantly between single-level and multi-level formulations, the observed effects on integral results are minimal in this investigation. Finally, we demonstrate the calculation of true expected values, and the statistical spread of those values, through independent Monte Carlo simulations, each using an independent realization of URR cross section structure throughout. It is observed that both probability table
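
    For reference, the single-level formalism named above has the standard textbook closed form (shown unbroadened; this is general background, not an equation quoted from the paper):

        % Single-level Breit-Wigner (SLBW) shape for reaction x near an
        % isolated resonance at E_0:
        \sigma_x(E) = \pi \bar{\lambda}^2 g_J \,
            \frac{\Gamma_n \Gamma_x}{(E - E_0)^2 + (\Gamma/2)^2}
        % \bar{\lambda}: reduced neutron wavelength; g_J: spin statistical
        % factor; \Gamma = \sum_x \Gamma_x: total width. MLBW adds
        % level-level interference terms in the elastic channel, which is
        % precisely the effect the SLBW/MLBW comparison isolates.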

  15. Vermont Rural and Farm Family Rehabilitation Project. A Benchmark Report. Research Report MP73.

    ERIC Educational Resources Information Center

    Tompkins, E. H.; And Others

    The report presents information about client families and their farms during their contact with the Vermont Rural and Farm Family Rehabilitation (RFFR) project from March 1, 1969 to June 30, 1971. Data are from 450 family case histories which include 2,089 members. Most were from northern Vermont. Families averaged 4.64 persons each, about 1 more…

  16. Linking user and staff perspectives in the evaluation of innovative transition projects for youth with disabilities.

    PubMed

    McAnaney, Donal F; Wynne, Richard F

    2016-06-01

    A key challenge in formative evaluation is to gather appropriate evidence to inform the continuous improvement of initiatives. In the absence of outcome data, the programme evaluator often must rely on the perceptions of beneficiaries and staff in generating insight into what is making a difference. The article describes the approach adopted in an evaluation of 15 innovative projects supporting school-leavers with disabilities in making the transition to education, work and life in community settings. Two complementary processes provided an insight into what project staff and leadership viewed as the key project activities and features that facilitated successful transition, as well as the areas of quality of life (QOL) that participants perceived as having been impacted positively by the projects. A comparison was made between participants' perceptions of QOL impact and the views of participants in services normally offered by the wider system. This revealed that project participants were significantly more positive in their views than participants in traditional services. In addition, the processes and activities of the more highly rated projects were benchmarked against those of less highly rated projects and also against usually available services. Even in the context of a range of intervening variables, such as the level and complexity of participant needs and variations in the stage of development of individual projects, the benchmarking process indicated a number of project characteristics that were highly valued by participants. PMID:26912504

  17. Project TIME. Final Evaluation Report.

    ERIC Educational Resources Information Center

    Schroyer, Connie J.; Payne, David L.

    Project TIME (Training Initiative for Manufacturing Employees) was an 18-month National Workplace Literacy Program conducted by Lord Fairfax Community College in conjunction with an automotive parts plant and Triplett Technical and Business Institute in Virginia. Project TIME had three primary objectives: to help employees obtain the basic…

  18. Comprehensive Evaluation Project. Final Report.

    ERIC Educational Resources Information Center

    1969

    This project sought to develop a set of tests for the assessment of the basic literacy and occupational cognizance of pupils in those public elementary and secondary schools, including vocational schools, receiving services through Federally supported educational programs and projects. The assessment is to produce generalizable average scores for…

  19. Benchmarking studies for the DESCARTES and CIDER codes. Hanford Environmental Dose Reconstruction Project

    SciTech Connect

    Eslinger, P.W.; Ouderkirk, S.J.; Nichols, W.E.

    1993-01-01

    The Hanford Environmental Dose Reconstruction (HEDR) project is developing several computer codes to model the airborne release, transport, and environmental accumulation of radionuclides resulting from Hanford operations from 1944 through 1972. In order to calculate the dose of radiation a person may have received at any given location, the geographic area addressed by the HEDR Project will be divided into a grid. The grid size suggested by the draft requirements contains 2091 units called nodes. Two of the codes being developed are DESCARTES and CIDER. The DESCARTES code will be used to estimate the concentration of radionuclides in environmental pathways from the output of the air transport code RATCHET. The CIDER code will use information provided by DESCARTES to estimate the dose received by an individual. The requirements that Battelle (BNW) set for these two codes were released to the HEDR Technical Steering Panel (TSP) in a draft document on November 10, 1992. This document reports on the preliminary work performed by the code development team to determine if the requirements could be met.

  20. Competitive Skills Project (CSP). External Evaluator's Report.

    ERIC Educational Resources Information Center

    Wrigley, Heide Spruck

    An external evaluation was made of the Competitive Skills Project, a National Workplace Literacy Program carried out in partnership between El Camino College and BP Chemicals. Among the problems identified were the following: (1) because the original director and his successor left the project, the original evaluation design could not be…

  1. A study on operation efficiency evaluation based on firm's financial index and benchmark selection: take China Unicom as an example

    NASA Astrophysics Data System (ADS)

    Wu, Zu-guang; Tian, Zhan-jun; Liu, Hui; Huang, Rui; Zhu, Guo-hua

    2009-07-01

    As the only telecom operator listed on the A-share market, China Unicom has attracted many institutional investors in recent years under the concept of 3G, which itself carries a strong expectation of technical progress. Do institutional investors, or the expectation of technical progress, have a significant effect on the improvement of a firm's operating efficiency? Reviewing the literature on operating efficiency, we find that scholars study this problem using regression analysis based on traditional production functions, data envelopment analysis (DEA), financial index analysis, marginal functions, capital-labor ratio coefficients, etc. All these methods are mainly based on macro data. In this paper, we use company micro data to evaluate operating efficiency. Using factor analysis based on financial indices and comparing the factor scores for the three years from 2005 to 2007, we find that China Unicom's operating efficiency is below the average level of the benchmark companies and did not improve under the 3G concept from 2005 to 2007. In other words, institutional investors and the expectation of technical progress have only a faint effect on changes in China Unicom's operating efficiency. Selecting benchmark companies as reference points for evaluating operating efficiency is a characteristic of this method, which is basically simple and direct. The method is also suited to evaluating the operating efficiency of listed agriculture companies, because they likewise face technical progress and marketing concepts such as tax exemptions.
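
    The factor-analysis step can be sketched with standard tooling (a generic illustration with placeholder data, not the authors' computation):

        import numpy as np
        from sklearn.decomposition import FactorAnalysis
        from sklearn.preprocessing import StandardScaler

        # X: rows = company-years, columns = financial ratios (e.g. ROE,
        # asset turnover, debt ratio); random placeholders stand in here.
        rng = np.random.default_rng(0)
        X = rng.normal(size=(30, 8))

        Z = StandardScaler().fit_transform(X)        # standardize the ratios
        fa = FactorAnalysis(n_components=3, random_state=0)
        factors = fa.fit_transform(Z)                # per-observation factor scores

        # A composite efficiency score can then be a weighted sum of the
        # factors, compared year over year against the benchmark firms.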

  2. Establishing the Geomagnetic Disturbance Benchmark Event for Evaluation of the Space Weather Hazard on Power Grids

    NASA Astrophysics Data System (ADS)

    Pulkkinen, A. A.; Bernabeu, E.; Eichner, J.

    2014-12-01

    Awareness of the potentially major impact that geomagnetically induced currents (GIC) can have on the North American high-voltage power transmission system has prompted the Federal Energy Regulatory Commission (FERC) to launch a geomagnetic disturbance (GMD) standards drafting process. The goals of the GMD standards are to quantify and mitigate the GMD hazard on the North American grid. The North American Electric Reliability Corporation (NERC) is coordinating the standards drafting process, which is now entering Phase II, involving quantification of the impact GIC can have on individual parts of the North American grid. As part of the Phase II GMD standards drafting process, substantial effort has been made toward generating benchmark GMD scenarios. These scenarios, which quantify extreme geoelectric field magnitudes and the temporal waveforms of the field fluctuations, are the foundation for subsequent engineering and impacts analyses. The engineering analyses will include transmission system voltage stability and transformer heating assessments. The work on the GMD scenarios has been a major collaboration between a number of international entities involved in GMD research and transmission system operations. We will discuss in this paper the key elements of the benchmark GMD generation process and show the latest results from our work on the topic.

  3. Benchmarking expert system tools

    NASA Technical Reports Server (NTRS)

    Riley, Gary

    1988-01-01

    As part of its evaluation of new technologies, the Artificial Intelligence Section of the Mission Planning and Analysis Div. at NASA-Johnson has made timing tests of several expert system building tools. Among the production systems tested were Automated Reasoning Tool, several versions of OPS5, and CLIPS (C Language Integrated Production System), an expert system builder developed by the AI section. Also included in the test were a Zetalisp version of the benchmark along with four versions of the benchmark written in Knowledge Engineering Environment, an object oriented, frame based expert system tool. The benchmarks used for testing are studied.

  4. Project Aloha. Annual Evaluation Report.

    ERIC Educational Resources Information Center

    Berryessa Union Elementary School District, San Jose, CA.

    This program, included in "Effective Reading Programs...," was begun in 1971 and serves 1,826 children of varying socioeconomic levels in K-4. Project ALOHA is a mainland demonstration of the Hawaii English Program, a total instructional system that provides goals, materials, a management system, and inservice training. The program is highly…

  5. Project HEED. Final Evaluation Report.

    ERIC Educational Resources Information Center

    Hughes, Orval D.

    During 1972-73, Project HEED (Heed Ethnic Educational Depolarization) involved 1,350 Indian students in 60 classrooms at Sells, Topowa, San Carlos, Rice, Many Farms, Hotevilla, Peach Springs, and Sacaton. Primary objectives were: (1) improvement in reading skills, (2) development of cultural awareness, and (3) providing for the Special Education…

  6. Benchmark Evaluation of Fuel Effect and Material Worth Measurements for a Beryllium-Reflected Space Reactor Mockup

    SciTech Connect

    Marshall, Margaret A.; Bess, John D.

    2015-02-01

    The critical configurations of the small, compact critical assembly (SCCA) experiments performed at the Oak Ridge Critical Experiments Facility (ORCEF) in 1962-1965 have been evaluated as acceptable benchmark experiments for inclusion in the International Handbook of Evaluated Criticality Safety Benchmark Experiments. The initial intent of these experiments was to support the design of the Medium Power Reactor Experiment (MPRE) program, whose purpose was to study “power plants for the production of electrical power in space vehicles.” The third configuration in this series of experiments was a beryllium-reflected assembly of stainless-steel-clad, highly enriched uranium (HEU)-O2 fuel, a mockup of a potassium-cooled space power reactor. Reactivity measurements, cadmium ratio spectral measurements, and fission rate measurements were performed through the core and top reflector. Fuel effect worth measurements and neutron moderating and absorbing material worths were also measured in the assembly fuel region. The cadmium ratios, fission rates, and worth measurements were evaluated for inclusion in the International Handbook of Evaluated Criticality Safety Benchmark Experiments. The fuel tube effect and neutron moderating and absorbing material worth measurements are the focus of this paper. Additionally, a measurement of the worth of potassium filling the core region was performed but has not yet been evaluated. Pellets of 93.15 wt.% enriched uranium dioxide (UO2) were stacked in 30.48 cm tall stainless steel fuel tubes (0.3 cm tall end caps). Each fuel tube held 26 pellets with a total mass of 295.8 g UO2 per tube, and 253 tubes were arranged in a 1.506-cm triangular lattice. An additional 7-tube cluster critical configuration was also measured but not used for any physics measurements. The core was surrounded on all sides by a beryllium reflector. The fuel effect worths were measured by removing fuel tubes at various radii. An accident scenario
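
    The stated loading figures imply the core totals directly; a quick arithmetic check, using only the numbers quoted in the abstract plus standard atomic masses, is sketched below.

    ```python
    # Consistency check using only figures quoted in the abstract.
    tubes = 253
    uo2_per_tube_g = 295.8
    enrich = 0.9315                          # 235U weight fraction in uranium

    total_uo2_g = tubes * uo2_per_tube_g     # total core loading

    # Approximate uranium mass fraction of UO2 at this enrichment
    m_u = 235.04 * enrich + 238.05 * (1.0 - enrich)  # mean U atomic mass, g/mol
    u_frac = m_u / (m_u + 2 * 15.999)
    print(f"total UO2: {total_uo2_g / 1000:.1f} kg")              # ~74.8 kg
    print(f"approx. 235U: {total_uo2_g * u_frac * enrich / 1000:.1f} kg")
    ```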

  7. NHT-1 I/O Benchmarks

    NASA Technical Reports Server (NTRS)

    Carter, Russell; Ciotti, Bob; Fineberg, Sam; Nitzbert, Bill

    1992-01-01

    The NHT-1 benchmarks are a set of three scalable I/O benchmarks suitable for evaluating the I/O subsystems of high performance distributed memory computer systems. The benchmarks test application I/O, maximum sustained disk I/O, and maximum sustained network I/O. Sample codes are available which implement the benchmarks.
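
    As an illustration of what a sustained-disk-I/O measurement involves, the sketch below times buffered writes followed by an fsync; it is a minimal stand-in under simple assumptions, not the NHT-1 benchmark code itself.

    ```python
    # A stand-in sustained-write test: stream fixed-size blocks to a temp
    # file, then fsync so buffered data reaches disk before the clock stops.
    import os
    import tempfile
    import time

    def sustained_write_mb_s(total_mb=256, block_kb=1024):
        block = os.urandom(block_kb * 1024)
        n_blocks = total_mb * 1024 // block_kb
        with tempfile.NamedTemporaryFile() as f:
            t0 = time.perf_counter()
            for _ in range(n_blocks):
                f.write(block)
            f.flush()
            os.fsync(f.fileno())
            elapsed = time.perf_counter() - t0
        return total_mb / elapsed

    print(f"sustained write: {sustained_write_mb_s():.1f} MB/s")
    ```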

  8. Evaluating success levels of mega-projects

    NASA Technical Reports Server (NTRS)

    Kumaraswamy, Mohan M.

    1994-01-01

    Today's mega-projects transcend the traditional trajectories traced within national and technological limitations. Powers unleashed by internationalization of initiatives, in for example space exploration and environmental protection, are arguably only temporarily suppressed by narrower national, economic, and professional disagreements as to how best they should be harnessed. While the world gets its act together there is time to develop the technologies of such supra-mega-project management that will synergize truly diverse resources and smoothly mesh their interfaces. Such mega-projects and their management need to be realistically evaluated, when implementing such improvements. This paper examines current approaches to evaluating mega-projects and questions the validity of extrapolations to the supra-mega-projects of the future. Alternatives to improve such evaluations are proposed and described.

  9. Benchmarking foreign electronics technologies

    SciTech Connect

    Bostian, C.W.; Hodges, D.A.; Leachman, R.C.; Sheridan, T.B.; Tsang, W.T.; White, R.M.

    1994-12-01

    This report has been drafted in response to a request from the Japanese Technology Evaluation Center's (JTEC) Panel on Benchmarking Select Technologies. Since April 1991, the Competitive Semiconductor Manufacturing (CSM) Program at the University of California at Berkeley has been engaged in a detailed study of quality, productivity, and competitiveness in semiconductor manufacturing worldwide. The program is a joint activity of the College of Engineering, the Haas School of Business, and the Berkeley Roundtable on the International Economy, under sponsorship of the Alfred P. Sloan Foundation, and with the cooperation of semiconductor producers from Asia, Europe and the United States. Professors David A. Hodges and Robert C. Leachman are the project's Co-Directors. The present report for JTEC is primarily based on data and analysis drawn from that continuing program. The CSM program is being conducted by faculty, graduate students and research staff from UC Berkeley's Schools of Engineering and Business, and Department of Economics. Many of the participating firms are represented on the program's Industry Advisory Board. The Board played an important role in defining the research agenda. A pilot study was conducted in 1991 with the cooperation of three semiconductor plants. The research plan and survey documents were thereby refined. The main phase of the CSM benchmarking study began in mid-1992 and will continue at least through 1997. Reports are presented on the manufacture of integrated circuits; data storage; wireless technology; human-machine interfaces; and optoelectronics. Selected papers are indexed separately for inclusion in the Energy Science and Technology Database.

  10. The DLESE Evaluation Core Services Project

    NASA Astrophysics Data System (ADS)

    Buhr, S. M.; Barker, L. J.; Reeves, T. C.

    2003-12-01

    The DLESE Evaluation Core Service project will conduct evaluation of DLESE and provide evaluation consultation, resources and services to the DLESE community. Through this work we anticipate that we will learn more about the impact and use of digital libraries, and will promote an evaluation mindset within the geoscience education community. Activities of the DLESE Evaluation Service team include 1) evaluation planning for and of DLESE, 2) conducting formative evaluation of DLESE (user needs, data access, collections, outreach), 3) conducting classroom evaluation of the effects of DLESE use on teaching practices and learning outcomes, and 4) collection, synthesis, and reporting of evaluation findings garnered from all core teams and major projects. Many opportunities for community involvement exist. A strand group convened during the 2004 DLESE Annual Meeting took DLESE Evaluation as its topic, provided recommendations, and will continue its activities through the year. The related Evaluation Toolkit collection is now discoverable through DLESE, and upcoming activities of all the core teams will provide evaluation opportunities. Other community opportunities include consulting with the Evaluation Service on education grant proposals, attending an evaluation workshop, and applying for an Evaluation Minigrant (up to $5K per award). Progress to date will be discussed, the Evaluation Core Services team members will be introduced, and plans and opportunities will be described in more detail.

  11. Project SPIRIT Evaluation Report: 1987-1988.

    ERIC Educational Resources Information Center

    McAdoo, Harriette P.; Crawford, Vanella A.

    The 1987-1988 Project SPIRIT programs were evaluated for effectiveness from the points of view of the participants, both parents and children. An initiative of the Congress of National Black Churches that was begun in the summer of 1986, Project SPIRIT aims to nurture children's strength, perseverance, imagination, responsibility, integrity, and…

  12. Tellin' Stories Project. Final Evaluation Report.

    ERIC Educational Resources Information Center

    Ziegler, Mary F.

    The Tellin' Stories Project in Washington, DC, was developed to increase the involvement of economically disadvantaged, often limited English-speaking parents in the educational process of their children. The project connected parents, educators, schools, and communities. The third-year evaluation process consisted of these activities: a focus…

  13. INTEGRAL BENCHMARK DATA FOR NUCLEAR DATA TESTING THROUGH THE ICSBEP AND THE NEWLY ORGANIZED IRPHEP

    SciTech Connect

    J. Blair Briggs; Lori Scott; Yolanda Rugama; Enrico Satori

    2007-04-01

    The status of the International Criticality Safety Benchmark Evaluation Project (ICSBEP) was last reported in a nuclear data conference at the International Conference on Nuclear Data for Science and Technology, ND-2004, in Santa Fe, New Mexico. Since that time the number and type of integral benchmarks have increased significantly. Included in the ICSBEP Handbook are criticality-alarm/shielding and fundamental physics benchmarks in addition to the traditional critical/subcritical benchmark data. Since ND-2004, a reactor physics counterpart to the ICSBEP, the International Reactor Physics Experiment Evaluation Project (IRPhEP), was initiated. The IRPhEP is patterned after the ICSBEP, but focuses on other integral measurements, such as buckling, spectral characteristics, reactivity effects, reactivity coefficients, kinetics measurements, reaction-rate and power distributions, nuclide compositions, and other miscellaneous-type measurements in addition to the critical configuration. The status of these two projects is discussed and selected benchmarks are highlighted in this paper.

  14. Teaching Medical Students at a Distance: Using Distance Learning Benchmarks to Plan and Evaluate a Web-Enhanced Medical Student Curriculum

    ERIC Educational Resources Information Center

    Olney, Cynthia A.; Chumley, Heidi; Parra, Juan M.

    2004-01-01

    A team designing a Web-enhanced third-year medical education didactic curriculum based their course planning and evaluation activities on the Institute for Higher Education Policy's (2000) 24 benchmarks for online distance learning. The authors present the team's blueprint for planning and evaluating the Web-enhanced curriculum, which incorporates…

  15. OCTALIS benchmarking: comparison of four watermarking techniques

    NASA Astrophysics Data System (ADS)

    Piron, Laurent; Arnold, Michael; Kutter, Martin; Funk, Wolfgang; Boucqueau, Jean M.; Craven, Fiona

    1999-04-01

    In this paper, benchmarking results of watermarking techniques are presented. The benchmark includes evaluation of watermark robustness and of subjective visual image quality. Four different algorithms are compared and exhaustively tested. One goal of these tests is to evaluate the feasibility of a Common Functional Model (CFM) developed in the European Project OCTALIS and to determine parameters of this model, such as the length of one watermark. This model solves the problem of image trading over an insecure network, such as the Internet, and employs hybrid watermarking. Another goal is to evaluate the resistance of the watermarking techniques when subjected to a set of attacks. Results show that the tested techniques do not behave in the same way and that none of the tested methods has optimal characteristics. A final conclusion is that, as for the evaluation of compression techniques, clear guidelines are necessary to evaluate and compare watermarking techniques.

  16. Toxicological Benchmarks for Wildlife

    SciTech Connect

    Sample, B.E.; Opresko, D.M.; Suter, G.W.

    1993-01-01

    Ecological risks of environmental contaminants are evaluated by using a two-tiered process. In the first tier, a screening assessment is performed where concentrations of contaminants in the environment are compared to no observed adverse effects level (NOAEL)-based toxicological benchmarks. These benchmarks represent concentrations of chemicals in environmental media (water, sediment, soil, food, etc.) that are presumed to be nonhazardous to the biota. While exceedance of these benchmarks does not indicate any particular level or type of risk, concentrations below the benchmarks should not result in significant effects. In practice, when contaminant concentrations in food or water resources are less than these toxicological benchmarks, the contaminants may be excluded from further consideration. However, if the concentration of a contaminant exceeds a benchmark, that contaminant should be retained as a contaminant of potential concern (COPC) and investigated further. The second tier in ecological risk assessment, the baseline ecological risk assessment, may use toxicological benchmarks as part of a weight-of-evidence approach (Suter 1993). Under this approach, toxicological benchmarks are one of several lines of evidence used to support or refute the presence of ecological effects. Other sources of evidence include media toxicity tests, surveys of biota (abundance and diversity), measures of contaminant body burdens, and biomarkers. This report presents NOAEL- and lowest observed adverse effects level (LOAEL)-based toxicological benchmarks for assessment of effects of 85 chemicals on 9 representative mammalian wildlife species (short-tailed shrew, little brown bat, meadow vole, white-footed mouse, cottontail rabbit, mink, red fox, and whitetail deer) or 11 avian wildlife species (American robin, rough-winged swallow, American woodcock, wild turkey, belted kingfisher, great blue heron, barred owl, barn owl, Cooper's hawk, and red-tailed hawk)
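
    The tier-1 screening rule reduces to a simple comparison, sketched below with placeholder benchmark values; the report's actual NOAEL-based values are not reproduced here.

    ```python
    # Tier-1 screening rule as a sketch; the benchmark values here are
    # placeholders, NOT values from the report.
    BENCHMARKS_MG_PER_L = {"cadmium": 0.01, "zinc": 0.5}   # hypothetical

    def screen(contaminant, measured_mg_per_l):
        """Retain as a COPC if the measured concentration exceeds the benchmark."""
        limit = BENCHMARKS_MG_PER_L[contaminant]
        return "retain as COPC" if measured_mg_per_l > limit else "screen out"

    print(screen("cadmium", 0.02))   # retain as COPC
    print(screen("zinc", 0.10))      # screen out
    ```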

  17. Evaluation Project of a Postvention Program.

    ERIC Educational Resources Information Center

    Simon, Robert; And Others

    A student suicide or parasuicide increases the risk that potentially suicidal teenagers see suicide as an enviable option. The "copycat effect" can be reduced by a postvention program. This proposed evaluative research project will provide an implementation and impact evaluation of a school's postvention program following a suicide or parasuicide.…

  18. Collaborative Writing Project Product Evaluation 1988-1989. Evaluation Report.

    ERIC Educational Resources Information Center

    Saginaw Public Schools, MI. Dept. of Evaluation Services.

    A study was conducted to evaluate the final outcome of the Section 98 writing project, a 3-year collaboration between the School District of the City of Saginaw and the University of Michigan, and to successfully employ the gap reduction design with the pre- to post-test results stemming from the writing project. Students in six sections of…

  19. EVALUATION OF U10MO FUEL PLATE IRRADIATION BEHAVIOR VIA NUMERICAL AND EXPERIMENTAL BENCHMARKING

    SciTech Connect

    Samuel J. Miller; Hakan Ozaltun

    2012-11-01

    This article analyzes dimensional changes due to irradiation of monolithic plate-type nuclear fuel and compares the results with finite element analysis of the plates during fabrication and irradiation. Monolithic fuel plates tested in the Advanced Test Reactor (ATR) at Idaho National Laboratory (INL) are being used to benchmark proposed fuel performance for several high power research reactors. Post-irradiation metallographic images of plates sectioned at the midpoint were analyzed to determine dimensional changes of the fuel and the cladding response. A constitutive model of the fabrication process and irradiation behavior of the tested plates was developed using the general purpose commercial finite element analysis package Abaqus. Using calculated burn-up profiles of irradiated plates to model the power distribution, and including irradiation behaviors such as swelling and irradiation-enhanced creep, model simulations allow analysis of plate parameters that are either impossible or infeasible to measure in an experimental setting. The development and progression of fabrication-induced stress concentrations at the plate edges was of primary interest, as these locations have a unique stress profile during irradiation. Additionally, a comparison between 2D and 3D models was performed to optimize the analysis methodology; in particular, the ability of 2D models to account for the out-of-plane stresses that give rise to 3-dimensional creep behavior was examined. Results show that the assumptions made in 2D models for the out-of-plane stresses and strains cannot capture the 3-dimensional physics accurately, and thus 2D approximations are not computationally accurate. Stress-strain fields depend on plate geometry and irradiation conditions; thus, if stress-based criteria are used to predict plate behavior (as opposed to material impurities, fine micro-structural defects, or sharp power gradients), a unique 3D finite element formulation for each plate is required.

  20. Current Reactor Physics Benchmark Activities at the Idaho National Laboratory

    SciTech Connect

    John D. Bess; Margaret A. Marshall; Mackenzie L. Gorham; Joseph Christensen; James C. Turnbull; Kim Clark

    2011-11-01

    The International Reactor Physics Experiment Evaluation Project (IRPhEP) [1] and the International Criticality Safety Benchmark Evaluation Project (ICSBEP) [2] were established to preserve integral reactor physics and criticality experiment data for present and future research. These valuable assets provide the basis for recording, developing, and validating our integral nuclear data, and experimental and computational methods. These projects are managed through the Idaho National Laboratory (INL) and the Organisation for Economic Co-operation and Development Nuclear Energy Agency (OECD-NEA). Staff and students at the Department of Energy - Idaho (DOE-ID) and INL are engaged in the development of benchmarks to support ongoing research activities. These benchmarks include reactors or assemblies that support Next Generation Nuclear Plant (NGNP) research, space nuclear Fission Surface Power System (FSPS) design validation, and currently operational facilities in Southeastern Idaho.

  1. The BOUT Project: Validation and Benchmark of BOUT Code and Experimental Diagnostic Tools for Fusion Boundary Turbulence

    SciTech Connect

    Xu, X Q

    2001-08-09

    A boundary plasma turbulence code, BOUT, is presented. Preliminary encouraging results have been obtained in comparisons with probe measurements for a typical Ohmic discharge in the CT-7 tokamak. The validation and benchmarking of the BOUT code and of experimental diagnostic tools for fusion boundary plasma turbulence are proposed.

  2. Training Evaluation Based on Cases of Taiwanese Benchmarked High-Tech Companies

    ERIC Educational Resources Information Center

    Lien, Bella Ya Hui; Hung, Richard Yu Yuan; McLean, Gary N.

    2007-01-01

    Although the influence of workplace practices and employees' experiences with training effectiveness has received considerable attention, less is known of the influence of workplace practices on training evaluation methods. The purposes of this study were to: (1) explore and understand the training evaluation methods used by seven Taiwanese…

  3. Documenting Evaluation Use: Guided Evaluation Decisionmaking. Evaluation Productivity Project.

    ERIC Educational Resources Information Center

    Burry, James

    This paper documents the evaluation use process among districts using the Guide for Evaluation Decision Makers, published by the Center for the Study of Evaluation (CSE) during the 1984-85 school year. Included are the following: (1) a discussion of research that led to conclusions concerning the administrator's role in evaluation use; (2) a…

  4. Medico-economic evaluation of healthcare products. Methodology for defining a significant impact on French health insurance costs and selection of benchmarks for interpreting results.

    PubMed

    Dervaux, Benoît; Baseilhac, Eric; Fagon, Jean-Yves; Biot, Claire; Blachier, Corinne; Braun, Eric; Debroucker, Frédérique; Detournay, Bruno; Ferretti, Carine; Granger, Muriel; Jouan-Flahault, Chrystel; Lussier, Marie-Dominique; Meyer, Arlette; Muller, Sophie; Pigeon, Martine; De Sahb, Rima; Sannié, Thomas; Sapède, Claudine; Vray, Muriel

    2014-01-01

    Decree No. 2012-1116 of 2 October 2012 on the medico-economic assignments of the French National Authority for Health (Haute autorité de santé, HAS) significantly alters the conditions for accessing the health products market in France. This paper presents a theoretical framework for interpreting the results of the economic evaluation of health technologies and summarises the facts available in France for developing benchmarks that will be used to interpret incremental cost-effectiveness ratios. This literature review shows that it is difficult to determine a threshold value, but that it is also difficult to interpret incremental cost-effectiveness ratio (ICER) results without one. In this context, round table participants favour a pragmatic approach based on "benchmarks", as opposed to a threshold value, from an interpretative and normative perspective, i.e. benchmarks that can change over time based on feedback. PMID:25230355
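
    For readers unfamiliar with the quantity being benchmarked: the ICER is the ratio of incremental cost to incremental effect between a new technology and its comparator. A worked example with purely illustrative numbers:

    ```python
    # ICER = (C_new - C_ref) / (E_new - E_ref); numbers are purely illustrative.
    def icer(cost_new, cost_ref, effect_new, effect_ref):
        return (cost_new - cost_ref) / (effect_new - effect_ref)

    # A treatment costing 10,000 EUR more that yields 0.25 extra QALYs:
    print(f"{icer(30_000, 20_000, 1.75, 1.50):,.0f} EUR per QALY")  # 40,000
    ```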

  5. Strategic evaluation central to LNG project formation

    SciTech Connect

    Nissen, D.; DiNapoli, R.N.; Yost, C.C.

    1995-07-03

    An efficient-scale, grassroots LNG facility of about 6 million metric tons/year capacity requires a prestart-up outlay of $5 billion or more for the supply facilities--production, feedgas pipeline, liquefaction, and shipping. The demand side of the LNG chain requires a similar outlay, counting the import-regasification terminal and a combination of 5 gigawatts or more of electric power generation, or the equivalent in city gas and industrial gas-using facilities. There are no well-developed commodity markets for free-on-board (fob) or delivered LNG. A new LNG supply project is dedicated to its buyers. Indeed, the buyers' revenue commitment is the project's only bankable asset. For the buyer to make this commitment, the supply venture's capability and commitment must be credible: to complete the project and to deliver the LNG reliably over the 20+ years required to recover the capital committed on both sides. This requirement has technical, economic, and business dimensions. In this article the authors describe an LNG project evaluation system and show its application to typical tasks: project cost of service and participant shares; LNG project competition; alternative project structures; and market competition for LNG-supplied electric power generation.

  6. Evaluation of various LandFlux evapotranspiration algorithms using the LandFlux-EVAL synthesis benchmark products and observational data

    NASA Astrophysics Data System (ADS)

    Michel, Dominik; Hirschi, Martin; Jimenez, Carlos; McCabe, Mathew; Miralles, Diego; Wood, Eric; Seneviratne, Sonia

    2014-05-01

    Research on climate variations and the development of predictive capabilities largely rely on globally available reference data series of the different components of the energy and water cycles. Several efforts have aimed at producing large-scale and long-term reference data sets of these components, e.g. based on in situ observations and remote sensing, in order to allow for diagnostic analyses of the drivers of temporal variations in the climate system. Evapotranspiration (ET) is an essential component of the energy and water cycle which cannot be monitored directly on a global scale by remote sensing techniques. In recent years, several global multi-year ET data sets have been derived from remote sensing-based estimates, observation-driven land surface model simulations or atmospheric reanalyses. The LandFlux-EVAL initiative presented an ensemble evaluation of these data sets over the time periods 1989-1995 and 1989-2005 (Mueller et al. 2013). Currently, a multi-decadal global reference heat flux data set for ET at the land surface is being developed within the LandFlux initiative of the Global Energy and Water Cycle Experiment (GEWEX). This LandFlux v0 ET data set comprises four ET algorithms forced with a common radiation and surface meteorology. In order to estimate the agreement of the LandFlux v0 ET data with existing data sets, it is compared to the recently available LandFlux-EVAL synthesis benchmark product. Additional evaluation of the LandFlux v0 ET data set is based on a comparison to in situ observations from a weighing lysimeter at the hydrological research site Rietholzbach in Switzerland. These analyses serve as a test bed for similar evaluation procedures that are envisaged for ESA's WACMOS-ET initiative (http://wacmoset.estellus.eu). Reference: Mueller, B., Hirschi, M., Jimenez, C., Ciais, P., Dirmeyer, P. A., Dolman, A. J., Fisher, J. B., Jung, M., Ludwig, F., Maignan, F., Miralles, D. G., McCabe, M. F., Reichstein, M., Sheffield, J., Wang, K
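
    Agreement between an ET product and a benchmark series is typically summarized with bias, RMSE, and correlation. A minimal sketch with synthetic daily ET series (not LandFlux data):

    ```python
    # Agreement metrics for an ET product against a benchmark series
    # (synthetic daily values, mm/day; not LandFlux data).
    import numpy as np

    rng = np.random.default_rng(1)
    benchmark = rng.gamma(2.0, 1.0, size=365)
    product = benchmark + rng.normal(0.1, 0.3, size=365)   # bias + noise

    bias = float(np.mean(product - benchmark))
    rmse = float(np.sqrt(np.mean((product - benchmark) ** 2)))
    r = float(np.corrcoef(product, benchmark)[0, 1])
    print(f"bias={bias:.2f} mm/d  RMSE={rmse:.2f} mm/d  r={r:.2f}")
    ```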

  7. HPC Analytics Support. Requirements for Uncertainty Quantification Benchmarks

    SciTech Connect

    Paulson, Patrick R.; Purohit, Sumit; Rodriguez, Luke R.

    2015-05-01

    This report outlines techniques for extending benchmark generation products so that they support uncertainty quantification by benchmarked systems. We describe how uncertainty quantification requirements can be presented to candidate analytical tools supporting SPARQL. We describe benchmark data sets for evaluating uncertainty quantification, as well as an approach for using our benchmark generator to produce such data sets.

  8. Benchmarking Using Basic DBMS Operations

    NASA Astrophysics Data System (ADS)

    Crolotte, Alain; Ghazal, Ahmad

    The TPC-H benchmark proved to be successful in the decision support area. Many commercial database vendors and their related hardware vendors used this benchmark to show the superiority and competitive edge of their products. However, over time, TPC-H became less representative of industry trends as vendors kept tuning their databases to this benchmark-specific workload. In this paper, we present XMarq, a simple benchmark framework that can be used to compare various software/hardware combinations. Our benchmark model is currently composed of 25 queries that measure the performance of basic operations such as scans, aggregations, joins and index access. This benchmark model is based on the TPC-H data model due to its maturity and well-understood data generation capability. We also propose metrics to evaluate single-system performance and to compare two systems. Finally we illustrate the effectiveness of this model by showing experimental results comparing two systems under different conditions.
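
    One conventional way to reduce per-query timings to a single cross-system comparison is the geometric mean of per-query time ratios; whether XMarq uses exactly this metric is not stated, so the sketch below is illustrative only.

    ```python
    # Geometric mean of per-query time ratios (hypothetical timings, seconds).
    import math

    times_a = [1.2, 0.8, 2.5, 4.1]   # system A
    times_b = [1.0, 1.0, 2.0, 3.0]   # system B

    ratios = [a / b for a, b in zip(times_a, times_b)]
    geo_mean = math.exp(sum(math.log(r) for r in ratios) / len(ratios))
    print(f"A/B geometric-mean time ratio: {geo_mean:.2f}")
    ```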

  9. In response to an open invitation for comments on AAAS project 2061's Benchmark books on science. Part 1: documentation of serious errors in cell biology.

    PubMed

    Ling, Gilbert

    2006-01-01

    Project 2061 was founded by the American Association for the Advancement of Science (AAAS) to improve secondary school science education. An in-depth study of ten 9th- to 12th-grade biology textbooks led to the verdict that none conveyed "Big Ideas" that would give coherence and meaning to the profusion of lavishly illustrated isolated details. However, neither the Project report itself nor the Benchmark books put out earlier by the Project carries what deserves the designation of "Big Ideas." Worse, in the two earliest-published Benchmark books, the basic unit of all life forms--the living cell--is described as a soup enclosed by a cell membrane that determines what can enter or leave the cell. This is astonishing, since extensive experimental evidence unequivocally disproved this idea 60 years ago. The "new" version of the membrane theory brought in to replace the discredited (sieve) version--the pump model, currently taught as established truth in all high-school and college biology textbooks--was also unequivocally disproved 40 years ago. This comment is written partly in response to the Benchmark books' gracious open invitation for ideas to improve the books and, through them, to improve US secondary school science education. PMID:17405412

  10. Experimental benchmark data and systematic evaluation of two a posteriori, polarizable-continuum corrections for vertical excitation energies in solution.

    PubMed

    Mewes, Jan-Michael; You, Zhi-Qiang; Wormit, Michael; Kriesche, Thomas; Herbert, John M; Dreuw, Andreas

    2015-05-28

    We report the implementation and evaluation of a perturbative, density-based correction scheme for vertical excitation energies calculated in the framework of a polarizable continuum model (PCM). Because the proposed first-order correction terms depend solely on the zeroth-order excited-state density, a transfer of the approach to any configuration interaction-type excited-state method is straightforward. Employing the algebraic-diagrammatic construction (ADC) scheme of up to third order as well as time-dependent density-functional theory (TD-DFT), we demonstrate and evaluate the approach. For this purpose, we assembled a set of experimental benchmark data for solvatochromism in molecules (xBDSM) containing 44 gas-phase to solvent shifts for 17 molecules. These data are compared to solvent shifts calculated at the ADC(1), ADC(2), ADC(3/2), and TD-DFT/LRC-ωPBE levels of theory in combination with state-specific as well as linear-response type PCM-based correction schemes. Some unexpected trends and differences between TD-DFT, the levels of ADC, and variants of the PCM are observed and discussed. The most accurate combinations reproduce experimental solvent shifts resulting from the bulk electrostatic interaction with maximum errors in the order of 50 meV and a mean absolute deviation of 20-30 meV for the xBDSM set. PMID:25629414

  11. Project Return: 1985-1986. Evaluation Report.

    ERIC Educational Resources Information Center

    Grice, Michael

    This evaluation report of an attendance project in Portland, Oregon, public schools describes goals, methods, and results for 1985-86. The introduction states objectives of identifying, contacting, and counseling students leaving school or attending irregularly, with the purpose of guiding them into school or alternative educational programs. A…

  12. Federal Workplace Literacy Project. Internal Evaluation Report.

    ERIC Educational Resources Information Center

    Matuszak, David J.

    This report describes the following components of the Nestle Workplace Literacy Project: six job task analyses, curricula for six workplace basic skills training programs, delivery of courses using these curricula, and evaluation of the process. These six job categories were targeted for training: forklift loader/checker, BB's processing systems…

  13. CORRESPONDENCE STUDY EVALUATION PROJECT, STAGE 1.

    ERIC Educational Resources Information Center

    BALL, SANDRA J.; AND OTHERS

    AN ANALYSIS OF DATA COLLECTED FROM STUDENT REGISTRATION CARDS AND THE FORMULATION OF A STUDENT QUESTIONNAIRE CONSTITUTE THE FIRST PART OF A THREE-STAGE LONG-RANGE RESEARCH PROJECT TO EVALUATE A UNIVERSITY CORRESPONDENCE STUDY PROGRAM. THE DATA ANALYSIS DESCRIBES THE POPULATION OF CORRESPONDENCE STUDENTS IN TERMS OF RELEVANT INDIVIDUAL AND SOCIAL…

  14. Project BACSTOP Evaluation Report 1974-1975.

    ERIC Educational Resources Information Center

    Nelson, Neil; Martin, William

    Designed to observe changes in biracial student behavior brought about by Project BACSTOP (a series of structured experiences in a variety of wilderness settings meant to bring students of different races together in stressful adventure activities geared to promote interaction, communication, and cooperation), this evaluation studied five…

  15. Evaluation in the Anthropology Curriculum Project.

    ERIC Educational Resources Information Center

    Rice, Marion J.

    Reviewed in this summary are the seven evaluations completed by the Anthropology Curriculum Project (ACP) of their own materials for grades 1-7. These seven are: 1) cognitive achievement within the premises of a single discipline approach and differential teacher preparation; 2) differential cognitive achievement by grade level holding treatment by…

  16. Implementing Cognitive Behavioral Therapy for Chronic Fatigue Syndrome in a Mental Health Center: A Benchmarking Evaluation

    ERIC Educational Resources Information Center

    Scheeres, Korine; Wensing, Michel; Knoop, Hans; Bleijenberg, Gijs

    2008-01-01

    Objective: This study evaluated the success of implementing cognitive behavioral therapy (CBT) for chronic fatigue syndrome (CFS) in a representative clinical practice setting and compared the patient outcomes with those of previously published randomized controlled trials (RCTs) of CBT for CFS. Method: The implementation interventions were the…

  17. Examining Benchmark Indicator Systems for the Evaluation of Higher Education Institutions

    ERIC Educational Resources Information Center

    Garcia-Aracil, Adela; Palomares-Montero, Davinia

    2010-01-01

    Higher Education Institutions are undergoing important changes involving the development of new roles and missions, with implications for their structure. Governments and institutions are implementing strategies to ensure the proper performance of universities and several studies have investigated evaluation of universities through the development…

  18. Self-benchmarking Guide for Data Centers: Metrics, Benchmarks, Actions

    SciTech Connect

    Mathew, Paul; Ganguly, Srirupa; Greenberg, Steve; Sartor, Dale

    2009-07-13

    This guide describes energy efficiency metrics and benchmarks that can be used to track the performance of and identify potential opportunities to reduce energy use in data centers. This guide is primarily intended for personnel who have responsibility for managing energy use in existing data centers - including facilities managers, energy managers, and their engineering consultants. Additionally, data center designers may also use the metrics and benchmarks described in this guide for goal-setting in new construction or major renovation. This guide provides the following information: (1) A step-by-step outline of the benchmarking process. (2) A set of performance metrics for the whole building as well as individual systems. For each metric, the guide provides a definition, performance benchmarks, and potential actions that can be inferred from evaluating this metric. (3) A list and descriptions of the data required for computing the metrics. This guide is complemented by spreadsheet templates for data collection and for computing the benchmarking metrics. This guide builds on prior data center benchmarking studies supported by the California Energy Commission. Much of the benchmarking data are drawn from the LBNL data center benchmarking database that was developed from these studies. Additional benchmark data were obtained from engineering experts including facility designers and energy managers. This guide also builds on recent research supported by the U.S. Department of Energy's Save Energy Now program.
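
    As an example of the kind of whole-building metric such a guide tracks, the sketch below computes power usage effectiveness (PUE), the standard ratio of total facility energy to IT equipment energy; the guide's exact metric definitions are not reproduced here, and the annual totals are hypothetical.

    ```python
    # PUE: total facility energy over IT equipment energy (annual totals
    # here are hypothetical). Values closer to 1.0 indicate less overhead.
    def pue(total_facility_kwh, it_equipment_kwh):
        return total_facility_kwh / it_equipment_kwh

    print(f"PUE = {pue(8_760_000, 5_000_000):.2f}")   # 1.75
    ```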

  19. The International Reactor Physics Experiment Evaluation Project (IRPhEP)

    SciTech Connect

    Blair Briggs, J.; Sartori, E.; Scott, L.

    2006-07-01

    Since the beginning of the nuclear power industry, numerous experiments concerned with nuclear energy and technology have been performed at different research laboratories worldwide. These experiments required a large investment in terms of infrastructure, expertise, and cost; however, many were performed without a high degree of attention to archival of results for future use. The degree and quality of documentation varies greatly. There is an urgent need to preserve integral reactor physics experimental data, including measurement methods, techniques, and separate or special effects data for nuclear energy and technology applications and the knowledge and competence contained therein. If the data are compromised, it is unlikely that any of these experiments will be repeated again in the future. The International Reactor Physics Experiment Evaluation Project (IRPhEP) was initiated as a pilot activity in 1999 by the Organization for Economic Cooperation and Development (OECD) Nuclear Energy Agency (NEA) Nuclear Science Committee (NSC). The project was endorsed as an official activity of the NSC in June of 2003. The purpose of the IRPhEP is to provide an extensively peer reviewed set of reactor physics related integral benchmark data that can be used by reactor designers and safety analysts to validate the analytical tools used to design next generation reactors and establish the safety basis for operation of these reactors. A short history of the IRPhEP is presented and its purposes are discussed in this paper. Accomplishments of the IRPhEP, including the first publication of the IRPhEP Handbook, are highlighted and the future of the project outlined. (authors)

  20. The International Reactor Physics Experiment Evaluation Project (IRPHEP)

    SciTech Connect

    J. Blair Briggs; Enrico Sartori; Lori Scott

    2006-09-01

    Since the beginning of the nuclear power industry, numerous experiments concerned with nuclear energy and technology have been performed at different research laboratories worldwide. These experiments required a large investment in terms of infrastructure, expertise, and cost; however, many were performed without a high degree of attention to archival of results for future use. The degree and quality of documentation varies greatly. There is an urgent need to preserve integral reactor physics experimental data, including measurement methods, techniques, and separate or special effects data for nuclear energy and technology applications and the knowledge and competence contained therein. If the data are compromised, it is unlikely that any of these experiments will be repeated again in the future. The International Reactor Physics Experiment Evaluation Project (IRPhEP) was initiated as a pilot activity in 1999 by the Organization for Economic Cooperation and Development (OECD) Nuclear Energy Agency (NEA) Nuclear Science Committee (NSC). The project was endorsed as an official activity of the NSC in June of 2003. The purpose of the IRPhEP is to provide an extensively peer reviewed set of reactor physics related integral benchmark data that can be used by reactor designers and safety analysts to validate the analytical tools used to design next generation reactors and establish the safety basis for operation of these reactors. A short history of the IRPhEP is presented and its purposes are discussed in this paper. Accomplishments of the IRPhEP, including the first publication of the IRPhEP Handbook, are highlighted and the future of the project outlined.

  1. Kenya's Radio Language Arts Project: evaluation results.

    PubMed

    Oxford, R L

    1985-01-01

    The Kenya Radio Language Arts Project (RLAP), which has just been completed, documents the effectiveness of interactive radio-based educational instruction. Analyses in the areas of listening, reading, speaking, and writing show that children in radio classrooms consistently scored better than children in nonradio classrooms on every test. An evaluation of the project was conducted with the assistance of the Center for Applied Linguistics (CAL). Evaluation results came from a variety of sources, including language tests, observations, interviews, demographic and administrative records, and an attitude survey. A large proportion of the project's students were highly transient. Only 22% of the total student population of 3,908 were "normal progression" students--that is, they advanced regularly through their education during the life of the project. Students who moved from the area, failed a standard (grade), dropped out, or were otherwise untrackable comprised the remaining 78% of the total. Seven districts were included in the project. Tests were developed for listening and reading in Standards 1, 2, and 3, and for speaking and writing in Standards 2 and 3. The achievement tests were based on the official Kenya curriculum for those standards, so as to measure achievement against the curriculum. Nearly all the differences were highly statistically significant, with a probability of less than 1 in 1,000 that the findings could have occurred by chance. Standard 1 radio students scored nearly 8 points higher than their counterparts in the control group. Standard 2 and 3 radio students outperformed the control students by 4 points. The radio group consistently outperformed the control group in reading, writing, and speaking. Unstructured interviews and observations were conducted by the RLAP field staff. Overwhelmingly positive attitudes about the project prevailed among project teachers and headmasters. The data demonstrate that RLAP works. In fact, it works so

  2. CSAR Benchmark Exercise 2011–2012: Evaluation of Results from Docking and Relative Ranking of Blinded Congeneric Series

    PubMed Central

    2013-01-01

    The Community Structure–Activity Resource (CSAR) recently held its first blinded exercise based on data provided by Abbott, Vertex, and colleagues at the University of Michigan, Ann Arbor. A total of 20 research groups submitted results for the benchmark exercise, in which the goal was to compare different improvements for pose prediction, enrichment, and relative ranking of congeneric series of compounds. The exercise was built around blinded high-quality experimental data from four protein targets: LpxC, Urokinase, Chk1, and Erk2. Pose prediction proved to be the most straightforward task, and most methods were able to successfully reproduce binding poses when the crystal structure employed was co-crystallized with a ligand from the same chemical series. Multiple evaluation metrics were examined, and we found that RMSD and native contact metrics together provide a robust evaluation of the predicted poses. It was notable that most scoring functions underpredicted contacts between the heteroatoms (i.e., N, O, S, etc.) of the protein and ligand. Relative ranking was found to be the most difficult area for the methods, but many of the scoring functions were able to properly identify Urokinase actives from the inactives in the series. We also found that minimizing the protein and correcting histidine tautomeric states trended positively with low RMSD for pose prediction, whereas minimizing the ligand trended negatively. Pregenerated ligand conformations performed better than those that were generated on the fly. Optimizing docking parameters and pretraining with the native ligand had a positive effect on docking performance, as did using restraints, substructure fitting, and shape fitting. Finally, for both sampling and ranking scoring functions, the use of an empirical scoring function appeared to trend positively with low RMSD. Here, by combining the results of many methods, we hope to provide a statistically relevant evaluation and elucidate specific shortcomings
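
    The RMSD metric used to judge pose predictions is simple to state: the root-mean-square deviation over matched atom pairs, with no re-superposition, since docking is performed in the receptor frame. A minimal sketch with stand-in coordinates:

    ```python
    # Pose RMSD over matched atoms in the receptor frame (stand-in coordinates).
    import numpy as np

    predicted = np.array([[0.0, 0.0, 0.0], [1.5, 0.0, 0.0], [1.5, 1.5, 0.0]])
    crystal   = np.array([[0.1, 0.0, 0.0], [1.4, 0.2, 0.0], [1.6, 1.4, 0.1]])

    rmsd = np.sqrt(np.mean(np.sum((predicted - crystal) ** 2, axis=1)))
    print(f"RMSD = {rmsd:.2f} A")   # ~0.17 A here
    ```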

  3. CSAR Benchmark Exercise 2013: Evaluation of Results from a Combined Computational Protein Design, Docking, and Scoring/Ranking Challenge.

    PubMed

    Smith, Richard D; Damm-Ganamet, Kelly L; Dunbar, James B; Ahmed, Aqeel; Chinnaswamy, Krishnapriya; Delproposto, James E; Kubish, Ginger M; Tinberg, Christine E; Khare, Sagar D; Dou, Jiayi; Doyle, Lindsey; Stuckey, Jeanne A; Baker, David; Carlson, Heather A

    2016-06-27

    Community Structure-Activity Resource (CSAR) conducted a benchmark exercise to evaluate the current computational methods for protein design, ligand docking, and scoring/ranking. The exercise consisted of three phases. The first phase required the participants to identify and rank order which designed sequences were able to bind the small molecule digoxigenin. The second phase challenged the community to select a near-native pose of digoxigenin from a set of decoy poses for two of the designed proteins. The third phase investigated the ability of current methods to rank/score the binding affinity of 10 related steroids to one of the designed proteins (pKd = 4.1 to 6.7). We found that 11 of 13 groups were able to correctly select the sequence that bound digoxigenin, with most groups providing the correct three-dimensional structure for the backbone of the protein as well as all atoms of the active-site residues. Eleven of the 14 groups were able to select the appropriate pose from a set of plausible decoy poses. The ability to predict absolute binding affinities is still a difficult task, as 8 of 14 groups were able to correlate scores to affinity (Pearson-r > 0.7) of the designed protein for congeneric steroids and only 5 of 14 groups were able to correlate the ranks of the 10 related ligands (Spearman-ρ > 0.7). PMID:26419257
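
    The ranking criteria quoted above (Pearson r > 0.7 and Spearman ρ > 0.7 against measured affinity) can be reproduced directly; the values below are hypothetical predictions, not exercise data.

    ```python
    # Scoring a hypothetical affinity prediction against measured pKd values.
    from scipy.stats import pearsonr, spearmanr

    predicted_pkd = [6.5, 6.0, 5.7, 4.8, 4.3]   # hypothetical predictions
    measured_pkd  = [6.7, 6.1, 5.5, 5.0, 4.1]   # within the exercise's 4.1-6.7 range

    r, _ = pearsonr(predicted_pkd, measured_pkd)
    rho, _ = spearmanr(predicted_pkd, measured_pkd)
    print(f"Pearson r = {r:.2f}, Spearman rho = {rho:.2f}")
    ```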

  4. Small Commercial Program DOE Project: Impact evaluation

    SciTech Connect

    Bathgate, R.; Faust, S. )

    1992-08-12

    In 1991, Washington Electric Cooperative (WEC) implemented a Department of Energy grant to conduct a small commercial energy conservation project. The small commercial "Mom and Pop" grocery stores within WEC's service territory were selected as the target market for the project. Energy Solid Waste Consultant's (E SWC) Impact Evaluation is documented here. The evaluation was based on data gathered from a variety of sources, including load profile metering, kWh submeters, elapsed time indicators, and billing histories. Five stores were selected to receive measures under this program: Waits River General Store, Joe's Pond Store, Hastings Store, Walden General Store, and Adamant Cooperative. Specific measures installed in each store and a description of each are included.

  5. HANFORD DST THERMAL & SEISMIC PROJECT ANSYS BENCHMARK ANALYSIS OF SEISMIC INDUCED FLUID STRUCTURE INTERACTION IN A HANFORD DOUBLE SHELL PRIMARY TANK

    SciTech Connect

    MACKEY, T.C.

    2006-03-14

    M&D Professional Services, Inc. (M&D) is under subcontract to Pacific Northwest National Laboratory (PNNL) to perform seismic analysis of the Hanford Site Double-Shell Tanks (DSTs) in support of a project entitled "Double-Shell Tank (DST) Integrity Project--DST Thermal and Seismic Analyses". The overall scope of the project is to complete an up-to-date comprehensive analysis of record of the DST System at Hanford in support of Tri-Party Agreement Milestone M-48-14. The work described herein was performed in support of the seismic analysis of the DSTs. The thermal and operating loads analysis of the DSTs is documented in Rinker et al. (2004). The overall seismic analysis of the DSTs is being performed with the general-purpose finite element code ANSYS. The overall model used for the seismic analysis of the DSTs includes the DST structure, the contained waste, and the surrounding soil. The seismic analysis of the DSTs must address the fluid-structure interaction behavior and sloshing response of the primary tank and contained liquid. ANSYS has demonstrated capabilities for structural analysis, but the capabilities and limitations of ANSYS for performing fluid-structure interaction are less well understood. The purpose of this study is to demonstrate the capabilities and investigate the limitations of ANSYS for performing a fluid-structure interaction analysis of the primary tank and contained waste. To this end, the ANSYS solutions are benchmarked against theoretical solutions appearing in BNL 1995, when such theoretical solutions exist. When theoretical solutions were not available, comparisons were made to theoretical solutions of similar problems and to the results from Dytran simulations. The capabilities and limitations of the finite element code Dytran for performing a fluid-structure interaction analysis of the primary tank and contained waste were explored in a parallel investigation (Abatt 2006). In conjunction with the results of the global ANSYS analysis

  6. Testing (Validating?) Cross Sections with ICSBEP Benchmarks

    SciTech Connect

    Kahler, Albert C. III

    2012-06-28

    We discuss how to use critical benchmarks from the International Handbook of Evaluated Criticality Safety Benchmark Experiments to determine the applicability of specific cross sections to the end-user's problem of interest. Particular attention is paid to making sure the selected suite of benchmarks includes the user's range of applicability (ROA).

  7. Lessons Learned from Evaluating African Agricultural Training Projects.

    ERIC Educational Resources Information Center

    Jones, Stephen P.

    Since all Agency for International Development (AID) projects require an evaluation component, AID's emphasis on assistance to agriculture and rural development projects ensures a continuing need for skilled and expert personnel to evaluate those projects. Intended for potential AID project evaluators, this guide uses experience gained from field…

  8. Multivariate dynamical systems-based estimation of causal brain interactions in fMRI: Group-level validation using benchmark data, neurophysiological models and human connectome project data

    PubMed Central

    Ryali, Srikanth; Chen, Tianwen; Supekar, Kaustubh; Tu, Tao; Kochlka, John; Cai, Weidong; Menon, Vinod

    2016-01-01

    Background: Causal estimation methods are increasingly being used to investigate functional brain networks in fMRI, but there are continuing concerns about the validity of these methods. New Method: Multivariate Dynamical Systems (MDS) is a state-space method for estimating dynamic causal interactions in fMRI data. Here we validate MDS using benchmark simulations as well as simulations from a more realistic stochastic neurophysiological model. Finally, we applied MDS to investigate dynamic causal interactions in a fronto-cingulate-parietal control network using Human Connectome Project (HCP) data acquired during performance of a working memory task. Crucially, since the ground truth in experimental data is unknown, we conducted a novel stability analysis to determine robust causal interactions within this network. Results: MDS accurately recovered dynamic causal interactions with an area under the receiver operating characteristic curve (AUC) above 0.7 for benchmark datasets and AUC above 0.9 for datasets generated using the neurophysiological model. In experimental fMRI data, bootstrap procedures revealed a stable pattern of causal influences from the anterior insula to other nodes of the fronto-cingulate-parietal network. Comparison with Existing Methods: MDS is effective in estimating dynamic causal interactions in both the benchmark and neurophysiological model based datasets in terms of AUC, sensitivity and false positive rates. Conclusions: Our findings demonstrate that MDS can accurately estimate causal interactions in fMRI data. Neurophysiological models and stability analysis provide a general framework for validating computational methods designed to estimate causal interactions in fMRI. The right anterior insula functions as a causal hub during working memory. PMID:27015792
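
    The AUC criterion used above scores how well estimated edge strengths separate true from absent connections. A minimal sketch with synthetic ground truth (not HCP data):

    ```python
    # AUC over hypothetical edge recovery: 1 = true connection, 0 = absent.
    from sklearn.metrics import roc_auc_score

    true_edges  = [1, 1, 0, 0, 1, 0, 0, 1]
    edge_scores = [0.9, 0.4, 0.7, 0.2, 0.8, 0.3, 0.5, 0.6]
    print(f"AUC = {roc_auc_score(true_edges, edge_scores):.2f}")   # 0.81
    ```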

  9. NASA Countermeasures Evaluation and Validation Project

    NASA Technical Reports Server (NTRS)

    Lundquist, Charlie M.; Paloski, William H. (Technical Monitor)

    2000-01-01

    To support its ISS and exploration class mission objectives, NASA has developed a Countermeasure Evaluation and Validation Project (CEVP). The goal of this project is to evaluate and validate the optimal complement of countermeasures required to maintain astronaut health, safety, and functional ability during and after short- and long-duration space flight missions. The CEVP is the final element of the process in which ideas and concepts emerging from basic research evolve into operational countermeasures. The CEVP is accomplishing these objectives by conducting operational/clinical research to evaluate and validate countermeasures that mitigate the maladaptive responses associated with space flight. Evaluation is accomplished by testing in space flight analog facilities, and validation is accomplished by space flight testing. Both will utilize a standardized complement of integrated physiological and psychological tests, termed the Integrated Testing Regimen (ITR), to examine candidate countermeasure efficacy and intersystem effects. The CEVP emphasis is currently placed on validating the initial complement of ISS countermeasures targeting bone, muscle, and aerobic fitness, followed by countermeasures for neurological, psychological, immunological, nutrition and metabolism, and radiation risks associated with space flight. This presentation will review the processes, plans, and procedures that will enable CEVP to play a vital role in transitioning promising research results into operational countermeasures necessary to maintain crew health and performance during long duration space flight.

  10. The NIEHS Predictive-Toxicology Evaluation Project.

    PubMed

    Bristol, D W; Wachsman, J T; Greenwell, A

    1996-10-01

    The Predictive-Toxicology Evaluation (PTE) project conducts collaborative experiments that subject the performance of predictive-toxicology (PT) methods to rigorous, objective evaluation in a uniquely informative manner. Sponsored by the National Institute of Environmental Health Sciences, it takes advantage of the ongoing testing conducted by the U.S. National Toxicology Program (NTP) to estimate the true error of models that have been applied to make prospective predictions on previously untested, noncongeneric-chemical substances. The PTE project first identifies a group of standardized NTP chemical bioassays either scheduled to be conducted or are ongoing, but not yet complete. The project then announces and advertises the evaluation experiment, disseminates information about the chemical bioassays, and encourages researchers from a wide variety of disciplines to publish their predictions in peer-reviewed journals, using whatever approaches and methods they feel are best. A collection of such papers is published in this Environmental Health Perspectives Supplement, providing readers the opportunity to compare and contrast PT approaches and models, within the context of their prospective application to an actual-use situation. This introduction to this collection of papers on predictive toxicology summarizes the predictions made and the final results obtained for the 44 chemical carcinogenesis bioassays of the first PTE experiment (PTE-1) and presents information that identifies the 30 chemical carcinogenesis bioassays of PTE-2, along with a table of prediction sets that have been published to date. It also provides background about the origin and goals of the PTE project, outlines the special challenge associated with estimating the true error of models that aspire to predict open-system behavior, and summarizes what has been learned to date. PMID:8933048

  11. Self-benchmarking Guide for Laboratory Buildings: Metrics, Benchmarks, Actions

    SciTech Connect

    Mathew, Paul; Greenberg, Steve; Sartor, Dale

    2009-07-13

    This guide describes energy efficiency metrics and benchmarks that can be used to track the performance of and identify potential opportunities to reduce energy use in laboratory buildings. This guide is primarily intended for personnel who have responsibility for managing energy use in existing laboratory facilities - including facilities managers, energy managers, and their engineering consultants. Additionally, laboratory planners and designers may also use the metrics and benchmarks described in this guide for goal-setting in new construction or major renovation. This guide provides the following information: (1) A step-by-step outline of the benchmarking process. (2) A set of performance metrics for the whole building as well as individual systems. For each metric, the guide provides a definition, performance benchmarks, and potential actions that can be inferred from evaluating this metric. (3) A list and descriptions of the data required for computing the metrics. This guide is complemented by spreadsheet templates for data collection and for computing the benchmarking metrics. This guide builds on prior research supported by the national Laboratories for the 21st Century (Labs21) program, supported by the U.S. Department of Energy and the U.S. Environmental Protection Agency. Much of the benchmarking data are drawn from the Labs21 benchmarking database and technical guides. Additional benchmark data were obtained from engineering experts including laboratory designers and energy managers.

  12. Self-benchmarking Guide for Cleanrooms: Metrics, Benchmarks, Actions

    SciTech Connect

    Mathew, Paul; Sartor, Dale; Tschudi, William

    2009-07-13

    This guide describes energy efficiency metrics and benchmarks that can be used to track the performance of and identify potential opportunities to reduce energy use in cleanrooms. This guide is primarily intended for personnel who have responsibility for managing energy use in existing cleanroom facilities - including facilities managers, energy managers, and their engineering consultants. Additionally, cleanroom planners and designers may also use the metrics and benchmarks described in this guide for goal-setting in new construction or major renovation. This guide provides the following information: (1) A step-by-step outline of the benchmarking process. (2) A set of performance metrics for the whole building as well as individual systems. For each metric, the guide provides a definition, performance benchmarks, and potential actions that can be inferred from evaluating this metric. (3) A list and descriptions of the data required for computing the metrics. This guide is complemented by spreadsheet templates for data collection and for computing the benchmarking metrics. This guide builds on prior research supported by the national Laboratories for the 21st Century (Labs21) program, supported by the U.S. Department of Energy and the U.S. Environmental Protection Agency. Much of the benchmarking data are drawn from the Labs21 benchmarking database and technical guides. Additional benchmark data were obtained from engineering experts including cleanroom designers and energy managers.

  13. How Good Is Our School? Hungry for Success: Benchmarks for Self-Evaluation. Self-Evaluation Series

    ERIC Educational Resources Information Center

    Her Majesty's Inspectorate of Education, 2006

    2006-01-01

    This document is intended to build on the advice given in the publication "How good is our school?" It is intended to be of use to staff in local authorities and schools who are involved in implementing the recommendations of "Hungry for Success." This guide can be used to support staff in evaluating their effectiveness in implementing "Hungry for Success."…

  14. Wildlife habitat evaluation demonstration project. [Michigan

    NASA Technical Reports Server (NTRS)

    Burgoyne, G. E., Jr.; Visser, L. G.

    1981-01-01

    To support the deer range improvement project in Michigan, the capability of LANDSAT data in assessing deer habitat in terms of areas and mixes of species and age classes of vegetation is being examined to determine whether such data could substitute for traditional cover type information sources. A second goal of the demonstration project is to determine whether LANDSAT data can be used to supplement and improve the information normally used for making deer habitat management decisions, either by providing vegetative cover information for private land or by providing information about the interspersion and juxtaposition of valuable vegetative cover types. The procedure to be used for evaluating LANDSAT data for the Lake County test site is described.

  15. Color back projection for fruit maturity evaluation

    NASA Astrophysics Data System (ADS)

    Zhang, Dong; Lee, Dah-Jye; Desai, Alok

    2013-12-01

    In general, fruits and vegetables such as tomatoes and dates are harvested before they fully ripen. After harvesting, they continue to ripen and their color changes. Color is a good indicator of fruit maturity. For example, tomatoes change color from dark green to light green and then pink, light red, and dark red. Assessing tomato maturity helps maximize shelf life, and color is used to determine the length of time the tomatoes can be transported. Medjool dates change color from green to yellow, and then to orange, light red, and dark red. Assessing date maturity helps determine the length of the drying process needed to ripen the dates. Color evaluation is an important step in the processing and inventory control of fruits and vegetables that directly affects profitability. This paper presents an efficient color back projection and image processing technique designed specifically for real-time maturity evaluation of fruits. This color processing method requires only a simple training procedure to obtain the frequencies of colors that appear in each maturity stage. These color statistics are used to back project colors to predefined color indexes. Fruit maturity is then evaluated by analyzing the back-projected color indexes. This method has been implemented and used in commercial production.
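
    The core of histogram back projection can be sketched in a few lines. The following numpy-only sketch is a generic illustration of the technique named above - training per-stage color histograms and scoring an image against them - not the authors' implementation; the stage names, bin count, and synthetic training pixels are assumptions.

        import numpy as np

        # Minimal sketch of histogram back projection for maturity grading.
        # Stage names, bin count, and training pixels are illustrative only.

        N_BINS = 16  # coarse hue histogram over the 0-180 hue range

        def train_stage_histogram(hue_pixels):
            """Normalized hue histogram (color frequencies) for one maturity stage."""
            hist, _ = np.histogram(hue_pixels, bins=N_BINS, range=(0, 180))
            return hist / max(hist.sum(), 1)

        def back_project(hue_image, stage_hists):
            """Score each stage by the mean back-projected frequency of the
            image's pixels under that stage's histogram; return the best stage."""
            bins = np.clip(hue_image.ravel() * N_BINS // 180, 0, N_BINS - 1).astype(int)
            scores = {stage: h[bins].mean() for stage, h in stage_hists.items()}
            return max(scores, key=scores.get)

        # Toy training data: hue ~ 60 (green) for immature, ~ 5 (red) for ripe.
        rng = np.random.default_rng(0)
        hists = {
            "green": train_stage_histogram(rng.normal(60, 5, 5000) % 180),
            "red": train_stage_histogram(rng.normal(5, 4, 5000) % 180),
        }
        sample = rng.normal(8, 4, (64, 64)) % 180  # mostly red fruit surface
        print(back_project(sample, hists))  # -> "red"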

  16. Photovoltaic systems development and evaluation projects

    SciTech Connect

    Stevens, J.W.

    1985-02-01

    The Sixth Annual Photovoltaic Systems Development Projects Integrated Meeting was held at the Sheraton Old Town, March 5, 6, and 7, 1985, in Albuquerque, New Mexico. The meeting was sponsored by Sandia National Laboratories and the United States Department of Energy. This document contains abstracts and visual materials used for the presentations as well as current contract summaries. The topics of the presentations covered System Research, Utility Interface, Power Conditioning Development, Array Field Designs, and the Evaluation of Systems Level Experiments. A panel discussion held on the final day focused on the government role in PV system development.

  17. NASA teleconference pilot project evaluation for 1975

    NASA Technical Reports Server (NTRS)

    Fordyce, S. W.

    1976-01-01

    Tabular data were given to summarize the results of the NASA teleconferencing network pilot project for 1975. The 1,241 evaluation reports received indicate that almost 100,000 man-hours of teleconferences took place. The travel funds reported saved total about $1.44 million, which is about 10% of the NASA travel costs. Subtracting the cost of providing the teleconferencing networks, the net savings reported are $1.28 million (about 9% of the travel costs). The teleconferencing network has proved to be successful in conducting many management meetings and reviews within NASA and its contractors. In spite of difficulties caused by inexperience in teleconferencing and some equipment and circuit problems, the evaluation reports indicated the system was satisfactory in an overwhelming majority of cases.

  18. Mark 4A project training evaluation

    NASA Technical Reports Server (NTRS)

    Stephenson, S. N.

    1985-01-01

    A participant evaluation of Deep Space Network (DSN) project training is described. The Mark IVA project is an implementation to upgrade the tracking and data acquisition systems of the DSN. Approximately six hundred DSN operations and engineering maintenance personnel were surveyed. The survey obtained a convenience sample of trained people within the population in order to learn what training had taken place and to what effect. The survey questionnaire used modifications of standard rating scales to evaluate over one hundred items in four training dimensions. The scope of the evaluation included Mark IVA vendor training, a systems familiarization training seminar, engineering training classes, and on-the-job training. Measures of central tendency were computed from participant rating responses, and chi-square tests of statistical significance were performed on the data. The evaluation results indicated that the effects of different Mark IVA training methods could be measured according to certain ratings of technical training effectiveness, and that the Mark IVA technical training has had positive effects on the abilities of DSN personnel to operate and maintain new Mark IVA equipment systems.

  19. ELAPSE - NASA AMES LISP AND ADA BENCHMARK SUITE: EFFICIENCY OF LISP AND ADA PROCESSING - A SYSTEM EVALUATION

    NASA Technical Reports Server (NTRS)

    Davis, G. J.

    1994-01-01

    One area of research of the Information Sciences Division at NASA Ames Research Center is devoted to the analysis and enhancement of processors and advanced computer architectures, specifically in support of automation and robotic systems. To compare systems' abilities to efficiently process Lisp and Ada, scientists at Ames Research Center have developed a suite of non-parallel benchmarks called ELAPSE. The benchmark suite was designed to test a single computer's efficiency as well as to compare alternate machines on the Lisp and/or Ada languages. ELAPSE tests the efficiency with which a machine can execute the various routines in each environment. The sample routines are based on numeric and symbolic manipulations and include two-dimensional fast Fourier transformations, Cholesky decomposition and substitution, Gaussian elimination, high-level data processing, and symbol-list references. Also included is a routine based on a Bayesian classification program that sorts data into optimized groups. The ELAPSE benchmarks are available for any computer with a validated Ada compiler and/or Common Lisp system. Of the 18 routines that comprise ELAPSE, 14 were developed or translated at Ames; the others are readily available in the literature. The benchmark that requires the most memory is CHOLESKY.ADA. Under VAX/VMS, CHOLESKY.ADA requires 760K of main memory. ELAPSE is available on either two 5.25 inch 360K MS-DOS format diskettes (standard distribution) or a 9-track 1600 BPI ASCII CARD IMAGE format magnetic tape. The contents of the diskettes are compressed using the PKWARE archiving tools. The utility to unarchive the files, PKUNZIP.EXE, is included. The ELAPSE benchmarks were written in 1990. VAX and VMS are trademarks of Digital Equipment Corporation. MS-DOS is a registered trademark of Microsoft Corporation.
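
    To make the idea of a routine-level efficiency test concrete, here is a minimal, hypothetical timing harness in Python (ELAPSE itself is written in Lisp and Ada). It times two kernels analogous to the listed routines; the sizes and repeat counts are arbitrary assumptions.

        import time
        import numpy as np

        # Illustrative timing harness in the spirit of ELAPSE; not part of the suite.

        def time_routine(fn, repeats=5):
            """Best-of-N wall-clock time in seconds for one benchmark routine."""
            best = float("inf")
            for _ in range(repeats):
                t0 = time.perf_counter()
                fn()
                best = min(best, time.perf_counter() - t0)
            return best

        rng = np.random.default_rng(1)
        a = rng.standard_normal((512, 512))
        spd = a @ a.T + 512 * np.eye(512)      # symmetric positive definite matrix
        signal = rng.standard_normal((256, 256))

        print("Cholesky :", time_routine(lambda: np.linalg.cholesky(spd)))
        print("2-D FFT  :", time_routine(lambda: np.fft.fft2(signal)))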

  20. Evaluation in Adult Literacy Research. Project ALERT. Phase II.

    ERIC Educational Resources Information Center

    Ntiri, Daphne Williams, Ed.

    This document contains an evaluation handbook for adult literacy programs and feedback from/regarding the evaluation instruments developed during the project titled Adult Literacy and Evaluation Research Team (also known as Project ALERT), a two-phase project initiated by the Detroit Literacy Coalition (DLC) for the purpose of developing and…

  1. Evaluation of Title I ESEA Projects: 1975-76.

    ERIC Educational Resources Information Center

    Philadelphia School District, PA. Office of Research and Evaluation.

    Evaluation services to be provided during 1975-76 to projects funded under the Elementary and Secondary Education Act Title I are listed in this annual booklet. For each project, the following information is provided: goals to be assessed, evaluation techniques (design), and evaluation milestones. Regular term and summer term projects reported on…

  2. Toxicological Benchmarks for Screening Potential Contaminants of Concern for Effects on Sediment-Associated Biota

    SciTech Connect

    Hull, R.N.

    1993-01-01

    A hazardous waste site may contain hundreds of chemicals; therefore, it is important to screen contaminants of potential concern for the ecological risk assessment. Often this screening is done as part of a screening assessment, the purpose of which is to evaluate the available data, identify data gaps, and screen contaminants of potential concern. Screening may be accomplished by using a set of toxicological benchmarks. These benchmarks are helpful in determining whether contaminants warrant further assessment or are at a level that requires no further attention. If a chemical concentration or the reported detection limit exceeds a proposed lower benchmark, further analysis is needed to determine the hazards posed by that chemical. If, however, the chemical concentration falls below the lower benchmark value, the chemical may be eliminated from further study. The use of multiple benchmarks is recommended for screening chemicals of concern in sediments. Integrative benchmarks developed for the National Oceanic and Atmospheric Administration and the Florida Department of Environmental Protection are included for inorganic and organic chemicals. Equilibrium partitioning benchmarks are included for screening nonionic organic chemicals. Freshwater sediment effect concentrations developed as part of the U.S. Environmental Protection Agency's (EPA's) Assessment and Remediation of Contaminated Sediment Project are included for inorganic and organic chemicals (EPA 1996). Field survey benchmarks developed for the Ontario Ministry of the Environment are included for inorganic and organic chemicals. In addition, EPA-proposed sediment quality criteria are included along with screening values from EPA Region IV and Ecotox Threshold values from the EPA Office of Solid Waste and Emergency Response. Pore water analysis is recommended for ionic organic compounds; comparisons are then made against water quality benchmarks. This report is an update of three prior reports (Jones et al
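
    The screening decision rule described above is simple enough to sketch. In the sketch below, the benchmark values are invented placeholders, not the report's tables, and the non-detect handling is one plausible reading of the rule.

        # Sketch of the multiple-benchmark screening logic described above.
        # Benchmark values are hypothetical placeholders.

        LOWER_BENCHMARKS_MG_KG = {"cadmium": 0.6, "pyrene": 0.15}

        def screen(chemical, concentration, detection_limit):
            """Retain a chemical for further assessment if its concentration
            (or, for non-detects, the reported detection limit) exceeds the
            lower screening benchmark; otherwise drop it from further study."""
            benchmark = LOWER_BENCHMARKS_MG_KG.get(chemical)
            if benchmark is None:
                return "no benchmark: retain by default and flag the data gap"
            value = concentration if concentration > 0 else detection_limit
            if value > benchmark:
                return "exceeds lower benchmark: contaminant of potential concern"
            return "below lower benchmark: eliminate from further study"

        print(screen("cadmium", concentration=1.2, detection_limit=0.1))
        print(screen("pyrene", concentration=0.0, detection_limit=0.05))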

  3. Robust Multivariable Flutter Suppression for the Benchmark Active Control Technology (BACT) Wind-Tunnel Model

    NASA Technical Reports Server (NTRS)

    Waszak, Martin R.

    1997-01-01

    The Benchmark Active Controls Technology (BACT) project is part of NASA Langley Research Center's Benchmark Models Program for studying transonic aeroelastic phenomena. In January of 1996 the BACT wind-tunnel model was used to successfully demonstrate the application of robust multivariable control design methods (H∞ and μ-synthesis) to flutter suppression. This paper addresses the design and experimental evaluation of robust multivariable flutter suppression control laws with particular attention paid to the degree to which stability and performance robustness was achieved.

  4. Nuclear Data Performance Testing Using Sensitive, but Less Frequently Used ICSBEP Benchmarks

    SciTech Connect

    J. Blair Briggs; John D. Bess

    2011-08-01

    The International Criticality Safety Benchmark Evaluation Project (ICSBEP) has published the International Handbook of Evaluated Criticality Safety Benchmark Experiments annually since 1995. The Handbook now spans over 51,000 pages with benchmark specifications for 4,283 critical, near critical, or subcritical configurations; 24 criticality alarm placement/shielding configurations with multiple dose points for each; and 200 configurations that have been categorized as fundamental physics measurements relevant to criticality safety applications. Benchmark data in the ICSBEP Handbook were originally intended for validation of criticality safety methods and data; however, the benchmark specifications are now used extensively for nuclear data testing. Several less frequently used benchmarks within the Handbook are very sensitive to thorium and certain key structural and moderating materials. Calculated results for many of those benchmarks using modern nuclear data libraries suggest there is still room for improvement. These and other highly sensitive, but rarely quoted, benchmarks are highlighted, and data testing results are provided using the Monte Carlo N-Particle Version 5 (MCNP5) code and continuous-energy ENDF/B-V, VI.8, and VII.0, JEFF-3.1, and JENDL-3.3 nuclear data libraries.

  5. Benchmarking in Student Affairs.

    ERIC Educational Resources Information Center

    Mosier, Robert E.; Schwarzmueller, Gary J.

    2002-01-01

    Discusses the use of benchmarking in student affairs, focusing on issues related to student housing. Provides examples of how benchmarking has influenced administrative practice at many institutions. (EV)

  6. Radiography benchmark 2014

    NASA Astrophysics Data System (ADS)

    Jaenisch, G.-R.; Deresch, A.; Bellon, C.; Schumm, A.; Lucet-Sanchez, F.; Guerin, P.

    2015-03-01

    The purpose of the 2014 WFNDEC RT benchmark study was to compare predictions of various models of radiographic techniques, in particular those that predict the contribution of scattered radiation. All calculations were carried out for homogeneous materials and a mono-energetic X-ray point source in the energy range between 100 keV and 10 MeV. The calculations were to include the best physics approach available considering electron binding effects. Secondary effects like X-ray fluorescence and bremsstrahlung production were to be taken into account if possible. The problem to be considered had two parts. Part I examined the spectrum and the spatial distribution of radiation behind a single iron plate. Part II considered two equally sized plates, made of iron and aluminum respectively, only evaluating the spatial distribution. Here we present the results of the above benchmark study, comparing them to MCNP as the assumed reference model. The possible origins of the observed deviations are discussed.

  7. Radiography benchmark 2014

    SciTech Connect

    Jaenisch, G.-R.; Deresch, A.; Bellon, C.; Schumm, A.; Lucet-Sanchez, F.; Guerin, P.

    2015-03-31

    The purpose of the 2014 WFNDEC RT benchmark study was to compare predictions of various models of radiographic techniques, in particular those that predict the contribution of scattered radiation. All calculations were carried out for homogeneous materials and a mono-energetic X-ray point source in the energy range between 100 keV and 10 MeV. The calculations were to include the best physics approach available considering electron binding effects. Secondary effects like X-ray fluorescence and bremsstrahlung production were to be taken into account if possible. The problem to be considered had two parts. Part I examined the spectrum and the spatial distribution of radiation behind a single iron plate. Part II considered two equally sized plates, made of iron and aluminum respectively, only evaluating the spatial distribution. Here we present the results of the above benchmark study, comparing them to MCNP as the assumed reference model. The possible origins of the observed deviations are discussed.

  8. Evaluation and comparison of benchmark QSAR models to predict a relevant REACH endpoint: The bioconcentration factor (BCF)

    SciTech Connect

    Gissi, Andrea; Lombardo, Anna; Roncaglioni, Alessandra; Gadaleta, Domenico; Mangiatordi, Giuseppe Felice; Nicolotti, Orazio; Benfenati, Emilio

    2015-02-15

    …=0.85) and sensitivity (average >0.70) for new compounds in the AD but not present in the training set. However, no single optimal model exists and, thus, a case-by-case assessment would be wise. Yet, integrating the wealth of information from multiple models remains the winning approach. - Highlights: • REACH encourages the use of in silico methods in the assessment of chemicals safety. • The performances of nine BCF models were evaluated on a benchmark database of 851 chemicals. • We compared the models on the basis of both regression and classification performance. • Statistics on chemicals out of the training set and/or within the applicability domain were compiled. • The results show that QSAR models are useful as weight-of-evidence in support of other methods.

  9. Managing for Results in America's Great City Schools. A Report of the Performance Measurement and Benchmarking Project

    ERIC Educational Resources Information Center

    Council of the Great City Schools, 2012

    2012-01-01

    "Managing for Results in America's Great City Schools, 2012" is presented by the Council of the Great City Schools to its members and the public. The purpose of the project was and is to develop performance measures that can improve the business operations of urban public school districts nationwide. This year's report includes data from 61 of the…

  10. Peso Bilingual Language Development Project. Project Evaluation, June 30, 1970.

    ERIC Educational Resources Information Center

    Peso Education Service Center Region 16, Amarillo, TX.

    The "PESO" Bilingual Language Development Project was a 1-year pilot study in 4 West Texas county school districts involving 451 Anglo and Mexican American 1st- and 2nd-grade students. The project contained 3 components: (1) the development of bilingual oral and written language skills--instruction in the Spanish language, and the concomitant…

  11. Framework for the Evaluation of an IT Project Portfolio

    ERIC Educational Resources Information Center

    Tai, W. T.

    2010-01-01

    The basis for evaluating projects in an organizational IT project portfolio includes complexity factors, arguments/criteria, and procedures, with various implications. The purpose of this research was to develop a conceptual framework for IT project proposal evaluation. The research involved using a heuristic roadmap and the mind-mapping method to…

  12. Modelling in Evaluating a Working Life Project in Higher Education

    ERIC Educational Resources Information Center

    Sarja, Anneli; Janhonen, Sirpa; Havukainen, Pirjo; Vesterinen, Anne

    2012-01-01

    This article describes an evaluation method based on collaboration between higher education, a care home, and a university in an R&D project. The aim of the project was to elaborate modelling as a tool of developmental evaluation for innovation and competence in project cooperation. The approach was based on activity theory. Modelling enabled a…

  13. 23 CFR 505.11 - Project evaluation and rating.

    Code of Federal Regulations, 2014 CFR

    2014-04-01

    ... 23 Highways 1 2014-04-01 2014-04-01 false Project evaluation and rating. 505.11 Section 505.11 Highways FEDERAL HIGHWAY ADMINISTRATION, DEPARTMENT OF TRANSPORTATION TRANSPORTATION INFRASTRUCTURE MANAGEMENT PROJECTS OF NATIONAL AND REGIONAL SIGNIFICANCE EVALUATION AND RATING § 505.11 Project...

  14. 23 CFR 505.11 - Project evaluation and rating.

    Code of Federal Regulations, 2013 CFR

    2013-04-01

    ... 23 Highways 1 2013-04-01 2013-04-01 false Project evaluation and rating. 505.11 Section 505.11 Highways FEDERAL HIGHWAY ADMINISTRATION, DEPARTMENT OF TRANSPORTATION TRANSPORTATION INFRASTRUCTURE MANAGEMENT PROJECTS OF NATIONAL AND REGIONAL SIGNIFICANCE EVALUATION AND RATING § 505.11 Project...

  15. 23 CFR 505.11 - Project evaluation and rating.

    Code of Federal Regulations, 2012 CFR

    2012-04-01

    ... 23 Highways 1 2012-04-01 2012-04-01 false Project evaluation and rating. 505.11 Section 505.11 Highways FEDERAL HIGHWAY ADMINISTRATION, DEPARTMENT OF TRANSPORTATION TRANSPORTATION INFRASTRUCTURE MANAGEMENT PROJECTS OF NATIONAL AND REGIONAL SIGNIFICANCE EVALUATION AND RATING § 505.11 Project...

  16. 23 CFR 505.11 - Project evaluation and rating.

    Code of Federal Regulations, 2010 CFR

    2010-04-01

    ... 23 Highways 1 2010-04-01 2010-04-01 false Project evaluation and rating. 505.11 Section 505.11 Highways FEDERAL HIGHWAY ADMINISTRATION, DEPARTMENT OF TRANSPORTATION TRANSPORTATION INFRASTRUCTURE MANAGEMENT PROJECTS OF NATIONAL AND REGIONAL SIGNIFICANCE EVALUATION AND RATING § 505.11 Project...

  17. 23 CFR 505.11 - Project evaluation and rating.

    Code of Federal Regulations, 2011 CFR

    2011-04-01

    ... 23 Highways 1 2011-04-01 2011-04-01 false Project evaluation and rating. 505.11 Section 505.11 Highways FEDERAL HIGHWAY ADMINISTRATION, DEPARTMENT OF TRANSPORTATION TRANSPORTATION INFRASTRUCTURE MANAGEMENT PROJECTS OF NATIONAL AND REGIONAL SIGNIFICANCE EVALUATION AND RATING § 505.11 Project...

  18. Community Based Child Advocacy Projects: A Study in Evaluation.

    ERIC Educational Resources Information Center

    Kamerman, Sheila B.

    This report describes a study of 23 community-based child advocacy projects, located in 14 states and 20 cities, and outlines a strategy for evaluating such projects. Data on each project's history, development, and current activities were obtained. Data were analyzed to (1) determine how such projects are started and become operational, (2)…

  19. Towards Systematic Benchmarking of Climate Model Performance

    NASA Astrophysics Data System (ADS)

    Gleckler, P. J.

    2014-12-01

    The process by which climate models are evaluated has evolved substantially over the past decade, with the Coupled Model Intercomparison Project (CMIP) serving as a centralizing activity for coordinating model experimentation and enabling research. Scientists with a broad spectrum of expertise have contributed to the CMIP model evaluation process, resulting in many hundreds of publications that have served as a key resource for the IPCC process. For several reasons, efforts are now underway to further systematize some aspects of the model evaluation process. First, some model evaluation can now be considered routine and should not require "re-inventing the wheel" or a journal publication simply to update results with newer models. Second, the benefit of CMIP research to model development has not been optimal because the publication of results generally takes several years and is usually not reproducible for benchmarking newer model versions. And third, there are now hundreds of model versions and many thousands of simulations, but there is no community-based mechanism for routinely monitoring model performance changes. An important change in the design of CMIP6 can help address these limitations. CMIP6 will include a small set of standardized experiments as an ongoing exercise (CMIP "DECK": ongoing Diagnostic, Evaluation and Characterization of Klima), so that modeling groups can submit them at any time and not be overly constrained by deadlines. In this presentation, efforts to establish routine benchmarking of existing and future CMIP simulations will be described. To date, some benchmarking tools have been made available to all CMIP modeling groups to enable them to readily compare with CMIP5 simulations during the model development process. A natural extension of this effort is to make results from all CMIP simulations widely available, including the results from newer models as soon as the simulations become available for research. Making the results from routine
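
    As a toy illustration of what one "routine" benchmark statistic might look like, the sketch below computes an area-weighted RMSE of a synthetic model field against a synthetic reference climatology. The grid and data are invented; real CMIP benchmarking tools add regridding, masking, seasonal decomposition, and many more metrics.

        import numpy as np

        # Minimal sketch of an area-weighted model-vs-reference RMSE; synthetic data.

        def area_weighted_rmse(model, ref, lat_deg):
            """RMSE over a lat-lon grid, weighting rows by cos(latitude)."""
            w = np.cos(np.deg2rad(lat_deg))[:, None] * np.ones_like(model)
            return float(np.sqrt(np.average((model - ref) ** 2, weights=w)))

        lat = np.linspace(-89.0, 89.0, 90)                 # 90 x 144 lat-lon grid
        ref = 288.0 - 30.0 * np.sin(np.deg2rad(lat))[:, None] ** 2 * np.ones((90, 144))
        model = ref + np.random.default_rng(2).normal(0.0, 1.5, ref.shape)
        print(f"area-weighted RMSE: {area_weighted_rmse(model, ref, lat):.2f} K")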

  20. Design Alternatives for Evaluating the Impact of Conservation Projects

    ERIC Educational Resources Information Center

    Margoluis, Richard; Stem, Caroline; Salafsky, Nick; Brown, Marcia

    2009-01-01

    Historically, examples of project evaluation in conservation were rare. In recent years, however, conservation professionals have begun to recognize the importance of evaluation both for accountability and for improving project interventions. Even with this growing interest in evaluation, the conservation community has paid little attention to…

  1. Processor Emulator with Benchmark Applications

    SciTech Connect

    Lloyd, G. Scott; Pearce, Roger; Gokhale, Maya

    2015-11-13

    A processor emulator and a suite of benchmark applications have been developed to assist in characterizing the performance of data-centric workloads on current and future computer architectures. Some of the applications have been collected from other open source projects. For more details on the emulator and an example of its usage, see reference [1].

  2. Benchmark of 3D halo neutral simulation in TRANSP and FIDASIM and application to projected neutral-beam-heated NSTX-U plasmas

    NASA Astrophysics Data System (ADS)

    Liu, D.; Medley, S. S.; Gorelenkova, M. V.; Heidbrink, W. W.; Stagner, L.

    2014-10-01

    A cloud of halo neutrals is created in the vicinity of the beam footprint during neutral beam injection, and the halo neutral density can be comparable to the beam neutral density. Proper modeling of halo neutrals is critical to correctly interpret neutral particle analyzer (NPA) and fast-ion D-alpha (FIDA) signals, since these signals strongly depend on the local beam and halo neutral density. A 3D halo neutral model has recently been developed and implemented inside the TRANSP code. The 3D halo neutral code uses a ``beam-in-a-box'' model that encompasses both injected beam neutrals and resulting halo neutrals. Upon deposition by charge exchange, a subset of the full, one-half and one-third beam energy components produce thermal halo neutrals that are tracked through successive halo neutral generations until an ionization event occurs or a descendant halo exits the box. A benchmark between the 3D halo neutral model in TRANSP and the FIDA/NPA synthetic diagnostic code FIDASIM is carried out. A detailed comparison of halo neutral density profiles from the two codes will be shown. The NPA and FIDA simulations with and without 3D halos are applied to projections of plasma performance for the National Spherical Torus eXperiment-Upgrade (NSTX-U), and the effects of halo neutral density on NPA and FIDA signal amplitude and profile will be presented. Work supported by US DOE.
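
    The generation-tracking logic described above can be caricatured with a toy Monte Carlo sketch: each deposited neutral either charge-exchanges into the next halo generation, is ionized, or leaves the box. The branching probabilities below are invented for illustration and have no physical calibration; they are not TRANSP or FIDASIM quantities.

        import random

        # Toy Monte Carlo of successive halo generations; probabilities are invented.
        P_CHARGE_EXCHANGE = 0.45   # neutralize a thermal ion -> next halo generation
        P_IONIZATION = 0.35        # halo is ionized: chain terminates
        # remaining probability: the neutral exits the box

        def halo_generations(rng):
            """Number of halo generations descended from one deposited neutral."""
            generations = 0
            while True:
                u = rng.random()
                if u < P_CHARGE_EXCHANGE:
                    generations += 1          # spawn the next halo generation
                elif u < P_CHARGE_EXCHANGE + P_IONIZATION:
                    return generations        # ionization event ends the chain
                else:
                    return generations        # descendant exits the box

        rng = random.Random(42)
        samples = [halo_generations(rng) for _ in range(100_000)]
        print("mean halo generations per deposited neutral:", sum(samples) / len(samples))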

  3. Computational Chemistry Comparison and Benchmark Database

    National Institute of Standards and Technology Data Gateway

    SRD 101 NIST Computational Chemistry Comparison and Benchmark Database (Web, free access)   The NIST Computational Chemistry Comparison and Benchmark Database is a collection of experimental and ab initio thermochemical properties for a selected set of molecules. The goals are to provide a benchmark set of molecules for the evaluation of ab initio computational methods and allow the comparison between different ab initio computational methods for the prediction of thermochemical properties.

  4. Social Studies Project Evaluation: Case Study and Recommendations.

    ERIC Educational Resources Information Center

    Napier, John

    1982-01-01

    Describes the development and application of a model for social studies program evaluations. A case study showing how the model's three-step process was used to evaluate the Improving Citizenship Education Project in Fulton County, Georgia is included. (AM)

  5. Evaluation as Arbitration: External Evaluation of a Multilateral Development Project in a Third World Country.

    ERIC Educational Resources Information Center

    Bhola, H. S.

    Evaluation as a form of political arbitration is discussed in the case of a multilateral literacy development project in the fourth year of operation in a Third World country. An external evaluation team was invited to evaluate the project when conflict appeared between the funding agency (A) and the technical agency (B) over a project-related…

  6. Wais-III norms for working-age adults: a benchmark for conducting vocational, career, and employment-related evaluations.

    PubMed

    Fjordbak, Timothy; Fjordbak, Bess Sirmon

    2005-02-01

    The Wechsler Intelligence Scales are routinely used to assess threshold variables which correlate with subsequent job performance. Intellectual testing within educational and clinical settings accommodates natural developmental changes by referencing results to restricted age-band norms. However, accuracy in vocational and career consultation, as well as equity in hiring and promotion, requires the application of a single normative benchmark unbiased by chronological age. Such unitary norms for working-age adults (18- to 64-yr.-olds) were derived from the WAIS-III standardization sample in accord with the proportional representation of the seven age-bands subsumed within this age range. Tabular summaries of results are given for the conversion of raw scores to scaled scores for the working-age population, which can be used to derive IQ values and Index Scores. PMID:15825898
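
    The proportional-weighting idea can be illustrated with a small sketch that pools hypothetical age-band means into a single working-age value. The band labels follow the seven WAIS-III bands in this range, but all weights and scores are invented placeholders, not WAIS-III data.

        # Pool age-band norms into one working-age reference by population weight.
        # All weights and mean scores below are invented placeholders.

        age_bands = {                # band: (population weight, mean raw score)
            "18-19": (0.05, 46.0),
            "20-24": (0.12, 48.0),
            "25-29": (0.12, 49.0),
            "30-34": (0.12, 48.5),
            "35-44": (0.24, 47.0),
            "45-54": (0.21, 44.5),
            "55-64": (0.14, 41.0),
        }

        total_w = sum(w for w, _ in age_bands.values())
        pooled_mean = sum(w * m for w, m in age_bands.values()) / total_w
        print(f"pooled working-age mean raw score: {pooled_mean:.2f}")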

  7. Global and local scale flood discharge simulations in the Rhine River basin for flood risk reduction benchmarking in the Flagship Project

    NASA Astrophysics Data System (ADS)

    Gädeke, Anne; Gusyev, Maksym; Magome, Jun; Sugiura, Ai; Cullmann, Johannes; Takeuchi, Kuniyoshi

    2015-04-01

    A global flood risk assessment is a prerequisite to setting the global measurable targets of the post-Hyogo Framework for Action (HFA) that mobilize international cooperation and national coordination towards disaster risk reduction (DRR), and it requires the establishment of a uniform flood risk assessment methodology on various scales. To address these issues, the International Flood Initiative (IFI) launched a Flagship Project in 2013 to support flood risk reduction benchmarking at global, national and local levels. In the Flagship Project road map, it is planned to identify the original risk (1), to identify the reduced risk (2), and to facilitate risk reduction actions (3). In order to achieve this goal at global, regional and local scales, international research collaboration is absolutely necessary, involving domestic and international institutes, academia and research networks such as UNESCO International Centres. The joint collaboration by ICHARM and BfG was the first attempt, producing the first-step (1a) results on flood discharge estimates, with inundation maps under way. As a result of this collaboration, we demonstrate the outcomes of the first step of the IFI Flagship Project to identify flood hazard in the Rhine river basin on the global and local scale. In our assessment, we utilized a distributed hydrological Block-wise TOP (BTOP) model on 20-km and 0.5-km scales with local precipitation and temperature input data between 1980 and 2004. We utilized the existing 20-km BTOP model, which is applied globally, and constructed a local-scale 0.5-km BTOP model for the Rhine River basin. Both the calibrated 20-km and 0.5-km BTOP models had similar statistical performance and reproduced observed flood discharges, especially for the 1993 and 1995 floods. From the 20-km and 0.5-km BTOP simulations, the flood discharges of the selected return period were estimated using flood frequency analysis and were comparable to
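
    As a hedged sketch of the flood frequency analysis step mentioned above, the code below fits a Gumbel (EV1) distribution to synthetic annual-maximum discharges and evaluates the quantile for selected return periods. The data are not Rhine observations, and the study's actual method may differ in detail.

        import numpy as np
        from scipy import stats

        # Fit a Gumbel distribution to (synthetic) annual maxima and compute
        # return-period discharges; all numbers are illustrative.

        rng = np.random.default_rng(3)
        annual_max_m3s = stats.gumbel_r.rvs(loc=6000, scale=1500, size=25, random_state=rng)

        loc, scale = stats.gumbel_r.fit(annual_max_m3s)
        for T in (10, 50, 100):                      # return periods in years
            q = stats.gumbel_r.ppf(1.0 - 1.0 / T, loc=loc, scale=scale)
            print(f"T = {T:3d} yr  ->  Q ~ {q:7.0f} m^3/s")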

  8. Evaluation of the Matrix Project. Interchange 77.

    ERIC Educational Resources Information Center

    McIvor, Gill; Moodie, Kristina

    The Matrix Project is a program that has been established in central Scotland with the aim of reducing the risk of offending and anti-social behavior among vulnerable children. The project provides a range of services to children between eight and 11 years of age who are at risk in the local authority areas of Clackmannanshire, Falkirk and…

  9. Case Decision Project. Final Report (Process Evaluation).

    ERIC Educational Resources Information Center

    McDaniel, Garry

    The goal of the Case Decision Project (CDP) was to develop a method to improve the efficiency and effectiveness of program management in child protective services in Texas. At the outset of the project, workers across the state had no uniform method of obtaining case information. Therefore, an automated case investigation system was developed.…

  10. ELT in Albania: Project Evaluation and Change.

    ERIC Educational Resources Information Center

    Dushku, S.

    1998-01-01

    Discusses the design and implementation of the British Council English-language-teaching (ELT) project at the University of Tirana in Albania. Through analysis of the project and discussion of the appropriateness of its methodology to the Albanian social and professional context, factors are highlighted that account for the ephemeral nature of…

  11. Small Business Learning through Mentoring: Evaluating a Project

    ERIC Educational Resources Information Center

    Barrett, Rowena

    2006-01-01

    Purpose: The purpose of this paper is to evaluate a small business-mentoring project, which was delivered in regional Australia. Design/methodology/approach: This paper contains a case study of the mentoring project and focuses on the process and the outcomes of that project from different perspectives. Data collected in semi structured telephone…

  12. PLATO across the Curriculum: An Evaluation of a Project.

    ERIC Educational Resources Information Center

    Freer, David

    1986-01-01

    A project at the University of Witwatersrand examined the implications of introducing a centrally controlled system of computer-based learning in which 13 university departments utilized PLATO to supplement teaching programs and encourage computer literacy. Department project descriptions and project evaluations (which reported positive student…

  13. Fine Arts Educational Improvement Project. Evaluation Record 1969-1970.

    ERIC Educational Resources Information Center

    Baber, Eric

    This document is an evaluation and record of the Fine Arts Educational Improvement Project, a Title III, E.S.E.A. "PACE" project administered in the state of Illinois. The project functioned primarily in the subject fields of art, drama, and music. Within the general purpose of improving educational opportunities in the arts, the project…

  14. Project Aprendizaje. 1990-91 Final Evaluation Profile. OREA Report.

    ERIC Educational Resources Information Center

    New York City Board of Education, Brooklyn, NY. Office of Research, Evaluation, and Assessment.

    An evaluation was done of New York City Public Schools' Project Aprendizaje, which served disadvantaged, immigrant, Spanish-speaking high school students at Seward Park High School in Manhattan. The Project enrolled 290 students in grades 9 through 12, 93.1 percent of whom were eligible for the Free Lunch Program. The Project provided students of…

  15. What NSF Expects in Project Evaluations for Educational Innovations.

    ERIC Educational Resources Information Center

    Hannah, Judith L.

    1996-01-01

    The National Science Foundation (NSF) sponsors a range of programs to fund innovative approaches to teaching and learning. Focuses on NSF's expectations for project evaluation beginning with a definition of evaluation and a discussion of why evaluation is needed. Also describes planning, formative, and summative evaluation stages and concludes…

  16. 43 CFR 10005.20 - Project evaluation procedures.

    Code of Federal Regulations, 2011 CFR

    2011-10-01

    .... (e) Using best professional judgement, Commission staff will evaluate each project according to the... professional judgement using quantitative and/or qualitative rating techniques as appropriate. A given project..., the evaluation will be conducted using best professional judgement and may involve a variety...

  17. The Program Evaluator's Role in Cross-Project Pollination.

    ERIC Educational Resources Information Center

    Yasgur, Bruce J.

    An expanded duties role of the multiple-program evaluator as an integral part of the ongoing decision-making process in all projects served is defended. Assumptions discussed included that need for projects with related objectives to pool resources and avoid duplication of effort and the evaluator's unique ability to provide an objective…

  18. Evaluation in Adult Literacy Research. Project ALERT. [Phase I.

    ERIC Educational Resources Information Center

    Ntiri, Daphne Williams, Ed.

    The Adult Literacy and Evaluation Research Team (also known as Project ALERT) was a project conducted by the Detroit Literacy Coalition (DLC) at Wayne State University in 1993-1994 to develop and pilot a user-friendly program model for evaluating literacy operations of community-based organizations throughout Michigan under the provisions of…

  19. Student Assistance Program Demonstration Project Evaluation. Final Report.

    ERIC Educational Resources Information Center

    Pollard, John A.; Houle, Denise M.

    This document presents the final report on the evaluation of California's model student assistance program (SAP) demonstration projects implemented in five locations across the state from July 1989 through June 1992. The report provides an overall, integrated review of the evaluation of the SAP demonstration projects, summarizes important findings…

  20. Project for Faculty Development Program Evaluation: Final Report.

    ERIC Educational Resources Information Center

    Blackburn, Robert T.; And Others

    The project of faculty development program evaluation, developed by the Center for the Study of Higher Education of the University of Michigan, is described. Project thrusts were: to develop assessment instruments for judging the success of faculty development programs; to provide formative and summative evaluation for the programs of the 24…

  1. Evaluation of Service Station Attendant-Auto Care Project.

    ERIC Educational Resources Information Center

    Cress, Ronald J.

    The project described offers an approach to providing occupational skills to socially and educationally handicapped youth, specifically the skills necessary for a service station attendant in driveway salesmanship and auto care. The 10-page evaluation report presents project goals and objectives with evaluation data (represented graphically) and…

  2. Kentucky Migrant Technology Project: External Evaluation Report, 1997-98.

    ERIC Educational Resources Information Center

    Popp, Robert J.

    During its first year of operation (1997-98), the Kentucky Migrant Technology Project successfully implemented its model, used internal and external evaluations to inform improvement of the model, and began plans for expansion into new service areas. This evaluation report is organized around five questions that focus on the project model and its…

  3. Container evaluation for microwave solidification project

    SciTech Connect

    Smith, J.A.

    1994-08-01

    This document discusses the development and testing of a suitable waste container and packaging arrangement to be used with the Microwave Solidification System (MSS) and Bagless Posting System (BPS). The project involves the Rocky Flats Plant.

  4. Fuzzy Present Value Analysis Model for Evaluating Information System Projects

    SciTech Connect

    Omitaomu, Olufemi A; Badiru, Adedeji B

    2007-01-01

    In this article, the economic evaluation of information system projects using present value is analyzed based on triangular fuzzy numbers. Information system projects usually have numerous uncertainties and several conditions of risk that make their economic evaluation a challenging task. Each year, several information system projects are cancelled before completion as a result of budget overruns at a cost of several billions of dollars to industry. Although engineering economic analysis offers tools and techniques for evaluating risky projects, the tools are not enough to place information system projects on a safe budget/selection track. There is a need for an integrative economic analysis model that will account for the uncertainties in estimating project costs, benefits, and useful lives of uncertain and risky projects. In this study, we propose an approximate method of computing project present value using the concept of fuzzy modeling with special reference to information system projects. This proposed model has the potential of enhancing the project selection process by capturing a better economic picture of the project alternatives. The proposed methodology can also be used for other real-life projects with high degree of uncertainty and risk.
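
    A minimal sketch of the triangular-fuzzy present value idea follows, assuming a crisp discount rate so that discounting acts componentwise on the (pessimistic, most likely, optimistic) triple; the article's full model may also fuzzify the rate and project life. The class and all numbers are illustrative assumptions.

        from dataclasses import dataclass

        # Present value with triangular fuzzy cash flows and a crisp rate;
        # a sketch of the general idea, not the article's exact model.

        @dataclass
        class TriangularFuzzyNumber:
            low: float   # pessimistic
            mode: float  # most likely
            high: float  # optimistic

            def scale(self, k):
                return TriangularFuzzyNumber(self.low * k, self.mode * k, self.high * k)

            def add(self, other):
                return TriangularFuzzyNumber(self.low + other.low,
                                             self.mode + other.mode,
                                             self.high + other.high)

        def fuzzy_present_value(cash_flows, rate):
            """Discount each fuzzy cash flow and sum; with a crisp positive
            discount factor, discounting acts componentwise on (low, mode, high)."""
            pv = TriangularFuzzyNumber(0.0, 0.0, 0.0)
            for t, cf in enumerate(cash_flows, start=1):
                pv = pv.add(cf.scale(1.0 / (1.0 + rate) ** t))
            return pv

        benefits = [TriangularFuzzyNumber(80, 100, 130) for _ in range(3)]  # k$/yr
        print(fuzzy_present_value(benefits, rate=0.08))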

  5. Benchmarking for Higher Education.

    ERIC Educational Resources Information Center

    Jackson, Norman, Ed.; Lund, Helen, Ed.

    The chapters in this collection explore the concept of benchmarking as it is being used and developed in higher education (HE). Case studies and reviews show how universities in the United Kingdom are using benchmarking to aid in self-regulation and self-improvement. The chapters are: (1) "Introduction to Benchmarking" (Norman Jackson and Helen…

  6. Decay Data Evaluation Project (DDEP): evaluation of the main 233Pa decay characteristics.

    PubMed

    Chechev, Valery P; Kuzmenko, Nikolay K

    2006-01-01

    The results of a decay data evaluation are presented for 233Pa (beta-) decay to nuclear levels in 233U. These evaluated data have been obtained within the Decay Data Evaluation Project using information published up to 2005. PMID:16574422

  7. National Writing Project Report. Evaluation of the Bay Area Writing Project. Technical Report.

    ERIC Educational Resources Information Center

    Stahlecker, James; And Others

    Prepared as part of the evaluation of the Bay Area Writing Project (BAWP), this report examines the National Writing Project (NWP) network, a group of teacher training projects designed to replicate the core model of the BAWP. The information provided in this report is divided into three sections. The first section summarizes information regarding…

  8. Closed-Loop Neuromorphic Benchmarks.

    PubMed

    Stewart, Terrence C; DeWolf, Travis; Kleinhans, Ashley; Eliasmith, Chris

    2015-01-01

    Evaluating the effectiveness and performance of neuromorphic hardware is difficult. It is even more difficult when the task of interest is a closed-loop task; that is, a task where the output from the neuromorphic hardware affects some environment, which then in turn affects the hardware's future input. However, closed-loop situations are one of the primary potential uses of neuromorphic hardware. To address this, we present a methodology for generating closed-loop benchmarks that makes use of a hybrid of real physical embodiment and a type of "minimal" simulation. Minimal simulation has been shown to lead to robust real-world performance, while still maintaining the practical advantages of simulation, such as making it easy for the same benchmark to be used by many researchers. This method is flexible enough to allow researchers to explicitly modify the benchmarks to identify specific task domains where particular hardware excels. To demonstrate the method, we present a set of novel benchmarks that focus on motor control for an arbitrary system with unknown external forces. Using these benchmarks, we show that an error-driven learning rule can consistently improve motor control performance across a randomly generated family of closed-loop simulations, even when there are up to 15 interacting joints to be controlled. PMID:26696820
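
    In the spirit of the methodology described above, here is a toy closed-loop benchmark: a controller drives a simulated one-joint, unit-mass plant subject to an unknown constant external force, and an error-driven rule adapts a bias term online. The gains, dynamics, and force are invented for illustration; the paper's benchmarks use richer "minimal simulation" plants and neuromorphic controllers.

        import numpy as np

        # Toy closed-loop benchmark: PD control plus error-driven adaptation
        # of a bias term that learns to cancel an unknown constant force.

        def run_trial(learning_rate, steps=2000, dt=0.01):
            rng = np.random.default_rng(4)
            unknown_force = rng.uniform(-2.0, 2.0)   # hidden from the controller
            pos, vel, w = 0.0, 0.0, 0.0              # w: learned compensation
            target = 1.0
            errs = []
            for _ in range(steps):
                err = target - pos
                u = 8.0 * err - 4.0 * vel + w        # PD control + learned bias
                w += learning_rate * err * dt        # error-driven adaptation
                acc = u + unknown_force              # unit-mass joint dynamics
                vel += acc * dt
                pos += vel * dt
                errs.append(abs(err))
            return float(np.mean(errs[-200:]))       # steady-state tracking error

        print("no learning   :", run_trial(learning_rate=0.0))
        print("with learning :", run_trial(learning_rate=5.0))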

  9. Closed-Loop Neuromorphic Benchmarks

    PubMed Central

    Stewart, Terrence C.; DeWolf, Travis; Kleinhans, Ashley; Eliasmith, Chris

    2015-01-01

    Evaluating the effectiveness and performance of neuromorphic hardware is difficult. It is even more difficult when the task of interest is a closed-loop task; that is, a task where the output from the neuromorphic hardware affects some environment, which then in turn affects the hardware's future input. However, closed-loop situations are one of the primary potential uses of neuromorphic hardware. To address this, we present a methodology for generating closed-loop benchmarks that makes use of a hybrid of real physical embodiment and a type of “minimal” simulation. Minimal simulation has been shown to lead to robust real-world performance, while still maintaining the practical advantages of simulation, such as making it easy for the same benchmark to be used by many researchers. This method is flexible enough to allow researchers to explicitly modify the benchmarks to identify specific task domains where particular hardware excels. To demonstrate the method, we present a set of novel benchmarks that focus on motor control for an arbitrary system with unknown external forces. Using these benchmarks, we show that an error-driven learning rule can consistently improve motor control performance across a randomly generated family of closed-loop simulations, even when there are up to 15 interacting joints to be controlled. PMID:26696820

  10. International benchmarking of specialty hospitals. A series of case studies on comprehensive cancer centres

    PubMed Central

    2010-01-01

    Background Benchmarking is one of the methods used in business that is applied to hospitals to improve the management of their operations. International comparison between hospitals can explain performance differences. As there is a trend towards specialization of hospitals, this study examines the benchmarking process and the success factors of benchmarking in international specialized cancer centres. Methods Three independent international benchmarking studies on operations management in cancer centres were conducted. The first study included three comprehensive cancer centres (CCC), three chemotherapy day units (CDU) were involved in the second study, and four radiotherapy departments were included in the final study. For each multiple case study, a research protocol was used to structure the benchmarking process. After the multiple case studies were reviewed, the resulting descriptions were used to address the research objectives. Results We adapted and evaluated existing benchmarking processes by formalizing stakeholder involvement and verifying the comparability of the partners. We also devised a framework to structure the indicators, to produce a coherent indicator set and better improvement suggestions. Evaluating the feasibility of benchmarking as a tool to improve hospital processes led to mixed results. Case study 1 resulted in general recommendations for the organizations involved. In case study 2, the combination of benchmarking and lean management led in one CDU to a 24% increase in bed utilization and a 12% increase in productivity. Three radiotherapy departments of case study 3 were considering implementing the recommendations. Additionally, success factors were found, such as a well-defined and small project scope, partner selection based on clear criteria, stakeholder involvement, simple and well-structured indicators, analysis of both the process and its results, and adaptation of the identified better working methods to one's own setting. Conclusions The improved