Science.gov

Sample records for benchmark evaluation project

  1. Criticality safety benchmark evaluation project: Recovering the past

    SciTech Connect

    Trumble, E.F.

    1997-06-01

    A very brief summary of the Criticality Safety Benchmark Evaluation Project of the Westinghouse Savannah River Company is provided in this paper. The purpose of the project is to provide a source of evaluated criticality safety experiments in an easily usable format. Another project goal is to search for any experiments that may have been lost or contain discrepancies, and to determine if they can be used. Results of evaluated experiments are being published as US DOE handbooks.

  2. INTEGRAL BENCHMARKS AVAILABLE THROUGH THE INTERNATIONAL REACTOR PHYSICS EXPERIMENT EVALUATION PROJECT AND THE INTERNATIONAL CRITICALITY SAFETY BENCHMARK EVALUATION PROJECT

    SciTech Connect

    J. Blair Briggs; Lori Scott; Enrico Sartori; Yolanda Rugama

    2008-09-01

    Interest in high-quality integral benchmark data is increasing as efforts to quantify and reduce calculational uncertainties accelerate to meet the demands of next-generation reactor and advanced fuel cycle concepts. The International Reactor Physics Experiment Evaluation Project (IRPhEP) and the International Criticality Safety Benchmark Evaluation Project (ICSBEP) continue to expand their efforts and broaden their scope to identify, evaluate, and provide integral benchmark data for method and data validation. Benchmark model specifications provided by these two projects are used heavily by the international reactor physics, nuclear data, and criticality safety communities. Thus far, 14 countries have contributed to the IRPhEP and 20 to the ICSBEP. This paper discusses the status of both projects, highlights selected benchmarks that have been added to the IRPhEP and ICSBEP handbooks since PHYSOR’06, and outlines the future of the two projects.

  3. Benchmark Data Through The International Reactor Physics Experiment Evaluation Project (IRPhEP)

    SciTech Connect

    J. Blair Briggs; Dr. Enrico Sartori

    2005-09-01

    The International Reactor Physics Experiment Evaluation Project (IRPhEP) was initiated by the Organization for Economic Cooperation and Development (OECD) Nuclear Energy Agency’s (NEA) Nuclear Science Committee (NSC) in June of 2002. The IRPhEP focus is on the derivation of internationally peer-reviewed benchmark models for several types of integral measurements, in addition to the critical configuration. While the benchmarks produced by the IRPhEP are of primary interest to the Reactor Physics Community, many of the benchmarks can be of significant value to the Criticality Safety and Nuclear Data Communities. Benchmarks that support the Next Generation Nuclear Plant (NGNP), for example, also support fuel manufacture, handling, transportation, and storage activities and could challenge current analytical methods. The IRPhEP is patterned after the International Criticality Safety Benchmark Evaluation Project (ICSBEP) and is closely coordinated with it. This paper highlights the benchmarks currently being prepared by the IRPhEP that are also of interest to the Criticality Safety Community. The different types of measurements and associated benchmarks that can be expected in the first publication and beyond are described. The protocol for inclusion of IRPhEP benchmarks as ICSBEP benchmarks, and vice versa, is detailed. The format for IRPhEP benchmark evaluations is described as an extension of the ICSBEP format. Benchmarks produced by the IRPhEP add a new dimension to criticality safety benchmarking efforts and expand the collection of available integral benchmarks for nuclear data testing. The first publication of the “International Handbook of Evaluated Reactor Physics Benchmark Experiments” is scheduled for January of 2006.

  4. The Activities of the International Criticality Safety Benchmark Evaluation Project (ICSBEP)

    SciTech Connect

    Briggs, Joseph Blair

    2001-10-01

    The International Criticality Safety Benchmark Evaluation Project (ICSBEP) was initiated in 1992 by the United States Department of Energy. The ICSBEP became an official activity of the Organization for Economic Cooperation and Development (OECD) – Nuclear Energy Agency (NEA) in 1995. Representatives from the United States, United Kingdom, France, Japan, the Russian Federation, Hungary, Republic of Korea, Slovenia, Yugoslavia, Kazakhstan, Spain, and Israel are now participating. The purpose of the ICSBEP is to identify, evaluate, verify, and formally document a comprehensive and internationally peer-reviewed set of criticality safety benchmark data. The work of the ICSBEP is published as an OECD handbook entitled “International Handbook of Evaluated Criticality Safety Benchmark Experiments”. The 2001 Edition of the Handbook contains benchmark specifications for 2642 critical or subcritical configurations that are intended for use in validation efforts and for testing basic nuclear data.

  5. Growth and Expansion of the International Criticality Safety Benchmark Evaluation Project and the Newly Organized International Reactor Physics Experiment Evaluation Project

    SciTech Connect

    J. Blair Briggs; Lori Scott; Yolanda Rugama; Enrico Sartori

    2007-05-01

    Since ICNC 2003, the International Criticality Safety Benchmark Evaluation Project (ICSBEP) has continued to expand its efforts and broaden its scope. Criticality-alarm/shielding-type benchmarks and fundamental physics measurements that are relevant to criticality safety applications are not only included in the scope of the project, but benchmark data are also included in the latest version of the handbook. A considerable number of improvements have been made to the searchable database, DICE, and the criticality-alarm/shielding benchmarks and fundamental physics measurements have been included in the database. There were 12 countries participating in the ICSBEP in 2003. That number has increased to 18 with recent contributions of data and/or resources from Brazil, the Czech Republic, Poland, India, Canada, and China. South Africa, Germany, Argentina, and Australia have been invited to participate. Since ICNC 2003, the contents of the “International Handbook of Evaluated Criticality Safety Benchmark Experiments” have increased from 350 evaluations (28,000 pages) containing benchmark specifications for 3070 critical or subcritical configurations to 442 evaluations (over 38,000 pages) containing benchmark specifications for 3957 critical or subcritical configurations, 23 criticality-alarm-placement/shielding configurations with multiple dose points for each, and 20 configurations categorized as fundamental physics measurements relevant to criticality safety applications in the 2006 Edition of the ICSBEP Handbook. Approximately 30 new evaluations and 250 additional configurations are expected to be added to the 2007 Edition of the Handbook. Since ICNC 2003, a reactor physics counterpart to the ICSBEP, the International Reactor Physics Experiment Evaluation Project (IRPhEP), was initiated. Beginning in 1999, the IRPhEP was conducted as a pilot activity by the Organization for Economic Cooperation and Development (OECD) Nuclear Energy Agency (NEA).

  6. National healthcare capital project benchmarking--an owner's perspective.

    PubMed

    Kahn, Noah

    2009-01-01

    Few sectors of the economy have been left unscathed in these economic times. Healthcare construction has been less affected than the residential and nonresidential construction sectors, but driven by re-evaluation of healthcare system capital plans, projects are now being put on hold or canceled. The industry is searching for ways to improve the value proposition for project delivery and process controls. In other industries, benchmarking component costs has led to significant, sustainable reductions in costs and cost variations. Kaiser Permanente and the Construction Industry Institute (CII), a research component of the University of Texas at Austin and an industry leader in benchmarking, have joined with several other organizations to work on a national benchmarking and metrics program to gauge the performance of healthcare facility projects. This initiative will capture cost, schedule, delivery method, change, functional, operational, and best-practice metrics; it is the only program of its kind. The CII Web-based interactive reporting system enables a company to view its own information and mine industry data. Benchmarking is a tool for continuous improvement that not only grades outcomes; it can inform all aspects of the healthcare design and construction process and ultimately help moderate the increasing cost of delivering healthcare.

  7. [Results of the evaluation of German benchmarking networks funded by the Ministry of Health].

    PubMed

    de Cruppé, Werner; Blumenstock, Gunnar; Fischer, Imma; Selbmann, Hans-Konrad; Geraedts, Max

    2011-01-01

    Nine out of ten demonstration projects on clinical benchmarking funded by the German Ministry of Health were evaluated. Project reports and interviews were uniformly analysed using a list of criteria and a scheme to categorize the realized benchmarking approach. At the end of the funding period four benchmarking networks had implemented all benchmarking steps, and six were continued after funding had expired. The improvement of outcome quality cannot yet be assessed. Factors promoting the introduction of benchmarking networks with regard to organisational and process aspects of benchmarking implementation were derived.

  8. Providing Nuclear Criticality Safety Analysis Education through Benchmark Experiment Evaluation

    SciTech Connect

    John D. Bess; J. Blair Briggs; David W. Nigg

    2009-11-01

    One of the challenges that today's new workforce of nuclear criticality safety engineers face is the opportunity to provide assessment of nuclear systems and establish safety guidelines without having received significant experience or hands-on training prior to graduation. Participation in the International Criticality Safety Benchmark Evaluation Project (ICSBEP) and/or the International Reactor Physics Experiment Evaluation Project (IRPhEP) provides students and young professionals the opportunity to gain experience and enhance critical engineering skills.

  9. Benchmark Evaluation of Plutonium Nitrate Solution Arrays

    SciTech Connect

    M. A. Marshall; J. D. Bess

    2011-09-01

    In October and November of 1981, thirteen approach-to-critical experiments were performed on a remote split table machine (RSTM) in the Critical Mass Laboratory of Pacific Northwest Laboratory (PNL) in Richland, Washington, using planar arrays of polyethylene bottles filled with plutonium (Pu) nitrate solution. Arrays of up to sixteen bottles were used to measure the critical number of bottles and critical array spacing with a tight-fitting Plexiglas® reflector on all sides of the arrays except the top. Some experiments used Plexiglas shells fitted around each bottle to determine the effect of moderation on criticality. Each bottle contained approximately 2.4 L of Pu(NO3)4 solution with a Pu content of 105 g Pu/L and a free acid molarity (H+) of 5.1. The plutonium was of low ²⁴⁰Pu content (2.9 wt.%). These experiments were performed to fill a gap in experimental data regarding criticality limits for storing and handling arrays of Pu solution in reprocessing facilities. Of the thirteen approach-to-critical experiments, eleven resulted in extrapolations to critical configurations. Four of the approaches were extrapolated to the critical number of bottles; these were not evaluated further due to the large uncertainty associated with the modeling of a fraction of a bottle. The remaining seven approaches were extrapolated to critical array spacing of 3-4 and 4-4 arrays; these seven critical configurations were evaluated for inclusion as acceptable benchmark experiments in the International Criticality Safety Benchmark Evaluation Project (ICSBEP) Handbook. Detailed and simple models of these configurations were created, and the associated bias of these simplifications was determined to range from 0.00116 to 0.00162 ± 0.00006 Δkeff. Monte Carlo analysis of all models was completed using MCNP5 with ENDF/B-VII.0 neutron cross section libraries. A thorough uncertainty analysis of all critical, geometric, and material parameters was performed using parameter

  10. Performance Evaluation and Benchmarking of Intelligent Systems

    SciTech Connect

    Madhavan, Raj; Messina, Elena; Tunstel, Edward

    2009-09-01

    To design and develop capable, dependable, and affordable intelligent systems, their performance must be measurable. Scientific methodologies for standardization and benchmarking are crucial for quantitatively evaluating the performance of emerging robotic and intelligent systems technologies. There is currently no accepted standard for quantitatively measuring the performance of these systems against user-defined requirements; and furthermore, there is no consensus on what objective evaluation procedures need to be followed to understand the performance of these systems. The lack of reproducible and repeatable test methods has precluded researchers working towards a common goal from exchanging and communicating results, inter-comparing system performance, and leveraging previous work that could otherwise avoid duplication and expedite technology transfer. Currently, this lack of cohesion in the community hinders progress in many domains, such as manufacturing, service, healthcare, and security. By providing the research community with access to standardized tools, reference data sets, and open source libraries of solutions, researchers and consumers will be able to evaluate the cost and benefits associated with intelligent systems and associated technologies. In this vein, the edited book volume addresses performance evaluation and metrics for intelligent systems, in general, while emphasizing the need and solutions for standardized methods. To the knowledge of the editors, there is not a single book on the market that is solely dedicated to the subject of performance evaluation and benchmarking of intelligent systems. Even books that address this topic do so only marginally or are out of date. The research work presented in this volume fills this void by drawing from the experiences and insights of experts gained both through theoretical development and practical implementation of intelligent systems in a variety of diverse application domains. The book presents

  11. Monte Carlo Eigenvalue Calculations with ENDF/B-VI.8, JEFF-3.0, and JENDL-3.3 Cross Sections for a Selection of International Criticality Safety Benchmark Evaluation Project Handbook Benchmarks

    SciTech Connect

    Kahler, A.C.

    2003-10-15

    Continuous-energy Monte Carlo eigenvalue calculations have been performed for a selection of HEU-MET-FAST, IEU-MET-FAST, HEU-SOL-THERM, LEU-COMP-THERM, and LEU-SOL-THERM benchmarks using ENDF/B (primarily VI.8), JEFF-3.0, and JENDL-3.3 cross sections. These benchmarks allow for testing the cross-section data for both common reactor nuclides such as ¹H, ¹⁶O, and ²³⁵,²³⁸U and structural and shielding elements such as Al, Ti, Fe, Ni, and Pb. The latest cross-section libraries yield near-unity eigenvalues for unreflected or water-reflected HEU-SOL-THERM and LEU-SOL-THERM systems. Near-unity eigenvalues are also obtained for bare HEU-MET-FAST and IEU-MET-FAST systems, but small deviations from unity are observed in both FAST and THERM benchmarks as a function of nonhydrogenous reflector material and thickness. The long-standing problem of lower eigenvalues in water-reflected low-enriched-uranium fuel lattice systems remains, regardless of cross-section library.
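
    The eigenvalue being calculated in these benchmarks is the multiplication factor keff (here k-infinity, since the toy medium is infinite). As a purely illustrative stand-in for the continuous-energy Monte Carlo calculations described above, the sketch below solves a hypothetical two-group, infinite-medium k-eigenvalue problem by power iteration; the cross-section numbers are invented for illustration, not evaluated ENDF/JEFF/JENDL data:

```python
import numpy as np

# Hypothetical two-group infinite-medium data (group 1 = fast, group 2 = thermal).
A = np.array([[0.030, 0.000],     # group-1 removal (absorption + downscatter)
              [-0.020, 0.080]])   # downscatter feeds group 2; group-2 absorption
F = np.array([[0.002, 0.180],     # nu*Sigma_f; all fission neutrons born fast
              [0.000, 0.000]])

phi = np.ones(2)                  # initial flux guess
k = 1.0                           # initial eigenvalue guess
for _ in range(50):               # power iteration: A phi = (1/k) F phi
    fission_src = F @ phi
    phi = np.linalg.solve(A, fission_src / k)
    # update k by the ratio of new to old total fission production
    k = (F @ phi).sum() / (fission_src / k).sum()

print(f"k-infinity = {k:.4f}")
```

Production codes such as MCNP estimate the same eigenvalue stochastically, tracking individual neutron histories through continuous-energy cross sections instead of iterating on a small group-wise matrix.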

  12. Performance Evaluation of Supercomputers using HPCC and IMB Benchmarks

    NASA Technical Reports Server (NTRS)

    Saini, Subhash; Ciotti, Robert; Gunney, Brian T. N.; Spelce, Thomas E.; Koniges, Alice; Dossa, Don; Adamidis, Panagiotis; Rabenseifner, Rolf; Tiyyagura, Sunil R.; Mueller, Matthias; Fatoohi, Rod

    2006-01-01

    The HPC Challenge (HPCC) benchmark suite and the Intel MPI Benchmark (IMB) are used to compare and evaluate the combined performance of the processor, memory subsystem, and interconnect fabric of five leading supercomputers - SGI Altix BX2, Cray X1, Cray Opteron Cluster, Dell Xeon cluster, and NEC SX-8. These five systems use five different networks (SGI NUMALINK4, Cray network, Myrinet, InfiniBand, and NEC IXS). The complete set of HPCC benchmarks is run on each of these systems. Additionally, we present Intel MPI Benchmark (IMB) results to study the performance of 11 MPI communication functions on these systems.

  13. Benchmark Evaluation of HTR-PROTEUS Pebble Bed Experimental Program

    SciTech Connect

    Bess, John D.; Montierth, Leland; Köberl, Oliver; Snoj, Luka

    2014-10-09

    Benchmark models were developed to evaluate 11 critical core configurations of the HTR-PROTEUS pebble bed experimental program. Various additional reactor physics measurements were performed as part of this program; currently only a total of 37 absorber rod worth measurements have been evaluated as acceptable benchmark experiments for Cores 4, 9, and 10. Dominant uncertainties in the experimental keff for all core configurations come from uncertainties in the ²³⁵U enrichment of the fuel, impurities in the moderator pebbles, and the density and impurity content of the radial reflector. Calculations of keff with MCNP5 and ENDF/B-VII.0 neutron nuclear data are greater than the benchmark values but within 1% and also within the 3σ uncertainty, except for Core 4, which is the only randomly packed pebble configuration. Repeated calculations of keff with MCNP6.1 and ENDF/B-VII.1 are lower than the benchmark values and within 1% (~3σ) except for Cores 5 and 9, which calculate lower than the benchmark eigenvalues within 4σ. The primary difference between the two nuclear data libraries is the adjustment of the absorption cross section of graphite. Simulations of the absorber rod worth measurements are within 3σ of the benchmark experiment values. The complete benchmark evaluation details are available in the 2014 edition of the International Handbook of Evaluated Reactor Physics Benchmark Experiments.

  14. Benchmark Evaluation of HTR-PROTEUS Pebble Bed Experimental Program

    DOE PAGES

    Bess, John D.; Montierth, Leland; Köberl, Oliver; ...

    2014-10-09

    Benchmark models were developed to evaluate 11 critical core configurations of the HTR-PROTEUS pebble bed experimental program. Various additional reactor physics measurements were performed as part of this program; currently only a total of 37 absorber rod worth measurements have been evaluated as acceptable benchmark experiments for Cores 4, 9, and 10. Dominant uncertainties in the experimental keff for all core configurations come from uncertainties in the ²³⁵U enrichment of the fuel, impurities in the moderator pebbles, and the density and impurity content of the radial reflector. Calculations of keff with MCNP5 and ENDF/B-VII.0 neutron nuclear data are greater than the benchmark values but within 1% and also within the 3σ uncertainty, except for Core 4, which is the only randomly packed pebble configuration. Repeated calculations of keff with MCNP6.1 and ENDF/B-VII.1 are lower than the benchmark values and within 1% (~3σ) except for Cores 5 and 9, which calculate lower than the benchmark eigenvalues within 4σ. The primary difference between the two nuclear data libraries is the adjustment of the absorption cross section of graphite. Simulations of the absorber rod worth measurements are within 3σ of the benchmark experiment values. The complete benchmark evaluation details are available in the 2014 edition of the International Handbook of Evaluated Reactor Physics Benchmark Experiments.

  15. Benchmark Evaluation of the NRAD Reactor LEU Core Startup Measurements

    SciTech Connect

    J. D. Bess; T. L. Maddock; M. A. Marshall

    2011-09-01

    The Neutron Radiography (NRAD) reactor is a 250-kW TRIGA (Training, Research, Isotope Production, General Atomics) conversion-type reactor at the Idaho National Laboratory; it is primarily used for neutron radiography analysis of irradiated and unirradiated fuels and materials. The NRAD reactor was converted from HEU to LEU fuel with 60 fuel elements and brought critical on March 31, 2010. This configuration of the NRAD reactor has been evaluated as an acceptable benchmark experiment and is available in the 2011 editions of the International Handbook of Evaluated Criticality Safety Benchmark Experiments (ICSBEP Handbook) and the International Handbook of Evaluated Reactor Physics Benchmark Experiments (IRPhEP Handbook). Significant effort went into precisely characterizing all aspects of the reactor core dimensions and material properties; detailed analyses of reactor parameters minimized experimental uncertainties. The largest contributors to the total benchmark uncertainty were the ²³⁴U, ²³⁶U, Er, and Hf content in the fuel; the manganese content in the stainless steel cladding; and the unknown level of water saturation in the graphite reflector blocks. A simplified benchmark model of the NRAD reactor was prepared with a keff of 1.0012 ± 0.0029 (1σ). Monte Carlo calculations with MCNP5 and KENO-VI and various neutron cross section libraries were performed and compared with the benchmark eigenvalue for the 60-fuel-element core configuration; all calculated eigenvalues are between 0.3 and 0.8% greater than the benchmark value. Benchmark evaluations of the NRAD reactor are beneficial in understanding biases and uncertainties affecting criticality safety analyses of storage, handling, or transportation applications with LEU-Er-Zr-H fuel.

  16. NASA Indexing Benchmarks: Evaluating Text Search Engines

    NASA Technical Reports Server (NTRS)

    Esler, Sandra L.; Nelson, Michael L.

    1997-01-01

    The current proliferation of on-line information resources underscores the requirement for the ability to index collections of information and search and retrieve them in a convenient manner. This study develops criteria for analytically comparing indexing and search engines and presents results for a number of freely available search engines. A product of this research is a toolkit capable of automatically indexing, searching, and extracting performance statistics from each of the search engines studied. This toolkit is highly configurable and can run these benchmark tests against other engines as well. Results demonstrate that the tested search engines can be grouped into two levels. Level one engines are efficient on small to medium-sized data collections, but show weaknesses when used for collections of 100MB or larger. Level two search engines are recommended for data collections up to and beyond 100MB.
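
    At its core, a toolkit of this kind times the indexing and query phases of each engine against a common corpus and records the statistics. The following sketch is purely illustrative: it uses a toy in-memory inverted index rather than any of the engines actually tested, and all function names are assumptions, not part of the study's toolkit:

```python
import time
from collections import defaultdict

def build_index(docs):
    """Build a toy inverted index: term -> set of document ids."""
    index = defaultdict(set)
    for doc_id, text in enumerate(docs):
        for term in text.lower().split():
            index[term].add(doc_id)
    return index

def search(index, query):
    """AND-query: ids of documents containing every query term."""
    sets = [index.get(term, set()) for term in query.lower().split()]
    return set.intersection(*sets) if sets else set()

def benchmark(docs, queries):
    """Time the index and query phases, returning simple performance stats."""
    t0 = time.perf_counter()
    index = build_index(docs)
    index_time = time.perf_counter() - t0
    t0 = time.perf_counter()
    results = [search(index, q) for q in queries]
    query_time = time.perf_counter() - t0
    return {"index_s": index_time, "query_s": query_time, "results": results}

docs = ["neutron radiography analysis", "benchmark evaluation project",
        "criticality safety benchmark"]
stats = benchmark(docs, ["benchmark", "benchmark evaluation"])
print(stats["results"])   # [{1, 2}, {1}]
```

Running the same harness against progressively larger corpora is what exposes the level-one/level-two split the study reports: the timing curves diverge as collection size grows past the engines' efficient range.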

  17. Quality in E-Learning--A Conceptual Framework Based on Experiences from Three International Benchmarking Projects

    ERIC Educational Resources Information Center

    Ossiannilsson, E.; Landgren, L.

    2012-01-01

    Between 2008 and 2010, Lund University took part in three international benchmarking projects, "E-xcellence+," the "eLearning Benchmarking Exercise 2009," and the "First Dual-Mode Distance Learning Benchmarking Club." A comparison of these models revealed a rather high level of correspondence. From this finding and…

  18. Benchmarking Evaluation Results for Prototype Extravehicular Activity Gloves

    NASA Technical Reports Server (NTRS)

    Aitchison, Lindsay; McFarland, Shane

    2012-01-01

    The Space Suit Assembly (SSA) Development Team at NASA Johnson Space Center has invested heavily in the advancement of rear-entry planetary exploration suit design but largely deferred development of extravehicular activity (EVA) glove designs, accepting the risk of using the current flight gloves, Phase VI, for unique mission scenarios outside the Space Shuttle and International Space Station (ISS) Program realm of experience. However, as design reference missions mature, the risks of using heritage hardware have highlighted the need for developing robust new glove technologies. To address the technology gap, the NASA Game-Changing Technology group provided start-up funding for the High Performance EVA Glove (HPEG) Project in the spring of 2012. The overarching goal of the HPEG Project is to develop a robust glove design that increases human performance during EVA and creates a pathway for future implementation of emergent technologies, with specific aims of increasing pressurized mobility to 60% of barehanded capability, increasing durability by 100%, and decreasing the potential of gloves to cause injury during use. The HPEG Project focused initial efforts on identifying potential new technologies and benchmarking the performance of current state-of-the-art gloves to identify trends in design and fit, leading to standards and metrics against which emerging technologies can be assessed at both the component and assembly levels. The first of the benchmarking tests evaluated the quantitative mobility performance and subjective fit of four prototype gloves developed by Flagsuit LLC, Final Frontier Designs, ILC Dover, and David Clark Company as compared to the Phase VI. All of the companies were asked to design and fabricate gloves to the same set of NASA-provided hand measurements (which corresponded to a single size of Phase VI glove) and to focus their efforts on improving mobility in the metacarpal phalangeal and carpometacarpal joints. Four test

  19. Data Testing CIELO Evaluations with ICSBEP Benchmarks

    SciTech Connect

    Kahler, Albert Comstock

    2016-03-09

    We review criticality data testing performed at Los Alamos with a combination of ENDF/B-VII.1 + potential CIELO nuclear data evaluations. The goal of CIELO is to develop updated, best-available evaluated nuclear data files for ¹H, ¹⁶O, ⁵⁶Fe, ²³⁵,²³⁸U, and ²³⁹Pu, because the major international evaluated nuclear data libraries do not agree on the internal cross section details of these most important nuclides.

  20. Benchmarking: A Tool for Web Site Evaluation and Improvement.

    ERIC Educational Resources Information Center

    Misic, Mark M.; Johnson, Kelsey L.

    1999-01-01

    This paper presents a case study on how benchmarking was used to determine how one organization's Web site compared to Web sites of related schools and professional organizations. Highlights include application of metrics, the Web site evaluation form, functional/navigational issues, content and style, and top site generalizations. (Author/LRW)

  1. A comprehensive benchmarking system for evaluating global vegetation models

    NASA Astrophysics Data System (ADS)

    Kelley, D. I.; Prentice, I. C.; Harrison, S. P.; Wang, H.; Simard, M.; Fisher, J. B.; Willis, K. O.

    2013-05-01

    We present a benchmark system for global vegetation models. This system provides a quantitative evaluation of multiple simulated vegetation properties, including primary production; seasonal net ecosystem production; vegetation cover, composition and height; fire regime; and runoff. The benchmarks are derived from remotely sensed gridded datasets and site-based observations. The datasets allow comparisons of annual average conditions and seasonal and inter-annual variability, and they allow the impact of spatial and temporal biases in means and variability to be assessed separately. Specifically designed metrics quantify model performance for each process, and are compared to scores based on the temporal or spatial mean value of the observations and a "random" model produced by bootstrap resampling of the observations. The benchmark system is applied to three models: a simple light-use efficiency and water-balance model (the Simple Diagnostic Biosphere Model: SDBM), and the Lund-Potsdam-Jena (LPJ) and Land Processes and eXchanges (LPX) dynamic global vegetation models (DGVMs). In general, the SDBM performs better than either of the DGVMs. It reproduces independent measurements of net primary production (NPP) but underestimates the amplitude of the observed CO2 seasonal cycle. The two DGVMs show little difference for most benchmarks (including the inter-annual variability in the growth rate and seasonal cycle of atmospheric CO2), but LPX represents burnt fraction demonstrably more accurately. Benchmarking also identified several weaknesses common to both DGVMs. The benchmarking system provides a quantitative approach for evaluating how adequately processes are represented in a model, identifying errors and biases, tracking improvements in performance through model development, and discriminating among models. Adoption of such a system would do much to improve confidence in terrestrial model predictions of climate change impacts and feedbacks.
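
    The null-model comparison described above, scoring each simulation against both the observation mean and a bootstrap "random" model, can be sketched as follows. The normalised-mean-error metric and the synthetic data here are illustrative assumptions, not the paper's exact implementation:

```python
import numpy as np

rng = np.random.default_rng(42)

def nme(obs, sim):
    """Normalised mean error: total absolute error scaled by the mean
    absolute deviation of the observations (0 = perfect; 1 = no better
    than always predicting the observation mean)."""
    return np.sum(np.abs(sim - obs)) / np.sum(np.abs(obs - obs.mean()))

# Synthetic "observations" and a model run (hypothetical data).
obs = rng.normal(5.0, 2.0, size=500)
sim = obs + rng.normal(0.0, 1.0, size=500)   # a model with modest error

# Score the model, the mean-value null, and the bootstrap "random" model.
model_score = nme(obs, sim)
mean_score = nme(obs, np.full_like(obs, obs.mean()))   # equals 1 by construction
random_scores = [nme(obs, rng.choice(obs, size=obs.size, replace=True))
                 for _ in range(1000)]
random_score = np.mean(random_scores)

print(f"model:  {model_score:.3f}")
print(f"mean:   {mean_score:.3f}")
print(f"random: {random_score:.3f}")
```

A useful model should score below the mean-value null, and the bootstrap null typically scores above it, giving a natural scale on which to place each simulated process.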

  2. A comprehensive benchmarking system for evaluating global vegetation models

    NASA Astrophysics Data System (ADS)

    Kelley, D. I.; Prentice, I. Colin; Harrison, S. P.; Wang, H.; Simard, M.; Fisher, J. B.; Willis, K. O.

    2012-11-01

    We present a benchmark system for global vegetation models. This system provides a quantitative evaluation of multiple simulated vegetation properties, including primary production; seasonal net ecosystem production; vegetation cover, composition and height; fire regime; and runoff. The benchmarks are derived from remotely sensed gridded datasets and site-based observations. The datasets allow comparisons of annual average conditions and seasonal and inter-annual variability, and they allow the impact of spatial and temporal biases in means and variability to be assessed separately. Specifically designed metrics quantify model performance for each process, and are compared to scores based on the temporal or spatial mean value of the observations and a "random" model produced by bootstrap resampling of the observations. The benchmark system is applied to three models: a simple light-use efficiency and water-balance model (the Simple Diagnostic Biosphere Model: SDBM), and the Lund-Potsdam-Jena (LPJ) and Land Processes and eXchanges (LPX) dynamic global vegetation models (DGVMs). SDBM reproduces observed CO2 seasonal cycles, but its simulation of independent measurements of net primary production (NPP) is too high. The two DGVMs show little difference for most benchmarks (including the inter-annual variability in the growth rate and seasonal cycle of atmospheric CO2), but LPX represents burnt fraction demonstrably more accurately. Benchmarking also identified several weaknesses common to both DGVMs. The benchmarking system provides a quantitative approach for evaluating how adequately processes are represented in a model, identifying errors and biases, tracking improvements in performance through model development, and discriminating among models. Adoption of such a system would do much to improve confidence in terrestrial model predictions of climate change impacts and feedbacks.

  3. A benchmark for fault tolerant flight control evaluation

    NASA Astrophysics Data System (ADS)

    Smaili, H.; Breeman, J.; Lombaerts, T.; Stroosma, O.

    2013-12-01

    A large transport aircraft simulation benchmark (REconfigurable COntrol for Vehicle Emergency Return - RECOVER) has been developed within the GARTEUR (Group for Aeronautical Research and Technology in Europe) Flight Mechanics Action Group 16 (FM-AG(16)) on Fault Tolerant Control (2004-2008) for the integrated evaluation of fault detection and identification (FDI) and reconfigurable flight control strategies. The benchmark includes a suitable set of assessment criteria and failure cases, based on reconstructed accident scenarios, to assess the potential of new adaptive control strategies to improve aircraft survivability. The application of reconstruction and modeling techniques, based on accident flight data, has resulted in high-fidelity nonlinear aircraft and fault models to evaluate new Fault Tolerant Flight Control (FTFC) concepts and their real-time performance to accommodate in-flight failures.

  4. COVE 2A Benchmarking calculations using NORIA; Yucca Mountain Site Characterization Project

    SciTech Connect

    Carrigan, C.R.; Bixler, N.E.; Hopkins, P.L.; Eaton, R.R.

    1991-10-01

    Six steady-state and six transient benchmarking calculations have been performed, using the finite element code NORIA, to simulate one-dimensional infiltration into Yucca Mountain. These calculations were made to support the code verification (COVE 2A) activity for the Yucca Mountain Site Characterization Project. COVE 2A evaluates the usefulness of numerical codes for analyzing the hydrology of the potential Yucca Mountain site. Numerical solutions for all cases were found to be stable. As expected, the difficulties and computer-time requirements associated with obtaining solutions increased with infiltration rate. 10 refs., 128 figs., 5 tabs.

  5. Performance Evaluation and Benchmarking of Intelligent Systems

    SciTech Connect

    del Pobil, Angel; Madhavan, Raj; Bonsignorio, Fabio

    2009-10-01

    Performance Evaluation and Benchmarking of Intelligent Systems presents research dedicated to the subject of performance evaluation and benchmarking of intelligent systems by drawing from the experiences and insights of leading experts gained both through theoretical development and practical implementation of intelligent systems in a variety of diverse application domains. This contributed volume offers a detailed and coherent picture of state-of-the-art, recent developments, and further research areas in intelligent systems. The chapters cover a broad range of applications, such as assistive robotics, planetary surveying, urban search and rescue, and line tracking for automotive assembly. Subsystems or components described in this book include human-robot interaction, multi-robot coordination, communications, perception, and mapping. Chapters are also devoted to simulation support and open source software for cognitive platforms, providing examples of the type of enabling underlying technologies that can help intelligent systems to propagate and increase in capabilities. Performance Evaluation and Benchmarking of Intelligent Systems serves as a professional reference for researchers and practitioners in the field. This book is also applicable to advanced courses for graduate level students and robotics professionals in a wide range of engineering and related disciplines including computer science, automotive, healthcare, manufacturing, and service robotics.

  6. 239Pu Resonance Evaluation for Thermal Benchmark System Calculations

    SciTech Connect

    Leal, Luiz C; Noguere, G; De Saint Jean, C; Kahler, A.

    2013-01-01

    Analyses of thermal plutonium solution critical benchmark systems have indicated a deficiency in the 239Pu resonance evaluation. To investigate possible solutions to this issue, the Organisation for Economic Co-operation and Development (OECD) Nuclear Energy Agency (NEA) Working Party for Evaluation Cooperation (WPEC) established Subgroup 34 to focus on the reevaluation of the 239Pu resolved resonance parameters. In addition, the impacts of the prompt neutron multiplicity (νbar) and the prompt neutron fission spectrum (PFNS) have been investigated. The objective of this paper is to present the results of the 239Pu resolved resonance evaluation effort.

  7. 239Pu Resonance Evaluation for Thermal Benchmark System Calculations

    NASA Astrophysics Data System (ADS)

    Leal, L. C.; Noguere, G.; de Saint Jean, C.; Kahler, A. C.

    2014-04-01

    Analyses of thermal plutonium solution critical benchmark systems have indicated a deficiency in the 239Pu resonance evaluation. To investigate possible solutions to this issue, the Organisation for Economic Co-operation and Development (OECD) Nuclear Energy Agency (NEA) Working Party for Evaluation Cooperation (WPEC) established Subgroup 34 to focus on the reevaluation of the 239Pu resolved resonance parameters. In addition, the impacts of the prompt neutron multiplicity (νbar) and the prompt neutron fission spectrum (PFNS) have been investigated. The objective of this paper is to present the results of the 239Pu resolved resonance evaluation effort.

  8. Limitations of Community College Benchmarking and Benchmarks

    ERIC Educational Resources Information Center

    Bers, Trudy H.

    2006-01-01

    This chapter distinguishes between benchmarks and benchmarking, describes a number of data and cultural limitations to benchmarking projects, and suggests that external demands for accountability are the dominant reason for growing interest in benchmarking among community colleges.

  9. Evaluation of the HTR-10 Reactor as a Benchmark for Physics Code QA

    SciTech Connect

    William K. Terry; Soon Sam Kim; Leland M. Montierth; Joshua J. Cogliati; Abderrafi M. Ougouag

    2006-09-01

    The HTR-10 is a small (10 MWt) pebble-bed research reactor intended to develop pebble-bed reactor (PBR) technology in China. It will be used to test and develop fuel, verify PBR safety features, demonstrate combined electricity production and co-generation of heat, and provide experience in PBR design, operation, and construction. As the only currently operating PBR in the world, the HTR-10 can provide data of great interest to everyone involved in PBR technology. In particular, if it yields data of sufficient quality, it can be used as a benchmark for assessing the accuracy of computer codes proposed for use in PBR analysis. This paper summarizes the evaluation for the International Reactor Physics Experiment Evaluation Project (IRPhEP) of data obtained in measurements of the HTR-10’s initial criticality experiment for use as benchmarks for reactor physics codes.

  10. Thermal Performance Benchmarking (Presentation)

    SciTech Connect

    Moreno, G.

    2014-11-01

    This project will benchmark the thermal characteristics of automotive power electronics and electric motor thermal management systems. Recent vehicle systems will be benchmarked to establish baseline metrics, evaluate advantages and disadvantages of different thermal management systems, and identify areas of improvement to advance the state-of-the-art.

  11. Using Web-Based Peer Benchmarking to Manage the Client-Based Project

    ERIC Educational Resources Information Center

    Raska, David; Keller, Eileen Weisenbach; Shaw, Doris

    2013-01-01

    The complexities of integrating client-based projects into marketing courses provide challenges for the instructor but produce richness of context and active learning for the student. This paper explains the integration of Web-based peer benchmarking as a means of improving student performance on client-based projects within a single semester in…

  12. Middleware Evaluation and Benchmarking for Use in Mission Operations Centers

    NASA Technical Reports Server (NTRS)

    Antonucci, Rob; Waktola, Waka

    2005-01-01

    Middleware technologies have been promoted as timesaving, cost-cutting alternatives to the point-to-point communication used in traditional mission operations systems. However, missions have been slow to adopt the new technology. The lack of existing middleware-based missions has given rise to uncertainty about middleware's ability to perform in an operational setting. Most mission architects are also unfamiliar with the technology and do not know the benefits and detriments to architectural choices - or even what choices are available. We will present the findings of a study that evaluated several middleware options specifically for use in a mission operations system. We will address some common misconceptions regarding the applicability of middleware-based architectures, and we will identify the design decisions and tradeoffs that must be made when choosing a middleware solution. The Middleware Comparison and Benchmark Study was conducted at NASA Goddard Space Flight Center to comprehensively evaluate candidate middleware products, compare and contrast the performance of middleware solutions with the traditional point-to-point socket approach, and assess data delivery and reliability strategies. The study focused on requirements of the Global Precipitation Measurement (GPM) mission, validating the potential use of middleware in the GPM mission ground system. The study was jointly funded by GPM and the Goddard Mission Services Evolution Center (GMSEC), a virtual organization for providing mission enabling solutions and promoting the use of appropriate new technologies for mission support. The study was broken into two phases. To perform the generic middleware benchmarking and performance analysis, a network was created with data producers and consumers passing data between themselves. The benchmark monitored the delay, throughput, and reliability of the data as the characteristics were changed.
Measurements were taken under a variety of topologies, data demands
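    A minimal sketch of the kind of delay/throughput measurement such a study performs, with an in-process queue standing in for the middleware under test (all names and parameters here are illustrative, not the GMSEC study's actual harness):

```python
import queue
import threading
import time

def run_benchmark(n_messages=1000):
    """Producer timestamps each message at send; consumer records the
    receive-side latency. Delivered count serves as a reliability check."""
    q = queue.Queue()
    latencies = []

    def producer():
        for i in range(n_messages):
            q.put((i, time.perf_counter()))  # message id + send timestamp

    def consumer():
        for _ in range(n_messages):
            _, sent = q.get()
            latencies.append(time.perf_counter() - sent)

    start = time.perf_counter()
    threads = [threading.Thread(target=producer), threading.Thread(target=consumer)]
    for t in threads:
        t.start()
    for t in threads:
        t.join()
    elapsed = time.perf_counter() - start
    return {
        "delivered": len(latencies),  # reliability: did every message arrive?
        "mean_latency_s": sum(latencies) / len(latencies),
        "throughput_msg_per_s": n_messages / elapsed,
    }

print(run_benchmark())
```

    Swapping the queue for a real middleware client (or a raw socket pair) while keeping the same timestamping logic is what allows like-for-like comparison across transports.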

  13. Model evaluation using a community benchmarking system for land surface models

    NASA Astrophysics Data System (ADS)

    Mu, M.; Hoffman, F. M.; Lawrence, D. M.; Riley, W. J.; Keppel-Aleks, G.; Kluzek, E. B.; Koven, C. D.; Randerson, J. T.

    2014-12-01

    Evaluation of atmosphere, ocean, sea ice, and land surface models is an important step in identifying deficiencies in Earth system models and developing improved estimates of future change. For the land surface and carbon cycle, the design of an open-source system has been an important objective of the International Land Model Benchmarking (ILAMB) project. Here we evaluated CMIP5 and CLM models using a benchmarking system that enables users to specify models, data sets, and scoring systems so that results can be tailored to specific model intercomparison projects. Our scoring system used information from four different aspects of global datasets, including climatological mean spatial patterns, seasonal cycle dynamics, interannual variability, and long-term trends. Variable-to-variable comparisons enable investigation of the mechanistic underpinnings of model behavior, and allow for some control of biases in model drivers. Graphics modules allow users to evaluate model performance at local, regional, and global scales. Use of modular structures makes it relatively easy for users to add new variables, diagnostic metrics, benchmarking datasets, or model simulations. Diagnostic results are automatically organized into HTML files, so users can conveniently share results with colleagues. We used this system to evaluate atmospheric carbon dioxide, burned area, global biomass and soil carbon stocks, net ecosystem exchange, gross primary production, ecosystem respiration, terrestrial water storage, evapotranspiration, and surface radiation from CMIP5 historical and ESM historical simulations. We found that the multi-model mean often performed better than many of the individual models for most variables. We plan to publicly release a stable version of the software during fall of 2014 that has land surface, carbon cycle, hydrology, radiation and energy cycle components.
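    A toy version of such a multi-aspect scoring system, assuming a simple exp(-|relative error|) score per aspect and equal weighting (the actual ILAMB metrics and weights differ):

```python
import math

def aspect_score(rel_err):
    """Map a relative error to a (0, 1] score; exp(-|e|) is one common choice."""
    return math.exp(-abs(rel_err))

def overall_score(obs_mean, sim_mean, obs_amp, sim_amp,
                  obs_iav, sim_iav, obs_trend, sim_trend):
    """Combine climatological-mean bias, seasonal-cycle amplitude,
    interannual variability, and long-term trend errors into one score
    by simple averaging (illustrative weighting)."""
    errs = [
        (sim_mean - obs_mean) / obs_mean,                     # mean bias
        (sim_amp - obs_amp) / obs_amp,                        # seasonal cycle
        (sim_iav - obs_iav) / obs_iav,                        # interannual var.
        (sim_trend - obs_trend) / max(abs(obs_trend), 1e-9),  # trend
    ]
    return sum(aspect_score(e) for e in errs) / len(errs)

# A perfect model scores 1.0; each error pulls the score toward 0.
print(overall_score(10.0, 10.0, 2.0, 2.0, 0.5, 0.5, 0.1, 0.1))  # → 1.0
print(overall_score(10.0, 12.0, 2.0, 1.5, 0.5, 0.8, 0.1, -0.05))
```

    Keeping each aspect's score separate before averaging is what lets the system report where a model fails (e.g. good mean state but poor interannual variability), not just that it fails.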

  14. BENCHMARK EVALUATION OF THE START-UP CORE REACTOR PHYSICS MEASUREMENTS OF THE HIGH TEMPERATURE ENGINEERING TEST REACTOR

    SciTech Connect

    John Darrell Bess

    2010-05-01

    The benchmark evaluation of the start-up core reactor physics measurements performed with Japan’s High Temperature Engineering Test Reactor, in support of the Next Generation Nuclear Plant Project and Very High Temperature Reactor Program activities at the Idaho National Laboratory, has been completed. The evaluation was performed using MCNP5 with ENDF/B-VII.0 nuclear data libraries and according to guidelines provided for inclusion in the International Reactor Physics Experiment Evaluation Project Handbook. Results provided include updated evaluation of the initial six critical core configurations (five annular and one fully-loaded). The calculated keff eigenvalues agree within 1σ of the benchmark values. Reactor physics measurements that were evaluated include reactivity effects measurements such as excess reactivity during the core loading process and shutdown margins for the fully-loaded core, four isothermal temperature reactivity coefficient measurements for the fully-loaded core, and axial reaction rate measurements in the instrumentation columns of three core configurations. The calculated values agree well with the benchmark experiment measurements. Fully subcritical and warm critical configurations of the fully-loaded core were also assessed. The calculated keff eigenvalues for these two configurations also agree within 1σ of the benchmark values. The reactor physics measurement data can be used in the validation and design development of future High Temperature Gas-cooled Reactor systems.

  15. Benchmarking Data Sets for the Evaluation of Virtual Ligand Screening Methods: Review and Perspectives.

    PubMed

    Lagarde, Nathalie; Zagury, Jean-François; Montes, Matthieu

    2015-07-27

    Virtual screening methods are commonly used nowadays in drug discovery processes. However, to ensure their reliability, they have to be carefully evaluated. The evaluation of these methods is often realized in a retrospective way, notably by studying the enrichment of benchmarking data sets. To this purpose, numerous benchmarking data sets were developed over the years, and the resulting improvements led to the availability of high quality benchmarking data sets. However, some points still have to be considered in the selection of the active compounds, decoys, and protein structures to obtain optimal benchmarking data sets.

  16. BENCHMARK EVALUATION OF THE INITIAL ISOTHERMAL PHYSICS MEASUREMENTS AT THE FAST FLUX TEST FACILITY

    SciTech Connect

    John Darrell Bess

    2010-05-01

    The benchmark evaluation of the initial isothermal physics tests performed at the Fast Flux Test Facility, in support of Fuel Cycle Research and Development and Generation-IV activities at the Idaho National Laboratory, has been completed. The evaluation was performed using MCNP5 with ENDF/B-VII.0 nuclear data libraries and according to guidelines provided for inclusion in the International Reactor Physics Experiment Evaluation Project Handbook. Results provided include evaluation of the initial fully-loaded core critical, two neutron spectra measurements near the axial core center, 32 reactivity effects measurements (21 control rod worths, two control rod bank worths, six differential control rod worths, two shutdown margins, and one excess reactivity), isothermal temperature coefficient, and low-energy electron and gamma spectra measurements at the core center. All measurements were performed at 400 °F. There was good agreement between the calculated and benchmark values for the fully-loaded core critical eigenvalue, reactivity effects measurements, and isothermal temperature coefficient. General agreement between benchmark experiment measurements and calculated spectra for neutrons and low-energy gammas at the core midplane exists, but calculations of the neutron spectra below the core and the low-energy gamma spectra at core midplane did not agree well. Homogenization of core components may have had a significant impact upon computational assessment of these effects. Future work includes development of a fully-heterogeneous model for comprehensive evaluation. The reactor physics measurement data can be used in nuclear data adjustment and validation of computational methods for advanced fuel cycle and nuclear reactor systems using Liquid Metal Fast Reactor technology.

  17. Iowa's Adult Literacy Program Benchmark Projection Report. Program Year 2007, July 1, 2006-June 30, 2007

    ERIC Educational Resources Information Center

    Division of Community Colleges and Workforce Preparation, Iowa Department of Education, 2007

    2007-01-01

    The purpose of this publication is to present Iowa's adult literacy program approved projected benchmark percentage levels for Program Year 2006 (July 1, 2005-June 30, 2006). The passage of the Workforce Investment Act of 1998 (WIA) [Public Law 105-220] by the 105th Congress has ushered in a new era of collaboration, coordination, cooperation and…

  18. Learning from Follow Up Surveys of Graduates: The Austin Teacher Program and the Benchmark Project. A Discussion Paper.

    ERIC Educational Resources Information Center

    Baker, Thomas E.

    This paper describes Austin College's (Texas) participation in the Benchmark Project, a collaborative followup study of teacher education graduates and their principals, focusing on the second round of data collection. The Benchmark Project was a collaboration of 11 teacher preparation programs that gathered and analyzed data comparing graduates…

  19. Monitoring Based Commissioning: Benchmarking Analysis of 24 UC/CSU/IOU Projects

    SciTech Connect

    Mills, Evan; Mathew, Paul

    2009-04-01

    Buildings rarely perform as intended, resulting in energy use that is higher than anticipated. Building commissioning has emerged as a strategy for remedying this problem in non-residential buildings. Complementing traditional hardware-based energy savings strategies, commissioning is a 'soft' process of verifying performance and design intent and correcting deficiencies. Through an evaluation of a series of field projects, this report explores the efficacy of an emerging refinement of this practice, known as monitoring-based commissioning (MBCx). MBCx can also be thought of as monitoring-enhanced building operation that incorporates three components: (1) Permanent energy information systems (EIS) and diagnostic tools at the whole-building and sub-system level; (2) Retro-commissioning based on the information from these tools and savings accounting emphasizing measurement as opposed to estimation or assumptions; and (3) On-going commissioning to ensure efficient building operations and measurement-based savings accounting. MBCx is thus a measurement-based paradigm which affords improved risk-management by identifying problems and opportunities that are missed with periodic commissioning. The analysis presented in this report is based on in-depth benchmarking of a portfolio of MBCx energy savings for 24 buildings located throughout the University of California and California State University systems. In the course of the analysis, we developed a quality-control/quality-assurance process for gathering and evaluating raw data from project sites and then selected a number of metrics to use for project benchmarking and evaluation, including appropriate normalizations for weather and climate, accounting for variations in central plant performance, and consideration of differences in building types. We performed a cost-benefit analysis of the resulting dataset, and provided comparisons to projects from a larger commissioning 'Meta-analysis' database. A total of 1120
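    Weather normalization of the kind used in such benchmarking can be illustrated with a simple heating-degree-day regression; the two-parameter model form and 65 °F base temperature below are common but assumed choices, not the report's exact method:

```python
def fit_degree_day_model(temps_f, energy_kwh, base_f=65.0):
    """Least-squares fit of daily energy use against heating degree-days:
    E = a + b * HDD, where HDD = max(base - T, 0)."""
    hdd = [max(base_f - t, 0.0) for t in temps_f]
    n = len(hdd)
    mean_x = sum(hdd) / n
    mean_y = sum(energy_kwh) / n
    b = (sum((x - mean_x) * (y - mean_y) for x, y in zip(hdd, energy_kwh))
         / sum((x - mean_x) ** 2 for x in hdd))
    a = mean_y - b * mean_x
    return a, b

def normalized_use(a, b, typical_temps_f, base_f=65.0):
    """Predict energy use under typical-year weather with the fitted model,
    so savings comparisons are not skewed by an unusually mild or cold year."""
    return sum(a + b * max(base_f - t, 0.0) for t in typical_temps_f)

# Synthetic example: 100 kWh/day baseload plus 2 kWh per heating degree-day.
temps = [30.0, 40.0, 50.0, 60.0, 70.0]
energy = [100 + 2 * max(65 - t, 0) for t in temps]
a, b = fit_degree_day_model(temps, energy)
print(round(a, 6), round(b, 6))  # → 100.0 2.0
```

    Fitting the model on pre- and post-commissioning data separately, then evaluating both against the same typical-year weather, isolates the savings attributable to the retrofit.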

  20. TPC-V: A Benchmark for Evaluating the Performance of Database Applications in Virtual Environments

    NASA Astrophysics Data System (ADS)

    Sethuraman, Priya; Reza Taheri, H.

    For two decades, TPC benchmarks have been the gold standards for evaluating the performance of database servers. An area that TPC benchmarks had not addressed until now was virtualization. Virtualization is now a major technology in use in data centers, and is the number one technology on Gartner Group's Top Technologies List. In 2009, the TPC formed a Working Group to develop a benchmark specifically intended for virtual environments that run database applications. We will describe the characteristics of this benchmark, and provide a status update on its development.

  1. Putting Data to Work: Interim Recommendations from The Benchmarking Project

    ERIC Educational Resources Information Center

    Miles, Marty; Maguire, Sheila; Woodruff-Bolte, Stacy; Clymer, Carol

    2010-01-01

    As public and private funders have focused on evaluating the effectiveness of workforce development programs, a myriad of data collection systems and reporting processes have taken shape. Navigating these systems takes significant time and energy and often saps frontline providers' capacity to use data internally for program improvement.…

  2. Ready to Retrofit: The Process of Project Team Selection, Building Benchmarking, and Financing Commercial Building Energy Retrofit Projects

    SciTech Connect

    Sanders, Mark D.; Parrish, Kristen; Mathew, Paul

    2012-05-01

    This guide presents a process for three key activities for the building owner in preparing to retrofit existing commercial buildings: selecting project teams, benchmarking the existing building, and financing the retrofit work. Although there are other essential steps in the retrofit process, the three activities presented in this guide are the critical elements where the building owner has the greatest influence on the outcome of the project.

  3. Benchmarking Spike-Based Visual Recognition: A Dataset and Evaluation

    PubMed Central

    Liu, Qian; Pineda-García, Garibaldi; Stromatias, Evangelos; Serrano-Gotarredona, Teresa; Furber, Steve B.

    2016-01-01

    Today, increasing attention is being paid to research into spike-based neural computation both to gain a better understanding of the brain and to explore biologically-inspired computation. Within this field, the primate visual pathway and its hierarchical organization have been extensively studied. Spiking Neural Networks (SNNs), inspired by the understanding of observed biological structure and function, have been successfully applied to visual recognition and classification tasks. In addition, implementations on neuromorphic hardware have enabled large-scale networks to run in (or even faster than) real time, making spike-based neural vision processing accessible on mobile robots. Neuromorphic sensors such as silicon retinas are able to feed such mobile systems with real-time visual stimuli. A new set of vision benchmarks for spike-based neural processing is now needed to measure progress quantitatively within this rapidly advancing field. We propose that a large dataset of spike-based visual stimuli is needed to provide meaningful comparisons between different systems, and a corresponding evaluation methodology is also required to measure the performance of SNN models and their hardware implementations. In this paper we first propose an initial NE (Neuromorphic Engineering) dataset based on standard computer vision benchmarks and using digits from the MNIST database. This dataset is compatible with the state of current research on spike-based image recognition. The corresponding spike trains are produced using a range of techniques: rate-based Poisson spike generation, rank order encoding, and recorded output from a silicon retina with both flashing and oscillating input stimuli. In addition, a complementary evaluation methodology is presented to assess both model-level and hardware-level performance.
Finally, we demonstrate the use of the dataset and the evaluation methodology using two SNN models to validate the performance of the models and their hardware
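    The rate-based Poisson spike generation mentioned above can be sketched as follows; the pixel-to-rate mapping and bin width are illustrative choices, not the dataset's exact parameters:

```python
import random

def poisson_spike_train(rate_hz, duration_s, dt=0.001, seed=0):
    """Rate-based Poisson spike generation: in each time bin of width dt,
    emit a spike with probability rate*dt (valid while rate*dt << 1)."""
    rng = random.Random(seed)
    return [i * dt for i in range(int(duration_s / dt))
            if rng.random() < rate_hz * dt]

def pixel_to_rate(intensity, max_rate_hz=100.0):
    """Map an 8-bit pixel intensity (e.g. an MNIST pixel) to a firing rate."""
    return max_rate_hz * intensity / 255.0

spikes = poisson_spike_train(pixel_to_rate(200), duration_s=1.0)
print(len(spikes))  # on the order of rate * duration spikes (~78 Hz here)
```

    Applying this per pixel converts a static image into the spatio-temporal spike input an SNN consumes; rank order encoding and recorded retina output are alternative front ends for the same task.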

  4. Benchmarking Spike-Based Visual Recognition: A Dataset and Evaluation.

    PubMed

    Liu, Qian; Pineda-García, Garibaldi; Stromatias, Evangelos; Serrano-Gotarredona, Teresa; Furber, Steve B

    2016-01-01

    Today, increasing attention is being paid to research into spike-based neural computation both to gain a better understanding of the brain and to explore biologically-inspired computation. Within this field, the primate visual pathway and its hierarchical organization have been extensively studied. Spiking Neural Networks (SNNs), inspired by the understanding of observed biological structure and function, have been successfully applied to visual recognition and classification tasks. In addition, implementations on neuromorphic hardware have enabled large-scale networks to run in (or even faster than) real time, making spike-based neural vision processing accessible on mobile robots. Neuromorphic sensors such as silicon retinas are able to feed such mobile systems with real-time visual stimuli. A new set of vision benchmarks for spike-based neural processing is now needed to measure progress quantitatively within this rapidly advancing field. We propose that a large dataset of spike-based visual stimuli is needed to provide meaningful comparisons between different systems, and a corresponding evaluation methodology is also required to measure the performance of SNN models and their hardware implementations. In this paper we first propose an initial NE (Neuromorphic Engineering) dataset based on standard computer vision benchmarks and using digits from the MNIST database. This dataset is compatible with the state of current research on spike-based image recognition. The corresponding spike trains are produced using a range of techniques: rate-based Poisson spike generation, rank order encoding, and recorded output from a silicon retina with both flashing and oscillating input stimuli. In addition, a complementary evaluation methodology is presented to assess both model-level and hardware-level performance.
Finally, we demonstrate the use of the dataset and the evaluation methodology using two SNN models to validate the performance of the models and their hardware

  5. Applications of Integral Benchmark Data

    SciTech Connect

    Giuseppe Palmiotti; Teruhiko Kugo; Fitz Trumble; Albert C. Kahler; Dale Lancaster

    2014-10-09

    The International Reactor Physics Experiment Evaluation Project (IRPhEP) and the International Criticality Safety Benchmark Evaluation Project (ICSBEP) provide evaluated integral benchmark data that may be used for validation of reactor physics / nuclear criticality safety analytical methods and data, nuclear data testing, advanced modeling and simulation, and safety analysis licensing activities. The handbooks produced by these programs are used in over 30 countries. Five example applications are presented in this paper: (1) Use of IRPhEP Data in Uncertainty Analyses and Cross Section Adjustment, (2) Uncertainty Evaluation Methods for Reactor Core Design at JAEA Using Reactor Physics Experimental Data, (3) Application of Benchmarking Data to a Broad Range of Criticality Safety Problems, (4) Cross Section Data Testing with ICSBEP Benchmarks, and (5) Use of the International Handbook of Evaluated Reactor Physics Benchmark Experiments to Support the Power Industry.

  6. Development of an ICSBEP Benchmark Evaluation, Nearly 20 Years of Experience

    SciTech Connect

    J. Blair Briggs; John D. Bess

    2011-06-01

    The basic structure of all ICSBEP benchmark evaluations is essentially the same and includes (1) a detailed description of the experiment; (2) an evaluation of the experiment, including an exhaustive effort to quantify the effects of uncertainties on measured quantities; (3) a concise presentation of benchmark-model specifications; (4) sample calculation results; and (5) a summary of experimental references. Computer code input listings and other relevant information are generally preserved in appendixes. Details of an ICSBEP evaluation are presented.

  7. State Education Agency Communications Process: Benchmark and Best Practices Project. Benchmark and Best Practices Project. Issue No. 01

    ERIC Educational Resources Information Center

    Zavadsky, Heather

    2014-01-01

    The role of state education agencies (SEAs) has shifted significantly from low-profile, compliance activities like managing federal grants to engaging in more complex and politically charged tasks like setting curriculum standards, developing accountability systems, and creating new teacher evaluation systems. The move from compliance-monitoring…

  8. Benchmark for evaluation and validation of reactor simulations (BEAVRS)

    SciTech Connect

    Horelik, N.; Herman, B.; Forget, B.; Smith, K.

    2013-07-01

    Advances in parallel computing have made possible the development of high-fidelity tools for the design and analysis of nuclear reactor cores, and such tools require extensive verification and validation. This paper introduces BEAVRS, a new multi-cycle full-core Pressurized Water Reactor (PWR) depletion benchmark based on two operational cycles of a commercial nuclear power plant that provides a detailed description of fuel assemblies, burnable absorbers, in-core fission detectors, core loading patterns, and numerous in-vessel components. This benchmark enables analysts to develop extremely detailed reactor core models that can be used for testing and validation of coupled neutron transport, thermal-hydraulics, and fuel isotopic depletion. The benchmark also provides measured reactor data for Hot Zero Power (HZP) physics tests, boron letdown curves, and three-dimensional in-core flux maps from fifty-eight instrumented assemblies. Initial comparisons between calculations performed with MIT's OpenMC Monte Carlo neutron transport code and measured cycle 1 HZP test data are presented, and these results display an average deviation of approximately 100 pcm for the various critical configurations and control rod worth measurements. Computed HZP radial fission detector flux maps also agree reasonably well with the available measured data. All results indicate that this benchmark will be extremely useful in validation of coupled-physics codes and uncertainty quantification of in-core physics computational predictions. The detailed BEAVRS specification and its associated data package is hosted online at the MIT Computational Reactor Physics Group web site (http://crpg.mit.edu/), where future revisions and refinements to the benchmark specification will be made publicly available. (authors)
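    The ~100 pcm deviations quoted above follow from a simple unit convention: 1 pcm (per cent mille) is 1e-5 in k-effective. A minimal sketch:

```python
def pcm_deviation(k_calc, k_meas):
    """Difference between calculated and measured k-eff in pcm, where
    1 pcm = 1e-5. (Reactivity differences are sometimes quoted instead as
    (k_calc - k_meas) / (k_calc * k_meas); for k near 1 the two agree
    closely.)"""
    return (k_calc - k_meas) * 1e5

print(pcm_deviation(1.00113, 1.00000))  # ~113 pcm
```

    Reporting deviations in pcm rather than raw k-eff makes criticality comparisons across configurations directly comparable at a glance.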

  9. IAEA coordinated research projects on core physics benchmarks for high temperature gas-cooled reactors

    SciTech Connect

    Methnani, M.

    2006-07-01

    High-temperature Gas-Cooled Reactor (HTGR) designs present special computational challenges related to their core physics characteristics, in particular neutron streaming, double heterogeneities, impurities and the random distribution of coated fuel particles in the graphite matrix. In recent years, two consecutive IAEA Coordinated Research Projects (CRP 1 and CRP 5) have focused on code-to-code and code-to-experiment comparisons of representative benchmarks run by several participating international institutes. While the PROTEUS critical HTR experiments provided the test data reference for CRP-1, the more recent CRP-5 data has been made available by the HTTR, HTR-10 and ASTRA test facilities. Other benchmark cases are being considered for the GT-MHR and PBMR core designs. This paper overviews the scope and some sample results of both coordinated research projects. (authors)

  10. Specifications for the Large Core Code Evaluation Working Group Benchmark Problem Four. [LMFBR

    SciTech Connect

    Cowan, C.L.; Protsik, R.

    1981-09-01

    Benchmark studies have been carried out by the members of the Large Core Code Evaluation Working Group (LCCEWG) as part of a broad effort to systematically evaluate the important steps in the reactor design and analysis process for large fast breeder reactors. The specific objectives of the LCCEWG benchmark studies have been: to quantify the accuracy and efficiency of current neutronics methods for large cores; to identify neutronic design problems unique to large breeder reactors; to identify computer code development requirements; and to provide support for large core critical benchmark experiments.

  11. Evaluating the Information Power Grid using the NAS Grid Benchmarks

    NASA Technical Reports Server (NTRS)

    Van der Wijngaart, Rob F.; Frumkin, Michael A.

    2004-01-01

    The NAS Grid Benchmarks (NGB) are a collection of synthetic distributed applications designed to rate the performance and functionality of computational grids. We compare several implementations of the NGB to determine programmability and efficiency of NASA's Information Power Grid (IPG), whose services are mostly based on the Globus Toolkit. We report on the overheads involved in porting existing NGB reference implementations to the IPG. No changes were made to the component tasks of the NGB, though the performance of the grid implementations can still be improved.

  12. Key findings of the US Cystic Fibrosis Foundation's clinical practice benchmarking project.

    PubMed

    Boyle, Michael P; Sabadosa, Kathryn A; Quinton, Hebe B; Marshall, Bruce C; Schechter, Michael S

    2014-04-01

    Benchmarking is the process of using outcome data to identify high-performing centres and determine practices associated with their outstanding performance. The US Cystic Fibrosis Foundation (CFF) Patient Registry contains centre-specific outcomes data for all CFF-certified paediatric and adult cystic fibrosis (CF) care programmes in the USA. The CFF benchmarking project analysed these registry data, adjusting for differences in patient case mix known to influence outcomes, and identified the top-performing US paediatric and adult CF care programmes for pulmonary and nutritional outcomes. Separate multidisciplinary paediatric and adult benchmarking teams each visited 10 CF care programmes, five in the top quintile for pulmonary outcomes and five in the top quintile for nutritional outcomes. Key practice patterns and approaches present in both paediatric and adult programmes with outstanding clinical outcomes were identified and could be summarised as systems, attitudes, practices, patient/family empowerment and projects. These included: (1) the presence of strong leadership and a well-functioning care team working with a systematic approach to providing consistent care; (2) high expectations for outcomes among providers and families; (3) early and aggressive management of clinical declines, avoiding reliance on 'rescues'; and (4) patients/families that were engaged, empowered and well informed on disease management and its rationale. In summary, assessment of practice patterns at CF care centres with top-quintile pulmonary and nutritional outcomes provides insight into characteristic practices that may aid in optimising patient outcomes.

  13. An Overview of the International Reactor Physics Experiment Evaluation Project

    SciTech Connect

    Briggs, J. Blair; Gulliford, Jim

    2014-10-09

    Interest in high-quality integral benchmark data is increasing as efforts to quantify and reduce calculational uncertainties associated with advanced modeling and simulation accelerate to meet the demands of next generation reactor and advanced fuel cycle concepts. Two Organization for Economic Cooperation and Development (OECD) Nuclear Energy Agency (NEA) activities, the International Criticality Safety Benchmark Evaluation Project (ICSBEP), initiated in 1992, and the International Reactor Physics Experiment Evaluation Project (IRPhEP), initiated in 2003, have been identifying existing integral experiment data, evaluating those data, and providing integral benchmark specifications for methods and data validation for nearly two decades. Data provided by those two projects will be of use to the international reactor physics, criticality safety, and nuclear data communities for future decades. An overview of the IRPhEP and a brief update of the ICSBEP are provided in this paper.

  14. Preliminary Benchmark Evaluation of Japan’s High Temperature Engineering Test Reactor

    SciTech Connect

    John Darrell Bess

    2009-05-01

    A benchmark model of the initial fully-loaded start-up core critical of Japan’s High Temperature Engineering Test Reactor (HTTR) was developed to provide data in support of ongoing validation efforts of the Very High Temperature Reactor Program using publicly available resources. The HTTR is a 30 MWt test reactor utilizing graphite moderation, helium coolant, and prismatic TRISO fuel. The benchmark was modeled using MCNP5 with various neutron cross-section libraries. An uncertainty evaluation was performed by perturbing the benchmark model and comparing the resultant eigenvalues. The calculated eigenvalues are approximately 2-3% greater than expected with an uncertainty of ±0.70%. The primary sources of uncertainty are the impurities in the core and reflector graphite. The release of additional HTTR data could effectively reduce the benchmark model uncertainties and bias. Sensitivity of the results to the graphite impurity content might imply that further evaluation of the graphite content could significantly improve calculated results. Proper characterization of graphite for future Next Generation Nuclear Power reactor designs will improve computational modeling capabilities. Current benchmarking activities include evaluation of the annular HTTR cores and assessment of the remaining start-up core physics experiments, including reactivity effects, reactivity coefficient, and reaction-rate distribution measurements. Long term benchmarking goals might include analyses of the hot zero-power critical, rise-to-power tests, and other irradiation, safety, and technical evaluations performed with the HTTR.
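    The uncertainty evaluation described above, perturbing individual model parameters and comparing the resultant eigenvalues, can be sketched as follows. This is a minimal illustration only: the parameter names and all numerical values are invented and are not taken from the HTTR evaluation.

```python
# Illustrative sketch of a perturbation-based benchmark uncertainty estimate:
# each model parameter is perturbed by its 1-sigma uncertainty, the eigenvalue
# is recomputed, and the resulting effects are combined in quadrature.
# All values below are invented for demonstration.

import math

k_nominal = 1.0250  # baseline calculated eigenvalue (hypothetical)

# eigenvalues from independently perturbed models (hypothetical)
k_perturbed = {
    "graphite impurity (core)": 1.0312,
    "graphite impurity (reflector)": 1.0287,
    "fuel enrichment": 1.0261,
}

# per-parameter effect on keff, in percent
effects = {name: 100.0 * abs(k - k_nominal) / k_nominal
           for name, k in k_perturbed.items()}

# total 1-sigma benchmark uncertainty: quadrature sum of independent effects
total = math.sqrt(sum(e * e for e in effects.values()))
print(f"total uncertainty: +/-{total:.2f}% dk/k")
```

    Combining one-at-a-time perturbations in quadrature assumes the uncertainties are independent; correlated uncertainties would require a covariance treatment.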

  15. Concept of using a benchmark part to evaluate rapid prototype processes

    NASA Technical Reports Server (NTRS)

    Cariapa, Vikram

    1994-01-01

    A conceptual benchmark part for guiding manufacturers and users of rapid prototyping technologies is proposed. This is based on a need to have some tool to evaluate the development of this technology and to assist the user in judiciously selecting a process. The benchmark part is designed to have unique product details and features. The extent to which a rapid prototyping process can reproduce these features becomes a measure of the capability of the process. Since rapid prototyping is a dynamic technology, this benchmark part should be used to continuously monitor process capability of existing and developing technologies. Development of this benchmark part is, therefore, based on an understanding of the properties required from prototypes and characteristics of various rapid prototyping processes and measuring equipment that is used for evaluation.

  16. Evaluation of microfinance projects.

    PubMed

    Johnson, S

    1999-08-01

    This paper criticizes the quick system proposed by Henk Moll for evaluating microfinance projects in the article "How to Pre-Evaluate Credit Projects in Ten Minutes". The author contends that there is a need to emphasize the objectives of the project. The procedure used by Moll, he argues, is applicable only to projects that have just two key objectives, such as credit operations and the provision of services. Arguments are presented on the three specific questions proposed by Moll, ranging from the availability of externally audited financial reports, to the performance of the interest rate on loans vis-a-vis the inflation rate, and the provision of loans according to the individual requirements of the borrowers. Lastly, the author emphasizes that the overall approach is not useful and suggests that careful consideration be given to the use, or abuse, of a simple scoring system or checklist such as the one proposed by Moll.

  17. Benchmark Evaluation of Uranium Metal Annuli and Cylinders with Beryllium Reflectors

    SciTech Connect

    John D. Bess

    2010-06-01

    An extensive series of delayed critical experiments was performed at the Oak Ridge Critical Experiments Facility using enriched uranium metal during the 1960s and 1970s in support of criticality safety operations at the Y-12 Plant. These experiments were designed to evaluate the storage, casting, and handling limits of the Y-12 Plant and to provide data for the verification of cross sections and calculation methods utilized in nuclear criticality safety applications. Many of these experiments have already been evaluated and included in the International Criticality Safety Benchmark Evaluation Project (ICSBEP) Handbook: unreflected (HEU-MET-FAST-051), graphite-reflected (HEU-MET-FAST-071), and polyethylene-reflected (HEU-MET-FAST-076). Three of the experiments consisted of highly enriched uranium (HEU, ~93.2% 235U) metal parts reflected by beryllium metal discs. The first evaluated experiment was constructed from a 7-in.-diameter, 4-1/8-in.-high stack of HEU discs top-reflected by a 7-in.-diameter, 5-9/16-in.-high stack of beryllium discs. The other two experiments were formed from stacks of concentric HEU metal annular rings surrounding a 7-in.-diameter beryllium core. The nominal outer diameters were 13 and 15 in., with nominal stack heights of 5 and 4 in., respectively. These experiments have been evaluated for inclusion in the ICSBEP Handbook.

  18. Towards a benchmark simulation model for plant-wide control strategy performance evaluation of WWTPs.

    PubMed

    Jeppsson, U; Rosen, C; Alex, J; Copp, J; Gernaey, K V; Pons, M N; Vanrolleghem, P A

    2006-01-01

    The COST/IWA benchmark simulation model has been available for seven years. Its primary purpose has been to create a platform for control strategy benchmarking of activated sludge processes. The fact that the benchmark has resulted in more than 100 publications, not only in Europe but also worldwide, demonstrates the interest in such a tool within the research community. In this paper, an extension of the benchmark simulation model no. 1 (BSM1) is proposed. This extension aims at facilitating control strategy development and performance evaluation at a plant-wide level and, consequently, includes both the pre-treatment of wastewater and the processes describing sludge treatment. The motivation for the extension is the increasing interest and need to operate and control wastewater treatment systems not only at an individual process level but also on a plant-wide basis. To facilitate the changes, the evaluation period has been extended to one year. A prolonged evaluation period allows long-term control strategies to be assessed and enables the use of control handles that cannot be evaluated in a realistic fashion within the one-week BSM1 evaluation period. In the paper, the extended plant layout is proposed and the newly suggested process models are described briefly. Models for influent file design, the benchmarking procedure, and the evaluation criteria are also discussed. Finally, some important remaining topics, for which consensus is required, are identified.

  19. Overview of the 2014 Edition of the International Handbook of Evaluated Reactor Physics Benchmark Experiments (IRPhEP Handbook)

    SciTech Connect

    John D. Bess; J. Blair Briggs; Jim Gulliford; Ian Hill

    2014-10-01

    The International Reactor Physics Experiment Evaluation Project (IRPhEP) is a widely recognized, world-class program. The work of the IRPhEP is documented in the International Handbook of Evaluated Reactor Physics Benchmark Experiments (IRPhEP Handbook). Integral data from the IRPhEP Handbook are used by reactor safety and design, nuclear data, criticality safety, and analytical methods development specialists, worldwide, to perform necessary validations of their calculational techniques. The IRPhEP Handbook is among the most frequently quoted references in the nuclear industry and is expected to be a valuable resource for future decades.

  20. TOSPAC calculations in support of the COVE 2A benchmarking activity; Yucca Mountain Site Characterization Project

    SciTech Connect

    Gauthier, J.H.; Zieman, N.B.; Miller, W.B.

    1991-10-01

    The purpose of the Code Verification (COVE) 2A benchmarking activity is to assess the numerical accuracy of several computer programs for the Yucca Mountain Site Characterization Project of the Department of Energy. This paper presents a brief description of the computer program TOSPAC and a discussion of the calculational effort and results generated by TOSPAC for the COVE 2A problem set. The calculations were performed twice. The initial calculations provided preliminary results for comparison with the results from other COVE 2A participants. TOSPAC was modified in response to the comparison, and the final calculations included a correction and several enhancements to improve efficiency. 8 refs.

  1. Evaluation of Project Trend.

    ERIC Educational Resources Information Center

    Unco, Inc., Washington, DC.

    This report is a descriptive evaluation of the five pilot sites of Project TREND (Targeting Resources on the Educational Needs of the Disadvantaged). The five Local Education Agency (LEA) pilot sites are the educational systems of: (1) Akron, Ohio; (2) El Paso, Texas; (3) Newark, New Jersey; (4) Portland, Oregon; and, (5) San Jose (Unified),…

  2. Benchmark Evaluation of the Neutron Radiography (NRAD) Reactor Upgraded LEU-Fueled Core

    SciTech Connect

    John D. Bess

    2001-09-01

    Benchmark models were developed to evaluate the cold-critical start-up measurements performed during the fresh core reload of the Neutron Radiography (NRAD) reactor with Low Enriched Uranium (LEU) fuel. The final upgraded core configuration with 64 fuel elements has been completed. Evaluated benchmark measurement data include criticality, control-rod worth measurements, shutdown margin, and excess reactivity. Dominant uncertainties in keff include the manganese content and impurities contained within the stainless steel cladding of the fuel and the 236U and erbium poison content in the fuel matrix. Calculations with MCNP5 and ENDF/B-VII.0 nuclear data are approximately 1.4% greater than the benchmark model eigenvalue, supporting contemporary research regarding errors in the cross section data necessary to simulate TRIGA-type reactors. Uncertainties in reactivity effects measurements are estimated to be ~10%, with calculations in agreement with benchmark experiment values within 2σ. The completed benchmark evaluation details are available in the 2014 edition of the International Handbook of Evaluated Reactor Physics Experiments (IRPhEP Handbook). Evaluations of the NRAD LEU cores containing 56, 60, and 62 fuel elements have also been completed, including analysis of their respective reactivity effects measurements; they are also available in the IRPhEP Handbook but will not be included in this summary paper.

  3. The ORSphere Benchmark Evaluation and Its Potential Impact on Nuclear Criticality Safety

    SciTech Connect

    John D. Bess; Margaret A. Marshall; J. Blair Briggs

    2013-10-01

    In the early 1970’s, critical experiments using an unreflected metal sphere of highly enriched uranium (HEU) were performed with the focus to provide a “very accurate description…as an ideal benchmark for calculational methods and cross-section data files.” Two near-critical configurations of the Oak Ridge Sphere (ORSphere) were evaluated as acceptable benchmark experiments for inclusion in the International Handbook of Evaluated Criticality Safety Benchmark Experiments (ICSBEP Handbook). The results from those benchmark experiments were then compared with additional unmoderated and unreflected HEU metal benchmark experiment configurations currently found in the ICSBEP Handbook. For basic geometries (spheres, cylinders, and slabs) the eigenvalues calculated using MCNP5 and ENDF/B-VII.0 were within 3σ of their respective benchmark values. There appears to be generally good agreement between calculated and benchmark values for spherical and slab geometry systems. Cylindrical geometry configurations tended to calculate low, including more complex bare HEU metal systems containing cylinders. The ORSphere experiments do not calculate within their 1σ uncertainty, and there is a possibility that the effect of the measured uncertainties for the GODIVA I benchmark may need to be reevaluated. There is significant scatter in the calculations for the highly correlated ORCEF cylinder experiments, which are constructed from close-fitting HEU discs and annuli. Selection of a nuclear data library can have a larger impact on calculated eigenvalue results than the variation found within calculations of a given experimental series, such as the ORCEF cylinders, using a single nuclear data set.

  4. Benchmark Testing of a New 56Fe Evaluation for Criticality Safety Applications

    SciTech Connect

    Leal, Luiz C; Ivanov, E.

    2015-01-01

    The SAMMY code was used to evaluate resonance parameters of the 56Fe cross section in the resolved resonance energy range of 0–2 MeV using transmission data and capture, elastic, inelastic, and double-differential elastic cross sections. The resonance analysis was performed with SAMMY, which fits R-matrix resonance parameters using the generalized least-squares technique (Bayes’ theory). The evaluation yielded a set of resonance parameters that reproduces the experimental data very well, along with a resonance parameter covariance matrix for data uncertainty calculations. Benchmark tests were conducted to assess the evaluation’s performance in benchmark calculations.
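    The generalized least-squares (Bayes) update underlying such a fit can be illustrated in its simplest scalar form, where one prior parameter is adjusted toward a single measurement through its sensitivity. This is a conceptual sketch only; all numbers are invented and the actual SAMMY formulation is multivariate.

```python
# Scalar sketch of a generalized least-squares (Bayes) parameter update:
# a prior parameter x with variance p is adjusted toward a measurement d
# with variance v through the sensitivity g = dt/dx of the theory t.
# All numbers are invented for illustration.

x, p = 1.00, 0.04      # prior parameter value and its variance
d, v = 1.10, 0.01      # measured value and its variance
g = 1.0                # sensitivity of the theory to the parameter
t = 1.05               # theory value computed from the prior x

gain = p * g / (g * p * g + v)
x_new = x + gain * (d - t)   # parameter pulled toward the data
p_new = p - gain * g * p     # posterior variance, reduced by the measurement

print(x_new, p_new)
```

    The same structure, with vectors and covariance matrices in place of scalars, yields both the adjusted resonance parameters and the resonance parameter covariance matrix mentioned above.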

  5. Reactor Physics and Criticality Benchmark Evaluations for Advanced Nuclear Fuel - Final Technical Report

    SciTech Connect

    William Anderson; James Tulenko; Bradley Rearden; Gary Harms

    2008-09-11

    The nuclear industry interest in advanced fuel and reactor design often drives towards fuel with uranium enrichments greater than 5 wt% 235U. Unfortunately, little data exists, in the form of reactor physics and criticality benchmarks, for uranium enrichments ranging between 5 and 10 wt% 235U. The primary purpose of this project is to provide benchmarks for fuel similar to what may be required for advanced light water reactors (LWRs). These experiments will ultimately provide additional information for application to the criticality-safety bases for commercial fuel facilities handling greater than 5 wt% 235U fuel.

  6. Evaluation of the Aleph PIC Code on Benchmark Simulations

    NASA Astrophysics Data System (ADS)

    Boerner, Jeremiah; Pacheco, Jose; Grillet, Anne

    2016-09-01

    Aleph is a massively parallel, 3D unstructured mesh, Particle-in-Cell (PIC) code, developed to model low temperature plasma applications. In order to verify and validate performance, Aleph is benchmarked against a series of canonical problems to demonstrate statistical indistinguishability in the results. Here, a series of four problems is studied: Couette flows over a range of Knudsen number, sheath formation in an undriven plasma, the two-stream instability, and a capacitive discharge. These problems respectively exercise collisional processes, particle motion in electrostatic fields, electrostatic field solves coupled to particle motion, and a fully coupled reacting plasma. Favorable comparison with accepted results establishes confidence in Aleph's capability and accuracy as a general purpose PIC code. Finally, Aleph is used to investigate the sensitivity of a triggered vacuum gap switch to the particle injection conditions associated with arc breakdown at the trigger. Sandia National Laboratories is a multi-program laboratory managed and operated by Sandia Corporation, a wholly owned subsidiary of Lockheed Martin Corporation, for the U.S. Department of Energy's National Nuclear Security Administration under contract DE-AC04-94AL85000.

  7. [Benchmarking in health care: conclusions and recommendations].

    PubMed

    Geraedts, Max; Selbmann, Hans-Konrad

    2011-01-01

    The German Health Ministry funded 10 demonstration projects and accompanying research of benchmarking in health care. The accompanying research work aimed to infer generalisable findings and recommendations. We performed a meta-evaluation of the demonstration projects and analysed national and international approaches to benchmarking in health care. It was found that the typical benchmarking sequence is hardly ever realised. Most projects lack a detailed analysis of structures and processes of the best performers as a starting point for the process of learning from and adopting best practice. To tap the full potential of benchmarking in health care, participation in voluntary benchmarking projects should be promoted that have been demonstrated to follow all the typical steps of a benchmarking process.

  8. Extension of the IWA/COST simulation benchmark to include expert reasoning for system performance evaluation.

    PubMed

    Comas, J; Rodríguez-Roda, I; Poch, M; Gernaey, K V; Rosen, C; Jeppsson, U

    2006-01-01

    In this paper the development of an extension module to the IWA/COST simulation benchmark to include expert reasoning is presented. This module enables the detection of suitable conditions for the development of settling problems of biological origin (filamentous bulking, foaming and rising sludge) when applying activated sludge control strategies to the simulation benchmark. Firstly, a flow diagram is proposed for each settling problem, and secondly, the outcome of its application is shown. Results of the benchmark for two evaluated control strategies illustrate that, once applied to the simulation outputs, this module provides supplementary criteria for plant performance assessment. Therefore, simulated control strategies can be evaluated in a more realistic framework, and results can be recognised as more realistic and satisfactory from the point of view of operators and real facilities.

  9. Benchmark Evaluation of Start-Up and Zero-Power Measurements at the High-Temperature Engineering Test Reactor

    SciTech Connect

    Bess, John D.; Fujimoto, Nozomu

    2014-10-09

    Benchmark models were developed to evaluate six cold-critical and two warm-critical, zero-power measurements of the HTTR. Additional measurements of a fully-loaded subcritical configuration, core excess reactivity, shutdown margins, six isothermal temperature coefficients, and axial reaction-rate distributions were also evaluated as acceptable benchmark experiments. Insufficient information is publicly available to develop finely-detailed models of the HTTR, as much of the design information is still proprietary. However, the uncertainties in the benchmark models are judged to be of sufficient magnitude to encompass any biases and bias uncertainties incurred through the simplification process used to develop the benchmark models. Dominant uncertainties in the experimental keff for all core configurations come from uncertainties in the impurity content of the various graphite blocks that comprise the HTTR. Monte Carlo calculations of keff are between approximately 0.9% and 2.7% greater than the benchmark values. Reevaluation of the HTTR models as additional information becomes available could improve the quality of this benchmark and possibly reduce the computational biases. High-quality characterization of graphite impurities would significantly improve the quality of the HTTR benchmark assessment. Simulations of the other reactor physics measurements are in good agreement with the benchmark experiment values. The complete benchmark evaluation details are available in the 2014 edition of the International Handbook of Evaluated Reactor Physics Benchmark Experiments.

  11. Benchmarking the Remote-Handled Waste Facility at the West Valley Demonstration Project

    SciTech Connect

    O. P. Mendiratta; D. K. Ploetz

    2000-02-29

    Facility decontamination activities at the West Valley Demonstration Project (WVDP), the site of a former commercial nuclear spent fuel reprocessing facility near Buffalo, New York, have resulted in the removal of radioactive waste. Due to the high dose and/or high contamination levels of this waste, it must be handled remotely for processing and repackaging into transport/disposal-ready containers. An initial conceptual design for a Remote-Handled Waste Facility (RHWF), completed in June 1998, was estimated to cost $55 million and take 11 years to process the waste. Benchmarking of the RHWF against other facilities around the world, completed in November 1998, identified unique facility design features and innovative waste processing methods. Incorporation of the benchmarking effort has led to a smaller yet fully functional, $31 million facility. To distinguish it from the June 1998 version, the revised design is called the Rescoped Remote-Handled Waste Facility (RRHWF) in this topical report. The conceptual design for the RRHWF was completed in June 1999. A design-build contract was approved by the Department of Energy in September 1999.

  12. Benchmarks for evaluation and comparison of udder health status using monthly individual somatic cell count.

    PubMed

    Fauteux, Véronique; Roy, Jean-Philippe; Scholl, Daniel T; Bouchard, Émile

    2014-08-01

    The objectives of this study were to propose benchmarks for the interpretation of herd udder health using monthly individual somatic cell counts (SCC) from dairy herds in Quebec, Canada, and to evaluate the association of risk factors with intramammary infection (IMI) dynamics relative to these benchmarks. The mean and percentiles of indices related to udder infection status [e.g., proportion of healthy or chronically infected cows, cows cured, and new IMI (NIMI) rate] during lactation and over the dry period were calculated using a threshold of ≥ 200 000 cells/mL at test day. The mean NIMI proportion and the proportion of cows cured during lactation were 0.11 and 0.27, respectively. Benchmarks of 0.70 and 0.03 for healthy and chronically infected cows over the dry period were proposed. Season and herd mean SCC were risk factors influencing IMI dynamics during lactation and over the dry period.
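    The indices above can be derived directly from paired monthly test-day SCC values. A minimal sketch of that classification, using the study's 200 000 cells/mL threshold; the cow records below are invented example data, not from the Quebec registry:

```python
# Sketch of computing udder-health indices from paired monthly SCC test days,
# classifying a cow as infected when SCC >= 200,000 cells/mL (as in the study).
# The cow records below are invented example data.

THRESHOLD = 200_000  # cells/mL

# (SCC at previous test day, SCC at current test day) per cow
cows = [
    (150_000, 120_000),   # healthy  -> healthy
    (90_000, 450_000),    # healthy  -> infected: new IMI
    (600_000, 80_000),    # infected -> healthy:  cured
    (350_000, 500_000),   # infected -> infected: chronic
]

def infected(scc):
    return scc >= THRESHOLD

n = len(cows)
healthy = sum(not infected(a) and not infected(b) for a, b in cows) / n
new_imi = sum(not infected(a) and infected(b) for a, b in cows) / n
chronic = sum(infected(a) and infected(b) for a, b in cows) / n

# cure proportion is computed among cows infected at the previous test day
prev_infected = [c for c in cows if infected(c[0])]
cured = sum(not infected(b) for _, b in prev_infected) / len(prev_infected)

print(healthy, new_imi, chronic, cured)
```

    Herd-level benchmarks then amount to comparing these proportions against reference percentiles, such as the 0.70 healthy and 0.03 chronic values proposed over the dry period.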

  13. Benchmark Evaluation of the HTR-PROTEUS Absorber Rod Worths (Core 4)

    SciTech Connect

    John D. Bess; Leland M. Montierth

    2014-06-01

    PROTEUS was a zero-power research reactor at the Paul Scherrer Institute (PSI) in Switzerland. The critical assembly was constructed from a large graphite annulus surrounding a central cylindrical cavity. Various experimental programs were investigated in PROTEUS; during the years 1992 through 1996, it was configured as a pebble-bed reactor and designated HTR-PROTEUS. Various critical configurations were assembled, each accompanied by an assortment of reactor physics experiments including differential and integral absorber rod measurements, kinetics, reaction rate distributions, water ingress effects, and small sample reactivity effects [1]. Four benchmark reports were previously prepared and included in the March 2013 edition of the International Handbook of Evaluated Reactor Physics Benchmark Experiments (IRPhEP Handbook) [2], evaluating eleven critical configurations. A summary of that effort was previously provided [3], and an analysis of absorber rod worth measurements for Cores 9 and 10 was performed prior to this analysis and included in PROTEUS-GCR-EXP-004 [4]. In the current benchmark effort, absorber rod worths measured for Core Configuration 4, which was the only core with a randomly packed pebble loading, have been evaluated for inclusion as a revision to the HTR-PROTEUS benchmark report PROTEUS-GCR-EXP-002.

  14. Evaluation of mobile phone camera benchmarking using objective camera speed and image quality metrics

    NASA Astrophysics Data System (ADS)

    Peltoketo, Veli-Tapani

    2014-11-01

    When a mobile phone camera is tested and benchmarked, the significance of image quality metrics is widely acknowledged. There are also existing methods to evaluate camera speed. However, the speed metrics of the mobile phone's camera system have not been used together with the quality metrics, even though camera speed has become an increasingly important camera performance feature. This work comprises several tasks. First, the most important image quality and speed-related metrics of a mobile phone's camera system are collected from standards and papers, and novel speed metrics are identified. Second, combinations of the quality and speed metrics are validated using mobile phones on the market. The measurements are made against the application programming interfaces of different operating systems. Finally, the results are evaluated and conclusions are drawn. The paper defines a solution for combining different image quality and speed metrics into a single benchmarking score. A proposal for the combined benchmarking metric is evaluated using measurements of 25 mobile phone cameras on the market. The paper is a continuation of previous benchmarking work, expanded with visual noise measurement and updates for the latest mobile phone versions.
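    The published combined metric is not reproduced here, but the general idea of folding normalized quality and speed metrics into a single score can be sketched as follows. The metric names, reference ranges, and weights are all assumptions for illustration; the paper's actual metric differs.

```python
# Illustrative combination of image-quality and speed metrics into one
# benchmark score via weighted normalized sub-scores. Metric names, reference
# ranges, and weights are invented; the published metric differs.

def normalize(value, worst, best):
    """Map a raw metric onto [0, 1], where 1 is best (works for either
    direction, since worst/best set the orientation)."""
    score = (value - worst) / (best - worst)
    return min(max(score, 0.0), 1.0)

# raw measurements for a hypothetical phone camera
metrics = {
    # name: (measured value, worst ref, best ref, weight)
    "resolution_lpmm":  (1400.0, 500.0, 2000.0, 0.3),
    "visual_noise":     (3.0, 10.0, 0.0, 0.3),   # lower is better
    "shot_to_shot_s":   (1.2, 5.0, 0.2, 0.2),    # lower is better
    "autofocus_time_s": (0.5, 2.0, 0.1, 0.2),    # lower is better
}

score = sum(w * normalize(v, worst, best)
            for v, worst, best, w in metrics.values())
print(f"combined benchmark score: {score:.3f}")  # weighted sum in [0, 1]
```

    Because the weights sum to 1, the combined score stays on a fixed [0, 1] scale, which makes rankings across many cameras comparable.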

  15. Epitope prediction based on random peptide library screening: benchmark dataset and prediction tools evaluation.

    PubMed

    Sun, Pingping; Chen, Wenhan; Huang, Yanxin; Wang, Hongyan; Ma, Zhiqiang; Lv, Yinghua

    2011-06-16

    Epitope prediction based on random peptide library screening has become a focus as a promising method in immunoinformatics research. Some novel software tools and web-based servers have been proposed in recent years and have succeeded on given test cases. However, since the number of available mimotopes with the relevant structure of the template-target complex is limited, a systematic evaluation of these methods has been absent. In this study, a new benchmark dataset was defined. Using this benchmark dataset and a representative dataset, five of the most popular epitope prediction software products based on random peptide library screening were evaluated. On the benchmark dataset, no method exceeded a precision of 0.42 and a sensitivity of 0.37, and the MCC scores suggest that the epitope predictions of these programs exceed random prediction by only about 0.09-0.13; on the representative dataset, most of these performance measures improve slightly, but the overall performance is still not satisfactory. Many test cases in the benchmark dataset cannot be applied to these software products due to software limitations. Moreover, chances are that these products are overfitted to the small dataset and will fail in other cases. Therefore, finding the correlation between mimotopes and genuine epitope residues is still far from resolved, and a much larger dataset for mimotope-based epitope prediction is desirable.
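    For reference, the performance measures quoted above (precision, sensitivity, and MCC) all follow from a residue-level confusion matrix. A minimal sketch with invented counts, not the study's actual data:

```python
# Precision, sensitivity (recall), and Matthews correlation coefficient (MCC)
# from residue-level prediction counts. The counts below are invented.

import math

tp, fp, fn, tn = 30, 45, 55, 870  # true/false positives and negatives

precision = tp / (tp + fp)
sensitivity = tp / (tp + fn)
mcc = (tp * tn - fp * fn) / math.sqrt(
    (tp + fp) * (tp + fn) * (tn + fp) * (tn + fn))

print(f"precision={precision:.2f} sensitivity={sensitivity:.2f} mcc={mcc:.2f}")
```

    MCC is the most informative of the three here because epitope residues are a small minority class, so precision and sensitivity alone can look acceptable even for near-random predictors.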

  16. Evaluation methods for hospital projects.

    PubMed

    Buelow, Janet R; Zuckweiler, Kathryn M; Rosacker, Kirsten M

    2010-01-01

    The authors report the findings of a survey of hospital managers on the utilization of various project selection and evaluation methodologies. The focus of the analysis was the empirical relationship between a portfolio of project evaluation methods actually utilized for a given project and several measures of perceived project success. The analysis revealed that cost-benefit analysis and top management support were the two project evaluation methods used most often by the hospital managers. The authors' empirical assessment provides evidence that top management support is associated with overall project success.

  17. RESULTS FOR THE INTERMEDIATE-SPECTRUM ZEUS BENCHMARK OBTAINED WITH NEW 63,65Cu CROSS-SECTION EVALUATIONS

    SciTech Connect

    Sobes, Vladimir; Leal, Luiz C

    2014-01-01

    The four HEU, intermediate-spectrum, copper-reflected Zeus experiments have shown discrepant results between measurement and calculation over the last several major releases of the ENDF library. The four benchmarks show a trend in reported C/E values with increasing energy of average lethargy causing fission. Recently, ORNL has improved the evaluations of three key isotopes involved in the benchmark cases in question: an updated evaluation for 235U and new evaluations for 63,65Cu. This paper presents the benchmarking results of the four intermediate-spectrum Zeus cases using the three updated evaluations.

  18. Evaluation of HEU-Beryllium Benchmark Experiments to Improve Computational Analysis of Space Reactors

    SciTech Connect

    Bess, John; Bledsoe, Keith C; Rearden, Bradley T

    2011-01-01

    An assessment was previously performed to evaluate modeling capabilities and quantify preliminary biases and uncertainties associated with the modeling methods and data utilized in designing a nuclear reactor such as a beryllium-reflected, highly-enriched-uranium (HEU)-O2 fission surface power (FSP) system for space nuclear power. The conclusion of the previous study was that current capabilities could preclude the necessity of a cold critical test of the FSP; however, additional testing would reduce uncertainties in the beryllium and uranium cross-section data and the overall uncertainty in the computational models. A series of critical experiments using HEU metal were performed in the 1960s and 1970s in support of criticality safety operations at the Y-12 Plant. Of the hundreds of experiments, three were identified as fast-fission configurations reflected by beryllium metal. These experiments have been evaluated as benchmarks for inclusion in the International Handbook of Evaluated Criticality Safety Benchmark Experiments (IHECSBE). Further evaluation of the benchmark experiments was performed using the sensitivity and uncertainty analysis capabilities of SCALE 6. The data adjustment methods of SCALE 6 have been employed in the validation of an example FSP design model to reduce the uncertainty due to the beryllium cross section data.

  20. Development of Conceptual Benchmark Models to Evaluate Complex Hydrologic Model Calibration in Managed Basins Using Python

    NASA Astrophysics Data System (ADS)

    Hughes, J. D.; White, J.

    2013-12-01

    For many numerical hydrologic models it is a challenge to quantitatively demonstrate that complex models are preferable to simpler models. Typically, a decision is made to develop and calibrate a complex model at the beginning of a study. The value of selecting a complex model over simpler models is commonly inferred from use of a model with fewer simplifications of the governing equations because it can be time consuming to develop another numerical code with data processing and parameter estimation functionality. High-level programming languages like Python can greatly reduce the effort required to develop and calibrate simple models that can be used to quantitatively demonstrate the increased value of a complex model. We have developed and calibrated a spatially-distributed surface-water/groundwater flow model for managed basins in southeast Florida, USA, to (1) evaluate the effect of municipal groundwater pumpage on surface-water/groundwater exchange, (2) investigate how the study area will respond to sea-level rise, and (3) explore combinations of these forcing functions. To demonstrate the increased value of this complex model, we developed a two-parameter conceptual-benchmark-discharge model for each basin in the study area. The conceptual-benchmark-discharge model includes seasonal scaling and lag parameters and is driven by basin rainfall. The conceptual-benchmark-discharge models were developed in the Python programming language and used weekly rainfall data. Calibration was implemented with the Broyden-Fletcher-Goldfarb-Shanno method available in the Scientific Python (SciPy) library. Normalized benchmark efficiencies calculated using output from the complex model and the corresponding conceptual-benchmark-discharge model indicate that the complex model has more explanatory power than the simple model driven only by rainfall.
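    A minimal sketch of such a two-parameter conceptual discharge model calibrated with the BFGS method in SciPy, assuming a simple linear-reservoir formulation; the paper's actual model uses seasonal scaling and lag parameters driven by basin rainfall, so everything here is illustrative:

    ```python
    # Hedged sketch (not the authors' code): a two-parameter rainfall-discharge
    # model -- a scale factor and a linear-reservoir recession coefficient --
    # calibrated with the BFGS optimizer from SciPy, as the abstract describes.
    import numpy as np
    from scipy.optimize import minimize

    def simulate(params, rain):
        """Linear-reservoir discharge: q[t] = scale*rain[t] + recession*q[t-1]."""
        scale, recession = params
        q = np.zeros_like(rain)
        for t in range(1, rain.size):
            q[t] = scale * rain[t] + recession * q[t - 1]
        return q

    rng = np.random.default_rng(0)
    rain = rng.gamma(2.0, 1.0, size=156)           # ~3 years of weekly rainfall
    observed = simulate((0.6, 0.4), rain)          # synthetic "observed" discharge

    def sse(params):
        """Sum-of-squared-errors objective minimized during calibration."""
        return float(np.sum((simulate(params, rain) - observed) ** 2))

    fit = minimize(sse, x0=[1.0, 0.1], method="BFGS")
    print(np.round(fit.x, 3))   # recovers roughly [0.6, 0.4]
    ```

    The point of the abstract is that a model this small can be written and calibrated in a few dozen lines of Python, which makes it cheap to build a benchmark against which a complex distributed model can be judged.
    
    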

  1. Benchmarking and Its Relevance to the Library and Information Sector. Interim Findings of "Best Practice Benchmarking in the Library and Information Sector," a British Library Research and Development Department Project.

    ERIC Educational Resources Information Center

    Kinnell, Margaret; Garrod, Penny

    This British Library Research and Development Department study assesses current activities and attitudes toward quality management in library and information services (LIS) in the academic sector as well as the commercial/industrial sector. Definitions and types of benchmarking are described, and the relevance of benchmarking to LIS is evaluated.…

  2. Benchmark Evaluation of the Medium-Power Reactor Experiment Program Critical Configurations

    SciTech Connect

    Margaret A. Marshall; John D. Bess

    2013-02-01

    A series of small, compact critical assembly (SCCA) experiments were performed in 1962-1965 at the Oak Ridge National Laboratory Critical Experiments Facility (ORCEF) for the Medium-Power Reactor Experiment (MPRE) program. The MPRE was a stainless-steel-clad, highly enriched uranium (HEU)-O2-fuelled, BeO-reflected reactor designed to provide electrical power to space vehicles. Cooling and heat transfer were to be achieved by boiling potassium in the reactor core and passing the vapor directly through a turbine. Graphite- and beryllium-reflected assemblies were constructed at ORCEF to verify the critical mass, power distribution, and other reactor physics measurements needed to validate reactor calculations and reactor physics methods. The experimental series was broken into three parts, with the third portion representing the beryllium-reflected measurements. The latter experiments are of interest for validating current reactor design efforts for a fission surface power reactor. The entire series has been evaluated as acceptable benchmark experiments and submitted for publication in the International Handbook of Evaluated Criticality Safety Benchmark Experiments and in the International Handbook of Evaluated Reactor Physics Benchmark Experiments.

  3. Windows NT Workstation Performance Evaluation Based on Pro/E 2000i BENCHMARK

    SciTech Connect

    DAVIS,SEAN M.

    2000-08-02

    A performance evaluation of several computers was necessary, so an evaluation program, or benchmark, was run on each computer to determine maximum possible performance. The program was used to test the Computer Aided Drafting (CAD) ability of each computer by monitoring the speed with which several functions were executed. The main objective of the benchmarking program was to record assembly loading times and image regeneration times and then compile a composite score that could be compared with the same tests on other computers. The three computers tested were the Compaq AP550, the SGI 230, and the Hewlett-Packard P750C. The Compaq and SGI computers each had a Pentium III 733 MHz processor, while the Hewlett-Packard had a Pentium III 750 MHz processor. The size and speed of Random Access Memory (RAM) in each computer varied, as did the type of graphics card. Each computer tested was running Windows NT 4.0 and the Pro/ENGINEER™ 2000i CAD benchmark software provided by the Standard Performance Evaluation Corporation (SPEC). The benchmarking program came with its own assembly, automatically loaded and ran tests on the assembly, and then compiled the time each test took to complete. Due to the automation of the tests, user error affecting test scores was virtually eliminated. After all the tests were completed, scores were compiled and compared. The SGI 230 was by far the overall winner with a composite score of 8.57. The Compaq AP550 was next with a score of 5.19, while the Hewlett-Packard P750C performed dismally, achieving a score of 3.34. Several factors, including motherboard chipset, graphics card, and the size and speed of RAM, contributed to the differing scores of the three machines. Surprisingly, the Hewlett-Packard, which had the fastest processor, returned the lowest score. The above factors most likely contributed to its poor performance. Based on the results of the benchmark test

  4. Automated Generation of Message-Passing Programs: An Evaluation of CAPTools using NAS Benchmarks

    NASA Technical Reports Server (NTRS)

    Hribar, Michelle R.; Jin, Hao-Qiang; Yan, Jerry C.; Bailey, David (Technical Monitor)

    1998-01-01

    Scientists at NASA Ames Research Center have been developing computational aeroscience applications on highly parallel architectures over the past ten years. During the same time period, a steady transition of hardware and system software also occurred, forcing us to expend great effort migrating and recoding our applications. As applications and machine architectures continue to become increasingly complex, the cost and time required for this process will become prohibitive. Various attempts to exploit software tools to assist and automate the parallelization process have not produced favorable results. In this paper, we evaluate an interactive parallelization tool, CAPTools, for parallelizing serial versions of the NAS Parallel Benchmarks. Finally, we compare the performance of the resulting CAPTools-generated code to the hand-coded benchmarks on the Origin 2000 and IBM SP2. Based on these results, a discussion on the feasibility of automated parallelization of aerospace applications is presented along with suggestions for future work.

  5. Team Projects and Peer Evaluations

    ERIC Educational Resources Information Center

    Doyle, John Kevin; Meeker, Ralph D.

    2008-01-01

    The authors assign semester- or quarter-long team-based projects in several Computer Science and Finance courses. This paper reports on our experience in designing, managing, and evaluating such projects. In particular, we discuss the effects of team size and of various peer evaluation schemes on team performance and student learning. We report…

  6. MPI performance evaluation and characterization using a compact application benchmark code

    SciTech Connect

    Worley, P.H.

    1996-06-01

    In this paper the parallel benchmark code PSTSWM is used to evaluate the performance of the vendor-supplied implementations of the MPI message-passing standard on the Intel Paragon, IBM SP2, and Cray Research T3D. This study is meant to complement the performance evaluation of individual MPI commands by providing information on the practical significance of MPI performance for the execution of a communication-intensive application code. In particular, three performance questions are addressed: how important the communication protocol is in determining performance when using MPI, how MPI performance compares with that of the native communication library, and how efficient the collective communication routines are.

  7. Project financial evaluation

    SciTech Connect

    None, None

    2009-01-18

    The project financial section of the Renewable Energy Technology Characterizations describes structures and models to support the technical and economic status of emerging renewable energy options for electricity supply.

  8. Performance evaluation of tile-based Fisher Ratio analysis using a benchmark yeast metabolome dataset.

    PubMed

    Watson, Nathanial E; Parsons, Brendon A; Synovec, Robert E

    2016-08-12

    Performance of tile-based Fisher Ratio (F-ratio) data analysis, recently developed for discovery-based studies using comprehensive two-dimensional gas chromatography coupled with time-of-flight mass spectrometry (GC×GC-TOFMS), is evaluated with a metabolomics dataset that had previously been analyzed in great detail, albeit with a brute-force approach. The previously analyzed data (referred to herein as the benchmark dataset) were intracellular extracts from Saccharomyces cerevisiae (yeast), either metabolizing glucose (repressed) or ethanol (derepressed), which define the two classes in the discovery-based analysis to find metabolites that differ statistically in concentration between the two classes. Beneficially, this previously analyzed dataset provides a concrete means to validate the tile-based F-ratio software. Herein, we demonstrate and validate the significant benefits of applying tile-based F-ratio analysis. The yeast metabolomics data are analyzed far more rapidly, in about one week versus one year for the prior studies with this dataset. Furthermore, a null distribution analysis is implemented to statistically determine an adequate F-ratio threshold, whereby variables with F-ratio values below the threshold can be ignored as not class distinguishing, which gives the analyst confidence when analyzing the hit table. Forty-six of the fifty-four benchmarked changing metabolites were discovered by the new methodology, while all but one of the nineteen benchmarked false-positive metabolites previously identified were consistently excluded.
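    The core of an F-ratio screen with a permutation-derived null threshold can be sketched as follows; this is a one-variable illustration of the general idea, not the authors' tile-based GC×GC-TOFMS software:

    ```python
    # Hedged sketch: a per-variable F-ratio for a two-class comparison, with a
    # permutation-based null distribution used to choose a significance threshold.
    import numpy as np

    def f_ratio(a, b):
        """Between-class mean square over pooled within-class mean square."""
        grand = np.mean(np.concatenate([a, b]))
        between = len(a) * (a.mean() - grand) ** 2 + len(b) * (b.mean() - grand) ** 2
        within = a.var(ddof=1) * (len(a) - 1) + b.var(ddof=1) * (len(b) - 1)
        return between / (within / (len(a) + len(b) - 2))

    rng = np.random.default_rng(1)
    repressed   = rng.normal(10.0, 1.0, size=8)   # class 1 values for one variable
    derepressed = rng.normal(13.0, 1.0, size=8)   # class 2: shifted mean

    observed = f_ratio(repressed, derepressed)

    # Null distribution: recompute the F-ratio under many random label permutations.
    pooled = np.concatenate([repressed, derepressed])
    null = []
    for _ in range(2000):
        perm = rng.permutation(pooled)
        null.append(f_ratio(perm[:8], perm[8:]))
    threshold = np.percentile(null, 95)

    print(observed > threshold)   # this variable is class-distinguishing
    ```

    Variables whose observed F-ratio falls below the permutation threshold are dropped from the hit table, which is what gives the analyst statistical confidence in the survivors.
    
    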

  9. The DLESE Evaluation Toolkit Project

    NASA Astrophysics Data System (ADS)

    Buhr, S. M.; Barker, L. J.; Marlino, M.

    2002-12-01

    The Evaluation Toolkit and Community project is a new Digital Library for Earth System Education (DLESE) collection designed to raise awareness of project evaluation within the geoscience education community, and to enable principal investigators, teachers, and evaluators to implement project evaluation more readily. This new resource is grounded in the needs of geoscience educators, and will provide a virtual home for a geoscience education evaluation community. The goals of the project are to 1) provide a robust collection of evaluation resources useful for Earth systems educators, 2) establish a forum and community for evaluation dialogue within DLESE, and 3) disseminate the resources through the DLESE infrastructure and through professional society workshops and proceedings. Collaboration and expertise in education, geoscience and evaluation are necessary if we are to conduct the best possible geoscience education. The Toolkit allows users to engage in evaluation at whichever level best suits their needs, get more evaluation professional development if desired, and access the expertise of other segments of the community. To date, a test web site has been built and populated, initial community feedback from the DLESE and broader community is being garnered, and we have begun to heighten awareness of geoscience education evaluation within our community. The web site contains features that allow users to access professional development about evaluation, search and find evaluation resources, submit resources, find or offer evaluation services, sign up for upcoming workshops, take the user survey, and submit calendar items. The evaluation resource matrix currently contains resources that have met our initial review. The resources are currently organized by type; they will become searchable on multiple dimensions of project type, audience, objectives and evaluation resource type as efforts to develop a collection-specific search engine mature. The peer review

  10. Project OUTREACH Evaluation.

    ERIC Educational Resources Information Center

    Hollis, Patricia A.; Newton, Josephine K.

    Described is a 4-week summer workshop, Project OUTREACH, designed to train Head Start personnel in the knowledge and skills necessary to identify handicapped or potentially handicapped children and to develop specific teaching strategies for the preschool handicapped child. It is explained that a unique aspect of the workshop was the coordination…

  11. Evaluation of DFT-D3 dispersion corrections for various structural benchmark sets

    NASA Astrophysics Data System (ADS)

    Schröder, Heiner; Hühnert, Jens; Schwabe, Tobias

    2017-01-01

    We present an evaluation of our newly developed density functional theory (DFT)-D3 dispersion correction D3(CSO) in comparison to its predecessor D3(BJ) for geometry optimizations. To this end, various benchmark sets covering bond lengths, rotational constants, and center-of-mass distances of supramolecular complexes have been chosen. Overall, both corrections give accurate structures and show no systematic differences. Additionally, we present an optimized algorithm for the computation of the DFT-D3 gradient, which reduces the formal scaling of the gradient calculation from O(N³) to O(N²).
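    The O(N²) pairwise structure that dominates a dispersion-correction gradient can be illustrated with a toy model; real DFT-D3 uses coordination-number-dependent C6 coefficients and damping functions, so the fixed-C6, undamped form below is only a sketch of the scaling, not of the actual correction:

    ```python
    # Toy sketch of a pairwise dispersion energy and its analytic gradient,
    # illustrating the O(N^2) cost in the number of atoms. The -C6/r^6 form with
    # a single fixed C6 and no damping is deliberately oversimplified.
    import numpy as np

    def dispersion(coords, c6=1.0):
        n = len(coords)
        energy = 0.0
        grad = np.zeros_like(coords)
        for i in range(n):                 # O(N^2): every atom pair visited once
            for j in range(i + 1, n):
                rij = coords[i] - coords[j]
                r = np.linalg.norm(rij)
                energy += -c6 / r**6
                g = 6.0 * c6 / r**8 * rij  # d/dx_i of (-c6 * r^-6)
                grad[i] += g
                grad[j] -= g
        return energy, grad

    coords = np.array([[0.0, 0.0, 0.0], [1.5, 0.0, 0.0], [0.0, 2.0, 0.0]])
    energy, grad = dispersion(coords)
    print(energy)
    ```

    Because both the energy and its gradient are strictly pair-additive here, the double loop already achieves the O(N²) bound that the abstract's optimized algorithm reaches for the full DFT-D3 gradient.
    
    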

  12. Project Proposals Evaluation

    NASA Astrophysics Data System (ADS)

    Encheva, Sylvia; Tumin, Sharil

    2009-08-01

    Collaboration among various firms has traditionally taken the form of single-project joint ventures for bonding purposes. Even though the work performed is usually beneficial to some extent to all participants, the type of collaboration option to be adopted is strongly influenced by the overall purposes and goals to be achieved. To facilitate the choice of the collaboration option best suited to a firm's needs, a computer-based model is proposed.

  13. Surfactant EOR project evaluated

    SciTech Connect

    Holm, L.W.

    1984-07-16

    The Union Oil Co.'s Uniflood process has successfully mobilized and produced tertiary oil from a micellar-polymer pilot project on the Hegberg lease in the El Dorado field, Kansas. This half-completed EOR flood has recovered over 11% of the waterflood residual oil and is currently producing at an oil cut of 10%. Oil recovery has been limited by (1) the presence of gypsum in portions of the reservoir, which adversely affects injected chemicals, (2) poor-quality reservoir rock in one quadrant of the pilot, and (3) a substantial fluid drift (30 ft/year) which causes a portion of the injected chemicals to flow out of the pilot pattern. The El Dorado demonstration project is a joint experiment covered by a cost-sharing contract between the U.S. Department of Energy and Cities Service Company. It was proposed as a micellar-polymer process in a highly saline (10 wt % salts) reservoir that had been waterflooded to residual oil. Despite the extended project life, and indications that total recovery efficiency will be less than originally predicted, oil response in the Hegberg pattern is encouraging for application of the micellar-polymer process in high-brine reservoirs.

  14. GEAR UP Aspirations Project Evaluation

    ERIC Educational Resources Information Center

    Trimble, Brad A.

    2013-01-01

    The purpose of this study was to conduct a formative evaluation of the first two years of the Gaining Early Awareness and Readiness for Undergraduate Programs (GEAR UP) Aspirations Project (Aspirations) using a Context, Input, Process, and Product (CIPP) model so as to gain an in-depth understanding of the project during the middle school…

  15. The NAS parallel benchmarks

    NASA Technical Reports Server (NTRS)

    Bailey, David (Editor); Barton, John (Editor); Lasinski, Thomas (Editor); Simon, Horst (Editor)

    1993-01-01

    A new set of benchmarks was developed for the performance evaluation of highly parallel supercomputers. These benchmarks consist of a set of kernels, the 'Parallel Kernels,' and a simulated application benchmark. Together they mimic the computation and data movement characteristics of large scale computational fluid dynamics (CFD) applications. The principal distinguishing feature of these benchmarks is their 'pencil and paper' specification - all details of these benchmarks are specified only algorithmically. In this way many of the difficulties associated with conventional benchmarking approaches on highly parallel systems are avoided.

  16. Conceptual Soundness, Metric Development, Benchmarking, and Targeting for PATH Subprogram Evaluation

    SciTech Connect

    Mosey. G.; Doris, E.; Coggeshall, C.; Antes, M.; Ruch, J.; Mortensen, J.

    2009-01-01

    The objective of this study is to evaluate the conceptual soundness of the U.S. Department of Housing and Urban Development (HUD) Partnership for Advancing Technology in Housing (PATH) program's revised goals and establish and apply a framework to identify and recommend metrics that are the most useful for measuring PATH's progress. This report provides an evaluative review of PATH's revised goals, outlines a structured method for identifying and selecting metrics, proposes metrics and benchmarks for a sampling of individual PATH programs, and discusses other metrics that potentially could be developed that may add value to the evaluation process. The framework and individual program metrics can be used for ongoing management improvement efforts and to inform broader program-level metrics for government reporting requirements.

  17. Grass Roots Project Evaluation.

    ERIC Educational Resources Information Center

    Wick, John W.

    Some aspects of a grass roots evaluation training program are presented. The program consists of two elements: (1) a series of 11 slide/tape individualized self-paced units, and (2) a six-week summer program. Three points of view on this program are: (1) University graduate programs in quantitative areas are usually consumed by specialists; (2)…

  18. NASA PC software evaluation project

    NASA Technical Reports Server (NTRS)

    Dominick, Wayne D. (Editor); Kuan, Julie C.

    1986-01-01

    The USL NASA PC software evaluation project is intended to provide a structured framework for facilitating the development of quality NASA PC software products. The project will assist NASA PC development staff to understand the characteristics and functions of NASA PC software products. Based on the results of the project teams' evaluations and recommendations, users can judge the reliability, usability, acceptability, maintainability and customizability of all the PC software products. The objective here is to provide initial, high-level specifications and guidelines for NASA PC software evaluation. The primary tasks to be addressed in this project are as follows: to gain a strong understanding of what software evaluation entails and how to organize a structured software evaluation process; to define a structured methodology for conducting the software evaluation process; to develop a set of PC software evaluation criteria and evaluation rating scales; and to conduct PC software evaluations in accordance with the identified methodology. The software categories addressed include Communication Packages, Network System Software, Graphics Support Software, Environment Management Software, and General Utilities. This report represents one of the 72 attachment reports to the University of Southwestern Louisiana's Final Report on NASA Grant NGT-19-010-900. Accordingly, appropriate care should be taken in using this report out of context of the full Final Report.

  19. A Quantitative Methodology for Determining the Critical Benchmarks for Project 2061 Strand Maps

    ERIC Educational Resources Information Center

    Kuhn, G.

    2008-01-01

    The American Association for the Advancement of Science (AAAS) was tasked with identifying the key science concepts for science literacy in K-12 students in America (AAAS, 1990, 1993). The AAAS Atlas of Science Literacy (2001) has organized roughly half of these science concepts or benchmarks into fifty flow charts. Each flow chart or strand map…

  20. Benchmark Calculations for Reflector Effect in Fast Cores by Using the Latest Evaluated Nuclear Data Libraries

    NASA Astrophysics Data System (ADS)

    Fukushima, M.; Ishikawa, M.; Numata, K.; Jin, T.; Kugo, T.

    2014-04-01

    Benchmark calculations for reflector effects in fast cores were performed to validate the reliability of the scattering data of structural materials in the major evaluated nuclear data libraries: JENDL-4.0, ENDF/B-VII.1, and JEFF-3.1.2. The criticalities of two FCA and two ZPR cores were analyzed using a continuous-energy Monte Carlo code. The ratios of calculated to experimental values were compared between these cores, and sensitivity analyses were performed. From the results, the replacement reactivity from blanket to SS and Na reflectors is better evaluated by JENDL-4.0 than by ENDF/B-VII.1, mainly due to the μ-bar values of Na and 52Cr.

  1. GROWTH OF THE INTERNATIONAL CRITICALITY SAFETY AND REACTOR PHYSICS EXPERIMENT EVALUATION PROJECTS

    SciTech Connect

    J. Blair Briggs; John D. Bess; Jim Gulliford

    2011-09-01

    Since the International Conference on Nuclear Criticality Safety (ICNC) 2007, the International Criticality Safety Benchmark Evaluation Project (ICSBEP) and the International Reactor Physics Experiment Evaluation Project (IRPhEP) have continued to expand their efforts and broaden their scope. Eighteen countries participated in the ICSBEP in 2007. Now there are 20, with recent contributions from Sweden and Argentina. The IRPhEP has also expanded from eight contributing countries in 2007 to 16 in 2011. Since ICNC 2007, the contents of the 'International Handbook of Evaluated Criticality Safety Benchmark Experiments' have increased from 442 evaluations (38,000 pages) containing benchmark specifications for 3,955 critical or subcritical configurations to 516 evaluations (nearly 55,000 pages) containing benchmark specifications for 4,405 critical or subcritical configurations in the 2010 edition of the ICSBEP Handbook. The contents of the Handbook have also increased from 21 to 24 criticality-alarm-placement/shielding configurations with multiple dose points for each, and from 20 to 200 configurations categorized as fundamental physics measurements relevant to criticality safety applications. Approximately 25 new evaluations and 150 additional configurations are expected to be added to the 2011 edition of the Handbook. Since ICNC 2007, the contents of the 'International Handbook of Evaluated Reactor Physics Benchmark Experiments' have increased from 16 experimental series performed at 12 different reactor facilities to 53 experimental series performed at 30 different reactor facilities in the 2011 edition of the Handbook. Considerable effort has also been made to improve the functionality of the searchable database, DICE (Database for the International Criticality Benchmark Evaluation Project), and to verify the accuracy of the data contained therein. DICE will be discussed in separate papers at ICNC 2011. The status of the ICSBEP and the IRPhEP is discussed in this paper.

  2. The impact of incomplete knowledge on evaluation: an experimental benchmark for protein function prediction

    PubMed Central

    Huttenhower, Curtis; Hibbs, Matthew A.; Myers, Chad L.; Caudy, Amy A.; Hess, David C.; Troyanskaya, Olga G.

    2009-01-01

    Motivation: Rapidly expanding repositories of highly informative genomic data have generated increasing interest in methods for protein function prediction and inference of biological networks. The successful application of supervised machine learning to these tasks requires a gold standard for protein function: a trusted set of correct examples, which can be used to assess performance through cross-validation or other statistical approaches. Since gene annotation is incomplete for even the best studied model organisms, the biological reliability of such evaluations may be called into question. Results: We address this concern by constructing and analyzing an experimentally based gold standard through comprehensive validation of protein function predictions for mitochondrion biogenesis in Saccharomyces cerevisiae. Specifically, we determine that (i) current machine learning approaches are able to generalize and predict novel biology from an incomplete gold standard and (ii) incomplete functional annotations adversely affect the evaluation of machine learning performance. While computational approaches performed better than predicted in the face of incomplete data, relative comparison of competing approaches—even those employing the same training data—is problematic with a sparse gold standard. Incomplete knowledge causes individual methods' performances to be differentially underestimated, resulting in misleading performance evaluations. We provide a benchmark gold standard for yeast mitochondria to complement current databases and an analysis of our experimental results in the hopes of mitigating these effects in future comparative evaluations. Availability: The mitochondrial benchmark gold standard, as well as experimental results and additional data, is available at http://function.princeton.edu/mitochondria Contact: ogt@cs.princeton.edu Supplementary information: Supplementary data are available at Bioinformatics online. PMID:19561015
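    The underestimation effect described here is easy to demonstrate: evaluating a fixed predictor against an incomplete subset of the true annotations lowers its measured precision. The numbers below are synthetic, not the paper's data:

    ```python
    # Hedged illustration of the abstract's point: when some genuinely positive
    # genes are missing from an incomplete gold standard, a predictor's measured
    # precision is underestimated. All counts are invented.
    import random

    random.seed(2)
    true_positives = set(range(200))                  # the complete (unknown) truth
    annotated = {g for g in true_positives
                 if random.random() < 0.5}            # only ~half are annotated

    # A decent predictor: recovers ~80% of the real positives plus some noise.
    predicted = {g for g in true_positives if random.random() < 0.8}
    predicted |= set(random.sample(range(200, 1000), 40))

    def precision(pred, positives):
        return len(pred & positives) / len(pred)

    print(round(precision(predicted, true_positives), 2))  # against complete truth
    print(round(precision(predicted, annotated), 2))       # against incomplete standard: lower
    ```

    Since the annotated set is a subset of the true positives, the measured precision can never exceed, and in practice falls well below, the predictor's true precision, which is exactly the evaluation bias the paper quantifies.
    
    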

  3. Model benchmarking and reference signals for angled-beam shear wave ultrasonic nondestructive evaluation (NDE) inspections

    NASA Astrophysics Data System (ADS)

    Aldrin, John C.; Hopkins, Deborah; Datuin, Marvin; Warchol, Mark; Warchol, Lyudmila; Forsyth, David S.; Buynak, Charlie; Lindgren, Eric A.

    2017-02-01

    For model benchmark studies, the accuracy of the model is typically evaluated based on the change in response relative to a selected reference signal. The use of a side-drilled hole (SDH) in a plate was investigated as a reference signal for angled-beam shear wave inspection of fastener sites in aircraft structures. Systematic studies were performed varying the SDH depth and size, and varying the ultrasonic probe frequency, focal depth, and probe height. Increased error was observed in the simulation of angled shear wave beams in the near-field. Even more significantly, asymmetry in real probes and the inherent sensitivity of near-field signals to subtle test conditions were found to pose a greater challenge to achieving model agreement. To achieve quality model benchmark results for this problem, it is critical to carefully align the probe with the part geometry, to verify symmetry in probe response, and ideally to avoid using reference signals from the near-field response. Suggested reference signals for angled-beam shear wave inspections include the 'through hole' corner specular reflection signal and the 'full skip' signal off the far wall from the side-drilled hole.

  4. SU-E-J-30: Benchmark Image-Based TCP Calculation for Evaluation of PTV Margins for Lung SBRT Patients

    SciTech Connect

    Li, M; Chetty, I; Zhong, H

    2014-06-01

    Purpose: Tumor control probability (TCP) calculated with accumulated radiation doses may help design appropriate treatment margins. Image registration errors, however, may compromise the calculated TCP. The purpose of this study is to develop benchmark CT images to quantify registration-induced errors in the accumulated doses and their corresponding TCP. Methods: 4DCT images were registered from end-inhale (EI) to end-exhale (EE) using a "demons" algorithm. The demons DVFs were corrected by an FEM model to get realistic deformation fields. The FEM DVFs were used to warp the EI images to create the FEM-simulated images. The two images combined with the FEM DVF formed a benchmark model. Maximum intensity projection (MIP) images, created from the EI and simulated images, were used to develop IMRT plans. Two plans, with 3 mm and 5 mm margins, were developed for each patient. With these plans, radiation doses were recalculated on the simulated images and warped back to the EI images using the FEM DVFs to get the accumulated doses. The Elastix software was used to register the FEM-simulated images to the EI images. TCPs calculated with the Elastix-accumulated doses were compared with those generated by the FEM to get the TCP error of the Elastix registrations. Results: For six lung patients, the mean Elastix registration error ranged from 0.93 to 1.98 mm. Their relative dose errors in the PTV were between 0.28% and 6.8% for the 3 mm-margin plans, and between 0.29% and 6.3% for the 5 mm-margin plans. As the PTV margin was reduced from 5 to 3 mm, the mean TCP error of the Elastix-reconstructed doses increased from 2.0% to 2.9%, and the mean NTCP error decreased from 1.2% to 1.1%. Conclusion: Patient-specific benchmark images can be used to evaluate the impact of registration errors on the computed TCPs, and may help select appropriate PTV margins for lung SBRT patients.
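
TCP calculations of this kind are commonly based on a Poisson cell-kill model; a generic sketch follows (the study's actual TCP model and parameter values are not given here, so the clonogen number and radiosensitivity below are illustrative assumptions):

```python
import math

def tcp_poisson(dose_gy, n_clonogens=1e7, alpha=0.3):
    """Poisson TCP: probability that no clonogenic cell survives,
    assuming exponential cell kill exp(-alpha * D) per cell."""
    surviving = n_clonogens * math.exp(-alpha * dose_gy)
    return math.exp(-surviving)

# TCP rises steeply with accumulated dose, so dose errors near the
# steep part of the curve translate into large TCP errors.
for d in (40, 50, 60):
    print(d, round(tcp_poisson(d), 3))
```

The steepness of this dose-response curve is the reason small registration-induced dose errors can produce the multi-percent TCP errors reported above.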

  5. Evaluation of CRISTO II Storage Arrays Benchmark with TRIPOLI-4.2 Criticality Calculations

    NASA Astrophysics Data System (ADS)

    Lee, Y. K.

    The new lattice feature of the TRIPOLI-4.2 geometry package was applied to model the CRISTO II storage arrays of PWR fuels with various kinds of neutron absorber plates. The new 'Kcoll' collision estimator of the TRIPOLI-4.2 code was utilized to evaluate the infinite multiplication factors, Kinf. Compared with the published ICSBEP benchmark results of the CRISTO II experiments and of three different continuous-energy Monte Carlo codes (TRIPOLI-4.1 with JEF2.2, MCNP4B2 with ENDF/B-V, and MCNP4XS with ENDF/B-VI.r4), the present study, using cost-effective modeling and the JEF2.2 and ENDF/B-VI.r4 libraries, obtained satisfactory results.

  6. Benchmark Data for Evaluation of Aeroacoustic Propagation Codes With Grazing Flow

    NASA Technical Reports Server (NTRS)

    Jones, Michael G.; Watson, Willie R.; Parrott, Tony L.

    2005-01-01

    Increased understanding of the effects of acoustic treatment on the propagation of sound through commercial aircraft engine nacelles is a requirement for more efficient liner design. To this end, one of NASA's goals is to further the development of duct propagation and impedance eduction codes. A number of these codes have been developed over the last three decades. These codes are typically divided into two categories: (1) codes that use the measured complex acoustic pressure field to educe the acoustic impedance of treatment that is positioned along the wall of the duct, and (2) codes that use the acoustic impedance of the treatment as input and compute the sound field throughout the duct. Clearly, the value of these codes is dependent upon the quality of the data used for their validation. Over the past two decades, data acquired in the NASA Langley Research Center Grazing Incidence Tube have been used by a number of researchers for comparison with their propagation codes. Many of these comparisons have been based upon Grazing Incidence Tube tests that were conducted to study specific liner technology components, and were incomplete for general propagation code validation. Thus, the objective of the current investigation is to provide a quality data set that can be used as a benchmark for evaluation of duct propagation and impedance eduction codes. In order to achieve this objective, two parallel efforts have been undertaken. The first is the development of an enhanced impedance eduction code that uses data acquired in the Grazing Incidence Tube. This enhancement is intended to place the benchmark data on as firm a foundation as possible. The second key effort is the acquisition of a comprehensive set of data selected to allow propagation code evaluations over a range of test conditions.

  7. Block Transfer Agreement Evaluation Project

    ERIC Educational Resources Information Center

    Bastedo, Helena

    2010-01-01

    The objective of this project is to evaluate for the British Columbia Council on Admissions and Transfer (BCCAT) the effectiveness of block transfer agreements (BTAs) in the BC Transfer System and recommend steps to be taken to improve their effectiveness. Findings of this study revealed that institutions want to expand block credit transfer;…

  8. ISLES 2015 - A public evaluation benchmark for ischemic stroke lesion segmentation from multispectral MRI.

    PubMed

    Maier, Oskar; Menze, Bjoern H; von der Gablentz, Janina; Häni, Levin; Heinrich, Mattias P; Liebrand, Matthias; Winzeck, Stefan; Basit, Abdul; Bentley, Paul; Chen, Liang; Christiaens, Daan; Dutil, Francis; Egger, Karl; Feng, Chaolu; Glocker, Ben; Götz, Michael; Haeck, Tom; Halme, Hanna-Leena; Havaei, Mohammad; Iftekharuddin, Khan M; Jodoin, Pierre-Marc; Kamnitsas, Konstantinos; Kellner, Elias; Korvenoja, Antti; Larochelle, Hugo; Ledig, Christian; Lee, Jia-Hong; Maes, Frederik; Mahmood, Qaiser; Maier-Hein, Klaus H; McKinley, Richard; Muschelli, John; Pal, Chris; Pei, Linmin; Rangarajan, Janaki Raman; Reza, Syed M S; Robben, David; Rueckert, Daniel; Salli, Eero; Suetens, Paul; Wang, Ching-Wei; Wilms, Matthias; Kirschke, Jan S; Krämer, Ulrike M; Münte, Thomas F; Schramm, Peter; Wiest, Roland; Handels, Heinz; Reyes, Mauricio

    2017-01-01

    Ischemic stroke is the most common cerebrovascular disease, and its diagnosis, treatment, and study relies on non-invasive imaging. Algorithms for stroke lesion segmentation from magnetic resonance imaging (MRI) volumes are intensely researched, but the reported results are largely incomparable due to different datasets and evaluation schemes. We approached this urgent problem of comparability with the Ischemic Stroke Lesion Segmentation (ISLES) challenge organized in conjunction with the MICCAI 2015 conference. In this paper we propose a common evaluation framework, describe the publicly available datasets, and present the results of the two sub-challenges: Sub-Acute Stroke Lesion Segmentation (SISS) and Stroke Perfusion Estimation (SPES). A total of 16 research groups participated with a wide range of state-of-the-art automatic segmentation algorithms. A thorough analysis of the obtained data enables a critical evaluation of the current state-of-the-art, recommendations for further developments, and the identification of remaining challenges. The segmentation of acute perfusion lesions addressed in SPES was found to be feasible. However, algorithms applied to sub-acute lesion segmentation in SISS still lack accuracy. Overall, no algorithmic characteristic of any method was found to perform superior to the others. Instead, the characteristics of stroke lesion appearances, their evolution, and the observed challenges should be studied in detail. The annotated ISLES image datasets continue to be publicly available through an online evaluation system to serve as an ongoing benchmarking resource (www.isles-challenge.org).
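
Segmentation challenges such as ISLES typically score submissions with overlap measures like the Dice similarity coefficient; a minimal version over binary masks is sketched below (illustrative only, not the challenge's official evaluation code):

```python
def dice(seg, ref):
    """Dice similarity coefficient between two binary masks (flat sequences):
    twice the overlap divided by the total number of positive voxels."""
    inter = sum(1 for s, r in zip(seg, ref) if s and r)
    total = sum(seg) + sum(ref)
    # Convention: two empty masks are a perfect match.
    return 2.0 * inter / total if total else 1.0

predicted = [1, 1, 1, 0, 0, 0]
reference = [0, 1, 1, 1, 0, 0]
print(dice(predicted, reference))  # 2*2 / (3+3) = 0.666...
```

A Dice score of 1 indicates perfect overlap and 0 indicates no overlap, which makes results comparable across lesions of different sizes.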

  9. Managing for Results in America's Great City Schools 2014: Results from Fiscal Year 2012-13. A Report of the Performance Measurement and Benchmarking Project

    ERIC Educational Resources Information Center

    Council of the Great City Schools, 2014

    2014-01-01

    In 2002 the "Council of the Great City Schools" and its members set out to develop performance measures that could be used to improve business operations in urban public school districts. The Council launched the "Performance Measurement and Benchmarking Project" to achieve these objectives. The purposes of the project were to:…

  10. Evaluation for 4S core nuclear design method through integration of benchmark data

    SciTech Connect

    Nagata, A.; Tsuboi, Y.; Moriki, Y.; Kawashima, M.

    2012-07-01

    The 4S is a sodium-cooled small fast reactor controlled by its reflector over a core lifetime of about 30 years. The nuclear design method was selected to treat neutron leakage with high accuracy. It consists of a continuous-energy Monte Carlo code, discrete ordinate transport codes, and JENDL-3.3. These two types of neutronic analysis codes are used for the design in a complementary manner. The accuracy of the codes has been evaluated by analysis of benchmark critical experiments and experimental reactor data. The measured data used for the evaluation are critical experiment data from FCA XXIII, a physics mockup assembly of the 4S core, as well as from FCA XVI, FCA XIX, and ZPR, and data from the experimental reactor JOYO MK-1. Evaluated characteristics are criticality, reflector reactivity worth, power distribution, absorber reactivity worth, and sodium void worth. A multi-component bias method was applied, especially to improve the accuracy of the sodium void reactivity worth. As a result, it has been confirmed that the 4S core nuclear design method provides good accuracy, and typical bias factors and their uncertainties are determined. (authors)

  11. Analysis of a benchmark suite to evaluate mixed numeric and symbolic processing

    NASA Technical Reports Server (NTRS)

    Ragharan, Bharathi; Galant, David

    1992-01-01

    The suite of programs that formed the benchmark for a proposed advanced computer is described and analyzed. The features of the processor and its operating system that are tested by the benchmark are discussed. The computer codes and the supporting data for the analysis are given as appendices.

  12. BENCHMARK DOSES FOR CHEMICAL MIXTURES: EVALUATION OF A MIXTURE OF 18 PHAHS.

    EPA Science Inventory

    Benchmark doses (BMDs), defined as doses of a substance that are expected to result in a pre-specified level of "benchmark" response (BMR), have been used for quantifying the risk associated with exposure to environmental hazards. The lower confidence limit of the BMD is used as...
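
A BMD is the dose at which a fitted dose-response model reaches the pre-specified benchmark response (BMR). A sketch of that inversion step, assuming a simple one-hit dose-response model with a made-up slope parameter (real BMD software fits the model to data and also computes the lower confidence limit, which is not shown here):

```python
import math

def extra_risk(dose, b):
    """One-hit dose-response model: extra risk over background."""
    return 1.0 - math.exp(-b * dose)

def bmd(bmr=0.10, b=0.05, hi=1000.0):
    """Dose at which extra risk equals the benchmark response,
    found by bisection on the monotone dose-response curve."""
    lo_d, hi_d = 0.0, hi
    for _ in range(100):
        mid = 0.5 * (lo_d + hi_d)
        if extra_risk(mid, b) < bmr:
            lo_d = mid
        else:
            hi_d = mid
    return 0.5 * (lo_d + hi_d)

print(round(bmd(), 3))  # analytic answer: -ln(1 - 0.1) / 0.05 ≈ 2.107
```

For this model the BMD has a closed form, but bisection generalizes to dose-response models without one.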

  13. Integral Benchmark Data for Nuclear Data Testing Through the ICSBEP & IRPhEP

    SciTech Connect

    J. Blair Briggs; John D. Bess; Jim Gulliford; Ian Hill

    2013-10-01

    The status of the International Criticality Safety Benchmark Evaluation Project (ICSBEP) and International Reactor Physics Experiment Evaluation Project (IRPhEP) was last discussed directly with the nuclear data community at ND2007. Since ND2007, integral benchmark data that are available for nuclear data testing have increased significantly. The status of the ICSBEP and the IRPhEP is discussed and selected benchmark configurations that have been added to the ICSBEP and IRPhEP Handbooks since ND2007 are highlighted.

  14. Integral Benchmark Data for Nuclear Data Testing Through the ICSBEP & IRPhEP

    NASA Astrophysics Data System (ADS)

    Briggs, J. B.; Bess, J. D.; Gulliford, J.

    2014-04-01

    The status of the International Criticality Safety Benchmark Evaluation Project (ICSBEP) and International Reactor Physics Experiment Evaluation Project (IRPhEP) was last discussed directly with the nuclear data community at ND2007. Since ND2007, integral benchmark data that are available for nuclear data testing have increased significantly. The status of the ICSBEP and the IRPhEP is discussed and selected benchmark configurations that have been added to the ICSBEP and IRPhEP Handbooks since ND2007 are highlighted.

  15. 5 CFR 470.317 - Project evaluation.

    Code of Federal Regulations, 2010 CFR

    2010-01-01

    ... 5 Administrative Personnel 1 2010-01-01 2010-01-01 false Project evaluation. 470.317 Section 470... MANAGEMENT RESEARCH PROGRAMS AND DEMONSTRATIONS PROJECTS Regulatory Requirements Pertaining to Demonstration Projects § 470.317 Project evaluation. (a) Compliance evaluation. OPM will review the operation of...

  16. 5 CFR 470.317 - Project evaluation.

    Code of Federal Regulations, 2012 CFR

    2012-01-01

    ... 5 Administrative Personnel 1 2012-01-01 2012-01-01 false Project evaluation. 470.317 Section 470... MANAGEMENT RESEARCH PROGRAMS AND DEMONSTRATIONS PROJECTS Regulatory Requirements Pertaining to Demonstration Projects § 470.317 Project evaluation. (a) Compliance evaluation. OPM will review the operation of...

  17. 5 CFR 470.317 - Project evaluation.

    Code of Federal Regulations, 2011 CFR

    2011-01-01

    ... 5 Administrative Personnel 1 2011-01-01 2011-01-01 false Project evaluation. 470.317 Section 470... MANAGEMENT RESEARCH PROGRAMS AND DEMONSTRATIONS PROJECTS Regulatory Requirements Pertaining to Demonstration Projects § 470.317 Project evaluation. (a) Compliance evaluation. OPM will review the operation of...

  18. 5 CFR 470.317 - Project evaluation.

    Code of Federal Regulations, 2014 CFR

    2014-01-01

    ... 5 Administrative Personnel 1 2014-01-01 2014-01-01 false Project evaluation. 470.317 Section 470... MANAGEMENT RESEARCH PROGRAMS AND DEMONSTRATIONS PROJECTS Regulatory Requirements Pertaining to Demonstration Projects § 470.317 Project evaluation. (a) Compliance evaluation. OPM will review the operation of...

  19. An evaluation, comparison, and accurate benchmarking of several publicly available MS/MS search algorithms: Sensitivity and Specificity analysis.

    SciTech Connect

    Kapp, Eugene; Schutz, Frederick; Connolly, Lisa M.; Chakel, John A.; Meza, Jose E.; Miller, Christine A.; Fenyo, David; Eng, Jimmy K.; Adkins, Joshua N.; Omenn, Gilbert; Simpson, Richard

    2005-08-01

    MS/MS and associated database search algorithms are essential proteomic tools for identifying peptides. Due to their widespread use, it is now time to perform a systematic analysis of the various algorithms currently in use. Using blood specimens from the HUPO Plasma Proteome Project, we have evaluated five search algorithms with respect to their sensitivity and specificity, and have also accurately benchmarked them based on specified false-positive (FP) rates. Spectrum Mill and SEQUEST performed well in terms of sensitivity, but were inferior to MASCOT, X-Tandem, and Sonar in terms of specificity. Overall, MASCOT, a probabilistic search algorithm, correctly identified most peptides based on a specified FP rate. The rescoring algorithm, Peptide Prophet, enhanced the overall performance of the SEQUEST algorithm, as well as provided predictable FP error rates. Ideally, score thresholds should be calculated for each peptide spectrum or, minimally, derived from a reversed-sequence search, as demonstrated in this study based on a validated data set. The availability of open-source search algorithms, such as X-Tandem, makes it feasible to further improve the validation process (manual or automatic) on the basis of "consensus scoring", i.e., the use of multiple (at least two) search algorithms to reduce the number of FPs.

  20. Evaluation of anode (electro)catalytic materials for the direct borohydride fuel cell: Methods and benchmarks

    NASA Astrophysics Data System (ADS)

    Olu, Pierre-Yves; Job, Nathalie; Chatenet, Marian

    2016-09-01

    In this paper, different methods are discussed for evaluating the potential of a given catalyst for application as a direct borohydride fuel cell (DBFC) anode material. Characterization results in the DBFC configuration are analyzed in light of the important experimental variables that influence DBFC performance. In many practical DBFC-oriented studies, however, these variables prevent one from isolating the influence of the anode catalyst on cell performance. Thus, the electrochemical three-electrode cell is a widely employed and useful tool for isolating the DBFC anode catalyst and investigating its electrocatalytic activity towards the borohydride oxidation reaction (BOR) in the absence of other limitations. This article reviews selected results for different types of catalysts in electrochemical cells containing a sodium borohydride alkaline electrolyte. In particular, common experimental conditions and benchmarks are proposed for practical evaluation of electrocatalytic activity towards the BOR in the three-electrode cell configuration. The major issue of gaseous hydrogen generation and escape during DBFC operation is also addressed through a comprehensive review of results for various anode compositions. Finally, preliminary concerns are raised about the stability of potential anode catalysts during DBFC operation.

  1. ICSBEP Benchmarks For Nuclear Data Applications

    NASA Astrophysics Data System (ADS)

    Briggs, J. Blair

    2005-05-01

    The International Criticality Safety Benchmark Evaluation Project (ICSBEP) was initiated in 1992 by the United States Department of Energy. The ICSBEP became an official activity of the Organization for Economic Cooperation and Development (OECD) — Nuclear Energy Agency (NEA) in 1995. Representatives from the United States, United Kingdom, France, Japan, the Russian Federation, Hungary, Republic of Korea, Slovenia, Serbia and Montenegro (formerly Yugoslavia), Kazakhstan, Spain, Israel, Brazil, Poland, and the Czech Republic are now participating. South Africa, India, China, and Germany are considering participation. The purpose of the ICSBEP is to identify, evaluate, verify, and formally document a comprehensive and internationally peer-reviewed set of criticality safety benchmark data. The work of the ICSBEP is published as an OECD handbook entitled "International Handbook of Evaluated Criticality Safety Benchmark Experiments." The 2004 Edition of the Handbook contains benchmark specifications for 3331 critical or subcritical configurations that are intended for use in validation efforts and for testing basic nuclear data. New to the 2004 Edition of the Handbook is a draft criticality alarm / shielding type benchmark that should be finalized in 2005 along with two other similar benchmarks. The Handbook is being used extensively for nuclear data testing and is expected to be a valuable resource for code and data validation and improvement efforts for decades to come. Specific benchmarks that are useful for testing structural materials such as iron, chromium, nickel, and manganese; beryllium; lead; thorium; and 238U are highlighted.

  2. Using false discovery rates to benchmark SNP-callers in next-generation sequencing projects.

    PubMed

    Farrer, Rhys A; Henk, Daniel A; MacLean, Dan; Studholme, David J; Fisher, Matthew C

    2013-01-01

    Sequence alignments form the basis for many comparative and population genomic studies. Alignment tools provide a range of accuracies dependent on the divergence between the sequences and the alignment methods. Despite widespread use, there is no standard method for assessing the accuracy of a dataset and alignment strategy after resequencing. We present a framework and tool for determining the overall accuracies of an input read dataset, alignment, and SNP-calling method, provided an isolate in that dataset has a corresponding, or closely related, reference sequence available. In addition to this tool for comparing False Discovery Rates (FDR), we include a method for determining homozygous and heterozygous positions from an alignment using binomial probabilities for an expected error rate. We benchmark this method against other SNP callers using our FDR method with three fungal genomes, finding that it was able to achieve a high level of accuracy. These tools are available at http://cfdr.sourceforge.net/.
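
The binomial classification of homozygous versus heterozygous positions can be sketched as a maximum-likelihood choice among three allele-frequency models (illustrative only; the tool's actual model and error rate are not reproduced here):

```python
from math import comb

def binom_pmf(k, n, p):
    """Binomial probability of k successes in n trials."""
    return comb(n, k) * p**k * (1 - p)**(n - k)

def call_genotype(alt_reads, depth, error_rate=0.01):
    """Classify a position as homozygous-reference, heterozygous, or
    homozygous-alternate by maximum binomial likelihood of the
    alternate-allele read count at the given sequencing depth."""
    models = {
        "hom_ref": error_rate,        # alt reads arise only from errors
        "het": 0.5,                   # half the reads carry the alt allele
        "hom_alt": 1.0 - error_rate,  # nearly all reads carry the alt allele
    }
    return max(models, key=lambda g: binom_pmf(alt_reads, depth, models[g]))

print(call_genotype(1, 30))   # hom_ref
print(call_genotype(14, 30))  # het
print(call_genotype(29, 30))  # hom_alt
```

The expected error rate enters through the homozygous models, so a miscalibrated error rate shifts calls between homozygous and heterozygous categories.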

  3. Using False Discovery Rates to Benchmark SNP-callers in next-generation sequencing projects

    PubMed Central

    Farrer, Rhys A.; Henk, Daniel A.; MacLean, Dan; Studholme, David J.; Fisher, Matthew C.

    2013-01-01

    Sequence alignments form the basis for many comparative and population genomic studies. Alignment tools provide a range of accuracies dependent on the divergence between the sequences and the alignment methods. Despite widespread use, there is no standard method for assessing the accuracy of a dataset and alignment strategy after resequencing. We present a framework and tool for determining the overall accuracies of an input read dataset, alignment, and SNP-calling method, provided an isolate in that dataset has a corresponding, or closely related, reference sequence available. In addition to this tool for comparing False Discovery Rates (FDR), we include a method for determining homozygous and heterozygous positions from an alignment using binomial probabilities for an expected error rate. We benchmark this method against other SNP callers using our FDR method with three fungal genomes, finding that it was able to achieve a high level of accuracy. These tools are available at http://cfdr.sourceforge.net/. PMID:23518929

  4. A Benchmark Data Set to Evaluate the Illumination Robustness of Image Processing Algorithms for Object Segmentation and Classification

    PubMed Central

    Khan, Arif ul Maula; Mikut, Ralf; Reischl, Markus

    2015-01-01

    Developers of image processing routines rely on benchmark data sets for qualitative comparisons of new image analysis algorithms and pipelines. Such data sets need to include artifacts that occlude and distort the information to be extracted from an image. Robustness, the quality of an algorithm in relation to the amount of distortion, is often important. However, with the available benchmark data sets, evaluating illumination robustness is difficult or even impossible due to missing ground truth data about object margins and classes and missing information about the distortion. We present a new framework for robustness evaluation. The key aspect is an image benchmark containing 9 object classes and the required ground truth for segmentation and classification. Varying levels of shading and background noise are integrated to distort the data set. To quantify illumination robustness, we provide measures for image quality, segmentation and classification success, and robustness. We place a high value on giving users easy access to the new benchmark; therefore, all routines are provided within a software package, but they can also easily be replaced to emphasize other aspects. PMID:26191792

  5. A Benchmark Data Set to Evaluate the Illumination Robustness of Image Processing Algorithms for Object Segmentation and Classification.

    PubMed

    Khan, Arif Ul Maula; Mikut, Ralf; Reischl, Markus

    2015-01-01

    Developers of image processing routines rely on benchmark data sets for qualitative comparisons of new image analysis algorithms and pipelines. Such data sets need to include artifacts that occlude and distort the information to be extracted from an image. Robustness, the quality of an algorithm in relation to the amount of distortion, is often important. However, with the available benchmark data sets, evaluating illumination robustness is difficult or even impossible due to missing ground truth data about object margins and classes and missing information about the distortion. We present a new framework for robustness evaluation. The key aspect is an image benchmark containing 9 object classes and the required ground truth for segmentation and classification. Varying levels of shading and background noise are integrated to distort the data set. To quantify illumination robustness, we provide measures for image quality, segmentation and classification success, and robustness. We place a high value on giving users easy access to the new benchmark; therefore, all routines are provided within a software package, but they can also easily be replaced to emphasize other aspects.
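
One simple way to condense performance across increasing distortion levels into a single robustness number is to average the quality scores normalized by the undistorted baseline (an illustrative choice with made-up Dice scores, not the measure defined in the paper):

```python
def robustness(quality_by_level):
    """Mean quality across increasing distortion levels, normalized by
    the undistorted score (level 0). 1.0 means no degradation at all."""
    base = quality_by_level[0]
    return sum(q / base for q in quality_by_level) / len(quality_by_level)

# Hypothetical segmentation quality under growing shading/noise levels.
scores = [0.95, 0.93, 0.88, 0.70]
print(round(robustness(scores), 3))
```

An algorithm whose quality barely drops under distortion scores near 1.0, while a brittle one scores much lower even if its undistorted quality is high.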

  6. Comparative assessment of scoring functions on an updated benchmark: 2. Evaluation methods and general results.

    PubMed

    Li, Yan; Han, Li; Liu, Zhihai; Wang, Renxiao

    2014-06-23

    Our comparative assessment of scoring functions (CASF) benchmark is created to provide an objective evaluation of current scoring functions. The key idea of CASF is to compare the general performance of scoring functions on a diverse set of protein-ligand complexes. In order to avoid testing scoring functions in the context of molecular docking, the scoring process is separated from the docking (or sampling) process by using ensembles of ligand binding poses that are generated in advance. Here, we describe the technical methods and evaluation results of the latest CASF-2013 study. The PDBbind core set (version 2013) was employed as the primary test set in this study, which consists of 195 protein-ligand complexes with high-quality three-dimensional structures and reliable binding constants. A panel of 20 scoring functions, most of which are implemented in mainstream commercial software, were evaluated in terms of "scoring power" (binding affinity prediction), "ranking power" (relative ranking prediction), "docking power" (binding pose prediction), and "screening power" (discrimination of true binders from random molecules). Our results reveal that the performance of these scoring functions is generally more promising in the docking/screening power tests than in the scoring/ranking power tests. Top-ranked scoring functions in the scoring power test, such as X-Score(HM), ChemScore@SYBYL, ChemPLP@GOLD, and PLP@DS, are also top-ranked in the ranking power test. Top-ranked scoring functions in the docking power test, such as ChemPLP@GOLD, ChemScore@GOLD, GlideScore-SP, LigScore@DS, and PLP@DS, are also top-ranked in the screening power test. Our results obtained on the entire test set and its subsets suggest that the real challenge in protein-ligand binding affinity prediction lies in polar interactions and the associated desolvation effect. Nonadditive features observed among high-affinity protein-ligand complexes also need attention.
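
The "scoring power" test amounts to correlating predicted scores with measured binding data; a minimal Pearson correlation over hypothetical values (CASF's exact metric and data are not reproduced here):

```python
def pearson(xs, ys):
    """Pearson correlation between predicted scores and measured affinities."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = sum((x - mx) ** 2 for x in xs) ** 0.5
    sy = sum((y - my) ** 2 for y in ys) ** 0.5
    return cov / (sx * sy)

# Predicted scores vs. experimental log Ka for five hypothetical complexes.
pred = [5.1, 6.3, 4.2, 7.8, 6.9]
meas = [5.0, 6.0, 4.5, 8.1, 7.2]
print(round(pearson(pred, meas), 3))  # close to 1 for well-correlated scores
```

A correlation near 1 indicates strong scoring power; ranking power is evaluated similarly but only on the ordering of complexes sharing the same protein.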

  7. Technical Requirements for Benchmark Simulator-Based Terminal Instrument Procedures (TERPS) Evaluation.

    DTIC Science & Technology

    1986-05-01

    Report Nos. DOT/FAA/PM-86/14, NASA CR-177407. In order to take full advantage of the helicopter's unique flight characteristics, enhanced terminal instrument procedures (TERPS) need…

  8. Using a Project Portfolio: Empowerment Evaluation for Model Demonstration Projects.

    ERIC Educational Resources Information Center

    Baggett, David

    For model postsecondary demonstration projects serving individuals with disabilities, a portfolio of project activities may serve as a method for program evaluation, program replication, and program planning. Using a portfolio for collecting, describing, and documenting a project's successes, efforts, and failures enables project staff to take…

  9. Ada compiler evaluation on the Space Station Freedom Software Support Environment project

    NASA Technical Reports Server (NTRS)

    Badal, D. L.

    1989-01-01

    This paper describes the work in progress to select the Ada compilers for the Space Station Freedom Program (SSFP) Software Support Environment (SSE) project. The purpose of the SSE Ada compiler evaluation team is to establish the criteria, test suites, and benchmarks to be used for evaluating Ada compilers for the mainframes, workstations, and the real-time target for flight- and ground-based computers. The combined efforts and cooperation of the customer, subcontractors, vendors, academia, and SIGAda groups made it possible to acquire the necessary background information, benchmarks, test suites, and criteria.

  10. BioBenchmark Toyama 2012: an evaluation of the performance of triple stores on biological data

    PubMed Central

    2014-01-01

    Background: Biological databases vary enormously in size and data complexity, from small databases that contain a few million Resource Description Framework (RDF) triples to large databases that contain billions of triples. In this paper, we evaluate whether RDF native stores can meet the needs of a biological database provider. Prior evaluations have used synthetic data of limited database size. For example, the largest BSBM benchmark uses 1 billion synthetic e-commerce RDF triples on a single node. However, real-world biological data differ greatly from such simple synthetic data, and it is difficult to determine whether synthetic e-commerce data are representative enough of biological databases. Therefore, for this evaluation, we used five real data sets from biological databases. Results: We evaluated five triple stores, 4store, Bigdata, Mulgara, Virtuoso, and OWLIM-SE, with five biological data sets, Cell Cycle Ontology, Allie, PDBj, UniProt, and DDBJ, ranging in size from approximately 10 million to 8 billion triples. For each database, we loaded all the data onto our single node and prepared the database for use in a classical data warehouse scenario. Then, we ran a series of SPARQL queries against each endpoint and recorded the execution time and the accuracy of the query response. Conclusions: Our paper shows that, with appropriate configuration, Virtuoso and OWLIM-SE can satisfy the basic requirements for loading and querying up to roughly 8 billion triples of biological data on a single node, with simultaneous access by 64 clients. OWLIM-SE performs best for databases with approximately 11 million triples; for data sets that contain 94 million and 590 million triples, OWLIM-SE and Virtuoso perform best, without an overwhelming advantage over each other; for data over 4 billion triples, Virtuoso works best. 4store performs well on small data sets with limited features when the number of triples is less than 100 million, and our test shows its

  11. Vermont Rural and Farm Family Rehabilitation Project. A Benchmark Report. Research Report MP73.

    ERIC Educational Resources Information Center

    Tompkins, E. H.; And Others

    The report presents information about client families and their farms during their contact with the Vermont Rural and Farm Family Rehabilitation (RFFR) project from March 1, 1969 to June 30, 1971. Data are from 450 family case histories which include 2,089 members. Most were from northern Vermont. Families averaged 4.64 persons each, about 1 more…

  12. Oregon's Technical, Human, and Organizational Networking Infrastructure for Science and Mathematics: A Planning Project. Benchmark Reports.

    ERIC Educational Resources Information Center

    Lamb, William G., Ed.

    This compilation of reports is part of a planning project that aims to establish a coalition of organizations and key people who can work together to bring computerized telecommunications (CT) to Oregon as a teaching tool for science and mathematics teachers and students, and to give that coalition practical ideas for proposals to make CT a…

  13. Criticality experiments and benchmarks for cross section evaluation: the neptunium case

    NASA Astrophysics Data System (ADS)

    Leong, L. S.; Tassan-Got, L.; Audouin, L.; Paradela, C.; Wilson, J. N.; Tarrio, D.; Berthier, B.; Duran, I.; Le Naour, C.; Stéphan, C.

    2013-03-01

    The 237Np neutron-induced fission cross section has recently been measured over a large energy range (from eV to GeV) at the n_TOF facility at CERN. Compared to previous measurements, the n_TOF fission cross section appears to be higher by 5-7% above the fission threshold. To check the relevance of the n_TOF data, we analyze a criticality experiment performed at Los Alamos with a 6 kg sphere of 237Np, surrounded by enriched uranium (235U) so as to approach criticality with fast neutrons. The calculated multiplication factor k_eff is in better agreement with the experiment (the deviation of 750 pcm is reduced to 250 pcm) when we replace the ENDF/B-VII.0 evaluation of the 237Np fission cross section with the n_TOF data. We also explore the hypothesis of deficiencies in the inelastic cross section of 235U, which has been invoked by some authors to explain the 750 pcm deviation; however, the large distortion of the inelastic cross section that this would require is incompatible with existing measurements. We also show that the average neutron multiplicity ν̄ of 237Np can hardly be incriminated, given the high accuracy of the existing data. Fission-rate ratios and averaged fission cross sections measured in several fast neutron fields seem to give contradictory results on the validation of the 237Np cross section, but at least one of the benchmark experiments, in which the active deposits were well calibrated for the number of atoms, favors the n_TOF data set. These outcomes support the hypothesis of a higher fission cross section for 237Np.
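    The pcm unit used for the k_eff deviations above is a relative reactivity unit (1 pcm = 1e-5 in k_eff). A minimal sketch, with k values chosen purely to reproduce the deviations quoted in the abstract (not the study's actual calculated values):

```python
def keff_deviation_pcm(k_calc, k_exp=1.0):
    """Deviation of a calculated multiplication factor from the
    experimental value, in pcm (1 pcm = 1e-5 in k_eff)."""
    return (k_calc - k_exp) * 1e5

# Illustrative values chosen to match the deviations quoted above:
endfb_dev = keff_deviation_pcm(1.00750)  # ~750 pcm with ENDF/B-VII.0
ntof_dev = keff_deviation_pcm(1.00250)   # ~250 pcm with the n_TOF data
```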

  14. Benchmarking Clinical Speech Recognition and Information Extraction: New Data, Methods, and Evaluations

    PubMed Central

    Zhou, Liyuan; Hanlen, Leif; Ferraro, Gabriela

    2015-01-01

    Background Over a tenth of preventable adverse events in health care are caused by failures in information flow. These failures are tangible in clinical handover; even with good verbal handover, from two-thirds to all of this information is lost after 3-5 shifts if notes are taken by hand, or not at all. Speech recognition and information extraction provide a way to fill out a handover form for clinical proofing and sign-off. Objective The objective of the study was to provide a recorded spoken handover, annotated verbatim transcriptions, and evaluations to support research in spoken and written natural language processing for filling out a clinical handover form. This dataset is based on synthetic patient profiles, thereby avoiding ethical and legal restrictions, while maintaining efficacy for research in speech-to-text conversion and information extraction, based on realistic clinical scenarios. We also introduce a Web app to demonstrate the system design and workflow. Methods We experiment with Dragon Medical 11.0 for speech recognition and CRF++ for information extraction. To compute features for information extraction, we also apply CoreNLP, MetaMap, and Ontoserver. Our evaluation uses cross-validation techniques to measure processing correctness. Results The data provided were a simulation of nursing handover, as recorded using a mobile device, built from simulated patient records and handover scripts, spoken by an Australian registered nurse. Speech recognition correctly recognized 5276 of 7277 words in our 100 test documents. We considered 50 mutually exclusive categories in information extraction and achieved an F1 (ie, the harmonic mean of precision and recall) of 0.86 in the category for irrelevant text and a macro-averaged F1 of 0.70 over the remaining 35 nonempty categories of the form in our 101 test documents. Conclusions The significance of this study hinges on opening our data, together with the related performance benchmarks and some…
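    The accuracy and F1 figures in this record follow the standard definitions; a minimal sketch (the example precision/recall values are illustrative, not from the study):

```python
def f1(precision, recall):
    """F1: the harmonic mean of precision and recall."""
    if precision + recall == 0:
        return 0.0
    return 2 * precision * recall / (precision + recall)

def macro_f1(per_category_f1):
    """Macro-averaged F1: the unweighted mean of per-category F1 scores."""
    return sum(per_category_f1) / len(per_category_f1)

# Word-level accuracy of the speech-recognition step reported above:
word_accuracy = 5276 / 7277  # roughly 0.73
```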

  15. Benchmarking the Selection and Projection Operations, and Ordering Capabilities of Relational Database Machines.

    DTIC Science & Technology

    1983-09-01

    This thesis describes the performance-measurement experiments designed for a number of back-end relational database machines. The recoverable fragments of the scanned contents cover selection measurements (the percentage of selection; effects of clustered and non-clustered indices) and projection measurements (the percentage of projections on non-key attributes; comparison of equivalent queries)…

  16. Benchmark simulation Model no 2 in Matlab-simulink: towards plant-wide WWTP control strategy evaluation.

    PubMed

    Vreck, D; Gernaey, K V; Rosen, C; Jeppsson, U

    2006-01-01

    In this paper, implementation of the Benchmark Simulation Model No 2 (BSM2) within Matlab-Simulink is presented. The BSM2 is developed for plant-wide WWTP control strategy evaluation on a long-term basis. It consists of a pre-treatment process, an activated sludge process and sludge treatment processes. Extended evaluation criteria are proposed for plant-wide control strategy assessment. Default open-loop and closed-loop strategies are also proposed to be used as references with which to compare other control strategies. Simulations indicate that the BSM2 is an appropriate tool for plant-wide control strategy evaluation.

  17. ICSBEP Criticality Benchmark Eigenvalues with ENDF/B-VII.1 Cross Sections

    SciTech Connect

    Kahler, Albert C. III; MacFarlane, Robert

    2012-06-28

    We review MCNP eigenvalue calculations from a suite of International Criticality Safety Benchmark Evaluation Project (ICSBEP) Handbook evaluations with the recently distributed ENDF/B-VII.1 cross section library.

  18. Helical screw expander evaluation project

    NASA Astrophysics Data System (ADS)

    McKay, R.

    1982-03-01

    A one MW helical rotary screw expander power system for electric power generation from geothermal brine was evaluated. The technology explored in the testing is simple, potentially very efficient, and ideally suited to wellhead installations in moderate- to high-enthalpy, liquid-dominated fields. A functional one MW geothermal electric power plant featuring a helical screw expander was produced and then tested, with a demonstrated average performance of approximately 45% machine efficiency over a wide range of test conditions in noncondensing operation on two-phase geothermal fluids. The project also produced a computer-equipped data system, an instrumentation and control van, and a 1000 kW variable load bank, all integrated into a test array designed for operation at a variety of remote test sites. Data are presented for the Utah testing and for the noncondensing phases of the testing in Mexico. Test time logged was 437 hours during the Utah tests and 1101 hours during the Mexico tests.

  19. Helical screw expander evaluation project

    NASA Technical Reports Server (NTRS)

    Mckay, R.

    1982-01-01

    A one MW helical rotary screw expander power system for electric power generation from geothermal brine was evaluated. The technology explored in the testing is simple, potentially very efficient, and ideally suited to wellhead installations in moderate- to high-enthalpy, liquid-dominated fields. A functional one MW geothermal electric power plant featuring a helical screw expander was produced and then tested, with a demonstrated average performance of approximately 45% machine efficiency over a wide range of test conditions in noncondensing operation on two-phase geothermal fluids. The project also produced a computer-equipped data system, an instrumentation and control van, and a 1000 kW variable load bank, all integrated into a test array designed for operation at a variety of remote test sites. Data are presented for the Utah testing and for the noncondensing phases of the testing in Mexico. Test time logged was 437 hours during the Utah tests and 1101 hours during the Mexico tests.

  20. Evaluation of the potential of benchmarking to facilitate the measurement of chemical persistence in lakes.

    PubMed

    Zou, Hongyan; MacLeod, Matthew; McLachlan, Michael S

    2014-01-01

    The persistence of chemicals in the environment is rarely measured in the field due to a paucity of suitable methods. Here we explore the potential of chemical benchmarking to facilitate the measurement of persistence in lake systems using a multimedia chemical fate model. The model results show that persistence in a lake can be assessed by quantifying the ratio of test chemical and benchmark chemical at as few as two locations: the point of emission and the outlet of the lake. Appropriate selection of benchmark chemicals also allows pseudo-first-order rate constants for physical removal processes such as volatilization and sediment burial to be quantified. We use the model to explore how the maximum persistence that can be measured in a particular lake depends on the partitioning properties of the test chemical of interest and the characteristics of the lake. Our model experiments demonstrate that combining benchmarking techniques with good experimental design and sensitive environmental analytical chemistry may open new opportunities for quantifying chemical persistence, particularly for relatively slowly degradable chemicals for which current methods do not perform well.
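    The ratio-based estimate described in this record can be sketched as a back-of-the-envelope calculation. This is a deliberate simplification with made-up numbers: it assumes the benchmark chemical undergoes the same physical removal (volatilization, sediment burial) but negligible degradation, and a first-order plug-flow approximation, whereas the study itself uses a full multimedia fate model.

```python
import math

def degradation_rate_constant(ratio_at_emission, ratio_at_outlet,
                              residence_time_days):
    """Pseudo-first-order degradation rate constant (1/day) of a test
    chemical, estimated from its concentration ratio to a benchmark
    chemical at the point of emission and at the lake outlet."""
    return math.log(ratio_at_emission / ratio_at_outlet) / residence_time_days

# Illustrative numbers: the test/benchmark ratio halves over a 30-day
# residence time, implying a 30-day degradation half-life.
k_deg = degradation_rate_constant(1.0, 0.5, 30.0)
half_life_days = math.log(2) / k_deg
```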

  1. The PIE Institute Project: Final Evaluation Report

    ERIC Educational Resources Information Center

    St. John, Mark; Carroll, Becky; Helms, Jen; Smith, Anita

    2008-01-01

    The Playful Invention and Exploration (PIE) Institute project was funded in 2005 by the National Science Foundation (NSF). For the past three years, Inverness Research has served as the external evaluator for the PIE project. The authors' evaluation efforts have included extensive observation and documentation of PIE project activities; ongoing…

  2. Neutron Cross Section Processing Methods for Improved Integral Benchmarking of Unresolved Resonance Region Evaluations

    NASA Astrophysics Data System (ADS)

    Walsh, Jonathan A.; Forget, Benoit; Smith, Kord S.; Brown, Forrest B.

    2016-03-01

    In this work we describe the development and application of computational methods for processing neutron cross section data in the unresolved resonance region (URR). These methods are integrated with a continuous-energy Monte Carlo neutron transport code, thereby enabling their use in high-fidelity analyses. Enhanced understanding of the effects of URR evaluation representations on calculated results is then obtained through utilization of the methods in Monte Carlo integral benchmark simulations of fast spectrum critical assemblies. First, we present a so-called on-the-fly (OTF) method for calculating and Doppler broadening URR cross sections. This method proceeds directly from ENDF-6 average unresolved resonance parameters and, thus, eliminates any need for a probability table generation pre-processing step in which tables are constructed at several energies for all desired temperatures. Significant memory reduction may be realized with the OTF method relative to a probability table treatment if many temperatures are needed. Next, we examine the effects of using a multi-level resonance formalism for resonance reconstruction in the URR. A comparison of results obtained by using the same stochastically-generated realization of resonance parameters in both the single-level Breit-Wigner (SLBW) and multi-level Breit-Wigner (MLBW) formalisms allows for the quantification of level-level interference effects on integrated tallies such as keff and energy group reaction rates. Though, as is well-known, cross section values at any given incident energy may differ significantly between single-level and multi-level formulations, the observed effects on integral results are minimal in this investigation. Finally, we demonstrate the calculation of true expected values, and the statistical spread of those values, through independent Monte Carlo simulations, each using an independent realization of URR cross section structure throughout. It is observed that both probability table…
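    For context, the conventional probability-table treatment that the on-the-fly method avoids amounts to sampling a cross section from a pre-computed band structure at each lookup. A minimal sketch with made-up band values (real tables are built per energy and temperature from the resonance parameters):

```python
import random

def sample_urr_cross_section(bands, rng=None):
    """Sample a cross section (barns) from a URR probability table:
    a list of (band_probability, cross_section) pairs whose
    probabilities sum to 1."""
    rng = rng or random.random
    xi = rng()
    cumulative = 0.0
    for band_probability, cross_section in bands:
        cumulative += band_probability
        if xi <= cumulative:
            return cross_section
    return bands[-1][1]  # guard against floating-point round-off

# Illustrative three-band table (probability, cross section in barns):
bands = [(0.2, 5.0), (0.5, 12.0), (0.3, 40.0)]
xs = sample_urr_cross_section(bands)
```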

  3. Linking user and staff perspectives in the evaluation of innovative transition projects for youth with disabilities.

    PubMed

    McAnaney, Donal F; Wynne, Richard F

    2016-06-01

    A key challenge in formative evaluation is to gather appropriate evidence to inform the continuous improvement of initiatives. In the absence of outcome data, the programme evaluator often must rely on the perceptions of beneficiaries and staff in generating insight into what is making a difference. The article describes the approach adopted in an evaluation of 15 innovative projects supporting school-leavers with disabilities in making the transition to education, work and life in community settings. Two complementary processes provided an insight into what project staff and leadership viewed as the key project activities and features that facilitated successful transition, as well as the areas of quality of life (QOL) that participants perceived as having been impacted positively by the projects. A comparison was made between participants' perceptions of QOL impact and the views of participants in services normally offered by the wider system. This revealed that project participants were significantly more positive in their views than participants in traditional services. In addition, the processes and activities of the more highly rated projects were benchmarked against less highly rated projects and also against usually available services. Even in the context of a range of intervening variables, such as the level and complexity of participant needs and variations in the stage of development of individual projects, the benchmarking process indicated a number of project characteristics that were highly valued by participants.

  4. Project Performance Evaluation Using Deep Belief Networks

    NASA Astrophysics Data System (ADS)

    Nguvulu, Alick; Yamato, Shoso; Honma, Toshihisa

    A Project Assessment Indicator (PAI) Model has recently been applied to evaluate monthly project performance based on 15 project elements derived from the project management (PM) knowledge areas. While the PAI Model comprehensively evaluates project performance, it lacks objectivity and universality. It lacks objectivity because experts assign model weights intuitively, based on their PM skills and experience. It lacks universality because the allocation of ceiling scores to project elements is done ad hoc, based on the empirical rule, without taking into account the interactions between the project elements. This study overcomes these limitations by applying a deep belief network (DBN) approach in which the model automatically assigns weights and allocates ceiling scores to the project elements based on the DBN weights, which capture the interactions between the project elements. We train our DBN on 5 IT projects of 12 months' duration and test it on 8 IT projects of less than 12 months' duration. We completely eliminate the manual assignment of weights and compute ceiling scores of project elements based on DBN weights. Our trained DBN evaluates the monthly project performance of the 8 test projects based on the 15 project elements to within a monthly relative error margin of between ±1.03% and ±3.30%.

  5. Benchmark IMRT evaluation of a Co-60 MRI-guided radiation therapy system.

    PubMed

    Wooten, H Omar; Rodriguez, Vivian; Green, Olga; Kashani, Rojano; Santanam, Lakshmi; Tanderup, Kari; Mutic, Sasa; Li, H Harold

    2015-03-01

    A device for MRI-guided radiation therapy (MR-IGRT) that uses cobalt-60 sources to deliver intensity modulated radiation therapy is now commercially available. We investigated the performance of the treatment planning and delivery system against the benchmark recommended by the American Association of Physicists in Medicine (AAPM) Task Group 119 for IMRT commissioning and demonstrated that the device plans and delivers IMRT treatments within recommended confidence limits and with similar accuracy as linac IMRT.

  6. Establishing Benchmarks for DOE Commercial Building R&D and Program Evaluation: Preprint

    SciTech Connect

    Deru, M.; Griffith, B.; Torcellini, P.

    2006-06-01

    The U.S. Department of Energy (DOE) Building Technologies Program and the DOE research laboratories conduct a great deal of research on building technologies. However, differences in models and simulation tools used by various research groups make it difficult to compare results among studies. The authors have developed a set of 22 hypothetical benchmark buildings and weighting factors for nine locations across the country, for a total of 198 buildings.

  7. Evaluating the Joint Theater Trauma Registry as a Data Source to Benchmark Casualty Care

    DTIC Science & Technology

    2012-05-01

    in casualties with polytrauma and a moderate blunt TBI. Secondary insults after TBI, especially hypothermia and hypoxemia, increased the odds of 24…combat casualty care. Benchmark analyses can be used to document the effectiveness of the combat care provided but may also reveal gaps in care…increased mortality when hypothermia accompanies polytrauma in the civilian sector, our data indicate that combat injured individuals with hypother…

  8. Evaluation of a High-Accuracy MacCormack-Type Scheme Using Benchmark Problems

    NASA Technical Reports Server (NTRS)

    Hixon, R.

    1997-01-01

    Due to their inherent dissipation and stability, the MacCormack scheme and its variants have been widely used in the computation of unsteady flow and acoustic problems. However, these schemes require many points per wavelength in order to propagate waves with a reasonable amount of accuracy. In this work, the linear wave propagation characteristics of MacCormack-type schemes are shown by solving several of the CAA Benchmark Problems.

  9. A modified ATP benchmark for evaluating the cleaning of some hospital environmental surfaces.

    PubMed

    Lewis, T; Griffith, C; Gallo, M; Weinbren, M

    2008-06-01

    Hospital cleaning continues to attract patient, media and political attention. In the UK it is still primarily assessed via visual inspection, which can be misleading. Calls have therefore been made for a more objective approach to assessing surface cleanliness. To improve the management of hospital cleaning the use of adenosine triphosphate (ATP) in combination with microbiological analysis has been proposed, with a general ATP benchmark value of 500 relative light units (RLU) for one combination of test and equipment. In this study, the same test combination was used to assess cleaning effectiveness in a 1300-bed teaching hospital after routine and modified cleaning protocols. Based upon the ATP results a revised stricter pass/fail benchmark of 250 RLU is proposed for the range of surfaces used in this study. This was routinely achieved using modified best practice cleaning procedures which also gave reduced surface counts with, for example, aerobic colony counts reduced from >100 to <2.5 cfu/cm², and counts of Staphylococcus aureus reduced from up to 2.5 to <1 cfu/cm² (95% of the time). Benchmarking is linked to incremental quality improvements and both the original suggestion of 500 RLU and the revised figure of 250 RLU can be used by hospitals as part of this process. They can also be used in the assessment of novel cleaning methods, such as steam cleaning and microfibre cloths, which have potential use in the National Health Service.
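    The pass/fail use of an RLU benchmark described in this record is a simple threshold test; a minimal sketch (the readings are made-up examples, not study data):

```python
def surface_passes(rlu_reading, benchmark_rlu=250):
    """Pass/fail classification of an ATP surface reading in relative
    light units (RLU). 250 RLU is the stricter benchmark proposed in
    the study above; 500 RLU was the earlier general benchmark for the
    same test-and-equipment combination."""
    return rlu_reading <= benchmark_rlu

readings = [120, 260, 480, 90]
strict = [surface_passes(r) for r in readings]         # against 250 RLU
lenient = [surface_passes(r, 500) for r in readings]   # against 500 RLU
```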

  10. Comprehensive Evaluation Project. Final Report.

    ERIC Educational Resources Information Center

    1969

    This project sought to develop a set of tests for the assessment of the basic literacy and occupational cognizance of pupils in those public elementary and secondary schools, including vocational schools, receiving services through Federally supported educational programs and projects. The assessment is to produce generalizable average scores for…

  11. Thermal Performance Benchmarking: Annual Report

    SciTech Connect

    Moreno, Gilbert

    2016-04-08

    The goal for this project is to thoroughly characterize the performance of state-of-the-art (SOA) automotive power electronics and electric motor thermal management systems. Information obtained from these studies will be used to: Evaluate advantages and disadvantages of different thermal management strategies; establish baseline metrics for the thermal management systems; identify methods of improvement to advance the SOA; increase the publicly available information related to automotive traction-drive thermal management systems; help guide future electric drive technologies (EDT) research and development (R&D) efforts. The performance results combined with component efficiency and heat generation information obtained by Oak Ridge National Laboratory (ORNL) may then be used to determine the operating temperatures for the EDT components under drive-cycle conditions. In FY15, the 2012 Nissan LEAF power electronics and electric motor thermal management systems were benchmarked. Testing of the 2014 Honda Accord Hybrid power electronics thermal management system started in FY15; however, due to time constraints it was not possible to include results for this system in this report. The focus of this project is to benchmark the thermal aspects of the systems. ORNL's benchmarking of electric and hybrid electric vehicle technology reports provide detailed descriptions of the electrical and packaging aspects of these automotive systems.

  12. Benchmarking expert system tools

    NASA Technical Reports Server (NTRS)

    Riley, Gary

    1988-01-01

    As part of its evaluation of new technologies, the Artificial Intelligence Section of the Mission Planning and Analysis Div. at NASA-Johnson has made timing tests of several expert system building tools. Among the production systems tested were Automated Reasoning Tool, several versions of OPS5, and CLIPS (C Language Integrated Production System), an expert system builder developed by the AI section. Also included in the test were a Zetalisp version of the benchmark along with four versions of the benchmark written in Knowledge Engineering Environment, an object oriented, frame based expert system tool. The benchmarks used for testing are studied.

  13. A study on operation efficiency evaluation based on firm's financial index and benchmark selection: take China Unicom as an example

    NASA Astrophysics Data System (ADS)

    Wu, Zu-guang; Tian, Zhan-jun; Liu, Hui; Huang, Rui; Zhu, Guo-hua

    2009-07-01

    As the only telecom operator listed on the A-share market, China Unicom has attracted many institutional investors in recent years under the 3G concept, which itself embodies an expectation of significant technical progress. Do institutional investors, or the concept of technical progress, have a significant effect on improving a firm's operating efficiency? Reviewing the literature on operating efficiency, we find that scholars have studied this problem using regression analysis based on traditional production functions, data envelopment analysis (DEA), financial-index analysis, marginal functions, capital-labor ratio coefficients, and so on, all based mainly on macro-level data. In this paper we use company micro-data to evaluate operating efficiency. Using factor analysis based on financial indices and comparing factor scores over the three years from 2005 to 2007, we find that China Unicom's operating efficiency was below the average level of the benchmark corporations and did not improve under the 3G concept from 2005 to 2007. In other words, institutional investors and the expectation of technical progress had only a faint effect on changes in China Unicom's operating efficiency. Selecting benchmark corporations as reference points for evaluating operating efficiency is a characteristic of this method, which is basically simple and direct. The method is also suitable for evaluating the operating efficiency of listed agricultural companies, which likewise face technical progress and marketing concepts such as tax exemption.

  14. MicroRNA array normalization: an evaluation using a randomized dataset as the benchmark.

    PubMed

    Qin, Li-Xuan; Zhou, Qin

    2014-01-01

    MicroRNA arrays possess a number of unique data features that challenge the assumptions underlying many normalization methods. We assessed the performance of existing normalization methods using two microRNA array datasets derived from the same set of tumor samples: one dataset was generated using a blocked randomization design when assigning arrays to samples and hence was free of confounding array effects; the second dataset was generated without blocking or randomization and exhibited array effects. The randomized dataset was assessed for differential expression between two tumor groups and treated as the benchmark. The non-randomized dataset was assessed for differential expression after normalization and compared against the benchmark. Normalization improved the true positive rate significantly in the non-randomized data but still possessed a false discovery rate as high as 50%. Adding a batch adjustment step before normalization further reduced the number of false positive markers while maintaining a similar number of true positive markers, which resulted in a false discovery rate of 32% to 48%, depending on the specific normalization method. We conclude the paper with some insights into possible causes of false discoveries, to shed light on how to improve normalization for microRNA arrays.
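    The false discovery rate quoted in this record is the fraction of called markers that are false positives; a minimal sketch (the counts are illustrative, not from the study):

```python
def false_discovery_rate(true_positives, false_positives):
    """Observed false discovery rate: the fraction of markers called
    differentially expressed that are in fact false positives."""
    called = true_positives + false_positives
    return false_positives / called if called else 0.0

# Illustrative counts only; the study reports FDRs between 32% and 50%,
# depending on normalization method and batch adjustment.
fdr = false_discovery_rate(true_positives=68, false_positives=32)
```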

  15. Benchmark evaluation of the RELAP code to calculate boiling in narrow channels

    SciTech Connect

    Kunze, J.F.; Loyalka, S.K. ); McKibben, J.C.; Hultsch, R.; Oladiran, O.

    1990-06-01

    The RELAP code has been tested with benchmark experiments (such as the loss-of-fluid test experiments at the Idaho National Engineering Laboratory) at high pressures and temperatures characteristic of those encountered in loss-of-coolant accidents (LOCAs) in commercial light water power reactors. Application of RELAP to the LOCA analysis of a low pressure (< 7 atm) and low temperature (< 100°C), plate-type research reactor, such as the University of Missouri Research Reactor (MURR), the high-flux breeder reactor, high-flux isotope reactor, and Advanced Test Reactor, requires resolution of questions involving overextrapolation to very low pressures and low temperatures, and calculations of the pulsed boiling/reflood conditions in the narrow rectangular cross-section channels (typically 2 mm thick) of the plate fuel elements. The practical concern of this problem is that plate fuel temperatures predicted by RELAP5 (MOD2, version 3) during the pulsed boiling period can reach high enough temperatures to cause plate (clad) weakening, though not melting. Since an experimental benchmark of RELAP under such LOCA conditions is not available and since such conditions present substantial challenges to the code, it is important to verify the code predictions. The comparison of the pulsed boiling experiments with the RELAP calculations involves both visual observations of void fraction versus time and measurements of temperatures near the fuel plate surface.

  16. Benchmarking foreign electronics technologies

    SciTech Connect

    Bostian, C.W.; Hodges, D.A.; Leachman, R.C.; Sheridan, T.B.; Tsang, W.T.; White, R.M.

    1994-12-01

    This report has been drafted in response to a request from the Japanese Technology Evaluation Center's (JTEC) Panel on Benchmarking Select Technologies. Since April 1991, the Competitive Semiconductor Manufacturing (CSM) Program at the University of California at Berkeley has been engaged in a detailed study of quality, productivity, and competitiveness in semiconductor manufacturing worldwide. The program is a joint activity of the College of Engineering, the Haas School of Business, and the Berkeley Roundtable on the International Economy, under sponsorship of the Alfred P. Sloan Foundation, and with the cooperation of semiconductor producers from Asia, Europe and the United States. Professors David A. Hodges and Robert C. Leachman are the project's Co-Directors. The present report for JTEC is primarily based on data and analysis drawn from that continuing program. The CSM program is being conducted by faculty, graduate students and research staff from UC Berkeley's Schools of Engineering and Business, and Department of Economics. Many of the participating firms are represented on the program's Industry Advisory Board. The Board played an important role in defining the research agenda. A pilot study was conducted in 1991 with the cooperation of three semiconductor plants. The research plan and survey documents were thereby refined. The main phase of the CSM benchmarking study began in mid-1992 and will continue at least through 1997. Reports are presented on the manufacture of integrated circuits; data storage; wireless technology; human-machine interfaces; and optoelectronics. Selected papers are indexed separately for inclusion in the Energy Science and Technology Database.

  17. INTEGRAL BENCHMARK DATA FOR NUCLEAR DATA TESTING THROUGH THE ICSBEP AND THE NEWLY ORGANIZED IRPHEP

    SciTech Connect

    J. Blair Briggs; Lori Scott; Yolanda Rugama; Enrico Sartori

    2007-04-01

    The status of the International Criticality Safety Benchmark Evaluation Project (ICSBEP) was last reported in a nuclear data conference at the International Conference on Nuclear Data for Science and Technology, ND-2004, in Santa Fe, New Mexico. Since that time the number and type of integral benchmarks have increased significantly. Included in the ICSBEP Handbook are criticality-alarm / shielding and fundamental physics benchmarks in addition to the traditional critical / subcritical benchmark data. Since ND-2004, a reactor physics counterpart to the ICSBEP, the International Reactor Physics Experiment Evaluation Project (IRPhEP), was initiated. The IRPhEP is patterned after the ICSBEP but focuses on other integral measurements, such as buckling, spectral characteristics, reactivity effects, reactivity coefficients, kinetics measurements, reaction-rate and power distributions, nuclide compositions, and other miscellaneous-type measurements, in addition to the critical configuration. The status of these two projects is discussed and selected benchmarks are highlighted in this paper.

  18. Project HEED. Final Evaluation Report.

    ERIC Educational Resources Information Center

    Hughes, Orval D.

    During 1972-73, Project HEED (Heed Ethnic Educational Depolarization) involved 1,350 Indian students in 60 classrooms at Sells, Topowa, San Carlos, Rice, Many Farms, Hotevilla, Peach Springs, and Sacaton. Primary objectives were: (1) improvement in reading skills, (2) development of cultural awareness, and (3) providing for the Special Education…

  19. An Evaluation of Project PLAN.

    ERIC Educational Resources Information Center

    Patterson, Eldon

    Project Plan, a computer managed individualized learning system developed by the Westinghouse Learning Corporation, was introduced into the St. Louis Public Schools under a Title III grant of the Elementary and Secondary Education Act. The program, offering individualized education in reading, language arts, mathematics, science, and social…

  20. Benchmark Evaluation of Fuel Effect and Material Worth Measurements for a Beryllium-Reflected Space Reactor Mockup

    SciTech Connect

    Marshall, Margaret A.; Bess, John D.

    2015-02-01

    The critical configurations of the small, compact critical assembly (SCCA) experiments performed at the Oak Ridge Critical Experiments Facility (ORCEF) in 1962-1965 have been evaluated as acceptable benchmark experiments for inclusion in the International Handbook of Evaluated Criticality Safety Benchmark Experiments. The initial intent of these experiments was to support the design of the Medium Power Reactor Experiment (MPRE) program, whose purpose was to study “power plants for the production of electrical power in space vehicles.” The third configuration in this series of experiments was a beryllium-reflected assembly of stainless-steel-clad, highly enriched uranium (HEU)-O2 fuel, a mockup of a potassium-cooled space power reactor. Reactivity measurements, cadmium-ratio spectral measurements, and fission-rate measurements were performed through the core and top reflector. Fuel-effect worth measurements and neutron-moderating and -absorbing material worths were also measured in the assembly fuel region. The cadmium ratios, fission rates, and worth measurements were evaluated for inclusion in the International Handbook of Evaluated Criticality Safety Benchmark Experiments. The fuel-tube effect and the neutron-moderating and -absorbing material worth measurements are the focus of this paper. Additionally, a measurement of the worth of potassium filling the core region was performed but has not yet been evaluated. Pellets of 93.15 wt.% enriched uranium dioxide (UO2) were stacked in 30.48 cm tall stainless steel fuel tubes (with 0.3 cm tall end caps). Each fuel tube held 26 pellets, with a total mass of 295.8 g of UO2 per tube. 253 tubes were arranged in a 1.506 cm triangular lattice. An additional 7-tube-cluster critical configuration was also measured but not used for any physics measurements. The core was surrounded on all sides by a beryllium reflector. The fuel-effect worths were measured by removing fuel tubes at various radii. An accident scenario…

  1. Evaluating success levels of mega-projects

    NASA Technical Reports Server (NTRS)

    Kumaraswamy, Mohan M.

    1994-01-01

    Today's mega-projects transcend the traditional trajectories traced within national and technological limitations. Powers unleashed by the internationalization of initiatives, for example in space exploration and environmental protection, are arguably only temporarily suppressed by narrower national, economic, and professional disagreements as to how best they should be harnessed. While the world gets its act together, there is time to develop the technologies of supra-mega-project management that will synergize truly diverse resources and smoothly mesh their interfaces. Such mega-projects and their management need to be realistically evaluated if such improvements are to be implemented. This paper examines current approaches to evaluating mega-projects and questions the validity of extrapolating them to the supra-mega-projects of the future. Alternatives to improve such evaluations are proposed and described.

  2. Evaluation of potential factors affecting deriving conductivity benchmark by utilizing weighting methods in Hun-Tai River Basin, Northeastern China.

    PubMed

    Jia, Xiaobo; Zhao, Qian; Guo, Fen; Ma, Shuqin; Zhang, Yuan; Zang, Xiaomiao

    2017-03-01

    Specific conductivity is an increasingly important stressor for freshwater ecosystems. Interacting with other environmental factors, it may lead to habitat degradation and biodiversity loss. However, it is still poorly understood how the effect of specific conductivity on freshwater organisms is confounded by other environmental factors. In this study, a weight-of-evidence method was applied to evaluate the potential environmental factors that may confound the effect of specific conductivity on macroinvertebrate community structure and to identify confounders affecting derivation of a conductivity benchmark in the Hun-Tai River Basin, China. A total of seven potential environmental factors were assessed by six types of evidence (i.e., correlation of cause and confounder, correlation of effect and confounder, the contingency of high levels of cause and confounder, the removal of the confounder, levels of the confounder known to cause effects, and multivariate statistics for confounding). Results showed that the effects of dissolved oxygen (DO), fecal coliform, habitat score, total phosphorus (TP), pH, and temperature on the relationship between sensitive genera loss and specific conductivity were minimal and manageable. NH3-N was identified as a confounder affecting derivation of the conductivity benchmark for macroinvertebrates. The potential confounding by high NH3-N was minimized by removing sites with NH3-N > 2.0 mg/L from the data set. Our study tailored the weighting method previously developed by USEPA to use field data to develop causal relationships for basin-scale applications, and it may provide useful information for pollution remediation and natural resource management.

  3. Toxicological Benchmarks for Wildlife

    SciTech Connect

    Sample, B.E.; Opresko, D.M.; Suter, G.W.

    1993-01-01

    Ecological risks of environmental contaminants are evaluated using a two-tiered process. In the first tier, a screening assessment is performed in which concentrations of contaminants in the environment are compared to no observed adverse effects level (NOAEL)-based toxicological benchmarks. These benchmarks represent concentrations of chemicals in environmental media (water, sediment, soil, food, etc.) presumed to be nonhazardous to biota. While exceedance of these benchmarks does not indicate any particular level or type of risk, concentrations below the benchmarks should not result in significant effects. In practice, when contaminant concentrations in food or water resources are less than these toxicological benchmarks, the contaminants may be excluded from further consideration. However, if the concentration of a contaminant exceeds a benchmark, that contaminant should be retained as a contaminant of potential concern (COPC) and investigated further. The second tier in ecological risk assessment, the baseline ecological risk assessment, may use toxicological benchmarks as part of a weight-of-evidence approach (Suter 1993). Under this approach, toxicological benchmarks are one of several lines of evidence used to support or refute the presence of ecological effects. Other sources of evidence include media toxicity tests, surveys of biota (abundance and diversity), measures of contaminant body burdens, and biomarkers. This report presents NOAEL- and lowest observed adverse effects level (LOAEL)-based toxicological benchmarks for assessment of the effects of 85 chemicals on 9 representative mammalian wildlife species (short-tailed shrew, little brown bat, meadow vole, white-footed mouse, cottontail rabbit, mink, red fox, and whitetail deer) and 11 avian wildlife species (American robin, rough-winged swallow, American woodcock, wild turkey, belted kingfisher, great blue heron, barred owl, barn owl, Cooper's hawk, and red-tailed hawk)…
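The first-tier decision rule described above is simple enough to sketch in code. A minimal illustration follows; the chemical names, concentrations, and benchmark values are hypothetical, not data from the report:

```python
def screen_contaminants(measured, benchmarks):
    """First-tier screen: a chemical whose measured concentration exceeds
    its NOAEL-based benchmark is retained as a contaminant of potential
    concern (COPC); concentrations at or below the benchmark are excluded
    from further consideration."""
    copcs = []
    for chemical, concentration in measured.items():
        benchmark = benchmarks.get(chemical)
        # Chemicals with no available benchmark are conservatively retained.
        if benchmark is None or concentration > benchmark:
            copcs.append(chemical)
    return copcs

# Hypothetical surface-water concentrations and benchmarks (mg/L).
water = {"cadmium": 0.004, "zinc": 0.30, "selenium": 0.001}
limits = {"cadmium": 0.010, "zinc": 0.12, "selenium": 0.005}
print(screen_contaminants(water, limits))  # ['zinc']
```

Only zinc exceeds its benchmark here, so only zinc would be carried into the second-tier baseline ecological risk assessment.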

  4. Evaluating a Project on Roma Education

    ERIC Educational Resources Information Center

    Georgiadis, Fokion; Nikolajevic, Dragana; van Driel, Barry

    2011-01-01

    This research note is based on the evaluation of the Comenius project Teacher-IN-SErvice-Training-for-Roma-inclusion ("INSETRom"). The project represented an international effort that was undertaken to bridge the gap between Roma and non-Roma communities and to improve the educational attainment of Roma children in the mainstream…

  5. Evaluation of the Law Focus Curriculum Project.

    ERIC Educational Resources Information Center

    Watson, Patricia J.; Workman, Eva Mae

    1974-01-01

    This evaluation of the Law Focused Curriculum Project of the Oklahoma Public Schools analyzes the human and nonhuman resources utilized in the project, and the nature and extent of activities. The first part of the document examines the program and its objectives. School-age citizens are to become acquainted with the law, the functions and…

  6. 5 CFR 470.317 - Project evaluation.

    Code of Federal Regulations, 2013 CFR

    2013-01-01

    ... 5 Administrative Personnel 1 2013-01-01 2013-01-01 false Project evaluation. 470.317 Section 470.317 Administrative Personnel OFFICE OF PERSONNEL MANAGEMENT CIVIL SERVICE REGULATIONS PERSONNEL MANAGEMENT RESEARCH PROGRAMS AND DEMONSTRATIONS PROJECTS Regulatory Requirements Pertaining to...

  7. Monitoring and Evaluating Nonpoint Source Watershed Projects

    EPA Pesticide Factsheets

    This guide is written primarily for those who develop and implement monitoring plans for watershed management projects. It can also be used to evaluate the technical merits of monitoring proposals that an organization might sponsor. It is an update to the 1997 Guide.

  8. Current Reactor Physics Benchmark Activities at the Idaho National Laboratory

    SciTech Connect

    John D. Bess; Margaret A. Marshall; Mackenzie L. Gorham; Joseph Christensen; James C. Turnbull; Kim Clark

    2011-11-01

    The International Reactor Physics Experiment Evaluation Project (IRPhEP) [1] and the International Criticality Safety Benchmark Evaluation Project (ICSBEP) [2] were established to preserve integral reactor physics and criticality experiment data for present and future research. These valuable assets provide the basis for recording, developing, and validating our integral nuclear data, and experimental and computational methods. These projects are managed through the Idaho National Laboratory (INL) and the Organisation for Economic Co-operation and Development Nuclear Energy Agency (OECD-NEA). Staff and students at the Department of Energy - Idaho (DOE-ID) and INL are engaged in the development of benchmarks to support ongoing research activities. These benchmarks include reactors or assemblies that support Next Generation Nuclear Plant (NGNP) research, space nuclear Fission Surface Power System (FSPS) design validation, and currently operational facilities in Southeastern Idaho.

  9. EVALUATION OF U10MO FUEL PLATE IRRADIATION BEHAVIOR VIA NUMERICAL AND EXPERIMENTAL BENCHMARKING

    SciTech Connect

    Samuel J. Miller; Hakan Ozaltun

    2012-11-01

    This article analyzes dimensional changes due to irradiation of monolithic plate-type nuclear fuel and compares the results with finite element analysis of the plates during fabrication and irradiation. Monolithic fuel plates tested in the Advanced Test Reactor (ATR) at Idaho National Laboratory (INL) are being used to benchmark proposed fuel performance for several high-power research reactors. Post-irradiation metallographic images of plates sectioned at the midpoint were analyzed to determine dimensional changes of the fuel and the cladding response. A constitutive model of the fabrication process and irradiation behavior of the tested plates was developed using the general-purpose commercial finite element analysis package Abaqus. Using calculated burn-up profiles of irradiated plates to model the power distribution, and including irradiation behaviors such as swelling and irradiation-enhanced creep, model simulations allow analysis of plate parameters that are either impossible or infeasible to study in an experimental setting. The development and progression of fabrication-induced stress concentrations at the plate edges was of primary interest, as these locations have a unique stress profile during irradiation. Additionally, a comparison between 2D and 3D models was performed to optimize the analysis methodology, in particular the ability of 2D and 3D models to account for the out-of-plane stresses that produce 3-dimensional creep behavior. Results show that the assumptions made in 2D models for the out-of-plane stresses and strains cannot capture the 3-dimensional physics accurately, and thus 2D approximations are not computationally accurate. Stress-strain fields depend on plate geometry and irradiation conditions; thus, if stress-based criteria are used to predict plate behavior (as opposed to material impurities, fine micro-structural defects, or sharp power gradients), a unique 3D finite element formulation is required for each plate.

  10. In response to an open invitation for comments on AAAS project 2061's Benchmark books on science. Part 1: documentation of serious errors in cell biology.

    PubMed

    Ling, Gilbert

    2006-01-01

    Project 2061 was founded by the American Association for the Advancement of Science (AAAS) to improve secondary school science education. An in-depth study of ten 9th- to 12th-grade biology textbooks led to the verdict that none conveyed "Big Ideas" that would give coherence and meaning to the profusion of lavishly illustrated isolated details. However, neither the Project report itself nor the Benchmark books put out earlier by the Project carries what deserves the designation of "Big Ideas." Worse, in the two earliest-published Benchmark books, the basic unit of all life forms, the living cell, is described as a soup enclosed by a cell membrane that determines what can enter or leave the cell. This is astonishing, since extensive experimental evidence unequivocally disproved this idea 60 years ago. The "new" version of the membrane theory brought in to replace the discredited (sieve) version, the pump model, currently taught as established truth in all high-school and college biology textbooks, was also unequivocally disproved 40 years ago. This comment is written partly in response to Benchmark's gracious open invitation for ideas to improve the books and, through them, US secondary school science education.

  11. Project SAVE: Evaluation of Pilot Test Results

    ERIC Educational Resources Information Center

    Bell, Mary Lou; Bliss, Kappie

    The long-term goal of Project SAVE (Stop Alcohol Violations Early) is to reduce underage drinking. When a major revision of the program was initiated, the pilot program was evaluated for statistically measurable changes against short-term goals. The results of that evaluation are presented here. Four elements were included in the evaluation…

  12. Training Evaluation Based on Cases of Taiwanese Benchmarked High-Tech Companies

    ERIC Educational Resources Information Center

    Lien, Bella Ya Hui; Hung, Richard Yu Yuan; McLean, Gary N.

    2007-01-01

    Although the influence of workplace practices and employees' experiences with training effectiveness has received considerable attention, less is known of the influence of workplace practices on training evaluation methods. The purposes of this study were to: (1) explore and understand the training evaluation methods used by seven Taiwanese…

  13. Strategic evaluation central to LNG project formation

    SciTech Connect

    Nissen, D.; DiNapoli, R.N.; Yost, C.C.

    1995-07-03

    An efficient-scale, grassroots LNG facility of about 6 million metric tons/year capacity requires a prestart-up outlay of $5 billion or more for the supply facilities: production, feedgas pipeline, liquefaction, and shipping. The demand side of the LNG chain requires a similar outlay, counting the import-regasification terminal and a combination of 5 gigawatts or more of electric power generation or the equivalent in city gas and industrial gas-using facilities. There exist no well-developed commodity markets for free-on-board (fob) or delivered LNG. A new LNG supply project is dedicated to its buyers. Indeed, the buyers' revenue commitment is the project's only bankable asset. For the buyer to make this commitment, the supply venture's capability and commitment must be credible: to complete the project and to deliver the LNG reliably over the 20+ years required to recover the capital committed on both sides. This requirement has technical, economic, and business dimensions. In this article the authors describe an LNG project evaluation system and show its application to typical tasks: project cost of service and participant shares; LNG project competition; alternative project structures; and market competition for LNG-supplied electric power generation.

  14. Evaluation of various LandFlux evapotranspiration algorithms using the LandFlux-EVAL synthesis benchmark products and observational data

    NASA Astrophysics Data System (ADS)

    Michel, Dominik; Hirschi, Martin; Jimenez, Carlos; McCabe, Mathew; Miralles, Diego; Wood, Eric; Seneviratne, Sonia

    2014-05-01

    Research on climate variations and the development of predictive capabilities largely rely on globally available reference data series of the different components of the energy and water cycles. Several efforts have aimed at producing large-scale and long-term reference data sets of these components, e.g. based on in situ observations and remote sensing, in order to allow for diagnostic analyses of the drivers of temporal variations in the climate system. Evapotranspiration (ET) is an essential component of the energy and water cycle, which cannot be monitored directly on a global scale by remote sensing techniques. In recent years, several global multi-year ET data sets have been derived from remote sensing-based estimates, observation-driven land surface model simulations, or atmospheric reanalyses. The LandFlux-EVAL initiative presented an ensemble evaluation of these data sets over the time periods 1989-1995 and 1989-2005 (Mueller et al. 2013). Currently, a multi-decadal global reference heat flux data set for ET at the land surface is being developed within the LandFlux initiative of the Global Energy and Water Cycle Experiment (GEWEX). This LandFlux v0 ET data set comprises four ET algorithms forced with a common radiation and surface meteorology. In order to estimate the agreement of the LandFlux v0 ET data with existing data sets, it is compared to the recently available LandFlux-EVAL synthesis benchmark product. Additional evaluation of the LandFlux v0 ET data set is based on a comparison to in situ observations from a weighing lysimeter at the hydrological research site Rietholzbach in Switzerland. These analyses serve as a test bed for similar evaluation procedures that are envisaged for ESA's WACMOS-ET initiative (http://wacmoset.estellus.eu). Reference: Mueller, B., Hirschi, M., Jimenez, C., Ciais, P., Dirmeyer, P. A., Dolman, A. J., Fisher, J. B., Jung, M., Ludwig, F., Maignan, F., Miralles, D. G., McCabe, M. F., Reichstein, M., Sheffield, J., Wang, K

  15. Medico-economic evaluation of healthcare products. Methodology for defining a significant impact on French health insurance costs and selection of benchmarks for interpreting results.

    PubMed

    Dervaux, Benoît; Baseilhac, Eric; Fagon, Jean-Yves; Biot, Claire; Blachier, Corinne; Braun, Eric; Debroucker, Frédérique; Detournay, Bruno; Ferretti, Carine; Granger, Muriel; Jouan-Flahault, Chrystel; Lussier, Marie-Dominique; Meyer, Arlette; Muller, Sophie; Pigeon, Martine; De Sahb, Rima; Sannié, Thomas; Sapède, Claudine; Vray, Muriel

    2014-01-01

    Decree No. 2012-1116 of 2 October 2012 on the medico-economic assignments of the French National Authority for Health (Haute autorité de santé, HAS) significantly alters the conditions for accessing the health products market in France. This paper presents a theoretical framework for interpreting the results of the economic evaluation of health technologies and summarises the facts available in France for developing benchmarks that will be used to interpret incremental cost-effectiveness ratios. This literature review shows that it is difficult to determine a threshold value, but it is also difficult to interpret incremental cost-effectiveness ratio (ICER) results without one. In this context, round table participants favour a pragmatic approach based on "benchmarks" as opposed to a threshold value, from an interpretative and normative perspective, i.e. benchmarks that can change over time based on feedback.

  16. 23 CFR 505.11 - Project evaluation and rating.

    Code of Federal Regulations, 2011 CFR

    2011-04-01

    ... preliminary rating and evaluation at any point in the project development after the project's concept plan is... 23 Highways 1 2011-04-01 2011-04-01 false Project evaluation and rating. 505.11 Section 505.11... MANAGEMENT PROJECTS OF NATIONAL AND REGIONAL SIGNIFICANCE EVALUATION AND RATING § 505.11 Project...

  17. Self-benchmarking Guide for Data Centers: Metrics, Benchmarks, Actions

    SciTech Connect

    Mathew, Paul; Ganguly, Srirupa; Greenberg, Steve; Sartor, Dale

    2009-07-13

    This guide describes energy efficiency metrics and benchmarks that can be used to track the performance of and identify potential opportunities to reduce energy use in data centers. This guide is primarily intended for personnel who have responsibility for managing energy use in existing data centers - including facilities managers, energy managers, and their engineering consultants. Additionally, data center designers may also use the metrics and benchmarks described in this guide for goal-setting in new construction or major renovation. This guide provides the following information: (1) A step-by-step outline of the benchmarking process. (2) A set of performance metrics for the whole building as well as individual systems. For each metric, the guide provides a definition, performance benchmarks, and potential actions that can be inferred from evaluating this metric. (3) A list and descriptions of the data required for computing the metrics. This guide is complemented by spreadsheet templates for data collection and for computing the benchmarking metrics. This guide builds on prior data center benchmarking studies supported by the California Energy Commission. Much of the benchmarking data are drawn from the LBNL data center benchmarking database that was developed from these studies. Additional benchmark data were obtained from engineering experts including facility designers and energy managers. This guide also builds on recent research supported by the U.S. Department of Energy's Save Energy Now program.
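The abstract does not enumerate the guide's metrics, but the most widely used whole-building data center metric is power usage effectiveness (PUE), the ratio of total facility energy to IT equipment energy. As an assumed illustration of the kind of metric such a guide covers (the numbers below are made up):

```python
def pue(total_facility_kwh, it_equipment_kwh):
    """Power Usage Effectiveness: total facility energy (IT load plus
    cooling, power distribution, lighting) divided by IT equipment energy.
    1.0 is the theoretical ideal; lower is better."""
    if it_equipment_kwh <= 0:
        raise ValueError("IT equipment energy must be positive")
    return total_facility_kwh / it_equipment_kwh

# A facility drawing 1,800 MWh/yr overall to support 1,000 MWh/yr of IT load:
print(pue(1800.0, 1000.0))  # 1.8
```

Tracking such a ratio over time, and comparing it against peer benchmarks, is exactly the goal-setting use the guide describes.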

  18. Implementing Cognitive Behavioral Therapy for Chronic Fatigue Syndrome in a Mental Health Center: A Benchmarking Evaluation

    ERIC Educational Resources Information Center

    Scheeres, Korine; Wensing, Michel; Knoop, Hans; Bleijenberg, Gijs

    2008-01-01

    Objective: This study evaluated the success of implementing cognitive behavioral therapy (CBT) for chronic fatigue syndrome (CFS) in a representative clinical practice setting and compared the patient outcomes with those of previously published randomized controlled trials (RCTs) of CBT for CFS. Method: The implementation interventions were the…

  19. Benchmarking Quality in Online Teaching and Learning: A Rubric for Course Construction and Evaluation

    ERIC Educational Resources Information Center

    Ternus, Mona P.; Palmer, Kay L.; Faulk, Debbie R.

    2007-01-01

    Online courses have many components and dimensions. Both the form (structure) and the content (expression) are situated in an overall environment. The sum of these elements results in student outcomes and learning. In order to facilitate construction and evaluate the quality of an online course, a four-part rubric was designed to reflect:…

  20. 'Score to Door Time', a benchmarking tool for rapid response systems: a pilot multi-centre service evaluation

    PubMed Central

    2011-01-01

    Introduction Rapid Response Systems were created to minimise delays in recognition and treatment of deteriorating patients on general wards. Physiological 'track and trigger' systems are used to alert a team with critical care skills to stabilise patients and expedite admission to intensive care units. No benchmarking tool exists to facilitate comparison for quality assurance. This study was designed to create and test a tool to analyse the efficiency of intensive care admission processes. Methods We conducted a pilot multicentre service evaluation of patients admitted to 17 intensive care units from the United Kingdom, Ireland, Denmark, United States of America and Australia. Physiological abnormalities were recorded via a standardised track and trigger score (VitalPAC™ Early Warning Score). The period between the time of initial physiological abnormality (Score) and admission to intensive care (Door) was recorded as 'Score to Door Time'. Participants subsequently suggested causes for admission delays. Results Score to Door Time for 177 admissions was a median of 4:10 hours (interquartile range (IQR) 1:49 to 9:10). Time from physiological trigger to activation of a Rapid Response System was a median 0:47 hours (IQR 0:00 to 2:15). Time from call-out to intensive care admission was a median of 2:45 hours (IQR 1:19 to 6:32). A total of 127 (71%) admissions were deemed to have been delayed. Stepwise linear regression analysis yielded three significant predictors of longer Score to Door Time: being treated in a British centre, higher Acute Physiology and Chronic Health Evaluation (APACHE) II score and increasing age. Binary regression analysis demonstrated a significant association (P < 0.045) of APACHE II scores >20 with Score to Door Times greater than the median 4:10 hours. Conclusions Score to Door Time seemed to be largely independent of illness severity and, when combined with qualitative feedback from centres, suggests that admission delays could be due to
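The metric itself is simply the interval between two timestamps per admission. A minimal sketch follows; the admission times are hypothetical, chosen so the delays match the reported median (4:10) and IQR bounds (1:49, 9:10):

```python
from datetime import datetime
from statistics import median

def score_to_door_hours(score_time, door_time):
    """Hours from the first physiological trigger ('Score') to ICU admission ('Door')."""
    return (door_time - score_time).total_seconds() / 3600.0

# Hypothetical admissions with delays of 4:10, 1:49, and 9:10 hours.
admissions = [
    (datetime(2011, 3, 1, 8, 0), datetime(2011, 3, 1, 12, 10)),
    (datetime(2011, 3, 2, 22, 30), datetime(2011, 3, 3, 0, 19)),
    (datetime(2011, 3, 4, 14, 0), datetime(2011, 3, 4, 23, 10)),
]
delays = [score_to_door_hours(s, d) for s, d in admissions]
print(round(median(delays), 2))  # 4.17
```

Splitting the interval at the Rapid Response System call-out, as the study does, just means computing two such differences per patient instead of one.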

  1. HANFORD DST THERMAL & SEISMIC PROJECT ANSYS BENCHMARK ANALYSIS OF SEISMIC INDUCED FLUID STRUCTURE INTERACTION IN A HANFORD DOUBLE SHELL PRIMARY TANK

    SciTech Connect

    MACKEY, T.C.

    2006-03-14

    M&D Professional Services, Inc. (M&D) is under subcontract to Pacific Northwest National Laboratories (PNNL) to perform seismic analysis of the Hanford Site Double-Shell Tanks (DSTs) in support of a project entitled ''Double-Shell Tank (DST) Integrity Project - DST Thermal and Seismic Analyses''. The overall scope of the project is to complete an up-to-date comprehensive analysis of record of the DST System at Hanford in support of Tri-Party Agreement Milestone M-48-14. The work described herein was performed in support of the seismic analysis of the DSTs. The thermal and operating loads analysis of the DSTs is documented in Rinker et al. (2004). The overall seismic analysis of the DSTs is being performed with the general-purpose finite element code ANSYS. The overall model used for the seismic analysis of the DSTs includes the DST structure, the contained waste, and the surrounding soil. The seismic analysis of the DSTs must address the fluid-structure interaction behavior and sloshing response of the primary tank and contained liquid. ANSYS has demonstrated capabilities for structural analysis, but the capabilities and limitations of ANSYS to perform fluid-structure interaction are less well understood. The purpose of this study is to demonstrate the capabilities and investigate the limitations of ANSYS for performing a fluid-structure interaction analysis of the primary tank and contained waste. To this end, the ANSYS solutions are benchmarked against theoretical solutions appearing in BNL 1995, when such theoretical solutions exist. When theoretical solutions were not available, comparisons were made to theoretical solutions of similar problems and to the results from Dytran simulations. The capabilities and limitations of the finite element code Dytran for performing a fluid-structure interaction analysis of the primary tank and contained waste were explored in a parallel investigation (Abatt 2006). In conjunction with the results of the global ANSYS analysis…

  2. Workforce development and effective evaluation of projects.

    PubMed

    Dickerson, Claire; Green, Tess; Blass, Eddie

    The success of a project or programme is typically determined in relation to outputs. However, there is a commitment among UK public services to spending public funds efficiently and on activities that provide the greatest benefit to society. Skills for Health recognised the need for a tool to manage the complex process of evaluating project benefits. An integrated evaluation framework was developed to help practitioners identify, describe, measure and evaluate the benefits of workforce development projects. Practitioners tested the framework on projects within three NHS trusts and provided valuable feedback to support its development. The prospective approach taken to identify benefits and collect baseline data to support evaluation was positively received and the clarity and completeness of the framework, as well as the relevance of the questions, were commended. Users reported that the framework was difficult to complete; an online version could be developed, which might help to improve usability. Effective implementation of this approach will depend on the quality and usability of the framework, the willingness of organisations to implement it, and the presence or establishment of an effective change management culture.

  3. An Evaluation of the Connected Mathematics Project.

    ERIC Educational Resources Information Center

    Cain, Judith S.

    2002-01-01

    Evaluated the Connected Mathematics Project (CMP), a middle school reform mathematics curriculum used in Louisiana's Lafayette parish. Analysis of Iowa Test of Basic Skills and Louisiana Education Assessment Program mathematics data indicated that CMP schools significantly outperformed non-CMP schools. Surveys of teachers and students showed that…

  4. Project ALERT. Workplace Education. External Evaluators Reports.

    ERIC Educational Resources Information Center

    Philippi, Jorie W.; Mikulecky, Larry; Lloyd, Paul

    This document contains four evaluations of Project ALERT (Adult Literacy Enhanced & Redefined through Training), a workplace literacy partnership of Wayne State University, the Detroit Public Schools, and several city organizations, unions, and manufacturers in the automobile industry that was formed to meet employees' job-specific basic skills…

  5. Federal Workplace Literacy Project. Internal Evaluation Report.

    ERIC Educational Resources Information Center

    Matuszak, David J.

    This report describes the following components of the Nestle Workplace Literacy Project: six job task analyses, curricula for six workplace basic skills training programs, delivery of courses using these curricula, and evaluation of the process. These six job categories were targeted for training: forklift loader/checker, BB's processing systems…

  6. 23 CFR 505.11 - Project evaluation and rating.

    Code of Federal Regulations, 2010 CFR

    2010-04-01

    ... MANAGEMENT PROJECTS OF NATIONAL AND REGIONAL SIGNIFICANCE EVALUATION AND RATING § 505.11 Project evaluation and rating. (a) The Secretary shall evaluate and rate each proposed project as “highly recommended... 23 Highways 1 2010-04-01 2010-04-01 false Project evaluation and rating. 505.11 Section...

  7. Reactor Physics and Criticality Benchmark Evaluations for Advanced Nuclear Fuel, Progress Report for Work through August 31, 2002, First Annual/4th Quarterly Report

    SciTech Connect

    Anderson, William J.; Ake, Timothy N.; Punatar, Mahendra; Pitts, Michelle L.; Harms, Gary A.; Rearden, Bradley T.; Parks, Cecil V.; Tulenko, James S.; Dugan, Edward; Smith, Robert M.

    2002-09-23

    OAK B204 The objective of this Nuclear Energy Research Initiative (NERI) project is to design, perform, and analyze critical benchmark experiments for validating reactor physics methods and models for fuel enrichments greater than 5-wt% 235U. These experiments will also provide additional information for application to the criticality-safety bases for commercial fuel facilities handling greater than 5-wt% 235U fuel. These experiments are designed as reactor physics benchmarks, to include measurements of critical boron concentration, burnable absorber worth, relative pin powers, and relative average powers. The first year focused primarily on designing the experiments using available fuel, preparing the necessary plans, procedures, and authorization basis for performing the experiments, and preparing for the transportation, receipt, and storage of the Pathfinder fuel currently stored at Pennsylvania State University. Framatome ANP, Inc. leads the project with the collaboration of Oak Ridge National Laboratory (ORNL), Sandia National Laboratories (SNL), and the University of Florida (UF). The project is organized into 5 tasks. Task 1: Framatome ANP, Inc., ORNL, and SNL will design the specific experiments, establish the safety authorization, and obtain approvals to perform these experiments at the SNL facility. ORNL will apply their sensitivity/uncertainty methodology to verify the need for particular experiments and the parameters that these experiments need to explore. Task 2: Framatome ANP, Inc., ORNL, and UF will analyze the proposed experiments using a variety of reactor-physics methods employed in the nuclear industry. These analyses will support the operation of the experiments by predicting the expected experimental values for the criticality and physics parameters. Task 3: This task encompasses the experiments to be performed. The Pathfinder fuel will be transported from Penn State to SNL for use in the experiments. The experiments will be performed and the…

  8. A simple benchmark for evaluating quality of care of patients following acute myocardial infarction

    PubMed Central

    Dorsch, M; Lawrance, R; Sapsford, R; Oldham, J; Greenwood, D; Jackson, B; Morrell, C; Ball, S; Robinson, M; Hall, A

    2001-01-01

    OBJECTIVE—To develop a simple risk model as a basis for evaluating care of patients admitted with acute myocardial infarction.
METHODS—From coronary care registers, biochemistry records and hospital management systems, 2153 consecutive patients with confirmed acute myocardial infarction were identified. With 30 day all cause mortality as the end point, a multivariable logistic regression model of risk was constructed and validated in independent patient cohorts. The areas under receiver operating characteristic curves were calculated as an assessment of sensitivity and specificity. The model was reapplied to a number of commonly studied subgroups for further assessment of robustness.
RESULTS—A three variable model was developed based on age, heart rate, and systolic blood pressure on admission. This produced an individual probability of death by 30 days (P30) where P30 = 1/(1 + exp(−L30)) and L30 = −5.624 + (0.085 × age) + (0.014 × heart rate) − (0.022 × systolic blood pressure). The areas under the receiver operating characteristic curves for the reference and test cohorts were 0.79 (95% CI 0.76 to 0.82) and 0.76 (95% CI 0.72 to 0.79), respectively. To aid application of the model to routine clinical audit, a nomogram relating observed mortality and sample size to the likelihood of a significant deviation from the expected 30 day mortality rate was constructed.
CONCLUSIONS—This risk model is simple, reproducible, and permits quality of care of acute myocardial infarction patients to be reliably evaluated both within and between centres.
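The published three-variable model translates directly into code. A minimal sketch (the function name p30 is ours; coefficients are taken verbatim from the abstract):

```python
import math

def p30(age, heart_rate, systolic_bp):
    """30-day mortality probability from the three-variable logistic model:
    P30 = 1/(1 + exp(-L30)), with the published coefficients."""
    l30 = -5.624 + 0.085 * age + 0.014 * heart_rate - 0.022 * systolic_bp
    return 1.0 / (1.0 + math.exp(-l30))

# Example: a 65-year-old admitted with heart rate 80 bpm and systolic BP 120 mm Hg.
print(round(p30(65, 80, 120), 3))  # → 0.165
```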


Keywords: acute myocardial infarction; risk model PMID:11454829

  9. Testing (Validating?) Cross Sections with ICSBEP Benchmarks

    SciTech Connect

    Kahler, Albert C. III

    2012-06-28

    We discuss how to use critical benchmarks from the International Handbook of Evaluated Criticality Safety Benchmark Experiments to determine the applicability of specific cross sections to the end-user's problem of interest. Particular attention is paid to making sure the selected suite of benchmarks includes the user's range of applicability (ROA).

  10. Small Commercial Program DOE Project: Impact evaluation

    SciTech Connect

    Bathgate, R.; Faust, S. )

    1992-08-12

In 1991, Washington Electric Cooperative (WEC) implemented a Department of Energy grant to conduct a small commercial energy conservation project. The small commercial "Mom and Pop" grocery stores within WEC's service territory were selected as the target market for the project. Energy & Solid Waste Consultants' (E&SWC) impact evaluation is documented here. The evaluation was based on data gathered from a variety of sources, including load profile metering, kWh submeters, elapsed time indicators, and billing histories. Five stores were selected to receive measures under this program: Waits River General Store, Joe's Pond Store, Hastings Store, Walden General Store, and Adamant Cooperative. The specific measures installed in each store, and a description of each, are included.

  11. Benchmarking pathology services: implementing a longitudinal study.

    PubMed

    Gordon, M; Holmes, S; McGrath, K; Neil, A

    1999-05-01

    This paper details the benchmarking process and its application to the activities of pathology laboratories participating in a benchmark pilot study [the Royal College of Pathologists of Australasian (RCPA) Benchmarking Project]. The discussion highlights the primary issues confronted in collecting, processing, analysing and comparing benchmark data. The paper outlines the benefits of engaging in a benchmarking exercise and provides a framework which can be applied across a range of public health settings. This information is then applied to a review of the development of the RCPA Benchmarking Project. Consideration is also given to the nature of the preliminary results of the project and the implications of these results to the on-going conduct of the study.

  12. NASA Countermeasures Evaluation and Validation Project

    NASA Technical Reports Server (NTRS)

    Lundquist, Charlie M.; Paloski, William H. (Technical Monitor)

    2000-01-01

To support its ISS and exploration class mission objectives, NASA has developed a Countermeasure Evaluation and Validation Project (CEVP). The goal of this project is to evaluate and validate the optimal complement of countermeasures required to maintain astronaut health, safety, and functional ability during and after short- and long-duration space flight missions. The CEVP is the final element of the process in which ideas and concepts emerging from basic research evolve into operational countermeasures. The CEVP is accomplishing these objectives by conducting operational/clinical research to evaluate and validate countermeasures that mitigate maladaptive responses to space flight. Evaluation is accomplished by testing in space flight analog facilities, and validation is accomplished by space flight testing. Both will utilize a standardized complement of integrated physiological and psychological tests, termed the Integrated Testing Regimen (ITR), to examine candidate countermeasure efficacy and intersystem effects. The CEVP emphasis is currently placed on validating the initial complement of ISS countermeasures targeting bone, muscle, and aerobic fitness; followed by countermeasures for neurological, psychological, immunological, nutrition and metabolism, and radiation risks associated with space flight. This presentation will review the processes, plans, and procedures that will enable CEVP to play a vital role in transitioning promising research results into operational countermeasures necessary to maintain crew health and performance during long duration space flight.

  13. Wildlife habitat evaluation demonstration project. [Michigan

    NASA Technical Reports Server (NTRS)

    Burgoyne, G. E., Jr.; Visser, L. G.

    1981-01-01

To support the deer range improvement project in Michigan, the capability of LANDSAT data in assessing deer habitat in terms of areas and mixes of species and age classes of vegetation is being examined to determine whether such data could substitute for traditional cover type information sources. A second goal of the demonstration project is to determine whether LANDSAT data can be used to supplement and improve the information normally used for making deer habitat management decisions, either by providing vegetative cover for private land or by providing information about the interspersion and juxtaposition of valuable vegetative cover types. The procedure to be used for evaluating LANDSAT data of the Lake County test site is described.

  14. Color back projection for fruit maturity evaluation

    NASA Astrophysics Data System (ADS)

    Zhang, Dong; Lee, Dah-Jye; Desai, Alok

    2013-12-01

In general, fruits and vegetables such as tomatoes and dates are harvested before they fully ripen. After harvesting, they continue to ripen and their color changes. Color is a good indicator of fruit maturity. For example, tomatoes change color from dark green to light green and then pink, light red, and dark red. Assessing tomato maturity helps maximize shelf life, and color is used to determine the length of time the tomatoes can be transported. Medjool dates change color from green to yellow, then orange, light red, and dark red. Assessing date maturity helps determine the length of the drying process needed to ripen the dates. Color evaluation is an important step in the processing and inventory control of fruits and vegetables that directly affects profitability. This paper presents an efficient color back projection and image processing technique designed specifically for real-time maturity evaluation of fruits. This color processing method requires only a simple training procedure to obtain the frequencies of the colors that appear in each maturity stage. These color statistics are used to back project colors to predefined color indexes. Fruit maturity is then evaluated by analyzing the reprojected color indexes. This method has been implemented and used for commercial production.
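The scheme described above, training per-stage color frequency tables and then scoring new pixels against them, can be sketched as follows. This is a hypothetical NumPy illustration, not the paper's implementation; the bin count, function names, and the mean-likelihood decision rule are our assumptions:

```python
import numpy as np

N_BINS = 8  # quantization levels per RGB channel -> 8**3 = 512 color indexes

def color_index(pixels):
    """Quantize 8-bit RGB pixels (shape (N, 3)) into one color index per pixel."""
    q = (pixels // (256 // N_BINS)).astype(np.int64)
    return q[:, 0] * N_BINS * N_BINS + q[:, 1] * N_BINS + q[:, 2]

def train_histograms(stage_pixels):
    """stage_pixels: {stage_name: (N, 3) uint8 array of training pixels}.
    Returns the per-stage normalized color-frequency tables (the 'color statistics')."""
    hists = {}
    for stage, px in stage_pixels.items():
        h = np.bincount(color_index(px), minlength=N_BINS ** 3).astype(float)
        hists[stage] = h / h.sum()
    return hists

def evaluate_maturity(pixels, hists):
    """Back project each pixel's color index into the trained frequency tables
    and return the maturity stage with the highest mean likelihood."""
    idx = color_index(pixels)
    scores = {stage: h[idx].mean() for stage, h in hists.items()}
    return max(scores, key=scores.get)
```

Training on labeled pixel samples from, say, "green" and "ripe" tomatoes then lets evaluate_maturity classify pixels from a new image in a single table lookup per pixel, which is what makes the method suitable for real-time use.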

  15. Managing for Results in America's Great City Schools. A Report of the Performance Measurement and Benchmarking Project

    ERIC Educational Resources Information Center

    Council of the Great City Schools, 2012

    2012-01-01

    "Managing for Results in America's Great City Schools, 2012" is presented by the Council of the Great City Schools to its members and the public. The purpose of the project was and is to develop performance measures that can improve the business operations of urban public school districts nationwide. This year's report includes data from 61 of the…

  16. Benchmarking in Student Affairs.

    ERIC Educational Resources Information Center

    Mosier, Robert E.; Schwarzmueller, Gary J.

    2002-01-01

    Discusses the use of benchmarking in student affairs, focusing on issues related to student housing. Provides examples of how benchmarking has influenced administrative practice at many institutions. (EV)

  17. ELAPSE - NASA AMES LISP AND ADA BENCHMARK SUITE: EFFICIENCY OF LISP AND ADA PROCESSING - A SYSTEM EVALUATION

    NASA Technical Reports Server (NTRS)

    Davis, G. J.

    1994-01-01

One area of research of the Information Sciences Division at NASA Ames Research Center is devoted to the analysis and enhancement of processors and advanced computer architectures, specifically in support of automation and robotic systems. To compare systems' abilities to efficiently process Lisp and Ada, scientists at Ames Research Center have developed a suite of non-parallel benchmarks called ELAPSE. The benchmark suite was designed to test a single computer's efficiency as well as alternate machine comparisons on Lisp and/or Ada languages. ELAPSE tests the efficiency with which a machine can execute the various routines in each environment. The sample routines are based on numeric and symbolic manipulations and include two-dimensional fast Fourier transformations, Cholesky decomposition and substitution, Gaussian elimination, high-level data processing, and symbol-list references. Also included is a routine based on a Bayesian classification program sorting data into optimized groups. The ELAPSE benchmarks are available for any computer with a validated Ada compiler and/or Common Lisp system. Of the 18 routines that comprise ELAPSE, 14 were developed or translated at Ames and are provided within this package; the others are readily available in the literature. The benchmark that requires the most memory is CHOLESKY.ADA. Under VAX/VMS, CHOLESKY.ADA requires 760K of main memory. ELAPSE is available on either two 5.25 inch 360K MS-DOS format diskettes (standard distribution) or a 9-track 1600 BPI ASCII CARD IMAGE format magnetic tape. The contents of the diskettes are compressed using the PKWARE archiving tools. The utility to unarchive the files, PKUNZIP.EXE, is included. The ELAPSE benchmarks were written in 1990. VAX and VMS are trademarks of Digital Equipment Corporation. MS-DOS is a registered trademark of Microsoft Corporation.
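One of the numeric kernels named above, Cholesky decomposition and substitution, can be illustrated compactly. This is a language-neutral Python sketch of the technique (ELAPSE itself ships Lisp and Ada versions, not this code):

```python
import math

def cholesky_solve(a, b):
    """Solve A x = b for a symmetric positive-definite A by Cholesky
    decomposition (A = L L^T) followed by forward and back substitution,
    the same kind of kernel exercised by the CHOLESKY benchmark routine."""
    n = len(a)
    L = [[0.0] * n for _ in range(n)]
    for i in range(n):
        for j in range(i + 1):
            s = sum(L[i][k] * L[j][k] for k in range(j))
            if i == j:
                L[i][j] = math.sqrt(a[i][i] - s)   # diagonal entry
            else:
                L[i][j] = (a[i][j] - s) / L[j][j]  # below-diagonal entry
    # Forward substitution: L y = b
    y = [0.0] * n
    for i in range(n):
        y[i] = (b[i] - sum(L[i][k] * y[k] for k in range(i))) / L[i][i]
    # Back substitution: L^T x = y
    x = [0.0] * n
    for i in reversed(range(n)):
        x[i] = (y[i] - sum(L[k][i] * x[k] for k in range(i + 1, n))) / L[i][i]
    return x

print([round(v, 6) for v in cholesky_solve([[4.0, 2.0], [2.0, 3.0]], [10.0, 8.0])])
# → [1.75, 1.5]
```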

  18. Toxicological Benchmarks for Screening Potential Contaminants of Concern for Effects on Sediment-Associated Biota

    SciTech Connect

    Hull, R.N.

    1993-01-01

    A hazardous waste site may contain hundreds of chemicals; therefore, it is important to screen contaminants of potential concern for the ecological risk assessment. Often this screening is done as part of a screening assessment, the purpose of which is to evaluate the available data, identify data gaps, and screen contaminants of potential concern. Screening may be accomplished by using a set of toxicological benchmarks. These benchmarks are helpful in determining whether contaminants warrant further assessment or are at a level that requires no further attention. If a chemical concentration or the reported detection limit exceeds a proposed lower benchmark, further analysis is needed to determine the hazards posed by that chemical. If, however, the chemical concentration falls below the lower benchmark value, the chemical may be eliminated from further study. The use of multiple benchmarks is recommended for screening chemicals of concern in sediments. Integrative benchmarks developed for the National Oceanic and Atmospheric Administration and the Florida Department of Environmental Protection are included for inorganic and organic chemicals. Equilibrium partitioning benchmarks are included for screening nonionic organic chemicals. Freshwater sediment effect concentrations developed as part of the U.S. Environmental Protection Agency's (EPA's) Assessment and Remediation of Contaminated Sediment Project are included for inorganic and organic chemicals (EPA 1996). Field survey benchmarks developed for the Ontario Ministry of the Environment are included for inorganic and organic chemicals. In addition, EPA-proposed sediment quality criteria are included along with screening values from EPA Region IV and Ecotox Threshold values from the EPA Office of Solid Waste and Emergency Response. Pore water analysis is recommended for ionic organic compounds; comparisons are then made against water quality benchmarks. 
This report is an update of three prior reports (Jones et al
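The screening rule described above (compare the measured concentration, or the reported detection limit for undetected chemicals, against a lower benchmark) reduces to a short decision function. The function name and the example values are hypothetical, for illustration only:

```python
def screen_contaminant(concentration, detection_limit, lower_benchmark):
    """Apply the screening rule: a chemical is retained as a contaminant of
    potential concern if its measured concentration, or, when undetected,
    its reported detection limit, exceeds the proposed lower benchmark."""
    observed = concentration if concentration is not None else detection_limit
    return observed > lower_benchmark

# Hypothetical values, mg/kg dry weight:
print(screen_contaminant(1.2, None, 0.6))   # detected above benchmark → True (retain)
print(screen_contaminant(None, 0.05, 0.6))  # undetected, DL below benchmark → False (drop)
```

As the abstract notes, multiple benchmarks would be applied in practice; a chemical is eliminated only when it falls below the lower benchmark for every relevant line of evidence.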

  19. Evaluation in Adult Literacy Research. Project ALERT. Phase II.

    ERIC Educational Resources Information Center

    Ntiri, Daphne Williams, Ed.

    This document contains an evaluation handbook for adult literacy programs and feedback from/regarding the evaluation instruments developed during the project titled Adult Literacy and Evaluation Research Team (also known as Project ALERT), a two-phase project initiated by the Detroit Literacy Coalition (DLC) for the purpose of developing and…

  20. Evaluation of Title I ESEA Projects: 1975-76.

    ERIC Educational Resources Information Center

    Philadelphia School District, PA. Office of Research and Evaluation.

    Evaluation services to be provided during 1975-76 to projects funded under the Elementary and Secondary Education Act Title I are listed in this annual booklet. For each project, the following information is provided: goals to be assessed, evaluation techniques (design), and evaluation milestones. Regular term and summer term projects reported on…

  1. Nuclear Data Performance Testing Using Sensitive, but Less Frequently Used ICSBEP Benchmarks

    SciTech Connect

    J. Blair Briggs; John D. Bess

    2011-08-01

The International Criticality Safety Benchmark Evaluation Project (ICSBEP) has published the International Handbook of Evaluated Criticality Safety Benchmark Experiments annually since 1995. The Handbook now spans over 51,000 pages with benchmark specifications for 4,283 critical, near critical, or subcritical configurations; 24 criticality alarm placement/shielding configurations with multiple dose points for each; and 200 configurations that have been categorized as fundamental physics measurements relevant to criticality safety applications. Benchmark data in the ICSBEP Handbook were originally intended for validation of criticality safety methods and data; however, the benchmark specifications are now used extensively for nuclear data testing. There are several less frequently used benchmarks within the Handbook that are very sensitive to thorium and certain key structural and moderating materials. Calculated results for many of those benchmarks using modern nuclear data libraries suggest there is still room for improvement. These and other highly sensitive but rarely quoted benchmarks are highlighted, and data testing results are provided using the Monte Carlo N-Particle Version 5 (MCNP5) code and continuous energy ENDF/B-V, VI.8, and VII.0, JEFF-3.1, and JENDL-3.3 nuclear data libraries.

  2. Airway Science Curriculum Demonstration Project: Summary of Initial Evaluation Findings

    DTIC Science & Technology

    1988-10-01

Summary of initial evaluation findings (author: Debora L. Clough). The report summarizes evaluation findings for the Airway Science project objectives for which data were available. Two limitations associated with the project evaluation at this time were described. The Airway Science Curriculum Demonstration Project was designed to investigate the effectiveness of an alternative approach

  3. Benchmarking on Tsunami Currents with ComMIT

    NASA Astrophysics Data System (ADS)

    Sharghi vand, N.; Kanoglu, U.

    2015-12-01

There were no standards for the validation and verification of tsunami numerical models before the 2004 Indian Ocean tsunami; even so, numerical models had been used for inundation mapping efforts, evaluation of critical structures, etc., without validation and verification. After 2004, the NOAA Center for Tsunami Research (NCTR) established standards for the validation and verification of tsunami numerical models (Synolakis et al. 2008 Pure Appl. Geophys. 165, 2197-2228), which are used in the evaluation of critical structures such as nuclear power plants against tsunami attack. NCTR presented analytical, experimental, and field benchmark problems aimed at estimating maximum runup, and these are widely accepted by the community. Recently, benchmark problems were suggested by the US National Tsunami Hazard Mitigation Program Mapping & Modeling Benchmarking Workshop: Tsunami Currents on February 9-10, 2015 at Portland, Oregon, USA (http://nws.weather.gov/nthmp/index.html). These benchmark problems concentrate on the validation and verification of tsunami numerical models for tsunami currents. Three of the benchmark problems were: current measurements of the Japan 2011 tsunami in Hilo Harbor, Hawaii, USA and in Tauranga Harbor, New Zealand, and a single long-period wave propagating onto a small-scale experimental model of the town of Seaside, Oregon, USA. These benchmark problems were implemented in the Community Modeling Interface for Tsunamis (ComMIT) (Titov et al. 2011 Pure Appl. Geophys. 168, 2121-2131), a user-friendly interface to the validated and verified Method of Splitting Tsunami (MOST) model (Titov and Synolakis 1995 J. Waterw. Port Coastal Ocean Eng. 121, 308-316) developed by NCTR. The modeling results are compared with the required benchmark data, providing good agreement, and the results are discussed. Acknowledgment: The research leading to these results has received funding from the European Union's Seventh Framework Programme (FP7/2007-2013) under grant

  4. Evaluation and comparison of benchmark QSAR models to predict a relevant REACH endpoint: The bioconcentration factor (BCF)

    SciTech Connect

    Gissi, Andrea; Lombardo, Anna; Roncaglioni, Alessandra; Gadaleta, Domenico; Mangiatordi, Giuseppe Felice; Nicolotti, Orazio; Benfenati, Emilio

    2015-02-15

}=0.85) and sensitivity (average>0.70) for new compounds in the AD but not present in the training set. However, no single optimal model exists and, thus, a case-by-case assessment would be wise. Yet, integrating the wealth of information from multiple models remains the winning approach. - Highlights: • REACH encourages the use of in silico methods in the assessment of chemicals safety. • The performances of nine BCF models were evaluated on a benchmark database of 851 chemicals. • We compared the models on the basis of both regression and classification performance. • Statistics on chemicals out of the training set and/or within the applicability domain were compiled. • The results show that QSAR models are useful as weight-of-evidence in support of other methods.

  5. Processor Emulator with Benchmark Applications

    SciTech Connect

    Lloyd, G. Scott; Pearce, Roger; Gokhale, Maya

    2015-11-13

    A processor emulator and a suite of benchmark applications have been developed to assist in characterizing the performance of data-centric workloads on current and future computer architectures. Some of the applications have been collected from other open source projects. For more details on the emulator and an example of its usage, see reference [1].

  6. Cleanroom energy benchmarking results

    SciTech Connect

    Tschudi, William; Xu, Tengfang

    2001-09-01

A utility market transformation project studied energy use and identified energy efficiency opportunities in cleanroom HVAC design and operation for fourteen cleanrooms. This paper presents the results of this work and relevant observations. Cleanroom owners and operators know that cleanrooms are energy intensive but have little information with which to compare their cleanroom's performance over time or against others. Direct comparison of energy performance by traditional means, such as watts/ft{sup 2}, is not a good indicator given the wide range of industrial processes and cleanliness levels occurring in cleanrooms. In this project, metrics allow direct comparison of the efficiency of HVAC systems and components. Energy and flow measurements were taken to determine actual HVAC system energy efficiency. The results confirm a wide variation in operating efficiency and identify other non-energy operating problems. Improvement opportunities were identified at each of the benchmarked facilities. Analysis of the best performing systems and components is summarized, as are areas for additional investigation.

  7. Towards Systematic Benchmarking of Climate Model Performance

    NASA Astrophysics Data System (ADS)

    Gleckler, P. J.

    2014-12-01

    The process by which climate models are evaluated has evolved substantially over the past decade, with the Coupled Model Intercomparison Project (CMIP) serving as a centralizing activity for coordinating model experimentation and enabling research. Scientists with a broad spectrum of expertise have contributed to the CMIP model evaluation process, resulting in many hundreds of publications that have served as a key resource for the IPCC process. For several reasons, efforts are now underway to further systematize some aspects of the model evaluation process. First, some model evaluation can now be considered routine and should not require "re-inventing the wheel" or a journal publication simply to update results with newer models. Second, the benefit of CMIP research to model development has not been optimal because the publication of results generally takes several years and is usually not reproducible for benchmarking newer model versions. And third, there are now hundreds of model versions and many thousands of simulations, but there is no community-based mechanism for routinely monitoring model performance changes. An important change in the design of CMIP6 can help address these limitations. CMIP6 will include a small set of standardized experiments as an ongoing exercise (CMIP "DECK": ongoing Diagnostic, Evaluation and Characterization of Klima), so that modeling groups can submit them at any time and not be overly constrained by deadlines. In this presentation, efforts to establish routine benchmarking of existing and future CMIP simulations will be described. To date, some benchmarking tools have been made available to all CMIP modeling groups to enable them to readily compare with CMIP5 simulations during the model development process. A natural extension of this effort is to make results from all CMIP simulations widely available, including the results from newer models as soon as the simulations become available for research. 
Making the results from routine

  8. Framework for the Evaluation of an IT Project Portfolio

    ERIC Educational Resources Information Center

    Tai, W. T.

    2010-01-01

    The basis for evaluating projects in an organizational IT project portfolio includes complexity factors, arguments/criteria, and procedures, with various implications. The purpose of this research was to develop a conceptual framework for IT project proposal evaluation. The research involved using a heuristic roadmap and the mind-mapping method to…

  9. Modelling in Evaluating a Working Life Project in Higher Education

    ERIC Educational Resources Information Center

    Sarja, Anneli; Janhonen, Sirpa; Havukainen, Pirjo; Vesterinen, Anne

    2012-01-01

    This article describes an evaluation method based on collaboration between a higher education institution, a care home and a university in an R&D project. The aim of the project was to elaborate modelling as a tool of developmental evaluation for innovation and competence in project cooperation. The approach was based on activity theory. Modelling enabled a…

  10. Global and local scale flood discharge simulations in the Rhine River basin for flood risk reduction benchmarking in the Flagship Project

    NASA Astrophysics Data System (ADS)

    Gädeke, Anne; Gusyev, Maksym; Magome, Jun; Sugiura, Ai; Cullmann, Johannes; Takeuchi, Kuniyoshi

    2015-04-01

    Global flood risk assessment is a prerequisite for setting the measurable global targets of the post-Hyogo Framework for Action (HFA) that mobilize international cooperation and national coordination towards disaster risk reduction (DRR), and it requires the establishment of a uniform flood risk assessment methodology on various scales. To address these issues, the International Flood Initiative (IFI) launched a Flagship Project in 2013 to support flood risk reduction benchmarking at global, national and local levels. In the Flagship Project road map, it is planned to identify the original risk (1), to identify the reduced risk (2), and to facilitate the risk reduction actions (3). In order to achieve this goal at global, regional and local scales, international research collaboration is absolutely necessary, involving domestic and international institutes, academia and research networks such as UNESCO International Centres. The joint collaboration between ICHARM and BfG was the first attempt, producing the first step (1a) results on the flood discharge estimates, with inundation maps under way. As a result of this collaboration, we demonstrate the outcomes of the first step of the IFI Flagship Project to identify flood hazard in the Rhine river basin on the global and local scale. In our assessment, we utilized a distributed hydrological Block-wise TOP (BTOP) model on 20-km and 0.5-km scales with local precipitation and temperature input data between 1980 and 2004. We utilized the existing 20-km BTOP model, which is applied globally, and constructed a local-scale 0.5-km BTOP model for the Rhine River basin. Both the calibrated 20-km and 0.5-km BTOP models had similar statistical performance and represented observed flood river discharges, especially for the 1993 and 1995 floods. 
From 20-km and 0.5-km BTOP simulation, the flood discharges of the selected return period were estimated using flood frequency analysis and were comparable to
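The abstract mentions estimating flood discharges for selected return periods by flood frequency analysis. One common textbook approach, shown here purely as an illustration (the abstract does not state which distribution the study fitted), is a Gumbel (EV1) fit to annual maximum discharges by the method of moments:

```python
import math

def gumbel_quantile(annual_maxima, return_period_years):
    """Estimate the discharge with the given return period by fitting a
    Gumbel (EV1) distribution to annual maxima via the method of moments.
    This is a generic flood frequency analysis sketch, not the study's method."""
    n = len(annual_maxima)
    mean = sum(annual_maxima) / n
    var = sum((q - mean) ** 2 for q in annual_maxima) / (n - 1)
    std = math.sqrt(var)
    beta = std * math.sqrt(6) / math.pi   # Gumbel scale parameter
    mu = mean - 0.5772 * beta             # location (Euler-Mascheroni constant)
    p_non_exceed = 1.0 - 1.0 / return_period_years
    return mu - beta * math.log(-math.log(p_non_exceed))

# Hypothetical annual maximum discharges (m^3/s):
maxima = [100.0, 120.0, 140.0, 160.0, 180.0]
print(round(gumbel_quantile(maxima, 2), 1))  # → about 134.8
```

Estimates for rarer events (say, the 100-year flood) extrapolate well beyond the observed record, which is why comparing such estimates across model scales, as the study does, is a meaningful consistency check.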

  11. Alternate Methods for Assuring Credibility of Research and Evaluation Findings in Project Evaluation.

    ERIC Educational Resources Information Center

    Denton, William T.; Murray, Wayne R.

    This paper describes six existing evaluator-auditor working formats and the conditions which foster credibility of evaluation findings. Evaluators were classified as: (1) member of project developmental team, accountable to project director; (2) independent internal evaluator, accountable to system in general but not to project directors, and (3)…

  12. Design Alternatives for Evaluating the Impact of Conservation Projects

    ERIC Educational Resources Information Center

    Margoluis, Richard; Stem, Caroline; Salafsky, Nick; Brown, Marcia

    2009-01-01

    Historically, examples of project evaluation in conservation were rare. In recent years, however, conservation professionals have begun to recognize the importance of evaluation both for accountability and for improving project interventions. Even with this growing interest in evaluation, the conservation community has paid little attention to…

  13. BN-600 full MOX core benchmark analysis.

    SciTech Connect

    Kim, Y. I.; Hill, R. N.; Grimm, K.; Rimpault, G.; Newton, T.; Li, Z. H.; Rineiski, A.; Mohanakrishan, P.; Ishikawa, M.; Lee, K. B.; Danilytchev, A.; Stogov, V.; Nuclear Engineering Division; International Atomic Energy Agency; CEA SERCO Assurance; China Inst. of Atomic Energy; Forschnungszentrum Karlsruhe; Indira Gandhi Centre for Atomic Research; Japan Nuclear Cycle Development Inst.; Korea Atomic Energy Research Inst.; Inst. of Physics and Power Engineering

    2004-01-01

    As a follow-up of the BN-600 hybrid core benchmark, a full MOX core benchmark was performed within the framework of the IAEA co-ordinated research project. Discrepancies between the values of main reactivity coefficients obtained by the participants for the BN-600 full MOX core benchmark appear to be larger than those in the previous hybrid core benchmarks on traditional core configurations. This arises from uncertainties in the proper modelling of the axial sodium plenum above the core. It was recognized that the sodium density coefficient strongly depends on the core model configuration of interest (hybrid core vs. fully MOX fuelled core with sodium plenum above the core) in conjunction with the calculation method (diffusion vs. transport theory). The effects of the discrepancies revealed between the participants' results on the ULOF and UTOP transient behaviours of the BN-600 full MOX core were investigated in simplified transient analyses. Generally the diffusion approximation predicts more benign consequences for the ULOF accident but more hazardous ones for the UTOP accident when compared with the transport theory results. The heterogeneity effect does not have any significant effect on the simulation of the transient. The comparison of the transient analyses results concluded that the fuel Doppler coefficient and the sodium density coefficient are the two most important coefficients in understanding the ULOF transient behaviour. In particular, the uncertainty in evaluating the sodium density coefficient distribution has the largest impact on the description of reactor dynamics. This is because the maximum sodium temperature rise takes place at the top of the core and in the sodium plenum.

  14. Computational Chemistry Comparison and Benchmark Database

    National Institute of Standards and Technology Data Gateway

    SRD 101 NIST Computational Chemistry Comparison and Benchmark Database (Web, free access)   The NIST Computational Chemistry Comparison and Benchmark Database is a collection of experimental and ab initio thermochemical properties for a selected set of molecules. The goals are to provide a benchmark set of molecules for the evaluation of ab initio computational methods and allow the comparison between different ab initio computational methods for the prediction of thermochemical properties.

  15. A BENCHMARK PROGRAM FOR EVALUATION OF METHODS FOR COMPUTING SEISMIC RESPONSE OF COUPLED BUILDING-PIPING/EQUIPMENT WITH NON-CLASSICAL DAMPING.

    SciTech Connect

    Xu, J.; Degrassi, G.; Chokshi, N.

    2001-03-22

    Under the auspices of the US Nuclear Regulatory Commission (NRC), Brookhaven National Laboratory (BNL) developed a comprehensive program to evaluate state-of-the-art methods and computer programs for seismic analysis of typical coupled nuclear power plant (NPP) systems with nonclassical damping. In this program, four benchmark models of coupled building-piping/equipment systems with different damping characteristics were analyzed for a suite of earthquakes by program participants applying their uniquely developed methods and computer programs. This paper presents the results of their analyses and their comparison to the benchmark solutions generated by BNL using time domain direct integration methods. The participants' analysis results established using complex modal time history methods showed good agreement with the BNL solutions, while the analyses produced with either complex-mode response spectrum methods or the classical normal-mode response spectrum method, in general, produced more conservative results when averaged over a suite of earthquakes. However, when coupling due to damping is significant, complex-mode response spectrum methods performed better than the classical normal-mode response spectrum method. Furthermore, as part of the program objectives, a parametric assessment is also presented in this paper, aimed at evaluating the applicability of various analysis methods to problems with different dynamic characteristics unique to coupled NPP systems. It is believed that the findings and insights learned from this program will be useful in developing new acceptance criteria and providing guidance for future regulatory activities involving licensing applications of these alternate methods to coupled systems.
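Nonclassical (non-proportional) damping means the damping matrix is not diagonalized by the undamped mode shapes, so the coupled equations of motion must be solved in first-order state-space form, yielding complex modes. A minimal NumPy sketch of that eigenanalysis; the 2-DOF building-equipment matrices are hypothetical, chosen only so that C is not proportional to M or K:

```python
import numpy as np

def complex_modes(M, C, K):
    """Eigenvalues of the state-space form of M x'' + C x' + K x = 0.
    With non-proportional damping the eigenpairs are genuinely complex,
    which is the regime the benchmark program above was designed to probe."""
    n = M.shape[0]
    Minv = np.linalg.inv(M)
    A = np.block([[np.zeros((n, n)), np.eye(n)],
                  [-Minv @ K, -Minv @ C]])
    return np.linalg.eigvals(A)

# Hypothetical coupled system: heavy primary structure, light secondary system.
M = np.diag([1000.0, 1.0])
K = np.array([[5.0e5 + 1.0e3, -1.0e3], [-1.0e3, 1.0e3]])
C = np.array([[200.0 + 0.5, -0.5], [-0.5, 0.5]])  # not a * M + b * K for any a, b
lam = complex_modes(M, C, K)
# Eigenvalues come in conjugate pairs; negative real parts indicate stable decay.
```

The real parts of the eigenvalues give the modal decay rates and the imaginary parts the damped frequencies; a complex-mode response spectrum or time history method works with these quantities directly instead of assuming classical modes.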

  16. Evolving Our Evaluation of Lighting Environments Project

    NASA Technical Reports Server (NTRS)

    Terrier, Douglas; Clayton, Ronald; Clark, Toni Anne

    2016-01-01

    Imagine you are an astronaut on the 100th day of your three-year exploration mission. During your daily routine in the small hygiene compartment of the spacecraft, you realize that no matter what you do, your body blocks the light from the lamp. You can clearly see your hands or your toes, but not both! What were those design engineers thinking? It would have been nice if they could have made the walls glow instead! The reason the designers were not more innovative is that their interpretation of the system lighting requirements didn't allow them to be. Currently, our interior spacecraft lighting standards and requirements are written around the concept of a quantity of light illuminating a spacecraft surface. The natural interpretation for the engineer is that a lamp that throws light onto the surface is required. Because of certification costs, only one lamp is designed, and small rooms can wind up with lamps that are inappropriate for the room architecture. Advances in solid-state light-emitting technologies and in optics for lighting and visual communication necessitate a reevaluation of how NASA envisions spacecraft lighting architectures and how NASA uses industry standards for the design and evaluation of lighting systems. Current NASA lighting standards and requirements for existing architectures focus separately on the ability of a lighting system to throw light against a surface and on the ability of a display system to provide appropriate visual contrast. The potential to integrate these systems goes unrecognized. As a result, the systems are developed independently of one another, and efficiencies that could be gained by borrowing the concepts of one technology for the purposes of the other are never realized. This project investigated the possibility of incorporating large luminous surface lamps as an alternative or supplement to overhead lighting. We identified existing industry standards for architectural

  17. Benchmarking concentrating photovoltaic systems

    NASA Astrophysics Data System (ADS)

    Duerr, Fabian; Muthirayan, Buvaneshwari; Meuret, Youri; Thienpont, Hugo

    2010-08-01

    Integral to photovoltaics is the need for improved economic viability. To achieve this goal, photovoltaic technology has to harness more light at less cost. A large variety of concentrating photovoltaic concepts is being pursued. To obtain a detailed profitability analysis, a flexible evaluation is crucial for benchmarking the cost-performance of this variety of concentrating photovoltaic concepts. Computer-aided modeling offers a way to estimate the cost-performance of a complete solar energy system while saving time and capital. In this work, a benchmark tool based on a modular programming concept is introduced. The overall implementation is done in MATLAB, while the Advanced Systems Analysis Program (ASAP) is used for ray-tracing calculations. This allows a flexible and extendable structuring of all important modules, namely advanced source modeling, including time and location dependence, and advanced optical analysis of various optical designs, leading to an evaluation of the figure of merit. One important figure of merit, the energy yield of a given photovoltaic system at a geographical position over a specific period, can thus be calculated.
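    The energy-yield figure of merit described above can be sketched as a time integral of irradiance times system efficiencies. The function and all numbers below are illustrative assumptions, not values or code from the paper:

    ```python
    def energy_yield_wh(dni_series_w_per_m2, hours_per_step=1.0,
                        aperture_m2=1.0, optical_eff=0.85, cell_eff=0.38):
        """Sum DNI over time steps to estimate electrical energy yield in Wh.

        Assumed efficiencies: optical (concentrator) and cell conversion.
        """
        return sum(dni * aperture_m2 * optical_eff * cell_eff * hours_per_step
                   for dni in dni_series_w_per_m2)

    # Hourly direct-normal irradiance (W/m^2) for one illustrative clear day:
    dni_day = [0, 200, 600, 850, 900, 850, 600, 200, 0]
    daily_yield = energy_yield_wh(dni_day)
    ```

    A full benchmark tool would replace the constant efficiencies with ray-traced optical performance and a location-dependent irradiance model, but the figure of merit reduces to this kind of sum.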

  18. Wais-III norms for working-age adults: a benchmark for conducting vocational, career, and employment-related evaluations.

    PubMed

    Fjordbak, Timothy; Fjordbak, Bess Sirmon

    2005-02-01

    The Wechsler Intelligence Scales are routinely used to assess threshold variables that correlate with subsequent job performance. Intellectual testing within educational and clinical settings accommodates natural developmental changes by referencing results to restricted age-band norms. However, accuracy in vocational and career consultation, as well as equity in hiring and promotion, requires the application of a single normative benchmark unbiased by chronological age. Such unitary norms for working-age adults (18- to 64-yr.-olds) were derived from the WAIS-III standardization sample in accord with the proportional representation of the seven age bands subsumed within this age range. Tabular summaries of results are given for the conversion of raw scores to scaled scores for the working-age population, from which IQ values and Index Scores can be derived.
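    The pooling of age-band norms into a single working-age benchmark can be sketched as a population-weighted combination of band statistics. The band weights, means, and SDs below are invented for illustration; they are not the WAIS-III values:

    ```python
    def pooled_mean_sd(bands):
        """bands: (weight, mean, sd) per age band; weights are population shares."""
        total_w = sum(w for w, _, _ in bands)
        mean = sum(w * m for w, m, _ in bands) / total_w
        # Law of total variance: within-band plus between-band components.
        var = sum(w * (sd ** 2 + (m - mean) ** 2) for w, m, sd in bands) / total_w
        return mean, var ** 0.5

    def raw_to_scaled(raw, mean, sd):
        """Map a raw score onto a Wechsler-style scaled score (mean 10, SD 3)."""
        return round(10 + 3 * (raw - mean) / sd)

    # Three hypothetical bands: (population share, raw-score mean, raw-score SD).
    bands = [(0.25, 42.0, 8.0), (0.45, 40.0, 9.0), (0.30, 37.0, 10.0)]
    m, s = pooled_mean_sd(bands)
    scaled = raw_to_scaled(49, m, s)
    ```

    The published tables were built from the actual standardization sample rather than summary statistics, but the principle of weighting each band by its proportional representation is the same.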

  19. Benchmarking: The New Tool.

    ERIC Educational Resources Information Center

    Stralser, Steven

    1995-01-01

    This article suggests that benchmarking, the process of comparing one's own operation with the very best, can be used to make improvements in colleges and universities. Six steps are outlined: determining what to benchmark, forming a team, discovering who to benchmark, collecting and analyzing data, using the data to redesign one's own operation,…

  20. Benchmarking for Higher Education.

    ERIC Educational Resources Information Center

    Jackson, Norman, Ed.; Lund, Helen, Ed.

    The chapters in this collection explore the concept of benchmarking as it is being used and developed in higher education (HE). Case studies and reviews show how universities in the United Kingdom are using benchmarking to aid in self-regulation and self-improvement. The chapters are: (1) "Introduction to Benchmarking" (Norman Jackson…

  1. Project Aprendizaje. 1990-91 Final Evaluation Profile. OREA Report.

    ERIC Educational Resources Information Center

    New York City Board of Education, Brooklyn, NY. Office of Research, Evaluation, and Assessment.

    An evaluation was done of New York City Public Schools' Project Aprendizaje, which served disadvantaged, immigrant, Spanish-speaking high school students at Seward Park High School in Manhattan. The Project enrolled 290 students in grades 9 through 12, 93.1 percent of whom were eligible for the Free Lunch Program. The Project provided students of…

  2. Project T.E.A.C.H.: An Evaluative Study.

    ERIC Educational Resources Information Center

    Howarth, Les

    A survey of 17 graduates of Project T.E.A.C.H. (Teacher Effectiveness and Classroom Handling), an inservice education program offered through the Ontario (Canada) Public School Men Teacher's Association in conjunction with Lesley College, used closed- and open-ended questions to obtain evaluations of the project's effectiveness. Five project areas…

  3. Evaluation of the Appalachian Regional Commission's Educational Projects: Final Report.

    ERIC Educational Resources Information Center

    Silverstein, Gary; Bartfai, Nicole; Plishker, Laurie; Snow, Kyle; Frechtling, Joy

    This report presents findings from an evaluation of 84 educational projects funded by the Appalachian Regional Commission (ARC) during the 1990's. Data were collected via document reviews, interviews, a mail survey completed by 78 projects, and eight site visits. Most projects provided services to rural areas or community segments most in need.…

  4. PLATO across the Curriculum: An Evaluation of a Project.

    ERIC Educational Resources Information Center

    Freer, David

    1986-01-01

    A project at the University of Witwatersrand examined the implications of introducing a centrally controlled system of computer-based learning in which 13 university departments utilized PLATO to supplement teaching programs and encourage computer literacy. Department project descriptions and project evaluations (which reported positive student…

  5. Programme for Learning Enrichment. A Van Leer Project: An Evaluation.

    ERIC Educational Resources Information Center

    Ghani, Zainal

    This paper reports the evaluation of a project undertaken by the Sarawak Education Department to improve the quality of education in upper primary classes in rural Sarawak, Malaysia. The project is known officially as the Programme for Learning Enrichment, and commonly as the Van Leer Project, after the international agency which provides the main…

  6. Outside Evaluation Report for the Arlington Federal Workplace Literacy Project.

    ERIC Educational Resources Information Center

    Wrigley, Heide Spruck

    The successes and challenges of the Arlington Education and Employment Program (REEP) Workplace Literacy Project in Virginia are described in this evaluation report. REEP's federal Workplace Literacy Project Consortium is operated as a special project within the Department of Adult, Career and Vocational Education of the Arlington Public Schools.…

  7. Social Studies Project Evaluation: Case Study and Recommendations.

    ERIC Educational Resources Information Center

    Napier, John

    1982-01-01

    Describes the development and application of a model for social studies program evaluations. A case study showing how the model's three-step process was used to evaluate the Improving Citizenship Education Project in Fulton County, Georgia is included. (AM)

  8. Benchmarking the performance of daily temperature homogenisation algorithms

    NASA Astrophysics Data System (ADS)

    Warren, Rachel; Bailey, Trevor; Jolliffe, Ian; Willett, Kate

    2015-04-01

    This work explores the creation of realistic synthetic data and its use as a benchmark for comparing the performance of different homogenisation algorithms on daily temperature data. Four different regions in the United States have been selected and three different inhomogeneity scenarios explored for each region. These benchmark datasets are beneficial as, unlike in the real world, the underlying truth is known a priori, thus allowing definite statements to be made about the performance of the algorithms run on them. Performance can be assessed in terms of the ability of algorithms to detect changepoints and also their ability to correctly remove inhomogeneities. The focus is on daily data, thus presenting new challenges in comparison to monthly data and pushing the boundaries of previous studies. The aims of this work are to evaluate and compare the performance of various homogenisation algorithms, aiding their improvement and enabling a quantification of the uncertainty remaining in the data even after they have been homogenised. An important outcome is also to evaluate how realistic the created benchmarks are. It is essential that any weaknesses in the benchmarks are taken into account when judging algorithm performance against them. This information in turn will help to improve future versions of the benchmarks. I intend to present a summary of this work including the method of benchmark creation, details of the algorithms run and some preliminary results. This work forms a three year PhD and feeds into the larger project of the International Surface Temperature Initiative which is working on a global scale and with monthly instead of daily data.
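    Because the benchmark's underlying truth is known a priori, detection skill can be scored directly. The sketch below is an illustrative scoring rule (not the study's code), counting a detected changepoint as a hit if it falls within an assumed tolerance of a true one:

    ```python
    def score_detections(true_cps, detected_cps, tolerance=30):
        """Return (hit rate, false-alarm count), matching within `tolerance` days."""
        unmatched = list(true_cps)   # true changepoints not yet claimed
        false_alarms = 0
        for d in sorted(detected_cps):
            match = next((t for t in unmatched if abs(t - d) <= tolerance), None)
            if match is None:
                false_alarms += 1
            else:
                unmatched.remove(match)  # each truth may be claimed only once
        hits = len(true_cps) - len(unmatched)
        return (hits / len(true_cps) if true_cps else 1.0), false_alarms

    # Day indices of inserted breaks vs. one algorithm's detections:
    truth = [400, 1500, 2900]
    found = [415, 1650, 2890, 3500]
    hit_rate, false_alarms = score_detections(truth, found)
    ```

    A full assessment would also score how well the applied adjustments remove the inhomogeneity magnitudes, not just the changepoint locations.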

  9. Evaluation of the School Administration Manager Project

    ERIC Educational Resources Information Center

    Turnbull, Brenda J.; Haslam, M. Bruce; Arcaira, Erikson R.; Riley, Derek L.; Sinclair, Beth; Coleman, Stephen

    2009-01-01

    The School Administration Manager (SAM) project, supported by The Wallace Foundation as part of its education initiative, focuses on changing the conditions in schools that prevent principals from devoting more time to instructional leadership. In schools participating in the National SAM Project, principals have made a commitment to increase the…

  10. Human Relations Education Project. Final Evaluation Report.

    ERIC Educational Resources Information Center

    Buffalo Board of Education, NY.

    This project did the planning and pilot phases of an effort to improve the teaching of human relations in grades K-12 of public and private schools in the Buffalo-Niagara Falls metropolitan area. In the pilot phase, the project furnished on-the-job training for approximately 70 schools. The training was given by teams of human relations…

  11. Evaluation of the Matrix Project. Interchange 77.

    ERIC Educational Resources Information Center

    McIvor, Gill; Moodie, Kristina

    The Matrix Project is a program that has been established in central Scotland with the aim of reducing the risk of offending and anti-social behavior among vulnerable children. The project provides a range of services to children between eight and 11 years of age who are at risk in the local authority areas of Clackmannanshire, Falkirk and…

  12. The Program Evaluator's Role in Cross-Project Pollination.

    ERIC Educational Resources Information Center

    Yasgur, Bruce J.

    An expanded duties role of the multiple-program evaluator as an integral part of the ongoing decision-making process in all projects served is defended. Assumptions discussed included that need for projects with related objectives to pool resources and avoid duplication of effort and the evaluator's unique ability to provide an objective…

  13. Evaluation in Adult Literacy Research. Project ALERT. [Phase I.

    ERIC Educational Resources Information Center

    Ntiri, Daphne Williams, Ed.

    The Adult Literacy and Evaluation Research Team (also known as Project ALERT) was a project conducted by the Detroit Literacy Coalition (DLC) at Wayne State University in 1993-1994 to develop and pilot a user-friendly program model for evaluating literacy operations of community-based organizations throughout Michigan under the provisions of…

  14. Student Assistance Program Demonstration Project Evaluation. Final Report.

    ERIC Educational Resources Information Center

    Pollard, John A.; Houle, Denise M.

    This document presents the final report on the evaluation of California's model student assistance program (SAP) demonstration projects implemented in five locations across the state from July 1989 through June 1992. The report provides an overall, integrated review of the evaluation of the SAP demonstration projects, summarizes important findings…

  15. Project SEARCH UK--Evaluating Its Employment Outcomes

    ERIC Educational Resources Information Center

    Kaehne, Axel

    2016-01-01

    Background: The study reports the findings of an evaluation of Project SEARCH UK. The programme develops internships for young people with intellectual disabilities who are about to leave school or college. The aim of the evaluation was to investigate at what rate Project SEARCH provided employment opportunities to participants. Methods: The…

  16. Benchmark simulation models, quo vadis?

    PubMed

    Jeppsson, U; Alex, J; Batstone, D J; Benedetti, L; Comas, J; Copp, J B; Corominas, L; Flores-Alsina, X; Gernaey, K V; Nopens, I; Pons, M-N; Rodríguez-Roda, I; Rosen, C; Steyer, J-P; Vanrolleghem, P A; Volcke, E I P; Vrecko, D

    2013-01-01

    As the work of the IWA Task Group on Benchmarking of Control Strategies for wastewater treatment plants (WWTPs) is coming to an end, it is essential to disseminate the knowledge gained. For this reason, all authors of the IWA Scientific and Technical Report on benchmarking have come together to provide their insights, highlighting areas where knowledge may still be deficient and where new opportunities are emerging, and to propose potential avenues for future development and application of the general benchmarking framework and its associated tools. The paper focuses on the topics of temporal and spatial extension, process modifications within the WWTP, the realism of models, control strategy extensions and the potential for new evaluation tools within the existing benchmark system. We find that there are major opportunities for application within all of these areas, either from existing work already being done within the context of the benchmarking simulation models (BSMs) or applicable work in the wider literature. Of key importance is increasing capability, usability and transparency of the BSM package while avoiding unnecessary complexity.

  17. Closed-Loop Neuromorphic Benchmarks

    PubMed Central

    Stewart, Terrence C.; DeWolf, Travis; Kleinhans, Ashley; Eliasmith, Chris

    2015-01-01

    Evaluating the effectiveness and performance of neuromorphic hardware is difficult. It is even more difficult when the task of interest is a closed-loop task; that is, a task where the output from the neuromorphic hardware affects some environment, which then in turn affects the hardware's future input. However, closed-loop situations are one of the primary potential uses of neuromorphic hardware. To address this, we present a methodology for generating closed-loop benchmarks that makes use of a hybrid of real physical embodiment and a type of “minimal” simulation. Minimal simulation has been shown to lead to robust real-world performance, while still maintaining the practical advantages of simulation, such as making it easy for the same benchmark to be used by many researchers. This method is flexible enough to allow researchers to explicitly modify the benchmarks to identify specific task domains where particular hardware excels. To demonstrate the method, we present a set of novel benchmarks that focus on motor control for an arbitrary system with unknown external forces. Using these benchmarks, we show that an error-driven learning rule can consistently improve motor control performance across a randomly generated family of closed-loop simulations, even when there are up to 15 interacting joints to be controlled. PMID:26696820
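    The closed-loop setup above can be illustrated with a minimal sketch: a 1-D "arm" subject to an unknown constant external force, controlled by a fixed law plus a term adapted by an error-driven rule. The dynamics, gains, and rates are assumptions for illustration, not the paper's benchmark:

    ```python
    def run_trial(steps=400, learn_rate=0.0, dt=0.05, target=1.0, ext_force=-0.6):
        """Mean absolute tracking error over one closed-loop trial."""
        pos, vel, bias = 0.0, 0.0, 0.0   # bias = learned disturbance compensation
        total_abs_err = 0.0
        for _ in range(steps):
            err = target - pos
            u = 4.0 * err - 2.0 * vel + bias  # PD control plus learned term
            bias += learn_rate * err          # error-driven adaptation rule
            acc = u + ext_force               # plant feels the unknown force
            vel += acc * dt                   # explicit Euler integration
            pos += vel * dt
            total_abs_err += abs(err)
        return total_abs_err / steps

    static = run_trial(learn_rate=0.0)     # fixed controller: steady-state error
    adaptive = run_trial(learn_rate=0.05)  # learns to cancel the disturbance
    ```

    The fixed controller settles with a persistent offset caused by the unmodeled force, while the error-driven term accumulates until it cancels that force, so the adaptive trial ends with a lower mean error.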

  18. Evaluation of direct-use-project drilling costs

    SciTech Connect

    Dolenc, M.R.; Childs, F.W.; Allman, D.W.; Sanders, R.D.

    1983-01-01

    This study evaluates drilling and completion costs from eleven low-to-moderate temperature geothermal projects carried out under the Program Opportunity Notice (PON) and User-Coupled Confirmation Drilling Programs. Several studies have evaluated geothermal drilling costs, particularly with respect to high-temperature-system drilling costs. This study evaluates drilling costs and individual cost elements for low-to-moderate temperature projects. It considers the effect of drilling depth, rock types, remoteness of location, rig size, and unique operating and subsurface conditions on the total drilling cost. This detailed evaluation should provide the investor in direct-use projects with approximate cost projections by which the economics of such projects can be evaluated.

  19. Authentic e-Learning in a Multicultural Context: Virtual Benchmarking Cases from Five Countries

    ERIC Educational Resources Information Center

    Leppisaari, Irja; Herrington, Jan; Vainio, Leena; Im, Yeonwook

    2013-01-01

    The implementation of authentic learning elements at education institutions in five countries, eight online courses in total, is examined in this paper. The International Virtual Benchmarking Project (2009-2010) applied the elements of authentic learning developed by Herrington and Oliver (2000) as criteria to evaluate authenticity. Twelve…

  20. Radiation Detection Computational Benchmark Scenarios

    SciTech Connect

    Shaver, Mark W.; Casella, Andrew M.; Wittman, Richard S.; McDonald, Ben S.

    2013-09-24

    Modeling forms an important component of radiation detection development, allowing for testing of new detector designs, evaluation of existing equipment against a wide variety of potential threat sources, and assessing operation performance of radiation detection systems. This can, however, result in large and complex scenarios which are time consuming to model. A variety of approaches to radiation transport modeling exist with complementary strengths and weaknesses for different problems. This variety of approaches, and the development of promising new tools (such as ORNL’s ADVANTG) which combine benefits of multiple approaches, illustrates the need for a means of evaluating or comparing different techniques for radiation detection problems. This report presents a set of 9 benchmark problems for comparing different types of radiation transport calculations, identifying appropriate tools for classes of problems, and testing and guiding the development of new methods. The benchmarks were drawn primarily from existing or previous calculations with a preference for scenarios which include experimental data, or otherwise have results with a high level of confidence, are non-sensitive, and represent problem sets of interest to NA-22. From a technical perspective, the benchmarks were chosen to span a range of difficulty and to include gamma transport, neutron transport, or both and represent different important physical processes and a range of sensitivity to angular or energy fidelity. Following benchmark identification, existing information about geometry, measurements, and previous calculations were assembled. Monte Carlo results (MCNP decks) were reviewed or created and re-run in order to attain accurate computational times and to verify agreement with experimental data, when present. Benchmark information was then conveyed to ORNL in order to guide testing and development of hybrid calculations. The results of those ADVANTG calculations were then sent to PNNL for

  1. Evaluation of the cytotoxic and genotoxic effects of benchmark multi-walled carbon nanotubes in relation to their physicochemical properties.

    PubMed

    Louro, Henriqueta; Pinhão, Mariana; Santos, Joana; Tavares, Ana; Vital, Nádia; Silva, Maria João

    2016-11-16

    To contribute scientific evidence to the grouping strategy for the safety assessment of multi-walled carbon nanotubes (MWCNTs), this work investigates the cytotoxic and genotoxic effects of four benchmark MWCNTs in relation to their physicochemical characteristics, using two types of human respiratory cells. The cytotoxic effects were analysed using the clonogenic assay and replication index determination. A 48-h exposure revealed that NM-401 was the only cytotoxic MWCNT in both cell lines, but after an 8-day exposure, the clonogenic assay in A549 cells showed cytotoxic effects for all the tested MWCNTs. Correlation analysis suggested an association between the size of the MWCNTs in cell culture medium and cytotoxicity. The comet assay showed no induction of DNA damage by any of the MWCNTs in either cell line, while the micronucleus assay revealed that both NM-401 and NM-402 were genotoxic in A549 cells. NM-401 and NM-402 are the two longest MWCNTs analysed in this work, suggesting that length may be determinant for genotoxicity. No induction of micronuclei was observed in the BEAS-2B cell line, and the different effect in the two cell lines is explained by the size distribution of the MWCNTs in the cell culture medium, rather than by cell-specific characteristics.

  2. A guide for mental health clinicians to develop and undertake benchmarking activities.

    PubMed

    Cleary, Michelle; Hunt, Glenn E; Walter, Garry; Tong, Lizabeth

    2010-04-01

    There is a growing expectation for staff to participate in benchmarking activities. If benchmarking projects are to be successful, managers and clinicians need to be aware of the steps involved. In this article, we identify key aspects of benchmarking and consider how clinicians and managers can respond to and meet contemporary requirements for the development of sound benchmarking relationships. Practicalities and issues that must be considered by benchmarking teams are also outlined. Before commencing a benchmarking project, ground rules and benchmarking agreements must be developed and ratified. An understandable benchmarking framework is required: one that is sufficiently robust for clinicians to engage in benchmarking activities and convince others that benchmarking has taken place. There is a need to build the capacity of clinicians in relation to benchmarking.

  3. Decay Data Evaluation Project (DDEP): evaluation of the main 233Pa decay characteristics.

    PubMed

    Chechev, Valery P; Kuzmenko, Nikolay K

    2006-01-01

    The results of a decay data evaluation are presented for 233Pa (beta-) decay to nuclear levels in 233U. These evaluated data have been obtained within the Decay Data Evaluation Project using information published up to 2005.

  4. Process Evaluation of Nebraska's Team Training Project.

    ERIC Educational Resources Information Center

    Scott, David M.; And Others

    1994-01-01

    This article describes a "system approach" training project which utilizes the formation and implementation of localized strategic (action) plans for targeting substance abuse prevention. Participants surveyed in the program reported positive attitudes about the program due to their training and their ability to resist substance abuse…

  5. Project Great Start Biennial Evaluation Report.

    ERIC Educational Resources Information Center

    Rudy, Dennis W.

    Project Great Start is designed to provide non-, limited-, and near-native English proficient students with improved, intensified, and increased learning opportunities for accelerated English acquisition and significant academic achievement. It focuses on three groups: students, parents, and school staff. Students and parents benefit from separate…

  6. Food Processors Skills Building Project. Evaluation Report.

    ERIC Educational Resources Information Center

    White, Eileen Casey

    The Food Processors Skills Building project was undertaken by four Oregon community colleges, with funds from the Oregon Economic Development Department and 11 local food processing companies, to address basic skills needs in the food processing industry through the development and implementation of an industry-specific curriculum. Based on…

  7. Learning with East Aurora Families. Project Evaluation.

    ERIC Educational Resources Information Center

    Bercovitz, Laura

    The Learning with East Aurora Families (LEAF) Project was a 1-year family literacy program developed and implemented by Waubonsee Community College in Sugar Grove, Illinois. It recruited 51 parents and other significant adults of 4- and 5-year-olds enrolled in at-risk programs. Each of the 4-week sessions was divided into 5 components: adult…

  8. Project RESPECT. Third Year Program Evaluation Report.

    ERIC Educational Resources Information Center

    Kester, Don; Plakos, John; Santos, Will

    In January 1995, John Marshall High School (Los Angeles, California) implemented a 3-year bilingual special alternative instructional program, Redesign of Educational Services Providing Enhanced Computer Technology (Project RESPECT). The federally funded program was to prepare limited-English-proficient (LEP) high school students for higher…

  9. Evaluation of Project HAPPIER Survey: Illinois.

    ERIC Educational Resources Information Center

    Haenn, Joseph F.

    As part of Project HAPPIER (Health Awareness Patterns Preventing Illnesses and Encouraging Responsibility), a survey was conducted among teachers and other migrant personnel in Illinois to assess the current health needs of migrants. The availability of educational materials was also investigated in the survey in order to ensure that a proposed…

  10. Implementing and Evaluating Online Service Learning Projects

    ERIC Educational Resources Information Center

    Helms, Marilyn M.; Rutti, Raina M.; Hervani, Aref Agahei; LaBonte, Joanne; Sarkarat, Sy

    2015-01-01

    As online learning proliferates, professors must adapt traditional projects for an asynchronous environment. Service learning is an effective teaching style fostering interactive learning through integration of classroom activities into communities. While prior studies have documented the appropriateness of service learning in online courses,…

  11. Evaluating the Peruvian Rural Communication Services Project.

    ERIC Educational Resources Information Center

    Mayo, John

    1988-01-01

    Reviews the Peruvian Rural Communication Services (PRCS) Project and outlines selected findings. Topics discussed include a brief description of Peru's economic and social conditions; satellite communication systems; audio teleconferencing; telephone service; planning and administration; research design features; data collection; and project…

  12. Quality framework proposal for Component Material Evaluation (CME) projects.

    SciTech Connect

    Christensen, Naomi G.; Arfman, John F.; Limary, Siviengxay

    2008-09-01

    This report proposes the first stage of a Quality Framework approach that can be used to evaluate and document Component Material Evaluation (CME) projects. The first stage of the Quality Framework defines two tools that will be used to evaluate a CME project. The first tool decomposes a CME project into its essential elements. These elements can then be evaluated for inherent quality by looking at the subelements that affect their level of quality maturity or rigor. Quality Readiness Levels (QRLs) are used to evaluate project elements for inherent quality. The Framework provides guidance to the Principal Investigator (PI) and stakeholders on CME project prerequisites that help ensure the proper level of confidence in the deliverable, given its intended use. The Framework also provides a roadmap that defines when and how the Framework tools should be applied. Use of these tools allows the PI and stakeholders to understand which elements the project will use to execute the work, the inherent quality of those elements, which of them are critical to the project and why, and the risks associated with the project's elements.
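    The first-stage tools lend themselves to a simple data-structure sketch: map each project element to an assigned QRL and flag those below the rigor demanded by the deliverable's intended use. The element names, QRL scale, and thresholds below are hypothetical illustrations, not the report's actual values:

    ```python
    # Assumed minimum QRL (on a notional 1-5 scale) per intended use of the deliverable.
    REQUIRED_QRL = {"screening study": 2, "design input": 4}

    def flag_elements(elements, intended_use):
        """Return the element names whose QRL falls below the required level."""
        needed = REQUIRED_QRL[intended_use]
        return sorted(name for name, qrl in elements.items() if qrl < needed)

    # A hypothetical decomposition of a CME project into elements with QRLs:
    project = {"test plan": 4, "instrument calibration": 3,
               "material pedigree": 2, "data reduction software": 5}
    gaps = flag_elements(project, "design input")
    ```

    Running the check surfaces the elements needing added rigor before the deliverable can support its intended use, which mirrors the Framework's goal of matching confidence to use.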

  13. How is success or failure in river restoration projects evaluated? Feedback from French restoration projects.

    PubMed

    Morandi, Bertrand; Piégay, Hervé; Lamouroux, Nicolas; Vaudor, Lise

    2014-05-01

    Since the 1990s, French operational managers and scientists have been involved in the environmental restoration of rivers. The European Water Framework Directive (2000) highlights the need for feedback from restoration projects and for evidence-based evaluation of success. Based on 44 French pilot projects that included such an evaluation, the present study includes: 1) an introduction to restoration projects based on their general characteristics 2) a description of evaluation strategies and authorities in charge of their implementation, and 3) a focus on the evaluation of results and the links between these results and evaluation strategies. The results show that: 1) the quality of an evaluation strategy often remains too poor to understand well the link between a restoration project and ecological changes; 2) in many cases, the conclusions drawn are contradictory, making it difficult to determine the success or failure of a restoration project; and 3) the projects with the poorest evaluation strategies generally have the most positive conclusions about the effects of restoration. Recommendations are that evaluation strategies should be designed early in the project planning process and be based on clearly-defined objectives.

  14. Criticality Benchmark Analysis of the HTTR Annular Startup Core Configurations

    SciTech Connect

    John D. Bess

    2009-11-01

    One of the high-priority benchmarking activities for corroborating the Next Generation Nuclear Plant (NGNP) Project and Very High Temperature Reactor (VHTR) Program is evaluation of Japan's existing High Temperature Engineering Test Reactor (HTTR). The HTTR is a 30 MWt engineering test reactor utilizing graphite moderation, helium coolant, and prismatic TRISO fuel. A large amount of critical reactor physics data is available for validation efforts of High Temperature Gas-cooled Reactors (HTGRs). Previous international reactor physics benchmarking activities produced a collation of mixed results that inaccurately predicted actual experimental performance [1]. Reevaluations were performed by the Japanese to reduce the discrepancy between actual and computationally determined critical configurations [2-3]. Current efforts at the Idaho National Laboratory (INL) involve development of reactor physics benchmark models, in conjunction with the International Reactor Physics Experiment Evaluation Project (IRPhEP), for use with verification and validation methods in the VHTR Program. Annular cores demonstrate inherent safety characteristics that are of interest in developing future HTGRs.

  15. Evaluation of the concrete shield compositions from the 2010 criticality accident alarm system benchmark experiments at the CEA Valduc SILENE facility

    SciTech Connect

    Miller, Thomas Martin; Celik, Cihangir; Dunn, Michael E; Wagner, John C; McMahan, Kimberly L; Authier, Nicolas; Jacquet, Xavier; Rousseau, Guillaume; Wolff, Herve; Savanier, Laurence; Baclet, Nathalie; Lee, Yi-kang; Trama, Jean-Christophe; Masse, Veronique; Gagnier, Emmanuel; Naury, Sylvie; Blanc-Tranchant, Patrick; Hunter, Richard; Kim, Soon; Dulik, George Michael; Reynolds, Kevin H.

    2015-01-01

    In October 2010, a series of benchmark experiments was conducted at the French Commissariat a l'Energie Atomique et aux Energies Alternatives (CEA) Valduc SILENE facility. These experiments were a joint effort between the United States Department of Energy Nuclear Criticality Safety Program and the CEA. The purpose of these experiments was to create three benchmarks for the verification and validation of radiation transport codes and evaluated nuclear data used in the analysis of criticality accident alarm systems. This series of experiments consisted of three single-pulsed experiments with the SILENE reactor. For the first experiment, the reactor was bare (unshielded), whereas in the second and third experiments, it was shielded by lead and polyethylene, respectively. The polyethylene shield of the third experiment had a cadmium liner on its internal and external surfaces, which was located vertically near the fuel region of SILENE. During each experiment, several neutron activation foils and thermoluminescent dosimeters (TLDs) were placed around the reactor. Nearly half of the foils and TLDs had additional high-density magnetite concrete, high-density barite concrete, standard concrete, and/or BoroBond shields. CEA Saclay provided all the concrete, and the US Y-12 National Security Complex provided the BoroBond. Measurement data from the experiments were published at the 2011 International Conference on Nuclear Criticality (ICNC 2011) and the 2013 Nuclear Criticality Safety Division (NCSD 2013) topical meeting. Preliminary computational results for the first experiment were presented in the ICNC 2011 paper, which showed poor agreement between the computational results and the measured values of the foils shielded by concrete. Recently the hydrogen content, boron content, and density of these concrete shields were further investigated within the constraints of the previously available data. New computational results for the first experiment are now available that…

  16. Evaluation of Career Education Projects, 1976-1977. Report #7829.

    ERIC Educational Resources Information Center

    Chern, Hermine J.; And Others

    Evaluations of thirty career education projects in the school district of Philadelphia, Pennsylvania are contained in this report. Fifteen of the projects concern classroom or shop instruction, six concern development and/or field testing of curriculum materials, and the remainder involve staff development, installation of shop equipment, job…

  17. Project Closeout: Guidance for Final Evaluation of Building America Communities

    SciTech Connect

    Norton, P.; Burch, J.; Hendron, B.

    2008-03-01

    This report presents guidelines for Project Closeout. It is used to determine whether the Building America program is successfully facilitating improved design and practices to achieve energy savings goals in production homes. Its objective is to use energy simulations, targeted utility bill analysis, and feedback from project stakeholders to evaluate the performance of occupied BA communities.

  18. Evaluating Quality in Educational Spaces: OECD/CELE Pilot Project

    ERIC Educational Resources Information Center

    von Ahlefeld, Hannah

    2009-01-01

    CELE's International Pilot Project on Evaluating Quality in Educational Spaces aims to assist education authorities, schools and others to maximise the use of and investment in learning environments. This article provides an update on the pilot project, which is currently being implemented in Brazil, Mexico, New Zealand, Portugal and the United…

  19. Latin American Literacy Partnership Project. Final Formative Evaluation.

    ERIC Educational Resources Information Center

    Watt, David L. E.

    This final evaluation of the 1991-92 program year of the Latin American Literacy Partnership Project, designed to foster English language literacy in Spanish-speaking families in Canada, is intended as a formative report assessing the changes in the students' language proficiency and the progress…

  20. Project Familia. Final Evaluation Report, 1992-93. OREA Report.

    ERIC Educational Resources Information Center

    Clarke, Candice

    Project Familia was an Elementary and Secondary Education Act Title VII funded project that, in the year covered by this evaluation, served 41 special education students of limited English proficiency (LEP) from 5 schools, with the participation of 54 parents and 33 siblings. Participating students received English language enrichment and…

  1. An Evaluation of Project Gifted 1971-1972.

    ERIC Educational Resources Information Center

    Renzulli, Joseph S.

    Evaluated was Project Gifted, a tri-city (Cranston, East Providence, and Warwick, Rhode Island) program which focused on the training of gifted children in grades 4-6 in the creative thinking process. Project goals were identification of gifted students, development of differential experiences, and development of innovative programs. Cranston's…

  2. Project Aprendizaje. Final Evaluation Report 1992-93.

    ERIC Educational Resources Information Center

    Clark, Andrew

    This report provides evaluative information regarding the effectiveness of Project Aprendizaje, a New York City program that served 269 Spanish-speaking students of limited English proficiency (LEP). The project promoted parent and community involvement by sponsoring cultural events, such as a large Latin American festival. Students developed…

  3. Challenges and Realities: Evaluating a School-Based Service Project.

    ERIC Educational Resources Information Center

    Keir, Scott S.; Millea, Susan

    The Hogg Foundation for Mental Health created the School of the Future (SoF) project to enable selected Texas schools to coordinate and implement school-based social and health services on their campuses and to demonstrate the effectiveness of this method of service delivery by evaluating the project to show the process used and the outcomes that…

  4. Portland Public Schools Project Chrysalis: Year 2 Evaluation Report.

    ERIC Educational Resources Information Center

    Mitchell, Stephanie J.; Gabriel, Roy M.; Hahn, Karen J.; Laws, Katherine E.

    In 1994, the Chrysalis Project in Portland Public Schools received funding to prevent or delay the onset of substance abuse among a special target population: high-risk, female adolescents with a history of childhood abuse. Findings from the evaluation of the project's second year of providing assistance to these students are reported here. During…

  5. Childhood Obesity Research Demonstration project: Cross-site evaluation method

    Technology Transfer Automated Retrieval System (TEKTRAN)

    The Childhood Obesity Research Demonstration (CORD) project links public health and primary care interventions in three projects described in detail in accompanying articles in this issue of Childhood Obesity. This article describes a comprehensive evaluation plan to determine the extent to which th...

  6. A Program Evaluation Manual for Project Initiators. Final Report.

    ERIC Educational Resources Information Center

    Senf, Gerald; Anderson, David

    Intended for directors of special education projects, the manual provides guidelines for program evaluation. It is explained that the manual developed out of the experiences of the staff of the Leadership Training Institute in Learning Disabilities which provided technical assistance to 43 state projects. The manual's eight major sections focus on…

  7. Evaluation of the Treatment of Diabetic Retinopathy: A Research Project

    ERIC Educational Resources Information Center

    Kupfer, Carl

    1973-01-01

    Evaluated is the treatment of diabetic retinopathy (blindness due to ruptured vessels of the retina as a side effect of diabetes), and described is a research project comparing two types of photocoagulation treatment. (DB)

  8. JUPITER PROJECT - JOINT UNIVERSAL PARAMETER IDENTIFICATION AND EVALUATION OF RELIABILITY

    EPA Science Inventory

    The JUPITER (Joint Universal Parameter IdenTification and Evaluation of Reliability) project builds on the technology of two widely used codes for sensitivity analysis, data assessment, calibration, and uncertainty analysis of environmental models: PEST and UCODE.

  9. Science Base and Tools for Evaluating Stream Restoration Project Proposals.

    NASA Astrophysics Data System (ADS)

    Cluer, B.; Thorne, C.; Skidmore, P.; Castro, J.; Pess, G.; Beechie, T.; Shea, C.

    2008-12-01

    Stream restoration, stabilization, or enhancement projects typically employ site-specific designs, and site-scale habitat improvement projects have become the default solution to many habitat problems and constraints. Such projects are often planned and implemented without thorough consideration of the broader scale problems that may be contributing to habitat degradation, attention to project resiliency to flood events, accounting for possible changes in climate or watershed land use, or ensuring the long term sustainability of the project. To address these issues, NOAA Fisheries and USFWS have collaboratively commissioned research to develop a science document and accompanying tools to support more consistent and comprehensive review of stream management and restoration project proposals by Services staff responsible for permitting. The science document synthesizes the body of knowledge in fluvial geomorphology and presents it in a way that is accessible to the Services' staff biologists, who are not trained experts in this field. Accompanying the science document are two electronic tools: a Project Information Checklist to assist in evaluating whether a proposal includes all the information necessary to allow critical and thorough project evaluation; and a Project Evaluation Tool (in flow chart format) that guides reviewers through the steps necessary to critically evaluate the quality of the information submitted, the goals and objectives of the project, project planning and development, project design, geomorphic-habitat-species relevance, and risks to listed species. Materials for training Services staff and others in the efficient use of the science document and tools have also been developed. The longer term goals of this effort include: enabling consistent and comprehensive reviews that are completed in a timely fashion by regulators; facilitating improved project planning and design by proponents; encouraging projects that are attuned to their watershed…

  10. Benchmarking monthly homogenization algorithms

    NASA Astrophysics Data System (ADS)

    Venema, V. K. C.; Mestre, O.; Aguilar, E.; Auer, I.; Guijarro, J. A.; Domonkos, P.; Vertacnik, G.; Szentimrey, T.; Stepanek, P.; Zahradnicek, P.; Viarre, J.; Müller-Westermeier, G.; Lakatos, M.; Williams, C. N.; Menne, M.; Lindau, R.; Rasol, D.; Rustemeier, E.; Kolokythas, K.; Marinova, T.; Andresen, L.; Acquaotta, F.; Fratianni, S.; Cheval, S.; Klancar, M.; Brunetti, M.; Gruber, C.; Prohom Duran, M.; Likso, T.; Esteban, P.; Brandsma, T.

    2011-08-01

    The COST (European Cooperation in Science and Technology) Action ES0601: Advances in homogenization methods of climate series: an integrated approach (HOME) has executed a blind intercomparison and validation study for monthly homogenization algorithms. Time series of monthly temperature and precipitation were evaluated because of their importance for climate studies and because they represent two important types of statistics (additive and multiplicative). The algorithms were validated against a realistic benchmark dataset. The benchmark contains real inhomogeneous data as well as simulated data with inserted inhomogeneities. Random break-type inhomogeneities were added to the simulated datasets modeled as a Poisson process with normally distributed breakpoint sizes. To approximate real world conditions, breaks were introduced that occur simultaneously in multiple station series within a simulated network of station data. The simulated time series also contained outliers, missing data periods and local station trends. Further, a stochastic nonlinear global (network-wide) trend was added. Participants provided 25 separate homogenized contributions as part of the blind study as well as 22 additional solutions submitted after the details of the imposed inhomogeneities were revealed. These homogenized datasets were assessed by a number of performance metrics including (i) the centered root mean square error relative to the true homogeneous value at various averaging scales, (ii) the error in linear trend estimates and (iii) traditional contingency skill scores. The metrics were computed both using the individual station series as well as the network average regional series. The performance of the contributions depends significantly on the error metric considered. Contingency scores by themselves are not very informative. Although relative homogenization algorithms typically improve the homogeneity of temperature data, only the best ones improve precipitation data…
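    Metric (i), the centered root-mean-square error, compares anomalies rather than raw values, so a constant network-wide offset (which no relative homogenization method can detect) is not penalized. A minimal sketch, with an illustrative function name and toy series not taken from the HOME benchmark software:

```python
from math import sqrt

def centered_rmse(homogenized, truth):
    """Centered RMSE: compare anomalies (each series minus its own mean),
    so a constant network-wide offset contributes no error."""
    mh = sum(homogenized) / len(homogenized)
    mt = sum(truth) / len(truth)
    return sqrt(sum(((h - mh) - (t - mt)) ** 2
                    for h, t in zip(homogenized, truth)) / len(truth))

# A constant +1.0 offset scores zero error ...
print(centered_rmse([1.0, 2.0, 3.0], [2.0, 3.0, 4.0]))  # 0.0
# ... but an uncorrected break of +1.0 in the second half does not.
print(centered_rmse([1.0, 2.0, 4.0, 5.0], [1.0, 2.0, 3.0, 4.0]))  # 0.5
```

Subtracting the means is what distinguishes this from a plain RMSE: the first call returns zero despite the offset, while the break in the second call is fully penalized.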

  11. Evaluation on Collaborative Satisfaction for Project Management Team in Integrated Project Delivery Mode

    NASA Astrophysics Data System (ADS)

    Zhang, L.; Li, Y.; Wu, Q.

    2013-05-01

    Integrated Project Delivery (IPD) is a newly-developed project delivery approach for construction projects, and the level of collaboration of project management team is crucial to the success of its implementation. Existing research has shown that collaborative satisfaction is one of the key indicators of team collaboration. By reviewing the literature on team collaborative satisfaction and taking into consideration the characteristics of IPD projects, this paper summarizes the factors that influence collaborative satisfaction of IPD project management team. Based on these factors, this research develops a fuzzy linguistic method to effectively evaluate the level of team collaborative satisfaction, in which the authors adopted the 2-tuple linguistic variables and 2-tuple linguistic hybrid average operators to enhance the objectivity and accuracy of the evaluation. The paper demonstrates the practicality and effectiveness of the method through carrying out a case study with the method.
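    The 2-tuple linguistic representation the paper builds on encodes a value as a linguistic label plus a symbolic translation, so aggregation loses no information. The sketch below assumes a five-term label set and substitutes a plain weighted mean for the paper's 2-tuple hybrid average operator; the labels, ratings, and weights are invented for illustration:

```python
# Five-term linguistic scale s0..s4 (an assumed example term set).
LABELS = ["very low", "low", "medium", "high", "very high"]

def to_two_tuple(beta):
    """Map a numeric value beta in [0, len(LABELS)-1] to a 2-tuple
    (label index, symbolic translation alpha in [-0.5, 0.5))."""
    i = min(round(beta), len(LABELS) - 1)
    return i, beta - i

def aggregate(tuples, weights):
    """Weighted average of 2-tuples via their numeric equivalents beta = i + alpha.
    (A plain weighted mean stands in for the hybrid average operator.)"""
    beta = sum(w * (i + a) for (i, a), w in zip(tuples, weights)) / sum(weights)
    return to_two_tuple(beta)

ratings = [to_two_tuple(b) for b in (3.0, 4.0, 2.4)]   # three assessors
i, alpha = aggregate(ratings, [0.5, 0.3, 0.2])         # assessor weights
print(LABELS[i], round(alpha, 2))  # high 0.18
```

The result reads as "high, plus 0.18 of a step toward very high", which is the interpretability gain the 2-tuple model offers over rounding to the nearest label.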

  12. PRISMATIC CORE COUPLED TRANSIENT BENCHMARK

    SciTech Connect

    J. Ortensi; M.A. Pope; G. Strydom; R.S. Sen; M.D. DeHart; H.D. Gougar; C. Ellis; A. Baxter; V. Seker; T.J. Downar; K. Vierow; K. Ivanov

    2011-06-01

    The Prismatic Modular Reactor (PMR) is one of the High Temperature Reactor (HTR) design concepts that have existed for some time. Several prismatic units have operated in the world (DRAGON, Fort St. Vrain, Peach Bottom) and one unit is still in operation (HTTR). The deterministic neutronics and thermal-fluids transient analysis tools and methods currently available for the design and analysis of PMRs have lagged behind the state of the art compared to LWR reactor technologies. This has motivated the development of more accurate and efficient tools for the design and safety evaluations of the PMR. In addition to the work invested in new methods, it is essential to develop appropriate benchmarks to verify and validate the new methods in computer codes. The purpose of this benchmark is to establish a well-defined problem, based on a common given set of data, to compare methods and tools in core simulation and thermal hydraulics analysis with a specific focus on transient events. The benchmark working group is currently seeking OECD/NEA sponsorship. This benchmark is being pursued and is heavily based on the success of the PBMR-400 exercise.

  13. Benchmarks in Management Training.

    ERIC Educational Resources Information Center

    Paddock, Susan C.

    1997-01-01

    Data were collected from 12 states with Certified Public Manager training programs to establish benchmarks. The 38 benchmarks were in the following areas: program leadership, stability of administrative/financial support, consistent management philosophy, administrative control, participant selection/support, accessibility, application of…

  14. Evaluation of EUREKA Project, 1978-1979.

    ERIC Educational Resources Information Center

    Burke, Paul J., Ed.

    An evaluation for 1978-79 was conducted of EUREKA, a career information system in California. Personal visits were made to sixteen EUREKA sites throughout the state, accounting for over 75% of the high schools and agencies with active programs. Both the directors of the programs and counselors were interviewed for their reactions. It was found…

  15. New Parents as Teachers Project: Evaluation Report.

    ERIC Educational Resources Information Center

    Pfannenstiel, Judy C.; Seltzer, Dianne A.

    Reported is an evaluation of a program providing information and services to first-time parents and their children from the third trimester of pregnancy until the children were three years of age. Interventions were designed to provide educational guidance to parents, to make parenting less stressful and more pleasurable, and to help parents…

  16. National Evaluation of Diversion Projects. Executive Summary.

    ERIC Educational Resources Information Center

    Dunford, Franklyn W.; And Others

    In 1976 the Special Emphasis branch of the Office of Juvenile Justice and Delinquency Prevention made $10 million available for the development of 11 diversion programs. A national evaluation of these programs was promoted in the hope of better understanding the viability of diversion as an alternative to traditional practices. The impact of…

  17. The Design of the IGE Evaluation Project Phase IV Comparative Studies. Comparative Study of Phase IV IGE Evaluation Project. Phase IV, Project Paper 80-2.

    ERIC Educational Resources Information Center

    Romberg, Thomas A.; And Others

    This paper outlines the design of two Comparative Studies of Phase IV of the Individually Guided Education (IGE) Evaluation Project. More than 2,000 elementary schools in 25 states use the IGE system. The Evaluation Project was designed to gain a comprehensive view of the system's operation and effectiveness. Phase IV investigated pupil outcomes,…

  18. A portfolio evaluation framework for air transportation improvement projects

    NASA Astrophysics Data System (ADS)

    Baik, Hyeoncheol

    This thesis explores the application of portfolio theory to the Air Transportation System (ATS) improvement. The ATS relies on complexly related resources and different stakeholder groups. Moreover, demand for air travel is significantly increasing relative to capacity of air transportation. In this environment, improving the ATS is challenging. Many projects, which are defined as technologies or initiatives, for improvement have been proposed and some have been demonstrated in practice. However, there is no clear understanding of how well these projects work in different conditions nor of how they interact with each other or with existing systems. These limitations make it difficult to develop good project combinations, or portfolios that maximize improvement. To help address this gap, a framework for identifying good portfolios is proposed. The framework can be applied to individual projects or portfolios of projects. Projects or portfolios are evaluated using four different groups of factors (effectiveness, time-to-implement, scope of applicability, and stakeholder impacts). Portfolios are also evaluated in terms of interaction-determining factors (prerequisites, co-requisites, limiting factors, and amplifying factors) because, while a given project might work well in isolation, interdependencies between projects or with existing systems could result in lower overall performance in combination. Ways to communicate a portfolio to decision makers are also introduced. The framework is unique because (1) it allows using a variety of available data, and (2) it covers diverse benefit metrics. For demonstrating the framework, an application to ground delay management projects serves as a case study. The portfolio evaluation approach introduced in this thesis can aid decision makers and researchers at universities and aviation agencies such as the Federal Aviation Administration (FAA), the National Aeronautics and Space Administration (NASA), and the Department of Defense (DoD), in…
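    One way to picture how interaction-determining factors can change a portfolio's value is to scale each project's standalone score by multipliers for the interaction pairs it belongs to. This is a hypothetical sketch, not the thesis's actual model; the project names, scores, and multiplier are invented:

```python
def portfolio_effectiveness(projects, interactions):
    """Sum standalone effectiveness scores, scaling each project by the
    multipliers of interaction pairs it belongs to (>1 amplifying,
    <1 limiting, 1.0 independent). Purely illustrative."""
    total = 0.0
    for name, score in projects.items():
        factor = 1.0
        for (p, q), mult in interactions.items():
            # Apply the multiplier only if both pair members are in the portfolio.
            if name in (p, q) and p in projects and q in projects:
                factor *= mult
        total += score * factor
    return total

# Hypothetical ground-delay projects; the pair is assumed mildly amplifying.
projects = {"surface_metering": 3.0, "data_sharing": 2.0}
interactions = {("surface_metering", "data_sharing"): 1.2}
print(portfolio_effectiveness(projects, interactions))
```

The same two projects scored in isolation would total 5.0; the amplifying interaction raises the portfolio above the sum of its parts, which is exactly the effect that per-project evaluation misses.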

  19. Verification and validation benchmarks.

    SciTech Connect

    Oberkampf, William Louis; Trucano, Timothy Guy

    2007-02-01

    Verification and validation (V&V) are the primary means to assess the accuracy and reliability of computational simulations. V&V methods and procedures have fundamentally improved the credibility of simulations in several high-consequence fields, such as nuclear reactor safety, underground nuclear waste storage, and nuclear weapon safety. Although the terminology is not uniform across engineering disciplines, code verification deals with assessing the reliability of the software coding, and solution verification deals with assessing the numerical accuracy of the solution to a computational model. Validation addresses the physics modeling accuracy of a computational simulation by comparing the computational results with experimental data. Code verification benchmarks and validation benchmarks have been constructed for a number of years in every field of computational simulation. However, no comprehensive guidelines have been proposed for the construction and use of V&V benchmarks. For example, the field of nuclear reactor safety has not focused on code verification benchmarks, but it has placed great emphasis on developing validation benchmarks. Many of these validation benchmarks are closely related to the operations of actual reactors at near-safety-critical conditions, as opposed to being more fundamental-physics benchmarks. This paper presents recommendations for the effective design and use of code verification benchmarks based on manufactured solutions, classical analytical solutions, and highly accurate numerical solutions. In addition, this paper presents recommendations for the design and use of validation benchmarks, highlighting the careful design of building-block experiments, the estimation of experimental measurement uncertainty for both inputs and outputs to the code, validation metrics, and the role of model calibration in validation. It is argued that the understanding of predictive capability of a computational model is built on the level of…
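    The manufactured-solutions idea mentioned above can be demonstrated on a one-dimensional Poisson problem: choose an exact solution, derive the matching source term analytically, then confirm that the discretization error falls at the scheme's theoretical rate. This is a generic illustration, not code from the report:

```python
from math import sin, pi

def solve_poisson(n, f):
    """Solve -u'' = f on (0,1) with u(0) = u(1) = 0 using second-order
    central differences and the Thomas (tridiagonal) algorithm."""
    h = 1.0 / n
    x = [i * h for i in range(n + 1)]
    a, b, c = -1.0 / h**2, 2.0 / h**2, -1.0 / h**2  # sub-, main, super-diagonal
    d = [f(xi) for xi in x[1:n]]                    # RHS at interior nodes
    cp, dp = [0.0] * (n - 1), [0.0] * (n - 1)
    cp[0], dp[0] = c / b, d[0] / b
    for i in range(1, n - 1):                       # forward elimination
        m = b - a * cp[i - 1]
        cp[i] = c / m
        dp[i] = (d[i] - a * dp[i - 1]) / m
    u = [0.0] * (n + 1)                             # boundary values stay zero
    for i in range(n - 2, -1, -1):                  # back substitution
        u[i + 1] = dp[i] - cp[i] * u[i + 2]
    return x, u

def mms_error(n):
    """Manufactured solution u = sin(pi x), hence source f = pi^2 sin(pi x)."""
    x, u = solve_poisson(n, lambda t: pi**2 * sin(pi * t))
    return max(abs(ui - sin(pi * xi)) for xi, ui in zip(x, u))

# Halving the mesh width should cut the max error by ~4 for a second-order scheme.
print(mms_error(20) / mms_error(40))  # ~4.0
```

Observing the expected error-reduction ratio verifies the code, not the physics: the manufactured solution need not resemble any real problem, which is precisely why the method isolates coding and discretization mistakes.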

  20. Iterative Knowledge-Based Scoring Functions Derived from Rigid and Flexible Decoy Structures: Evaluation with the 2013 and 2014 CSAR Benchmarks.

    PubMed

    Yan, Chengfei; Grinter, Sam Z; Merideth, Benjamin Ryan; Ma, Zhiwei; Zou, Xiaoqin

    2016-06-27

    In this study, we developed two iterative knowledge-based scoring functions, ITScore_pdbbind(rigid) and ITScore_pdbbind(flex), using rigid decoy structures and flexible decoy structures, respectively, that were generated from the protein-ligand complexes in the refined set of PDBbind 2012. These two scoring functions were evaluated using the 2013 and 2014 CSAR benchmarks. The results were compared with the results of two other scoring functions, the Vina scoring function and ITScore, the scoring function that we previously developed from rigid decoy structures for a smaller set of protein-ligand complexes. A graph-based method was developed to evaluate the root-mean-square deviation between two conformations of the same ligand with different atom names and orders due to different file preparations, and the program is freely available. Our study showed that the two new scoring functions developed from the larger training set yielded significantly improved performance in binding mode predictions. For binding affinity predictions, all four scoring functions showed protein-dependent performance. We suggest the development of protein-family-dependent scoring functions for accurate binding affinity prediction.
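    The atom-ordering problem that motivates the authors' graph-based RMSD tool can be shown with a brute-force sketch: take the minimum RMSD over all element-preserving reorderings of one conformation. (The published method instead matches molecular graphs, which avoids the factorial search; this toy version is only viable for tiny molecules.)

```python
from itertools import permutations
from math import sqrt

def rmsd(coords_a, coords_b):
    """Plain RMSD between two equal-length 3D coordinate lists."""
    return sqrt(sum((ax - bx) ** 2 + (ay - by) ** 2 + (az - bz) ** 2
                    for (ax, ay, az), (bx, by, bz) in zip(coords_a, coords_b))
                / len(coords_a))

def min_rmsd(elems_a, coords_a, elems_b, coords_b):
    """Minimum RMSD over all element-preserving atom reorderings.
    Brute force for illustration; a graph match prunes this in practice."""
    best = float("inf")
    for perm in permutations(range(len(coords_b))):
        if all(ea == elems_b[p] for ea, p in zip(elems_a, perm)):
            best = min(best, rmsd(coords_a, [coords_b[p] for p in perm]))
    return best

# Identical CO2-like geometry, but the two oxygens are listed in swapped order:
elems = ["C", "O", "O"]
a = [(0.0, 0.0, 0.0), (1.2, 0.0, 0.0), (-1.2, 0.0, 0.0)]
b = [(0.0, 0.0, 0.0), (-1.2, 0.0, 0.0), (1.2, 0.0, 0.0)]
print(min_rmsd(elems, a, elems, b))  # 0.0
```

A naive RMSD on the raw atom order would report a large deviation here even though the two files describe the same conformation, which is the artifact the graph-based evaluation was built to remove.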

  1. Factors Common to High-Utilization Evaluations. Evaluation Productivity Project.

    ERIC Educational Resources Information Center

    Alkin, Marvin; And Others

    This paper reports on the factors that characterize high-utilization evaluations. It is based on materials submitted to an American Educational Research Association (AERA) Division H competition for outstanding examples of evaluation utilization. The paper is organized into three sections. The first section outlines the background of the study:…

  2. A framework for benchmarking land models

    SciTech Connect

    Luo, Yiqi; Randerson, J.; Abramowitz, G.; Bacour, C.; Blyth, E.; Carvalhais, N.; Ciais, Philippe; Dalmonech, D.; Fisher, J.B.; Fisher, R.; Friedlingstein, P.; Hibbard, Kathleen A.; Hoffman, F. M.; Huntzinger, Deborah; Jones, C.; Koven, C.; Lawrence, David M.; Li, D.J.; Mahecha, M.; Niu, S.L.; Norby, Richard J.; Piao, S.L.; Qi, X.; Peylin, P.; Prentice, I.C.; Riley, William; Reichstein, M.; Schwalm, C.; Wang, Y.; Xia, J. Y.; Zaehle, S.; Zhou, X. H.

    2012-10-09

    Land models, which have been developed by the modeling community in the past few decades to predict future states of ecosystems and climate, have to be critically evaluated for their performance skills of simulating ecosystem responses and feedback to climate change. Benchmarking is an emerging procedure to measure performance of models against a set of defined standards. This paper proposes a benchmarking framework for evaluation of land model performances and, meanwhile, highlights major challenges at this infant stage of benchmark analysis. The framework includes (1) targeted aspects of model performance to be evaluated, (2) a set of benchmarks as defined references to test model performance, (3) metrics to measure and compare performance skills among models so as to identify model strengths and deficiencies, and (4) model improvement. Land models are required to simulate exchange of water, energy, carbon and sometimes other trace gases between the atmosphere and land surface, and should be evaluated for their simulations of biophysical processes, biogeochemical cycles, and vegetation dynamics in response to climate change across broad temporal and spatial scales. Thus, one major challenge is to select and define a limited number of benchmarks to effectively evaluate land model performance. The second challenge is to develop metrics of measuring mismatches between models and benchmarks. The metrics may include (1) a priori thresholds of acceptable model performance and (2) a scoring system to combine data-model mismatches for various processes at different temporal and spatial scales. The benchmark analyses should identify clues of weak model performance to guide future development, thus enabling improved predictions of future states of ecosystems and climate. The near-future research effort should be on development of a set of widely acceptable benchmarks that can be used to objectively, effectively, and reliably evaluate fundamental properties of land models…
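    The abstract only outlines the proposed scoring system; one minimal way to combine normalized data-model mismatches into a single weighted score might look like the following sketch, where the process names, mismatch values, and weights are invented for illustration:

```python
def benchmark_score(mismatches, weights):
    """Combine normalized data-model mismatches (0 = perfect agreement,
    1 = the a priori threshold of acceptable performance) into one weighted
    skill score in [0, 1]; mismatches beyond the threshold saturate at 1."""
    total = sum(weights.values())
    return 1.0 - sum(weights[k] * min(m, 1.0)
                     for k, m in mismatches.items()) / total

# Hypothetical per-process mismatches and weights for one land model:
# carbon flux is weighted double, and vegetation dynamics exceeds its threshold.
mismatches = {"carbon_flux": 0.2, "energy_flux": 0.4, "vegetation": 1.5}
weights = {"carbon_flux": 2.0, "energy_flux": 1.0, "vegetation": 1.0}
print(round(benchmark_score(mismatches, weights), 3))
```

Capping each mismatch at the acceptability threshold keeps one badly simulated process from dominating the score, while the weights encode which processes matter most for the evaluation at hand.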

  3. A framework for benchmarking land models

    SciTech Connect

    Luo, Yiqi; Randerson, James T.; Hoffman, Forrest; Norby, Richard J

    2012-01-01

    Land models, which have been developed by the modeling community in the past few decades to predict future states of ecosystems and climate, have to be critically evaluated for their performance skills of simulating ecosystem responses and feedback to climate change. Benchmarking is an emerging procedure to measure performance of models against a set of defined standards. This paper proposes a benchmarking framework for evaluation of land model performances and, meanwhile, highlights major challenges at this infant stage of benchmark analysis. The framework includes (1) targeted aspects of model performance to be evaluated, (2) a set of benchmarks as defined references to test model performance, (3) metrics to measure and compare performance skills among models so as to identify model strengths and deficiencies, and (4) model improvement. Land models are required to simulate exchange of water, energy, carbon and sometimes other trace gases between the atmosphere and land surface, and should be evaluated for their simulations of biophysical processes, biogeochemical cycles, and vegetation dynamics in response to climate change across broad temporal and spatial scales. Thus, one major challenge is to select and define a limited number of benchmarks to effectively evaluate land model performance. The second challenge is to develop metrics of measuring mismatches between models and benchmarks. The metrics may include (1) a priori thresholds of acceptable model performance and (2) a scoring system to combine data-model mismatches for various processes at different temporal and spatial scales. The benchmark analyses should identify clues of weak model performance to guide future development, thus enabling improved predictions of future states of ecosystems and climate. The near-future research effort should be on development of a set of widely acceptable benchmarks that can be used to objectively, effectively, and reliably evaluate fundamental properties of land models…

  4. Restaurant Energy Use Benchmarking Guideline

    SciTech Connect

    Hedrick, R.; Smith, V.; Field, K.

    2011-07-01

    A significant operational challenge for food service operators is defining energy use benchmark metrics to compare against the performance of individual stores. Without metrics, multiunit operators and managers have difficulty identifying which stores in their portfolios require extra attention to bring their energy performance in line with expectations. This report presents a method whereby multiunit operators may use their own utility data to create suitable metrics for evaluating their operations.
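    A minimal version of such a utility-data metric is energy use intensity (annual energy per unit floor area) compared across the portfolio. The store data and the 1.25x-median flagging threshold below are illustrative assumptions, not values from the guideline:

```python
def energy_use_intensity(annual_kwh, floor_area_sqft):
    """Site energy use intensity in kWh per square foot per year."""
    return annual_kwh / floor_area_sqft

def flag_outliers(stores, threshold=1.25):
    """Flag stores whose EUI exceeds `threshold` times the portfolio median.
    `stores` is a list of (name, annual kWh, floor area in sq ft) tuples."""
    euis = sorted(energy_use_intensity(kwh, area) for _, kwh, area in stores)
    mid = len(euis) // 2
    median = euis[mid] if len(euis) % 2 else (euis[mid - 1] + euis[mid]) / 2
    return [name for name, kwh, area in stores
            if energy_use_intensity(kwh, area) > threshold * median]

# Hypothetical three-store portfolio: store C uses ~80% more energy per sq ft.
stores = [("A", 520_000, 2_600), ("B", 480_000, 2_400), ("C", 900_000, 2_500)]
print(flag_outliers(stores))  # ['C']
```

Comparing against the portfolio's own median, rather than an external reference, is what lets multiunit operators build benchmarks directly from their utility bills; a production metric would also normalize for operating hours, climate, and sales volume.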

  5. Preview: Evaluation of the 1973-1974 Bilingual/Bicultural Project. Formative Evaluation Report.

    ERIC Educational Resources Information Center

    Ligon, Glynn; And Others

    The formative report provided the Austin Independent School District personnel with information useful for planning the remaining activities for the 1973-74 Bilingual/Bicultural Project and the activities for the 1974-75 Project. Emphasis was on what had been done to evaluate the 1973-74 Project, the data which was or would be available for the…

  6. 40 CFR 57.604 - Evaluation of projects.

    Code of Federal Regulations, 2012 CFR

    2012-07-01

    ...) PRIMARY NONFERROUS SMELTER ORDERS Research and Development Requirements § 57.604 Evaluation of projects. The research and development proposal shall include a provision for the employment of a qualified independent engineering firm to prepare written reports at least annually which evaluate each...

  7. 40 CFR 57.604 - Evaluation of projects.

    Code of Federal Regulations, 2014 CFR

    2014-07-01

    ...) PRIMARY NONFERROUS SMELTER ORDERS Research and Development Requirements § 57.604 Evaluation of projects. The research and development proposal shall include a provision for the employment of a qualified independent engineering firm to prepare written reports at least annually which evaluate each...

  8. Participatory Evaluation with Youth Leads to Community Action Project

    ERIC Educational Resources Information Center

    Ashton, Carolyn; Arnold, Mary E.; Wells, Elissa E.

    2010-01-01

    4-H has long emphasized the importance of civic engagement and community service for positive youth development. One pathway to this ideal is youth action research and evaluation. This article demonstrates how participatory youth research and evaluation can lead to the successful implementation of community action projects. It describes the…

  9. Corrections Education Evaluation System Project. Site Visit Report.

    ERIC Educational Resources Information Center

    Nelson, Orville; And Others

    Site visits to five correctional institutions in Wisconsin were conducted as part of the development of an evaluation model for the competency-based vocational education (CBVE) project for the Wisconsin Correctional System. The evaluators' perceptions of the CBVE system are presented with recommendations for improvement. Site visits were conducted…

  10. Summative Evaluation of the Manukau Family Literacy Project, 2004

    ERIC Educational Resources Information Center

    Benseman, John Robert; Sutton, Alison Joy

    2005-01-01

    This report covers a summative evaluation of a family literacy project in Auckland, New Zealand. The evaluation covered 70 adults and their children over a two-year period. Outcomes for the program included literacy skill gains for both adults and children, increased levels of self-confidence and self-efficacy, greater parental involvement in…

  11. The ASCD Healthy School Communities Project: Formative Evaluation Results

    ERIC Educational Resources Information Center

    Valois, Robert F.; Lewallen, Theresa C.; Slade, Sean; Tasco, Adriane N.

    2015-01-01

    Purpose: The purpose of this paper is to report the formative evaluation results from the Association for Supervision and Curriculum Development Healthy School Communities (HSC) pilot project. Design/methodology/approach: This study utilized 11 HSC pilot sites in the USA (eight sites) and Canada (three sites). The evaluation question was…

  12. Helical Screw Expander Evaluation Project. Final report

    SciTech Connect

    McKay, R.

    1982-03-01

    A functional 1-MW geothermal electric power plant that featured a helical screw expander was produced and then tested in Utah from 1978 to 1979, with a demonstrated average performance of approximately 45% machine efficiency over a wide range of test conditions in noncondensing operation on two-phase geothermal fluids. The Project also produced a computer-equipped data system, an instrumentation and control van, and a 1000-kW variable load bank, all integrated into a test array designed for operation at a variety of remote test sites. Additional testing was performed in Mexico in 1980 under a cooperative test program using the same test array. Machine efficiency was measured at 62% maximum with the rotors partially coated with scale, compared with approximately 54% maximum in Utah with uncoated rotors, confirming the significant effect of scale deposits within the machine on performance. Data are presented for the Utah testing and for the noncondensing phases of the testing in Mexico. Test time logged was 437 hours during the Utah tests and 1101 hours during the Mexico tests.

  13. A performance benchmark test for geodynamo simulations

    NASA Astrophysics Data System (ADS)

    Matsui, H.; Heien, E. M.

    2013-12-01

    In the last ten years, a number of numerical dynamo models have successfully represented basic characteristics of the geomagnetic field. As new models and numerical methods continue to be developed, it is important to update and extend benchmarks for testing these models. The first dynamo benchmark of Christensen et al. (2001) was applied to models based on spherical harmonic expansion methods. However, only a few groups have reported results of the dynamo benchmark using local methods (Harder and Hansen, 2005; Matsui and Okuda, 2005; Chan et al., 2007) because of the difficulty of treating magnetic boundary conditions with local methods. On the other hand, spherical harmonic expansion methods perform poorly on massively parallel computers because global data communication is required in the spherical harmonic expansions to evaluate nonlinear terms. We perform benchmark tests to assess various numerical methods for the next generation of geodynamo simulations. The purpose of this benchmark test is to assess numerical geodynamo models on a massively parallel computational platform. To compare as many numerical methods as possible, we consider the model with the insulating magnetic boundary of Christensen et al. (2001) and a model with a pseudo-vacuum magnetic boundary, because pseudo-vacuum boundaries are easier to implement with local methods than insulating boundaries. In the present study, we consider two kinds of benchmarks, a so-called accuracy benchmark and a performance benchmark. In the accuracy benchmark, we compare the dynamo models using the modest Ekman and Rayleigh numbers proposed by Christensen et al. (2001). We investigate the spatial resolution required for each dynamo code to obtain less than 1% difference from the suggested solution of the benchmark test using the two magnetic boundary conditions. In the performance benchmark, we investigate computational performance under the same computational environment. We perform these…

  14. Evaluation in Cross-Cultural Contexts: Proposing a Framework for International Education and Training Project Evaluations.

    ERIC Educational Resources Information Center

    bin Yahya, Ismail; And Others

    This paper focuses on the need for increased sensitivity and responsiveness in international education and training project evaluations, particularly those in Third World countries. A conceptual-theoretical framework for designing and developing models appropriate for evaluating education and training projects in non-Western cultures is presented.…

  15. Final report : PATTON Alliance gazetteer evaluation project.

    SciTech Connect

    Bleakly, Denise Rae

    2007-08-01

    In 2005 the National Ground Intelligence Center (NGIC) proposed that the PATTON Alliance provide assistance in evaluating and obtaining the Integrated Gazetteer Database (IGDB), developed by MITRE Inc. for the Space and Naval Warfare Systems Command (SPAWAR) research group under Advanced Research and Development Activity (ARDA) funds and fielded in the text-based search tool GeoLocator, currently in use by NGIC. We met with the developers of GeoLocator and identified their requirements for a better gazetteer. We then validated those requirements by reviewing the technical literature, meeting with other members of the intelligence community (IC), and talking with both the United States Geological Survey (USGS) and the National Geospatial-Intelligence Agency (NGA), the authoritative sources for official geographic name information. We thus identified 12 high-level requirements from users and the broader intelligence community. The IGDB satisfies many of these requirements; we identified gaps and proposed ways of closing them. Three important needs have not been addressed but are critical future needs for the broader intelligence community: standardization of gazetteer data, a web feature service for gazetteer information that is maintained by NGA and USGS but accessible to users, and a common forum that brings together IC stakeholders and federal agency representatives to provide input to these activities over the next several years. Establishing a robust gazetteer web feature service that is available to all IC users may go a long way toward resolving the gazetteer needs within the IC. Without a common forum to provide input and feedback, community adoption may take significantly longer than anticipated, with resulting risks to the war fighter.

  16. Collected notes from the Benchmarks and Metrics Workshop

    NASA Technical Reports Server (NTRS)

    Drummond, Mark E.; Kaelbling, Leslie P.; Rosenschein, Stanley J.

    1991-01-01

    In recent years there has been a proliferation of proposals in the artificial intelligence (AI) literature for integrated agent architectures. Each architecture offers an approach to the general problem of constructing an integrated agent. Unfortunately, the ways in which one architecture might be considered better than another are not always clear. There has been a growing realization that many of the positive and negative aspects of an architecture become apparent only when experimental evaluation is performed and that to progress as a discipline, we must develop rigorous experimental methods. In addition to the intrinsic intellectual interest of experimentation, rigorous performance evaluation of systems is also a crucial practical concern to our research sponsors. DARPA, NASA, and AFOSR (among others) are actively searching for better ways of experimentally evaluating alternative approaches to building intelligent agents. One tool for experimental evaluation involves testing systems on benchmark tasks in order to assess their relative performance. As part of a joint DARPA and NASA funded project, NASA-Ames and Teleos Research are carrying out a research effort to establish a set of benchmark tasks and evaluation metrics by which the performance of agent architectures may be determined. As part of this project, we held a workshop on Benchmarks and Metrics at the NASA Ames Research Center on June 25, 1990. The objective of the workshop was to foster early discussion on this important topic. We did not achieve a consensus, nor did we expect to. Collected here is some of the information that was exchanged at the workshop. Given here is an outline of the workshop, a list of the participants, notes taken on the white-board during open discussions, position papers/notes from some participants, and copies of slides used in the presentations.

  17. Evaluating success of mobile health projects in the developing world.

    PubMed

    Ginige, J Anupama; Maeder, Anthony J; Long, Vanessa

    2014-01-01

    Many mobile health (mHealth) projects, typically deploying pilot or small scale implementations, have been undertaken in developing world settings and reported with a widely varying range of claims being made on their effectiveness and benefits. As a result, there is little evidence for which aspects of such projects lead to successful outcomes. This paper describes a literature review of papers from PubMed undertaken to identify strong contributions to execution and evaluation of mHealth projects in developing world settings, and suggests a template for classifying the main success factors to assist with collating evidence in the future.

  18. Evaluation of the El Dorado micellar-polymer demonstration project

    SciTech Connect

    Vanhorn, L.E.

    1983-01-01

    The El Dorado Micellar-Polymer Demonstration Project has been a cooperative venture between Cities Service Co. and the U.S. Department of Energy. The objective of the project was to determine if it was technically and economically feasible to produce commercial volumes of oil using a micellar-polymer process in the El Dorado field. The project was designed to allow a side-by-side comparison of 2 distinctly different micellar-polymer processes in the same field in order that the associated benefits and problems of each could be determined. These are described and evaluated.

  19. A client/server database system for project evaluation

    SciTech Connect

    Brule, M.R.; Fair, W.B.; Jiang, J.; Sanvido, R.D.

    1994-12-31

    PETS (Project Evaluation Tool Set) is a networked client/server system that provides a full set of decision-support tools for evaluating the business potential of onshore and offshore development projects. This distributed workgroup computing system combines and streamlines preliminary design, routine cost estimation, economic evaluation, and risk analysis for conceptual developments as well as for ongoing projects and operations. A flexible and extendible client/server integration framework links in-house and third-party software applications with a database and an expert-system knowledgebase, and, where appropriate, links the applications among themselves. The capability and richness of inexpensive commercial operating systems and off-the-shelf applications have made building a client/server system like PETS possible in a relatively short time and at low cost. We will discuss the object-oriented design of the PETS system, detail its capabilities, and outline the methods used to integrate applications from other domains.

  20. Diagnostic Algorithm Benchmarking

    NASA Technical Reports Server (NTRS)

    Poll, Scott

    2011-01-01

    A poster for the NASA Aviation Safety Program Annual Technical Meeting. It describes empirical benchmarking on diagnostic algorithms using data from the ADAPT Electrical Power System testbed and a diagnostic software framework.

  1. TWODANT benchmark. Progress report

    SciTech Connect

    Lee, Sung

    1994-01-11

    The TWODANT (Two-Dimensional, Diffusion-Accelerated, Neutral-Particle Transport) code has been benchmarked against six critical experiments (Jezebel plutonium critical assembly), and its k-effective values have been compared with those of the KENO and MCNP codes.
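
A code-to-code comparison of this kind reduces to simple bookkeeping over k-effective values. The sketch below shows the usual difference-in-pcm calculation; the case names and k values are entirely hypothetical placeholders, not the report's results.

```python
# Sketch of a k-effective comparison between transport codes.
# All numbers are hypothetical placeholders, not benchmark results.

def pcm_difference(k_test, k_ref):
    """Relative difference between two k-effective values in pcm (1 pcm = 1e-5)."""
    return (k_test - k_ref) / k_ref * 1e5

cases = {  # case: (k_eff from code under test, k_eff from reference code)
    "case 1 (bare)": (0.99820, 0.99975),
    "case 2 (reflected)": (1.00110, 1.00042),
}
for name, (k_test, k_ref) in cases.items():
    print(f"{name}: {pcm_difference(k_test, k_ref):+.0f} pcm vs reference")
```

Reporting differences in pcm (per cent mille) is conventional in criticality safety because typical code-to-code spreads are of order tens to hundreds of pcm.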

  2. Benchmarking TENDL-2012

    NASA Astrophysics Data System (ADS)

    van der Marck, S. C.; Koning, A. J.; Rochman, D. A.

    2014-04-01

    The new release of the TENDL nuclear data library, TENDL-2012, was tested by performing many benchmark calculations. Close to 2000 criticality safety benchmark cases were used, as well as many shielding benchmark cases. All the runs could be compared with similar runs based on the nuclear data libraries ENDF/B-VII.1, JENDL-4.0, and JEFF-3.1.1, respectively. Many of the criticality safety results obtained with TENDL-2012 are close to those for the other libraries; in particular, the results for the thermal-spectrum cases with LEU fuel are good. Nevertheless, there is a fair number of cases for which the TENDL-2012 results are not as good as those of the other libraries; notably, a number of fast-spectrum cases with reflectors are not well described. The results for the shielding benchmarks are mostly similar to those for the other libraries. Some isolated cases with differences are identified.

  3. Childhood Obesity Research Demonstration Project: Cross-Site Evaluation Methods

    PubMed Central

    Lee, Rebecca E.; Mehta, Paras; Thompson, Debbe; Bhargava, Alok; Carlson, Coleen; Kao, Dennis; Layne, Charles S.; Ledoux, Tracey; O'Connor, Teresia; Rifai, Hanadi; Gulley, Lauren; Hallett, Allen M.; Kudia, Ousswa; Joseph, Sitara; Modelska, Maria; Ortega, Dana; Parker, Nathan; Stevens, Andria

    2015-01-01

    Introduction: The Childhood Obesity Research Demonstration (CORD) project links public health and primary care interventions in three projects described in detail in accompanying articles in this issue of Childhood Obesity. This article describes a comprehensive evaluation plan to determine the extent to which the CORD model is associated with changes in behavior, body weight, BMI, quality of life, and healthcare satisfaction in children 2–12 years of age. Design/Methods: The CORD Evaluation Center (EC-CORD) will analyze the pooled data from three independent demonstration projects that each integrate public health and primary care childhood obesity interventions. An extensive set of common measures at the family, facility, and community levels were defined by consensus among the CORD projects and EC-CORD. Process evaluation will assess reach, dose delivered, and fidelity of intervention components. Impact evaluation will use a mixed linear models approach to account for heterogeneity among project-site populations and interventions. Sustainability evaluation will assess the potential for replicability, continuation of benefits beyond the funding period, institutionalization of the intervention activities, and community capacity to support ongoing program delivery. Finally, cost analyses will assess how much benefit can potentially be gained per dollar invested in programs based on the CORD model. Conclusions: The keys to combining and analyzing data across multiple projects include the CORD model framework and common measures for the behavioral and health outcomes along with important covariates at the individual, setting, and community levels. The overall objective of the comprehensive evaluation will develop evidence-based recommendations for replicating and disseminating community-wide, integrated public health and primary care programs based on the CORD model. PMID:25679060

  4. Benchmarking in Foodservice Operations.

    DTIC Science & Technology

    2007-11-02

    Benchmarking studies lasted from nine to twelve months and could extend beyond that time for numerous reasons. Benchmarking was not simply data comparison, a fad, a means for reducing resources, a quick-fix program, or industrial tourism; it was a complete process.

  5. Intermediate evaluation of USAID/Cairo energy policy planning project

    SciTech Connect

    Wilbanks, T.J.; Wright, S.B.; Barron, W.F.; Kamel, A.M.; Santiago, H.T.

    1992-09-01

    Three years ago, a team from the Oak Ridge National Laboratory and the Oak Ridge Associated Universities, supplemented by an expert from the US Department of Energy and a senior Egyptian energy professional, carried out what was termed an "intermediate evaluation" of a major energy policy project in Egypt. Supported by USAID/Cairo, the project had concentrated on developing and strengthening an Organization for Energy Planning (OEP) within the Government of Egypt, and it was actually scheduled to end less than a year after this evaluation. The evaluation was submitted to USAID/Cairo and circulated elsewhere in the US Agency for International Development and the Government of Egypt as an internal report. Over the next several years, the USAID energy planning project ended and the functions performed by OEP were merged with planning capabilities in the electric power sector. Now that the major issues addressed by the evaluation report have been resolved, we are making it available to a broader audience as a contribution to the general literature on development project evaluation and institution-building.

  6. Intermediate evaluation of USAID/Cairo energy policy planning project

    SciTech Connect

    Wilbanks, T.J.; Wright, S.B.; Barron, W.F.; Kamel, A.M.; Santiago, H.T.

    1992-01-01

    Three years ago, a team from the Oak Ridge National Laboratory and the Oak Ridge Associated Universities, supplemented by an expert from the US Department of Energy and a senior Egyptian energy professional, carried out what was termed an "intermediate evaluation" of a major energy policy project in Egypt. Supported by USAID/Cairo, the project had concentrated on developing and strengthening an Organization for Energy Planning (OEP) within the Government of Egypt, and it was actually scheduled to end less than a year after this evaluation. The evaluation was submitted to USAID/Cairo and circulated elsewhere in the US Agency for International Development and the Government of Egypt as an internal report. Over the next several years, the USAID energy planning project ended and the functions performed by OEP were merged with planning capabilities in the electric power sector. Now that the major issues addressed by the evaluation report have been resolved, we are making it available to a broader audience as a contribution to the general literature on development project evaluation and institution-building.

  7. Developing integrated benchmarks for DOE performance measurement

    SciTech Connect

    Barancik, J.I.; Kramer, C.F.; Thode, Jr. H.C.

    1992-09-30

    The objectives of this task were to describe and evaluate selected existing sources of information on occupational safety and health, with emphasis on hazard and exposure assessment, abatement, training, reporting, and control, identifying exposure and outcome measures in preparation for developing DOE performance benchmarks. Existing resources and methodologies were assessed for their potential use as practical performance benchmarks. Strengths and limitations of current data resources were identified. Guidelines were outlined for developing new or improved performance factors, which then could become the basis for selecting performance benchmarks. Data bases for non-DOE comparison populations were identified so that DOE performance could be assessed relative to non-DOE occupational and industrial groups. Systems approaches were described which can be used to link hazards and exposure, event occurrence, and adverse outcome factors, as needed to generate valid, reliable, and predictive performance benchmarks. Data bases were identified which contain information relevant to one or more performance assessment categories. A list of 72 potential performance benchmarks was prepared to illustrate the kinds of information that can be produced through a benchmark development program. Current information resources which may be used to develop potential performance benchmarks are limited. There is a need to develop an occupational safety and health information and data system in DOE which is capable of incorporating demonstrated and documented performance benchmarks prior to, or concurrent with, the development of hardware and software. A key to the success of this systems approach is rigorous development and demonstration of performance benchmark equivalents to users of such data before system hardware and software commitments are institutionalized.

  8. Protein-Protein Docking Benchmark Version 3.0

    PubMed Central

    Hwang, Howook; Pierce, Brian; Mintseris, Julian; Janin, Joël; Weng, Zhiping

    2009-01-01

    We present version 3.0 of our publicly available protein-protein docking benchmark. This update includes 40 new test cases, representing a 48% increase from Benchmark 2.0. For all of the new cases, the crystal structures of both binding partners are available. As with Benchmark 2.0, SCOP (Structural Classification of Proteins) was used to remove redundant test cases. The 124 unbound-unbound test cases in Benchmark 3.0 are classified into 88 rigid-body cases, 19 medium difficulty cases, and 17 difficult cases, based on the degree of conformational change at the interface upon complex formation. In addition to providing the community with more test cases for evaluating docking methods, the expansion of Benchmark 3.0 will facilitate the development of new algorithms that require a large number of training examples. Benchmark 3.0 is available to the public at http://zlab.bu.edu/benchmark. PMID:18491384
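
The difficulty classes are assigned by thresholding the conformational change at the interface, typically measured as the interface RMSD between unbound and bound structures. A minimal sketch, assuming the commonly cited 1.5 Å and 2.2 Å cutoffs (the exact thresholds should be checked against the benchmark paper), with illustrative RMSD values:

```python
# Sketch of difficulty classification by interface RMSD (I-RMSD) between
# unbound and bound structures. The 1.5 / 2.2 angstrom cutoffs are an
# assumption based on commonly cited values; verify against the paper.

def classify(irmsd_angstroms):
    """Assign a docking difficulty class from interface RMSD."""
    if irmsd_angstroms <= 1.5:
        return "rigid-body"
    elif irmsd_angstroms <= 2.2:
        return "medium"
    return "difficult"

# PDB codes and I-RMSD values below are illustrative, not from the benchmark.
examples = {"caseA": 0.5, "caseB": 1.9, "caseC": 3.4}
print({pdb: classify(r) for pdb, r in examples.items()})
```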

  9. BENCHMARK DOSE TECHNICAL GUIDANCE DOCUMENT ...

    EPA Pesticide Factsheets

    The U.S. EPA conducts risk assessments for an array of health effects that may result from exposure to environmental agents, and that require an analysis of the relationship between exposure and health-related outcomes. The dose-response assessment is essentially a two-step process, the first being the definition of a point of departure (POD), and the second extrapolation from the POD to low environmentally-relevant exposure levels. The benchmark dose (BMD) approach provides a more quantitative alternative to the first step in the dose-response assessment than the current NOAEL/LOAEL process for noncancer health effects, and is similar to that for determining the POD proposed for cancer endpoints. As the Agency moves toward harmonization of approaches for human health risk assessment, the dichotomy between cancer and noncancer health effects is being replaced by consideration of mode of action and whether the effects of concern are likely to be linear or nonlinear at low doses. Thus, the purpose of this project is to provide guidance for the Agency and the outside community on the application of the BMD approach in determining the POD for all types of health effects data, whether a linear or nonlinear low dose extrapolation is used. A guidance document is being developed under the auspices of EPA's Risk Assessment Forum.
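
For a quantal dose-response model, the BMD is the dose at which the extra risk over background reaches a chosen benchmark response (BMR), commonly 10%. A minimal sketch for the quantal-linear model P(d) = g + (1 - g)(1 - exp(-b·d)), where the extra risk (P(d) - P(0)) / (1 - P(0)) simplifies to 1 - exp(-b·d); the fitted slope used here is a hypothetical value, not from any actual assessment.

```python
import math

# Minimal sketch of a benchmark dose (BMD) calculation for a
# quantal-linear model: extra risk = 1 - exp(-b*d), so the BMD is
# the dose where extra risk equals the BMR. The slope b below is a
# hypothetical fitted value, not from any real assessment.

def bmd_quantal_linear(b, bmr=0.10):
    """Dose at which extra risk over background equals the BMR."""
    return -math.log(1.0 - bmr) / b

b = 0.02                         # hypothetical fitted slope, per mg/kg-day
bmd10 = bmd_quantal_linear(b)    # dose at 10% extra risk
print(round(bmd10, 2))           # prints 5.27 (mg/kg-day)
```

In practice the POD is usually the statistical lower confidence limit on the BMD (the BMDL), which requires the fit's uncertainty, not just the point estimate shown here.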

  10. Collaborative Partnerships and School Change: Evaluating Project SOBEIT

    ERIC Educational Resources Information Center

    Lacey, Candace H.

    2006-01-01

    This presentation will report on the findings of the evaluation of Project SOBEIT, a multi-school initiative focused on building partnerships between schools, law enforcement, and community mental health agencies. Guided by a process, context, outcomes, and sustainability framework and grounded in the understanding of the impact of change theory on…

  11. Expedited Permanency Planning: Evaluation of the Kentucky Adoptions Opportunities Project.

    ERIC Educational Resources Information Center

    Martin, Mavin H.; Barbee, Anita P.; Antle, Becky F.; Sar, Bibhuti

    2002-01-01

    Presents evaluation findings of a 3-year Kentucky Adoptions Opportunities Project. Notes that a majority of children had one or both parents coping with multiple risk factors including mental illness, substance abuse, mental retardation, or family violence. Discusses major barriers to permanency, as well as policy and practice implications in the…

  12. Orthographic Projection. Courseware Evaluation for Vocational and Technical Education.

    ERIC Educational Resources Information Center

    Turner, Gordon; And Others

    This courseware evaluation rates the Orthographic Projection program developed by Hobar Publications. (The program--not contained in this document--uses computer graphics to present abstract visual concepts such as points, lines, and planes.) Part A describes the program in terms of subject area and hardware requirements (Apple II), indicates its…

  13. Project Achieve Evaluation Report: Year One, 2001-2002.

    ERIC Educational Resources Information Center

    Speas, Carol

    This report is an evaluation of the pilot year of Project Achieve, a major local instructional initiative at six elementary schools and two middle schools in the Wake County Public School System (WCPSS), North Carolina, that was designed to help reach the WCPSS goal of 95% of students at or above grade level. Participating schools had a higher…

  14. Evaluation of the Universal Design for Learning Projects

    ERIC Educational Resources Information Center

    Cooper-Martin, Elizabeth; Wolanin, Natalie

    2014-01-01

    The Office of Shared Accountability evaluated the "Universal Design for Learning" (UDL) projects during spring 2013. UDL is an instructional framework that seeks to give all students equal opportunities to learn, by providing multiple means of representation, of action and expression, and of engagement. To inform future implementation…

  15. Service Learning in Medical Education: Project Description and Evaluation

    ERIC Educational Resources Information Center

    Borges, Nicole J.; Hartung, Paul J.

    2007-01-01

    Although medical education has long recognized the importance of community service, most medical schools have not formally nor fully incorporated service learning into their curricula. To address this problem, we describe the initial design, development, implementation, and evaluation of a service-learning project within a first-year medical…

  16. Developing and Evaluating a Cardiovascular Risk Reduction Project.

    ERIC Educational Resources Information Center

    Brownson, Ross C.; Mayer, Jeffrey P.; Dusseault, Patricia; Dabney, Sue; Wright, Kathleen; Jackson-Thompson, Jeannette; Malone, Bernard; Goodman, Robert

    1997-01-01

    Describes the development and baseline evaluation data from the Ozark Heart Health Project, a community-based cardiovascular disease risk reduction program in rural Missouri that targeted smoking, physical inactivity, and poor diet. Several Ozark counties participated in either intervention or control groups, and researchers conducted surveillance…

  17. In-depth Evaluation of the Associated Schools Project.

    ERIC Educational Resources Information Center

    Churchill, Stacy; Omari, Issa

    1980-01-01

    Describes methods and conclusions of an in-depth evaluation of the UNESCO Associated Schools Project for International Understanding. The report includes suggestions for improving course content, teaching methods, and instructional materials. Improvements in program quality, international coordination, information dissemination, and expansion into…

  18. Niagara Falls HEW 309 Project 1974-1975: Evaluation Report.

    ERIC Educational Resources Information Center

    Skeen, Elois M.

    The document reports an outside evaluation of a Niagara Falls Adult Basic Education Program special project entitled "Identification of Preferred Cognitive Styles and Matching Adult Reading Program Alternatives for the 0-4 Grade Levels." It was concerned with (1) research, training in cognitive style mapping, and development of a survey…

  19. Evaluation of Project TREC: Teaching Respect for Every Culture.

    ERIC Educational Resources Information Center

    Mitchell, Stephanie

    The purpose of Teaching Respect for Every Culture (TREC) was to ensure that racial/ethnic, gender, disability, and other circumstances did not bar student access to alcohol/drug education, prevention, and intervention services. This report describes the implementation and evaluation of the TREC Project. Five objectives of TREC were to: (1)…

  20. ESEA Title I Projects Evaluation Report 1967, Volume I.

    ERIC Educational Resources Information Center

    Pittsburgh Public Schools, PA.

    Reports of Pittsburgh's 1967 ESEA Title I projects are presented in two volumes. The 17 reports in Volume I, which adhere to the procedures established in an evaluation model, are of programs in communication skills, camping, vocational education, music, standard English, social development, revised class organization, remedial reading by means of…

  1. Education North Evaluation Project. The Second Annual Report.

    ERIC Educational Resources Information Center

    Ingram, E. J.; McIntosh, R. G.

    The report and evaluation of Education North (a project designed to encourage parents, community members, and teachers in small, isolated, primarily Native and Metis communities in northern Alberta to work together to meet community educational needs) is comprised of three parts. Part One presents an update of Education North activities and…

  2. Parent Services Project Evaluation: Final Report of Findings.

    ERIC Educational Resources Information Center

    Stein, Alan R.; Haggard, Molly

    The Parent Services Project (PSP) is a family resource program which provides supportive activities for highly stressed and socially isolated parents based on the "social support as a stress-buffer" model of primary prevention. A PSP evaluation followed parents as they went through the PSP program and compared them with a matched control…

  3. Evaluation of Fatih Project in the Frame of Digital Divide

    ERIC Educational Resources Information Center

    Karabacak, Kerim

    2016-01-01

    The aim of this research, conducted using the general survey model, is to evaluate the "FATIH Project" in the frame of the digital divide by determining the effects of tablets distributed to students in K-12 schools on the digital divide. The sample was drawn from 9th grade students in Sakarya city in the 2013-2014 academic session.…

  4. Benchmarking of Graphite Reflected Critical Assemblies of UO2

    SciTech Connect

    Margaret A. Marshall; John D. Bess

    2011-11-01

A series of experiments were carried out in 1963 at the Oak Ridge National Laboratory Critical Experiments Facility (ORCEF) for use in space reactor research programs. A core containing 93.2% enriched UO2 fuel rods was used in these experiments. The first part of the experimental series consisted of 253 tightly-packed fuel rods (1.27 cm triangular pitch) with graphite reflectors [1], the second part used 253 graphite-reflected fuel rods organized in a 1.506 cm triangular pitch [2], and the final part of the experimental series consisted of 253 beryllium-reflected fuel rods with a 1.506 cm triangular pitch [3]. Fission rate distribution and cadmium ratio measurements were taken for all three parts of the experimental series. Reactivity coefficient measurements were taken for various materials placed in the beryllium-reflected core. The first part of this experimental series has been evaluated for inclusion in the International Reactor Physics Experiment Evaluation Project (IRPhEP) [4] and the International Criticality Safety Benchmark Evaluation Project (ICSBEP) Handbooks [5], and is discussed below. These experiments are of interest as benchmarks because they support the validation of compact reactor designs with characteristics similar to the design parameters of space nuclear fission surface power systems. [6]

  5. PNNL Information Technology Benchmarking

    SciTech Connect

    DD Hostetler

    1999-09-08

Benchmarking is a methodology for searching out industry best practices that lead to superior performance. It is exchanging information, not just with any organization, but with organizations known to be the best within PNNL, in industry, or in dissimilar industries with equivalent functions. It is used as a continuous improvement tool for business and technical processes, products, and services. Information technology--comprising all computer and electronic communication products and services--underpins the development and/or delivery of many PNNL products and services. This document describes the Pacific Northwest National Laboratory's (PNNL's) approach to information technology (IT) benchmarking. The purpose is to engage other organizations in the collaborative process of benchmarking in order to improve the value of IT services provided to customers. This document's intended audience consists of other US Department of Energy (DOE) national laboratories and their IT staff. Although the individual participants must define the scope of collaborative benchmarking, an outline of IT service areas for possible benchmarking is described.

  6. Encouraging Strong Family Relationships. Policy Matters: Setting and Measuring Benchmarks for State Policies. A Discussion Paper for the "Policy Matters" Project

    ERIC Educational Resources Information Center

    Anyabwile, Thabiti

    2004-01-01

    "Policy Matters" is an initiative of the Center for the Study of Social Policy. The "Policy Matters" project is designed to develop and make available coherent, comprehensive information regarding the strength and adequacy of state policies affecting children, families, and communities. The project seeks to establish consensus among policy experts…

  7. Analysis and Development of a Project Evaluation Process.

    SciTech Connect

    Coutant, Charles C.; Cada Glenn F.

    1985-01-01

The Bonneville Power Administration has responsibility, assigned by the Pacific Northwest Electric Power Planning and Conservation Act of 1980 (Public Law 96-501; 16 USC 839), for implementing the Columbia River Basin Fish and Wildlife Program of the Northwest Power Planning Council. One aspect of this responsibility is evaluation of project proposals and ongoing and completed projects. This report recommends formalized procedures for conducting this work in an accurate, professional, and widely respected manner. Recommendations and justifications are based largely on interviews with federal and state agencies and Indian tribes in the Northwest and nationally. Organizations were selected that have evaluation systems of their own, interact with the Fish and Wildlife Program, or have similar objectives or obligations. Perspectives on aspects to be considered were obtained from the social science of evaluation planning. Examples of procedures and quantitative criteria are proposed. 1 figure, 2 tables.

  8. An evaluation approach for research project pilot technological applications

    NASA Astrophysics Data System (ADS)

    Marcelino-Jesus, Elsa; Sarraipa, Joao; Jardim-Goncalves, Ricardo

    2013-10-01

In an increasingly competitive world of constant development and growth, it is important that companies have economic tools, such as frameworks, to help them evaluate and validate technology development so that it better fits each company's particular needs. The paper presents an evaluation approach for research project pilot applications to stimulate their implementation and deployment, increasing their adequacy and acceptance among stakeholders and consequently providing new business profits and opportunities. The authors used the DECIDE evaluation framework as a major guide to this approach, which was tested in the iSURF project to support the implementation of an interoperability service utility for collaborative supply chain planning across multiple domains supported by RFID devices.

  9. Six microcomputer programs for population projection: an evaluation.

    PubMed

    Mcgirr, N J; Rutstein, S O

    1987-11-01

Microcomputer-based population projection software packages were evaluated to determine if all the programs would yield similar results if tested on the same set of data. These included PROJ5, from the Microcomputer Program for Demographic Analysis, converted for microcomputers by Westinghouse; FIVFIV/SINSIN, from The Population Council; PROJPC-II, developed by Kenneth Hill for the World Bank; and CELADE, developed by the Centro Latinamericano de Demographia (CELADE), a Spanish microcomputer version of the population projection program of the United Nations. These were all modified from mainframe programs. DEMPROJ, developed by the RAPID2 project at The Futures Group, and ESCAP/POP, developed by the Population Division of the U.N. Economic and Social Commission for Asia and the Pacific (ESCAP), were both specifically developed for microcomputers. A standard set of criteria covering hardware and software requirements, methodology, projection results, and summary demographic indicators in the output are used in the evaluation. Table 1 gives hardware and software requirements. All the programs can be used on IBM or compatible micros. Table 2 gives data input requirements, which vary widely. All 6 programs use a cohort-component projection, although there is a wide variety in application of methodology. Programs and data sets produced similar results, and the choice of a system should be based on intended use. Appendices list programs and addresses for obtaining copies as well as other kinds of software available for demographic analysis and their sources.
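The cohort-component method shared by all six packages can be sketched in a few lines. This is an illustrative, single-sex toy with 5-year age groups and invented rates; it does not reproduce the algorithm of any reviewed program:

```python
import numpy as np

# Illustrative single-sex cohort-component step with 5-year age groups.
# All rates below are invented for demonstration.
def project_step(pop, survival, fertility, s_birth=0.98, srb=1.05):
    """pop[i]: women in age group i; survival[i]: probability of surviving
    from group i into group i+1 over the step; fertility[i]: annual births
    per woman in group i; srb: sex ratio at birth (male/female)."""
    births = 5.0 * float(fertility @ pop)        # 5-year exposure approximation
    new_pop = np.empty_like(pop)
    new_pop[1:] = pop[:-1] * survival[:-1]       # age surviving cohorts forward
    new_pop[0] = births / (1.0 + srb) * s_birth  # female births surviving to 0-4
    return new_pop, births

pop0 = np.array([100.0, 100.0, 100.0])
new_pop, births = project_step(pop0,
                               survival=np.array([0.99, 0.98, 0.90]),
                               fertility=np.array([0.0, 0.02, 0.0]))
# births = 5 * 0.02 * 100 = 10; the two older groups are survivors of the
# two younger groups in the previous step
```

Packages differ mainly in how they estimate the survival and fertility inputs and in how they handle migration, which this sketch omits.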

  10. Benchmarking NNWSI flow and transport codes: COVE 1 results

    SciTech Connect

    Hayden, N.K.

    1985-06-01

    The code verification (COVE) activity of the Nevada Nuclear Waste Storage Investigations (NNWSI) Project is the first step in certification of flow and transport codes used for NNWSI performance assessments of a geologic repository for disposing of high-level radioactive wastes. The goals of the COVE activity are (1) to demonstrate and compare the numerical accuracy and sensitivity of certain codes, (2) to identify and resolve problems in running typical NNWSI performance assessment calculations, and (3) to evaluate computer requirements for running the codes. This report describes the work done for COVE 1, the first step in benchmarking some of the codes. Isothermal calculations for the COVE 1 benchmarking have been completed using the hydrologic flow codes SAGUARO, TRUST, and GWVIP; the radionuclide transport codes FEMTRAN and TRUMP; and the coupled flow and transport code TRACR3D. This report presents the results of three cases of the benchmarking problem solved for COVE 1, a comparison of the results, questions raised regarding sensitivities to modeling techniques, and conclusions drawn regarding the status and numerical sensitivities of the codes. 30 refs.

  11. Translational benchmark risk analysis

    PubMed Central

    Piegorsch, Walter W.

    2010-01-01

    Translational development – in the sense of translating a mature methodology from one area of application to another, evolving area – is discussed for the use of benchmark doses in quantitative risk assessment. Illustrations are presented with traditional applications of the benchmark paradigm in biology and toxicology, and also with risk endpoints that differ from traditional toxicological archetypes. It is seen that the benchmark approach can apply to a diverse spectrum of risk management settings. This suggests a promising future for this important risk-analytic tool. Extensions of the method to a wider variety of applications represent a significant opportunity for enhancing environmental, biomedical, industrial, and socio-economic risk assessments. PMID:20953283
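To make the benchmark paradigm concrete: under a simple one-hit dose-response model (chosen here purely for illustration; real assessments fit richer models to data, and the slope value below is invented), the benchmark dose is the dose at which extra risk over background reaches a chosen benchmark response:

```python
import math

# One-hit model sketch: extra risk R(d) = 1 - exp(-beta * d).
# The benchmark dose (BMD) inverts this at a chosen benchmark response (BMR).
def extra_risk(d, beta):
    return 1.0 - math.exp(-beta * d)

def benchmark_dose(beta, bmr=0.10):
    return -math.log(1.0 - bmr) / beta

bmd = benchmark_dose(beta=0.05, bmr=0.10)   # beta is a hypothetical fitted slope
# consistency check: extra risk at the BMD recovers the BMR
assert abs(extra_risk(bmd, 0.05) - 0.10) < 1e-12
```

The translational point of the paper is that nothing in this recipe is specific to toxicology: any monotone risk-versus-exposure relationship can be inverted the same way.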

  12. Decay Data Evaluation Project: Evaluation of (52)Fe nuclear decay data.

    PubMed

    Luca, Aurelian

    2016-03-01

    Within the Decay Data Evaluation Project (DDEP) and the IAEA Coordinated Research Project no. F41029, the evaluation of the nuclear decay data of (52)Fe, a radionuclide of interest in nuclear medicine, was performed. The main nuclear decay data evaluated are: the half-life, decay energy, energies and probabilities of the electron capture and β(+) transitions, internal conversion coefficients and gamma-ray energies and emission intensities. This new evaluation, made using the DDEP methodology and tools, was included in the DDEP database NUCLEIDE.

  13. Area recommendation report for the crystalline repository project: An evaluation. [Crystalline Repository Project

    SciTech Connect

    Beck, J E; Lowe, H; Yurkovich, S P

    1986-03-28

    An evaluation is given of DOE's recommendation of the Elk River complex in North Carolina for siting the second repository. Twelve recommendations are made including a strong suggestion that the Cherokee Tribe appeal both through political and legal avenues for inclusion as an affected area primarily due to projected impacts upon economy and public health as a consequence of the potential for reduced tourism.

  14. What and How Are We Evaluating? Meta-Evaluation of Climate Education Projects Funded by NASA

    NASA Astrophysics Data System (ADS)

    Martin, A. M.; Chambers, L. H.; Pippin, M. R.

    2014-07-01

    NASA Innovations in Climate Education (NICE) at Langley Research Center has funded 71 climate education initiatives over four years, each evaluated separately by external evaluators. NICE has undertaken a systematic meta-evaluation, seeking to understand the range of evaluations, approaches, and methods represented in this portfolio. When NASA asks for evaluation of funded projects, what happens? Which questions are asked and answered, using which tools? To what extent do the evaluations meet the needs of projects and program officers? How do they contribute to best practices in (climate) science education? These questions are important to ask about general STEM education work; the NICE portfolio provides a broad test case for thinking strategically, critically, and progressively about evaluation in our community. Our findings can inform the NASA, ASP, and STEM EPO communities and prompt us to consider a broad range of informative evaluation options.

  15. Planning and Evaluating Telecommunications Demonstration Projects and Assessing the Costs of Telecommunications Demonstration Projects. Final Report #146-03.

    ERIC Educational Resources Information Center

    Clippinger, John H.; Fain, Sanford B.

    This two-report volume was prepared to describe approaches for evaluating individual Office of Telecommunications Policy (OTP) demonstration projects in the future and to aid demonstration project directors in project planning and development. The first report focuses on the role of planning and evaluation activities, stressing their importance in…

  16. Benchmarking and testing the "Sea Level Equation

    NASA Astrophysics Data System (ADS)

    Spada, G.; Barletta, V. R.; Klemann, V.; van der Wal, W.; James, T. S.; Simon, K.; Riva, R. E. M.; Martinec, Z.; Gasperini, P.; Lund, B.; Wolf, D.; Vermeersen, L. L. A.; King, M. A.

    2012-04-01

The study of the process of Glacial Isostatic Adjustment (GIA) and of the consequent sea level variations is gaining an increasingly important role within the geophysical community. Understanding the response of the Earth to the waxing and waning ice sheets is crucial in various contexts, ranging from the interpretation of modern satellite geodetic measurements to the projections of future sea level trends in response to climate change. All the processes accompanying GIA can be described solving the so-called Sea Level Equation (SLE), an integral equation that accounts for the interactions between the ice sheets, the solid Earth, and the oceans. Modern approaches to the SLE are based on various techniques that range from purely analytical formulations to fully numerical methods. Despite various teams independently investigating GIA, we do not have a suitably large set of agreed numerical results through which the methods may be validated. Following the example of the mantle convection community and our recent successful Benchmark for Post Glacial Rebound codes (Spada et al., 2011, doi: 10.1111/j.1365-246X.2011.04952.x), here we present the results of a benchmark study of independently developed codes designed to solve the SLE. This study has taken place within a collaboration facilitated through the European Cooperation in Science and Technology (COST) Action ES0701. The tests involve predictions of past and current sea level variations, and 3D deformations of the Earth surface. In spite of the significant differences in the numerical methods employed, the test computations performed so far show a satisfactory agreement between the results provided by the participants. The differences found, which can often be attributed to the different numerical algorithms employed within the community, help to constrain the intrinsic errors in model predictions. These are of fundamental importance for a correct interpretation of the geodetic variations observed today, and
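For context, the SLE in its classical Farrell and Clark form (reproduced here from the general GIA literature, not from this record; symbols and sign conventions vary between codes and should be checked against each implementation) can be written as:

```latex
S \;=\; \frac{\rho_i}{\gamma}\, G_s \otimes_i I
   \;+\; \frac{\rho_w}{\gamma}\, G_s \otimes_o S
   \;+\; S^E
   \;-\; \frac{\rho_i}{\gamma}\, \overline{G_s \otimes_i I}
   \;-\; \frac{\rho_w}{\gamma}\, \overline{G_s \otimes_o S},
\qquad
S^E \;=\; -\,\frac{m_i}{\rho_w A_o}
```

where $S$ is the sea-level change, $I$ the ice thickness variation, $G_s$ the sea-level Green's function, $\gamma$ the reference surface gravity, $\otimes_i$ and $\otimes_o$ spatio-temporal convolutions over the ice sheets and the oceans, overbars averages over the ocean surface of area $A_o$, and $m_i$ the ice mass change. Since $S$ appears on both sides, the equation must be solved iteratively, which is precisely what the benchmarked codes do by different numerical routes.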

  17. Benchmarking infrastructure for mutation text mining

    PubMed Central

    2014-01-01

    Background Experimental research on the automatic extraction of information about mutations from texts is greatly hindered by the lack of consensus evaluation infrastructure for the testing and benchmarking of mutation text mining systems. Results We propose a community-oriented annotation and benchmarking infrastructure to support development, testing, benchmarking, and comparison of mutation text mining systems. The design is based on semantic standards, where RDF is used to represent annotations, an OWL ontology provides an extensible schema for the data and SPARQL is used to compute various performance metrics, so that in many cases no programming is needed to analyze results from a text mining system. While large benchmark corpora for biological entity and relation extraction are focused mostly on genes, proteins, diseases, and species, our benchmarking infrastructure fills the gap for mutation information. The core infrastructure comprises (1) an ontology for modelling annotations, (2) SPARQL queries for computing performance metrics, and (3) a sizeable collection of manually curated documents, that can support mutation grounding and mutation impact extraction experiments. Conclusion We have developed the principal infrastructure for the benchmarking of mutation text mining tasks. The use of RDF and OWL as the representation for corpora ensures extensibility. The infrastructure is suitable for out-of-the-box use in several important scenarios and is ready, in its current state, for initial community adoption. PMID:24568600
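The evaluation the infrastructure delegates to SPARQL can be illustrated with a toy example. The query below is a schematic of the idea only (the project's actual ontology, namespaces, and query library differ), and the set arithmetic that follows shows what such a query computes:

```python
# Toy evaluation of a mutation text mining run: gold vs. system annotations.
# In the benchmarking infrastructure, annotations are RDF and the true
# positives come from a SPARQL query of roughly this shape (illustrative only):
SPARQL_TRUE_POSITIVES = """
SELECT (COUNT(?m) AS ?tp) WHERE {
  ?m ex:annotatedBy ex:gold .
  ?m ex:annotatedBy ex:system .
}
"""

def precision_recall_f1(gold, system):
    tp = len(gold & system)                       # mentions found by both
    p = tp / len(system) if system else 0.0
    r = tp / len(gold) if gold else 0.0
    f = 2 * p * r / (p + r) if tp else 0.0
    return p, r, f

# hypothetical mutation mentions grounded from one document
p, r, f = precision_recall_f1({"V600E", "T790M"}, {"V600E", "L858R"})
# one shared mention out of two on each side
```

Expressing the counts as SPARQL aggregates over the shared OWL schema is what lets different mining systems be compared without writing bespoke evaluation code.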

  18. A Systems Approach to the Development of an Evaluation System for ESEA Title III Projects.

    ERIC Educational Resources Information Center

    Yost, Marlen; Monnin, Frank J.

    A major activity of any ESEA Title III project is evaluation. This paper suggests evaluation methods especially appropriate to such projects by applying a systems approach to the evaluation design. Evaluation as a system is divided into three subsystems: (1) baseline evaluation, which describes conditions as they exist before project treatment;…

  19. Maximizing the Impact of the NASA Innovations in Climate Education (NICE) Project: Building a Community of Project Evaluators, Collaborating Across Agencies & Evaluating a 71-Project Portfolio

    NASA Astrophysics Data System (ADS)

    Martin, A. M.; Chambers, L. H.; Pippin, M. R.; Spruill, K.

    2012-12-01

The NASA Innovations in Climate Education (NICE) project at Langley Research Center in Hampton, VA, has funded 71 climate education initiatives since 2008. An evaluator was added to the team in mid-2011 to undertake an evaluation of the portfolio. The funded initiatives span across the nation and contribute to the development of a climate-literate public and the preparation of a climate-related STEM workforce through research experiences, professional development opportunities, development of data access and modeling tools, and educational opportunities in both K-12 and higher education. The portfolio of projects also represents a wide range of evaluation questions, approaches, and methodologies. The evaluation of the NICE portfolio has encountered context-specific challenges, including the breadth of the portfolio, the need to build up capacity for electronic project monitoring, and government-wide initiatives to align evaluations across Federal agencies. Additionally, we have contended with the difficulties of maintaining compliance with the Paperwork Reduction Act (PRA), which constrains the ability of NICE to gather data and approach interesting evaluative questions. We will discuss these challenges and our approaches to overcoming them. First, we have committed to fostering communication and partnerships among our awardees and evaluators, facilitating the sharing of expertise, resources, lessons learned and practices across the individual project evaluations. Additionally, NICE has worked in collaboration with NOAA's Environmental Literacy Grants (ELG) and NSF's Climate Change Education Partnerships (CCEP) programs to foster synergy, leverage resources, and facilitate communication. 
NICE projects, and their evaluators, have had the opportunity to work with and benefit from colleagues on projects funded by other agencies, and to orient their work within the context of the broader tri-agency goals

  20. Small Commercial Program DOE Project: Impact evaluation. Final report

    SciTech Connect

    Bathgate, R.; Faust, S.

    1992-08-12

In 1991, Washington Electric Cooperative (WEC) implemented a Department of Energy grant to conduct a small commercial energy conservation project. The small commercial "Mom and Pop" grocery stores within WEC's service territory were selected as the target market for the project. Energy & Solid Waste Consultant's (E&SWC) Impact Evaluation is documented here. The evaluation was based on data gathered from a variety of sources, including load profile metering, kWh submeters, elapsed time indicators, and billing histories. Five stores were selected to receive measures under this program: Waits River General Store, Joe's Pond Store, Hastings Store, Walden General Store, and Adamant Cooperative. Specific measures installed in each store and a description of each are included.

  1. Mask Waves Benchmark

    DTIC Science & Technology

    2007-10-01

[The record excerpt consists of list-of-figures fragments from the report: measured frequency vs. set frequency for all data; Benchmark Probe #1 wave amplitude variation; wave amplitude by probe, blower speed, and lip setting for 0.768 Hz on the short bank; coefficient of variation as percentage for all conditions for long bank and bridge.]

  2. Benchmarks: WICHE Region 2012

    ERIC Educational Resources Information Center

    Western Interstate Commission for Higher Education, 2013

    2013-01-01

    Benchmarks: WICHE Region 2012 presents information on the West's progress in improving access to, success in, and financing of higher education. The information is updated annually to monitor change over time and encourage its use as a tool for informed discussion in policy and education communities. To establish a general context for the…

  3. Python/Lua Benchmarks

    SciTech Connect

    Busby, L.

    2014-08-01

    This is an adaptation of the pre-existing Scimark benchmark code to a variety of Python and Lua implementations. It also measures performance of the Fparser expression parser and C and C++ code on a variety of simple scientific expressions.

  4. Monte Carlo Benchmark

    SciTech Connect

    2010-10-20

    The "Monte Carlo Benchmark" (MCB) is intended to model the computatiional performance of Monte Carlo algorithms on parallel architectures. It models the solution of a simple heuristic transport equation using a Monte Carlo technique. The MCB employs typical features of Monte Carlo algorithms such as particle creation, particle tracking, tallying particle information, and particle destruction. Particles are also traded among processors using MPI calls.

  5. Benchmarking the World's Best

    ERIC Educational Resources Information Center

    Tucker, Marc S.

    2012-01-01

    A century ago, the United States was a world leader in industrial benchmarking. However, after World War II, once no one could compete with the U.S., it became complacent. Many industrialized countries now have higher student achievement and more equitable and efficient education systems. A higher proportion of young people in their workforces…

  6. HPCS HPCchallenge Benchmark Suite

    DTIC Science & Technology

    2007-11-02

Measured HPCchallenge Benchmark performance on various HPC architectures, from Cray X1s to Beowulf clusters, is reported in the presentation and paper, using the updated results at http://icl.cs.utk.edu/hpcc/hpcc_results.cgi

  7. A unified evaluation of iterative projection algorithms for phase retrieval

    SciTech Connect

    Marchesini, S

    2006-03-08

Iterative projection algorithms are successfully being used as a substitute for lenses to recombine, numerically rather than optically, light scattered by illuminated objects. Images obtained computationally allow aberration-free diffraction-limited imaging and allow new types of imaging using radiation for which no lenses exist. The challenge of this imaging technique is transferred from the lenses to the algorithms. We evaluate these new computational "instruments" developed for the phase retrieval problem, and discuss acceleration strategies.
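A minimal member of this algorithm family is the error-reduction iteration, which alternates projections between the measured Fourier magnitudes and a real-space support constraint. This sketch uses a synthetic object and its exact support purely for illustration; it is the structure of the iteration, not the paper's evaluated algorithms, that is shown:

```python
import numpy as np

# Error-reduction sketch: alternate a Fourier-magnitude projection with a
# real-space support/positivity projection.
def error_reduction(magnitude, support, n_iter=100, seed=0):
    rng = np.random.default_rng(seed)
    x = rng.random(magnitude.shape) * support      # random start inside support
    for _ in range(n_iter):
        X = np.fft.fft2(x)
        X = magnitude * np.exp(1j * np.angle(X))   # impose measured moduli
        x = np.fft.ifft2(X).real
        x = np.where(support & (x > 0), x, 0.0)    # impose support + positivity
    return x

obj = np.zeros((32, 32))
obj[12:20, 10:22] = 1.0                  # synthetic object
mag = np.abs(np.fft.fft2(obj))           # "measured" diffraction moduli
rec = error_reduction(mag, obj > 0)      # recover from moduli plus support
```

The acceleration strategies the paper evaluates (e.g., relaxed or over-relaxed projections) modify how these two projection steps are combined rather than the steps themselves.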

  8. Universal-Free School Breakfast Program Evaluation Design Project: Final Evaluation Design.

    ERIC Educational Resources Information Center

    Ponza, Michael; Briefel, Ronette; Corson, Walter; Devaney, Barbara; Glazerman, Steven; Gleason, Philip; Heaviside, Sheila; Kung, Susanna; Meckstroth, Alicia; Murphy, J. Michael; Ohls, Jim

    The Child Nutrition Act of 1998 authorized demonstration pilot projects in up to six school food authorities and a rigorous evaluation to assess the effects of providing free school breakfasts to elementary school children. This report describes the evaluation strategy and data collection plans. Part 1 of the report provides background…

  9. New Fe-56 Evaluation for the CIELO project

    SciTech Connect

    Nobre, G P; Herman, Micheal W; Brown, D A; Capote, R.; Leal, Luiz C; Plompen, A.; Danon, Y.; Qian, Jing; Ge, Zhigang; Liu, Tingjin; Lu, Hnalin; Ruan, Xichao

    2016-01-01

    The Collaborative International Evaluated Library Organisation (CIELO) aims to provide revised and updated evaluations for Pu-239, U-238,U-235, Fe-56, O-16, and H-1 through international collaboration. This work, which is part of the CIELO project, presents the initial results for the evaluation of the Fe-56 isotope, with neutron-incident energy ranging from 0 to 20 MeV. The Fe-56(n,p) cross sections were fitted to reproduce the ones from IRDFF dosimetry file. Our preliminary file provides good cross-section agreements for the main angle-integrated reactions, as well as a reasonable overall agreement for angular distributions and double-differential spectra, when compared to previous evaluations.

  10. New 56Fe Evaluation for the CIELO project

    NASA Astrophysics Data System (ADS)

    Nobre, G. P. A.; Herman, M.; Brown, D.; Capote, R.; Trkov, A.; Leal, L.; Plompen, A.; Danon, Y.; Qian, Jing; Ge, Zhigang; Liu, Tingjin; Lu, Hnalin; Ruan, Xichao

    2016-03-01

The Collaborative International Evaluated Library Organisation (CIELO) aims to provide revised and updated evaluations for 239Pu, 238,235U, 56Fe, 16O, and 1H through international collaboration. This work, which is part of the CIELO project, presents the initial results for the evaluation of the 56Fe isotope, with neutron-incident energy ranging from 0 to 20 MeV. The 56Fe(n,p) cross sections were fitted to reproduce the ones from IRDFF dosimetry file. Our preliminary file provides good cross-section agreements for the main angle-integrated reactions, as well as a reasonable overall agreement for angular distributions and double-differential spectra, when compared to previous evaluations.

  11. NASA Software Engineering Benchmarking Study

    NASA Technical Reports Server (NTRS)

    Rarick, Heather L.; Godfrey, Sara H.; Kelly, John C.; Crumbley, Robert T.; Wifl, Joel M.

    2013-01-01

    To identify best practices for the improvement of software engineering on projects, NASA's Offices of Chief Engineer (OCE) and Safety and Mission Assurance (OSMA) formed a team led by Heather Rarick and Sally Godfrey to conduct this benchmarking study. The primary goals of the study are to identify best practices that: Improve the management and technical development of software intensive systems; Have a track record of successful deployment by aerospace industries, universities [including research and development (R&D) laboratories], and defense services, as well as NASA's own component Centers; and Identify candidate solutions for NASA's software issues. Beginning in the late fall of 2010, focus topics were chosen and interview questions were developed, based on the NASA top software challenges. Between February 2011 and November 2011, the Benchmark Team interviewed a total of 18 organizations, consisting of five NASA Centers, five industry organizations, four defense services organizations, and four university or university R and D laboratory organizations. A software assurance representative also participated in each of the interviews to focus on assurance and software safety best practices. Interviewees provided a wealth of information on each topic area that included: software policy, software acquisition, software assurance, testing, training, maintaining rigor in small projects, metrics, and use of the Capability Maturity Model Integration (CMMI) framework, as well as a number of special topics that came up in the discussions. NASA's software engineering practices compared favorably with the external organizations in most benchmark areas, but in every topic, there were ways in which NASA could improve its practices. Compared to defense services organizations and some of the industry organizations, one of NASA's notable weaknesses involved communication with contractors regarding its policies and requirements for acquired software. One of NASA's strengths

  12. Performance Evaluation of State of the Art Systems for Physical Activity Classification of Older Subjects Using Inertial Sensors in a Real Life Scenario: A Benchmark Study.

    PubMed

    Awais, Muhammad; Palmerini, Luca; Bourke, Alan K; Ihlen, Espen A F; Helbostad, Jorunn L; Chiari, Lorenzo

    2016-12-11

    The popularity of using wearable inertial sensors for physical activity classification has dramatically increased in the last decade due to their versatility, low form factor, and low power requirements. Consequently, various systems have been developed to automatically classify daily life activities. However, the scope and implementation of such systems is limited to laboratory-based investigations. Furthermore, these systems are not directly comparable, due to the large diversity in their design (e.g., number of sensors, placement of sensors, data collection environments, data processing techniques, features set, classifiers, cross-validation methods). Hence, the aim of this study is to propose a fair and unbiased benchmark for the field-based validation of three existing systems, highlighting the gap between laboratory and real-life conditions. For this purpose, three representative state-of-the-art systems are chosen and implemented to classify the physical activities of twenty older subjects (76.4 ± 5.6 years). The performance in classifying four basic activities of daily life (sitting, standing, walking, and lying) is analyzed in controlled and free living conditions. To observe the performance of laboratory-based systems in field-based conditions, we trained the activity classification systems using data recorded in a laboratory environment and tested them in real-life conditions in the field. The findings show that the performance of all systems trained with data in the laboratory setting highly deteriorates when tested in real-life conditions, thus highlighting the need to train and test the classification systems in the real-life setting. Moreover, we tested the sensitivity of chosen systems to window size (from 1 s to 10 s) suggesting that overall accuracy decreases with increasing window size. 
Finally, to evaluate the impact of the number of sensors on the performance, chosen systems are modified considering only the sensing unit worn at the lower back
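The window-size sensitivity the study reports can be reproduced in miniature: segment the inertial signal into fixed-length windows and extract per-window features, then compare classifiers trained at different window lengths. Below is a schematic of the segmentation step only, with an invented sampling rate and a minimal mean/std feature pair (not the feature sets of the three evaluated systems):

```python
import numpy as np

# Schematic of the windowing step whose length the study varies (1 s to 10 s).
def windows(signal, fs, win_s):
    n = int(fs * win_s)                   # samples per window
    k = len(signal) // n                  # keep whole windows only
    return signal[:k * n].reshape(k, n)

def features(w):
    # one row of (mean, std) per window -- a typical minimal feature set
    return np.column_stack([w.mean(axis=1), w.std(axis=1)])

sig = np.random.default_rng(0).standard_normal(100 * 50)  # 100 s at 50 Hz
f_1s = features(windows(sig, fs=50, win_s=1))    # 100 windows x 2 features
f_10s = features(windows(sig, fs=50, win_s=10))  # 10 windows x 2 features
```

Longer windows yield fewer, smoother feature vectors, which is one plausible reason the study observes accuracy decreasing with window size: brief postural transitions get averaged away.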

  13. Performance Evaluation of State of the Art Systems for Physical Activity Classification of Older Subjects Using Inertial Sensors in a Real Life Scenario: A Benchmark Study

    PubMed Central

    Awais, Muhammad; Palmerini, Luca; Bourke, Alan K.; Ihlen, Espen A. F.; Helbostad, Jorunn L.; Chiari, Lorenzo

    2016-01-01

    The popularity of using wearable inertial sensors for physical activity classification has dramatically increased in the last decade due to their versatility, low form factor, and low power requirements. Consequently, various systems have been developed to automatically classify daily life activities. However, the scope and implementation of such systems is limited to laboratory-based investigations. Furthermore, these systems are not directly comparable, due to the large diversity in their design (e.g., number of sensors, placement of sensors, data collection environments, data processing techniques, feature sets, classifiers, cross-validation methods). Hence, the aim of this study is to propose a fair and unbiased benchmark for the field-based validation of three existing systems, highlighting the gap between laboratory and real-life conditions. For this purpose, three representative state-of-the-art systems are chosen and implemented to classify the physical activities of twenty older subjects (76.4 ± 5.6 years). The performance in classifying four basic activities of daily life (sitting, standing, walking, and lying) is analyzed in controlled and free-living conditions. To observe the performance of laboratory-based systems in field-based conditions, we trained the activity classification systems using data recorded in a laboratory environment and tested them in real-life conditions in the field. The findings show that the performance of all systems trained with laboratory data deteriorates markedly when tested in real-life conditions, thus highlighting the need to train and test the classification systems in the real-life setting. Moreover, we tested the sensitivity of the chosen systems to window size (from 1 s to 10 s), suggesting that overall accuracy decreases with increasing window size. Finally, to evaluate the impact of the number of sensors on performance, the chosen systems were modified to consider only the sensing unit worn at the lower back.
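    The window-size sensitivity analysis mentioned above can be illustrated with a minimal sliding-window segmentation sketch. The function name, sampling rate, and 50% overlap are illustrative assumptions, not details taken from the paper:

```python
import numpy as np

def sliding_windows(signal, fs, win_s, overlap=0.5):
    """Segment a 1-D sensor stream into fixed-length windows.

    signal: samples from one accelerometer axis
    fs:     sampling rate in Hz
    win_s:  window length in seconds (the study varies this from 1 s to 10 s)
    """
    win = int(win_s * fs)
    step = max(1, int(win * (1 - overlap)))
    return np.array([signal[i:i + win]
                     for i in range(0, len(signal) - win + 1, step)])

# 60 s of synthetic data at 50 Hz, segmented into 2 s windows with 50% overlap
x = np.random.randn(60 * 50)
w = sliding_windows(x, fs=50, win_s=2)
print(w.shape)  # (59, 100): 59 windows of 100 samples each
```

    Each window would then be reduced to a feature vector and passed to a classifier; larger `win_s` values smear activity transitions across windows, which is one plausible reason for the accuracy drop the study reports.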

  14. Storage-Intensive Supercomputing Benchmark Study

    SciTech Connect

    Cohen, J; Dossa, D; Gokhale, M; Hysom, D; May, J; Pearce, R; Yoo, A

    2007-10-30

    Critical data science applications requiring frequent access to storage perform poorly on today's computing architectures. This project addresses efficient computation of data-intensive problems in national security and basic science by exploring, advancing, and applying a new form of computing called storage-intensive supercomputing (SISC). Our goal is to enable applications that simply cannot run on current systems, and, for a broad range of data-intensive problems, to deliver an order of magnitude improvement in price/performance over today's data-intensive architectures. This technical report documents much of the work done under LDRD 07-ERD-063 Storage Intensive Supercomputing during the period 05/07-09/07. The following chapters describe: (1) a new file I/O monitoring tool iotrace developed to capture the dynamic I/O profiles of Linux processes; (2) an out-of-core graph benchmark for level-set expansion of scale-free graphs; (3) an entity extraction benchmark consisting of a pipeline of eight components; and (4) an image resampling benchmark drawn from the SWarp program in the LSST data processing pipeline. The performance of the graph and entity extraction benchmarks was measured in three different scenarios: data sets residing on the NFS file server and accessed over the network; data sets stored on local disk; and data sets stored on the Fusion I/O parallel NAND Flash array. The image resampling benchmark compared the performance of a software-only implementation to a GPU-accelerated one. In addition to the work reported here, an additional text processing application was developed that used an FPGA to accelerate n-gram profiling for language classification. The n-gram application will be presented at SC07 at the High Performance Reconfigurable Computing Technologies and Applications Workshop. The graph and entity extraction benchmarks were run on a Supermicro server housing the Fusion-io 40 GB parallel NAND Flash disk array.
The Fusion system specs are as follows: SuperMicro X7

  15. Principles for an ETL Benchmark

    NASA Astrophysics Data System (ADS)

    Wyatt, Len; Caufield, Brian; Pol, Daniel

    Conditions in the marketplace for ETL tools suggest that an industry standard benchmark is needed. The benchmark should provide useful data for comparing the performance of ETL systems, be based on a meaningful scenario, and be scalable over a wide range of data set sizes. This paper gives a general scoping of the proposed benchmark and outlines some key decision points. The Transaction Processing Performance Council (TPC) has formed a development subcommittee to define and produce such a benchmark.

  16. Evaluating the utility of dynamical downscaling in agricultural impacts projections.

    PubMed

    Glotter, Michael; Elliott, Joshua; McInerney, David; Best, Neil; Foster, Ian; Moyer, Elisabeth J

    2014-06-17

    Interest in estimating the potential socioeconomic costs of climate change has led to the increasing use of dynamical downscaling--nested modeling in which regional climate models (RCMs) are driven with general circulation model (GCM) output--to produce fine-spatial-scale climate projections for impacts assessments. We evaluate here whether this computationally intensive approach significantly alters projections of agricultural yield, one of the greatest concerns under climate change. Our results suggest that it does not. We simulate US maize yields under current and future CO2 concentrations with the widely used Decision Support System for Agrotechnology Transfer crop model, driven by a variety of climate inputs including two GCMs, each in turn downscaled by two RCMs. We find that no climate model output can reproduce yields driven by observed climate unless a bias correction is first applied. Once a bias correction is applied, GCM- and RCM-driven US maize yields are essentially indistinguishable in all scenarios (<10% discrepancy, equivalent to error from observations). Although RCMs correct some GCM biases related to fine-scale geographic features, errors in yield are dominated by broad-scale (100s of kilometers) GCM systematic errors that RCMs cannot compensate for. These results support previous suggestions that the benefits for impacts assessments of dynamically downscaling raw GCM output may not be sufficient to justify its computational demands. Progress on fidelity of yield projections may benefit more from continuing efforts to understand and minimize systematic error in underlying climate projections.
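    The abstract notes that no climate model output reproduces observed yields unless a bias correction is applied first, but does not specify the method. As a hedged illustration, the simplest common approach is a mean-shift ("delta") correction, in which the model's historical bias relative to observations is subtracted from its projection; the function and values below are assumptions for illustration only:

```python
import numpy as np

def delta_correct(model_hist, model_fut, obs_hist):
    """Mean-shift ("delta") bias correction for a temperature-like series.

    Shifts the future model series so that the model's historical mean
    matches the observed historical mean over the same reference period.
    """
    bias = np.mean(model_hist) - np.mean(obs_hist)
    return model_fut - bias

obs  = np.array([20.0, 21.0, 19.0])   # observed historical values
hist = np.array([23.0, 24.0, 22.0])   # model, same period (3-degree warm bias)
fut  = np.array([25.0, 26.0, 24.0])   # raw model projection
print(delta_correct(hist, fut, obs))  # [22. 23. 21.]
```

    More elaborate schemes (e.g., quantile mapping) correct the full distribution rather than just the mean, but the principle of anchoring the model to observations over a reference period is the same.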

  17. Summary of monitoring station component evaluation project 2009-2011.

    SciTech Connect

    Hart, Darren M.

    2012-02-01

    Sandia National Laboratories (SNL) is regarded as a center for unbiased expertise in testing and evaluation of geophysical sensors and instrumentation for ground-based nuclear explosion monitoring (GNEM) systems. This project will sustain and enhance our component evaluation capabilities. In addition, new sensor technologies that could greatly improve national monitoring system performance will be sought and characterized. This work directly impacts the Ground-based Nuclear Explosion Monitoring mission by verifying that the performance of monitoring station sensors and instrumentation is characterized and suitable to the mission. It enables the operational monitoring agency to deploy instruments of known capability and to have confidence in operational success. This effort will ensure that our evaluation capabilities are maintained for future use.

  18. Sequoia Messaging Rate Benchmark

    SciTech Connect

    Friedley, Andrew

    2008-01-22

    The purpose of this benchmark is to measure the maximal message rate of a single compute node. The first num_cores ranks are expected to reside on the 'core' compute node for which message rate is being tested. After that, the next num_nbors ranks are neighbors for the first core rank, the next set of num_nbors ranks are neighbors for the second core rank, and so on. For example, testing an 8-core node (num_cores = 8) with 4 neighbors (num_nbors = 4) requires 8 + 8 * 4 = 40 ranks. The first 8 of those 40 ranks are expected to be on the 'core' node being benchmarked, while the rest of the ranks are on separate nodes.
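    The rank layout described above can be sketched with a small hypothetical helper (not part of the benchmark's source code) that enumerates which ranks are the neighbors of each core rank:

```python
def rank_layout(num_cores, num_nbors):
    """Map MPI ranks to the layout described in the benchmark.

    Ranks 0..num_cores-1 live on the core node under test; the next
    num_nbors ranks are the neighbors of core rank 0, the following
    num_nbors ranks are the neighbors of core rank 1, and so on.
    """
    total = num_cores + num_cores * num_nbors
    neighbors = {
        core: list(range(num_cores + core * num_nbors,
                         num_cores + (core + 1) * num_nbors))
        for core in range(num_cores)
    }
    return total, neighbors

total, nbrs = rank_layout(num_cores=8, num_nbors=4)
print(total)     # 40 ranks in all
print(nbrs[0])   # [8, 9, 10, 11]: neighbor ranks of core rank 0
print(nbrs[1])   # [12, 13, 14, 15]
```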

  19. Benchmark Airport Charges

    NASA Technical Reports Server (NTRS)

    de Wit, A.; Cohn, N.

    1999-01-01

    The Netherlands Directorate General of Civil Aviation (DGCA) commissioned Hague Consulting Group (HCG) to complete a benchmark study of airport charges at twenty-eight airports in Europe and around the world, based on 1996 charges. This study followed previous DGCA research on the topic but included more airports in much more detail. The main purpose of this new benchmark study was to provide insight into the levels and types of airport charges worldwide and into recent changes in airport charge policy and structure. This paper describes the 1996 analysis. It is intended that this work be repeated every year in order to follow developing trends and provide the most up-to-date information possible.

  20. Evaluation of observation-driven evaporation algorithms: results of the WACMOS-ET project

    NASA Astrophysics Data System (ADS)

    Miralles, Diego G.; Jimenez, Carlos; Ershadi, Ali; McCabe, Matthew F.; Michel, Dominik; Hirschi, Martin; Seneviratne, Sonia I.; Jung, Martin; Wood, Eric F.; (Bob) Su, Z.; Timmermans, Joris; Chen, Xuelong; Fisher, Joshua B.; Mu, Quiaozen; Fernandez, Diego

    2015-04-01

    scales for the 2005-2007 reference period will be disclosed. The skill of these algorithms in closing the water balance over the continents will be assessed by comparisons to runoff data. The consistency in forcing data will allow us to (a) evaluate the skill of these five algorithms in producing ET over particular ecosystems, (b) facilitate the attribution of the observed differences to either algorithms or driving data, and (c) set up a solid scientific basis for the development of global long-term benchmark ET products. Project progress can be followed on our website http://wacmoset.estellus.eu. REFERENCES Fisher, J. B., Tu, K.P., and Baldocchi, D.D. Global estimates of the land-atmosphere water flux based on monthly AVHRR and ISLSCP-II data, validated at 16 FLUXNET sites. Remote Sens. Environ. 112, 901-919, 2008. Jiménez, C. et al. Global intercomparison of 12 land surface heat flux estimates. J. Geophys. Res. 116, D02102, 2011. Jung, M. et al. Recent decline in the global land evapotranspiration trend due to limited moisture supply. Nature 467, 951-954, 2010. Miralles, D.G. et al. Global land-surface evaporation estimated from satellite-based observations. Hydrol. Earth Syst. Sci. 15, 453-469, 2011. Mu, Q., Zhao, M. & Running, S.W. Improvements to a MODIS global terrestrial evapotranspiration algorithm. Remote Sens. Environ. 115, 1781-1800, 2011. Mueller, B. et al. Benchmark products for land evapotranspiration: LandFlux-EVAL multi-dataset synthesis. Hydrol. Earth Syst. Sci. 17, 3707-3720, 2013. Su, Z. The Surface Energy Balance System (SEBS) for estimation of turbulent heat fluxes. Hydrol. Earth Syst. Sci. 6, 85-99, 2002.

  1. Algebraic Multigrid Benchmark

    SciTech Connect

    2013-05-06

    AMG2013 is a parallel algebraic multigrid solver for linear systems arising from problems on unstructured grids. It has been derived directly from the Boomer AMG solver in the hypre library, a large linear solvers library that is being developed in the Center for Applied Scientific Computing (CASC) at LLNL. The driver provided in the benchmark can build various test problems. The default problem is a Laplace type problem on an unstructured domain with various jumps and an anisotropy in one part.

  2. MPI Multicore Linktest Benchmark

    SciTech Connect

    Schulz, Martin

    2008-01-25

    The MPI Multicore Linktest (LinkTest) measures the aggregate bandwidth from/to a multicore node in a parallel system. It allows the user to specify a variety of different node layout and communication routine variations and reports the maximal observed bandwidth across all specified options. In particular, this benchmark is able to vary the number of tasks on the root node and thereby allows users to study the impact of multicore architectures on MPI communication performance.

  3. American Fuel Cell Bus Project Evaluation. Second Report

    SciTech Connect

    Eudy, Leslie; Post, Matthew

    2015-09-01

    This report presents results of the American Fuel Cell Bus (AFCB) Project, a demonstration of fuel cell electric buses operating in the Coachella Valley area of California. The prototype AFCB was developed as part of the Federal Transit Administration's (FTA's) National Fuel Cell Bus Program. Through the non-profit consortia CALSTART, a team led by SunLine Transit Agency and BAE Systems developed a new fuel cell electric bus for demonstration. SunLine added two more AFCBs to its fleet in 2014 and another in 2015. FTA and the AFCB project team are collaborating with the U.S. Department of Energy (DOE) and DOE's National Renewable Energy Laboratory to evaluate the buses in revenue service. This report summarizes the performance results for the buses through June 2015.

  4. New benchmarks and design criteria for laboratory consolidations.

    PubMed

    Wilson, Linda S

    2003-01-01

    Benchmarks and design criteria previously used for planning consolidated laboratories such as bed size, staffing, and test volumes no longer apply. To achieve greater operational efficiencies, consolidated laboratories should be designed with open, flexible, and adaptable space using work flow/workstations, instrumentation requirements, and the degree of automation as the key design criteria. The primary objective of most consolidations is the reduction of staff with a substantial increase in workload. A critical factor when planning a consolidated laboratory is the ability of the space to accommodate the increase in testing and procedures to serve multiple facilities and growing outreach programs with fewer FTEs. Designing the laboratory starts with a thorough evaluation of work flow, testing procedures, desired adjacencies, and relationships within the laboratory. An area analysis should be developed describing in detail projected space requirements. Consideration should be given for the incorporation of automation/robotics and new, more efficient, and comprehensive instrumentation. Safety, noise, vibration control, lighting, and engineering support systems are all critical issues that also must be effectively addressed and incorporated into the design. Specific issues that will be discussed at this program include projected space requirements; review and development of existing and projected workstations; equipment requirements; lighting options; workload and procedures review; staffing procedures; flexibility/adaptability; relationships and adjacencies; flow diagrams; plan development; cost implications, on-site versus off-site facilities; and new construction versus renovation construction cost comparisons. Using specific examples from consolidated laboratory projects, we have designed a case study presentation by the laboratory director from a recently completed laboratory consolidation project serving a multihospital system. We will discuss the new design

  5. A One-group, One-dimensional Transport Benchmark in Cylindrical Geometry

    SciTech Connect

    Barry Ganapol; Abderrafi M. Ougouag

    2006-06-01

    A 1-D, 1-group computational benchmark in cylindrical geometry is described. This neutron transport benchmark is useful for evaluating reactor concepts that possess azimuthal symmetry, such as a pebble-bed reactor.

  6. Pescara benchmark: overview of modelling, testing and identification

    NASA Astrophysics Data System (ADS)

    Bellino, A.; Brancaleoni, F.; Bregant, L.; Carminelli, A.; Catania, G.; Di Evangelista, A.; Gabriele, S.; Garibaldi, L.; Marchesiello, S.; Sorrentino, S.; Spina, D.; Valente, C.; Zuccarino, L.

    2011-07-01

    The `Pescara benchmark' is part of the national research project `BriViDi' (BRIdge VIbrations and DIagnosis) supported by the Italian Ministero dell'Universitá e Ricerca. The project aims to develop an integrated methodology for the structural health evaluation of railway reinforced- and prestressed-concrete bridges. The methodology should be applicable in operating conditions, allow easy data acquisition through common industrial instrumentation, and offer robustness and reliability against structural and environmental uncertainties. The Pescara benchmark consisted of lab tests to build a consistent and large experimental database, followed by data processing. Special tests were devised to simulate train transit effects under actual field conditions. Prestressed concrete beams of current industrial production, both sound and damaged at various corrosion severity levels, were tested. The results were collected both in a deterministic setting and in a form suitable for dealing with experimental uncertainties. Damage identification was split into two approaches: with or without a reference model. In the first case, finite element models were used in conjunction with non-conventional updating techniques. In the second case, specialized output-only identification techniques capable of dealing with time-variant and possibly nonlinear systems were developed. The lab tests allowed validation of the above approaches and of the performance of classical modal-based damage indicators.

  7. Benchmarks for Psychotherapy Efficacy in Adult Major Depression

    ERIC Educational Resources Information Center

    Minami, Takuya; Wampold, Bruce E.; Serlin, Ronald C.; Kircher, John C.; Brown, George S.

    2007-01-01

    This study estimates pretreatment-posttreatment effect size benchmarks for the treatment of major depression in adults that may be useful in evaluating psychotherapy effectiveness in clinical practice. Treatment efficacy benchmarks for major depression were derived for 3 different types of outcome measures: the Hamilton Rating Scale for Depression…

  8. MHEC Survey Establishes Midwest Property Insurance Benchmarks.

    ERIC Educational Resources Information Center

    Midwestern Higher Education Commission Risk Management Institute Research Bulletin, 1994

    1994-01-01

    This publication presents the results of a survey of over 200 midwestern colleges and universities on their property insurance programs and establishes benchmarks to help these institutions evaluate their insurance programs. Findings included the following: (1) 51 percent of respondents currently purchase their property insurance as part of a…

  9. Algorithm and Architecture Independent Benchmarking with SEAK

    SciTech Connect

    Tallent, Nathan R.; Manzano Franco, Joseph B.; Gawande, Nitin A.; Kang, Seung-Hwa; Kerbyson, Darren J.; Hoisie, Adolfy; Cross, Joseph

    2016-05-23

    Many applications of high performance embedded computing are limited by performance or power bottlenecks. We have designed the Suite for Embedded Applications & Kernels (SEAK), a new benchmark suite, (a) to capture these bottlenecks in a way that encourages creative solutions; and (b) to facilitate rigorous, objective, end-user evaluation for their solutions. To avoid biasing solutions toward existing algorithms, SEAK benchmarks use a mission-centric (abstracted from a particular algorithm) and goal-oriented (functional) specification. To encourage solutions that are any combination of software or hardware, we use an end-user black-box evaluation that can capture tradeoffs between performance, power, accuracy, size, and weight. The tradeoffs are especially informative for procurement decisions. We call our benchmarks future proof because each mission-centric interface and evaluation remains useful despite shifting algorithmic preferences. It is challenging to create both concise and precise goal-oriented specifications for mission-centric problems. This paper describes the SEAK benchmark suite and presents an evaluation of sample solutions that highlights power and performance tradeoffs.

  10. Seven Benchmarks for Information Technology Investment.

    ERIC Educational Resources Information Center

    Smallen, David; Leach, Karen

    2002-01-01

    Offers benchmarks to help campuses evaluate their efforts in supplying information technology (IT) services. The first three help understand the IT budget, the next three provide insight into staffing levels and emphases, and the seventh relates to the pervasiveness of institutional infrastructure. (EV)

  11. Radionuclide Inventory Distribution Project Data Evaluation and Verification White Paper

    SciTech Connect

    NSTec Environmental Restoration

    2010-05-17

    Testing of nuclear explosives caused widespread contamination of surface soils on the Nevada Test Site (NTS). Atmospheric tests produced the majority of this contamination. The Radionuclide Inventory and Distribution Program (RIDP) was developed to determine distribution and total inventory of radionuclides in surface soils at the NTS to evaluate areas that may present long-term health hazards. The RIDP achieved this objective with aerial radiological surveys, soil sample results, and in situ gamma spectroscopy. This white paper presents the justification to support the use of RIDP data as a guide for future evaluation and to support closure of Soils Sub-Project sites under the purview of the Federal Facility Agreement and Consent Order. Use of the RIDP data as part of the Data Quality Objective process is expected to provide considerable cost savings and accelerate site closures. The following steps were completed: - Summarize the RIDP data set and evaluate the quality of the data. - Determine the current uses of the RIDP data and cautions associated with its use. - Provide recommendations for enhancing data use through field verification or other methods. The data quality is sufficient to utilize RIDP data during the planning process for site investigation and closure. Project planning activities may include estimating 25-millirem per industrial access year dose rate boundaries, optimizing characterization efforts, projecting final end states, and planning remedial actions. In addition, RIDP data may be used to identify specific radionuclide distributions, and augment other non-radionuclide dose rate data. Finally, the RIDP data can be used to estimate internal and external dose rates.

  12. Los Alamos National Laboratory computer benchmarking 1982

    SciTech Connect

    Martin, J.L.

    1983-06-01

    Evaluating the performance of computing machinery is a continual effort of the Computer Research and Applications Group of the Los Alamos National Laboratory. This report summarizes the results of the group's benchmarking activities performed between October 1981 and September 1982, presenting compilation and execution times as well as megaflop rates for a set of benchmark codes. Tests were performed on the following computers: Cray Research, Inc. (CRI) Cray-1S; Control Data Corporation (CDC) 7600, 6600, Cyber 73, Cyber 825, Cyber 835, Cyber 855, and Cyber 205; Digital Equipment Corporation (DEC) VAX 11/780 and VAX 11/782; and Apollo Computer, Inc., Apollo.

  13. Toxicological benchmarks for wildlife: 1996 Revision

    SciTech Connect

    Sample, B.E.; Opresko, D.M.; Suter, G.W., II

    1996-06-01

    The purpose of this report is to present toxicological benchmarks for assessment of effects of certain chemicals on mammalian and avian wildlife species. Publication of this document meets a milestone for the Environmental Restoration (ER) Risk Assessment Program. This document provides the ER Program with toxicological benchmarks that may be used as comparative tools in screening assessments as well as lines of evidence to support or refute the presence of ecological effects in ecological risk assessments. The chemicals considered in this report are some that occur at US DOE waste sites, and the wildlife species evaluated herein were chosen because they represent a range of body sizes and diets.

  14. Toxicological benchmarks for wildlife: 1994 Revision

    SciTech Connect

    Opresko, D.M.; Sample, B.E.; Suter, G.W. II

    1994-09-01

    The process by which ecological risks of environmental contaminants are evaluated is two-tiered. The first tier is a screening assessment where concentrations of contaminants in the environment are compared to toxicological benchmarks which represent concentrations of chemicals in environmental media (water, sediment, soil, food, etc.) that are presumed to be nonhazardous to the surrounding biota. The second tier is a baseline ecological risk assessment where toxicological benchmarks are one of several lines of evidence used to support or refute the presence of ecological effects. The report presents toxicological benchmarks for assessment of effects of 76 chemicals on 8 representative mammalian wildlife species and 31 chemicals on 9 avian wildlife species. The chemicals are some of those that occur at United States Department of Energy waste sites; the wildlife species were chosen because they are widely distributed and provide a representative range of body sizes and diets. Further descriptions of the chosen wildlife species and chemicals are provided in the report. The benchmarks presented in this report represent values believed to be nonhazardous for the listed wildlife species. These benchmarks only consider contaminant exposure through oral ingestion of contaminated media; exposures through inhalation or direct dermal contact are not considered in this report.
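    The first-tier screening comparison described above is conventionally expressed as a hazard quotient (measured concentration divided by benchmark). The sketch below illustrates that convention only; the chemical names and values are invented for illustration and are not taken from the report:

```python
def screen(concentration, benchmark):
    """First-tier screening: hazard quotient = exposure / benchmark.

    HQ <= 1 suggests the media concentration is below the level presumed
    nonhazardous; HQ > 1 flags the chemical for the tier-2 baseline
    ecological risk assessment.
    """
    hq = concentration / benchmark
    return hq, hq > 1.0

# hypothetical soil concentrations (mg/kg) vs. wildlife benchmarks
for chem, conc, bench in [("cadmium", 4.0, 2.0), ("zinc", 30.0, 120.0)]:
    hq, flagged = screen(conc, bench)
    print(f"{chem}: HQ = {hq:.2f}, retain for tier 2: {flagged}")
```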

  15. The NAS Parallel Benchmarks

    SciTech Connect

    Bailey, David H.

    2009-11-15

    The NAS Parallel Benchmarks (NPB) are a suite of parallel computer performance benchmarks. They were originally developed at the NASA Ames Research Center in 1991 to assess high-end parallel supercomputers. Although they are no longer used as widely as they once were for comparing high-end system performance, they continue to be studied and analyzed a great deal in the high-performance computing community. The acronym 'NAS' originally stood for the Numerical Aerodynamic Simulation Program at NASA Ames. The name of this organization was subsequently changed to the Numerical Aerospace Simulation Program, and more recently to the NASA Advanced Supercomputing Center, although the acronym remains 'NAS.' The developers of the original NPB suite were David H. Bailey, Eric Barszcz, John Barton, David Browning, Russell Carter, Leo Dagum, Rod Fatoohi, Samuel Fineberg, Paul Frederickson, Thomas Lasinski, Rob Schreiber, Horst Simon, V. Venkatakrishnan and Sisira Weeratunga. The original NAS Parallel Benchmarks consisted of eight individual benchmark problems, each of which focused on some aspect of scientific computing. The principal focus was in computational aerophysics, although most of these benchmarks have much broader relevance, since in a much larger sense they are typical of many real-world scientific computing applications. The NPB suite grew out of the need for a more rational procedure to select new supercomputers for acquisition by NASA. The emergence of commercially available highly parallel computer systems in the late 1980s offered an attractive alternative to parallel vector supercomputers that had been the mainstay of high-end scientific computing. However, the introduction of highly parallel systems was accompanied by a regrettable level of hype, not only on the part of the commercial vendors but even, in some cases, by scientists using the systems.
As a result, it was difficult to discern whether the new systems offered any fundamental performance advantage

  16. Asotin Creek Instream Habitat Alteration Projects: 1998 Habitat Evaluation Surveys.

    SciTech Connect

    Bumgarner, Joseph D.

    1999-03-01

    The Asotin Creek Model Watershed Master Plan was completed in 1994. The plan was developed by a landowner steering committee for the Asotin County Conservation District (ACCD), with technical support from various Federal, State, and local entities. Actions identified within the plan to improve the Asotin Creek ecosystem fall into four main categories: (1) Stream and Riparian, (2) Forestland, (3) Rangeland, and (4) Cropland. Specific actions to be carried out within the stream and in the riparian area to improve fish habitat were to (a) create more pools, (b) increase the amount of large organic debris (LOD), (c) increase the riparian buffer zone through tree planting, and (d) increase fencing to limit livestock access; additionally, the actions are intended to stabilize the river channel, reduce sediment input, and protect private property. Fish species of main concern in Asotin Creek are summer steelhead (Oncorhynchus mykiss), spring chinook (Oncorhynchus tshawytscha), and bull trout (Salvelinus confluentus). Spring chinook in Asotin Creek are considered extinct (Bumgarner et al. 1998); bull trout and summer steelhead are below historical levels and are currently listed as "threatened" under the ESA. In 1998, 16 instream habitat projects were planned by the ACCD along with local landowners. The ACCD identified the need for a more detailed analysis of these instream projects to fully evaluate their effectiveness at improving fish habitat. The Washington Department of Fish and Wildlife's (WDFW) Snake River Lab (SRL) was contracted by the ACCD to take pre-construction measurements of the existing habitat (pools, LOD, width, depth, etc.) within each identified site, and to eventually evaluate fish use within these sites. All pre-construction habitat measurements were completed between 6 and 14 July 1998. 1998 was the first year that this sort of evaluation occurred. Post-construction measurements of habitat structures installed in 1998, and fish usage evaluation, will be

  17. Benchmarking Image Matching for Surface Description

    NASA Astrophysics Data System (ADS)

    Haala, Norbert; Stößel, Wolfgang; Gruber, Michael; Pfeifer, Norbert; Fritsch, Dieter

    2013-04-01

    Semi-Global Matching algorithms have triggered a renaissance in the processing of stereoscopic data sets for surface reconstruction. This method is capable of providing very dense point clouds with sampling distances close to the Ground Sampling Distance (GSD) of aerial images. EuroSDR, the pan-European organization of Spatial Data Research, has initiated a benchmark for dense image matching. The expected outcomes of this benchmark are assessments of suitability, quality measures for dense surface reconstructions, and run-time aspects. In particular, aerial image blocks of two sites covering two types of landscapes (urban and rural) are analysed. The benchmark's participants provide their results with respect to several criteria. As a follow-up, an overall evaluation is given. Finally, point clouds of rural and urban surfaces delivered by very dense image matching algorithms and software packages are presented and results are compared.

  18. 7 CFR 634.50 - Program and project monitoring and evaluation.

    Code of Federal Regulations, 2013 CFR

    2013-01-01

    ... evaluate the improvement in water quality in the project area and to make projections on a nationwide basis. Water-quality monitoring, evaluation, and analysis will be conducted to evaluate the overall cost and effectiveness of projects and BMPs to provide information on the impact of the program on improved water...

  19. 7 CFR 634.50 - Program and project monitoring and evaluation.

    Code of Federal Regulations, 2010 CFR

    2010-01-01

    ... evaluate the improvement in water quality in the project area and to make projections on a nationwide basis. Water-quality monitoring, evaluation, and analysis will be conducted to evaluate the overall cost and effectiveness of projects and BMPs to provide information on the impact of the program on improved water...

  20. 7 CFR 634.50 - Program and project monitoring and evaluation.

    Code of Federal Regulations, 2014 CFR

    2014-01-01

    ... evaluate the improvement in water quality in the project area and to make projections on a nationwide basis. Water-quality monitoring, evaluation, and analysis will be conducted to evaluate the overall cost and effectiveness of projects and BMPs to provide information on the impact of the program on improved water...

  1. 7 CFR 634.50 - Program and project monitoring and evaluation.

    Code of Federal Regulations, 2011 CFR

    2011-01-01

    ... evaluate the improvement in water quality in the project area and to make projections on a nationwide basis. Water-quality monitoring, evaluation, and analysis will be conducted to evaluate the overall cost and effectiveness of projects and BMPs to provide information on the impact of the program on improved water...

  2. 7 CFR 634.50 - Program and project monitoring and evaluation.

    Code of Federal Regulations, 2012 CFR

    2012-01-01

    ... evaluate the improvement in water quality in the project area and to make projections on a nationwide basis. Water-quality monitoring, evaluation, and analysis will be conducted to evaluate the overall cost and effectiveness of projects and BMPs to provide information on the impact of the program on improved water...

  3. EVALUATION OF THE WEIGHT-BASED COLLECTION PROJECT IN FARMINGTON, MINNESOTA: A MITE PROGRAM EVALUATION

    EPA Science Inventory

This project evaluates a test program of a totally automated weight-based refuse disposal rate system. This test program was conducted by the City of Farmington, Minnesota between 1991 and 1993. The intent of the program was to test a mechanism which would automatically assess a fe...

  4. Benchmarking facilities providing care: An international overview of initiatives

    PubMed Central

    Thonon, Frédérique; Watson, Jonathan; Saghatchian, Mahasti

    2015-01-01

    We performed a literature review of existing benchmarking projects of health facilities to explore (1) the rationales for those projects, (2) the motivation for health facilities to participate, (3) the indicators used and (4) the success and threat factors linked to those projects. We studied both peer-reviewed and grey literature. We examined 23 benchmarking projects of different medical specialities. The majority of projects used a mix of structure, process and outcome indicators. For some projects, participants had a direct or indirect financial incentive to participate (such as reimbursement by Medicaid/Medicare or litigation costs related to quality of care). A positive impact was reported for most projects, mainly in terms of improvement of practice and adoption of guidelines and, to a lesser extent, improvement in communication. Only 1 project reported positive impact in terms of clinical outcomes. Success factors and threats are linked to both the benchmarking process (such as organisation of meetings, link with existing projects) and indicators used (such as adjustment for diagnostic-related groups). The results of this review will help coordinators of a benchmarking project to set it up successfully. PMID:26770800

  5. Energy saving in WWTP: Daily benchmarking under uncertainty and data availability limitations.

    PubMed

    Torregrossa, D; Schutz, G; Cornelissen, A; Hernández-Sancho, F; Hansen, J

    2016-07-01

Efficient management of Waste Water Treatment Plants (WWTPs) can produce significant environmental and economic benefits. Energy benchmarking can be used to compare WWTPs, identify targets and use these to improve their performance. Different authors have performed benchmark analyses on a monthly or yearly basis, but their approaches suffer from a time lag between an event, its detection, interpretation and potential actions. The availability of on-line measurement data at many WWTPs should theoretically enable a decrease in the management response time through daily benchmarking. Unfortunately, this approach is often impossible because of limited data availability. This paper proposes a methodology to perform a daily benchmark analysis under database limitations. The methodology has been applied to the Energy Online System (EOS) developed in the framework of the project "INNERS" (INNovative Energy Recovery Strategies in the urban water cycle). EOS calculates a set of Key Performance Indicators (KPIs) for the evaluation of energy and process performances. In EOS, the energy KPIs take the pollutant load into consideration in order to enable comparison between different plants. For example, EOS does not analyse the absolute energy consumption but the energy consumption per unit of pollutant load. This approach enables the comparison of performances for plants with different loads or for a single plant under different load conditions. The energy consumption is measured by on-line sensors, while the pollutant load is measured in the laboratory approximately every 14 days. Consequently, the unavailability of the water quality parameters is the limiting factor in calculating energy KPIs. In this paper, in order to overcome this limitation, the authors have developed a methodology to estimate the required parameters and manage the uncertainty in the estimation. By coupling the parameter estimation with an interval-based benchmark approach, the authors propose an effective, fast and reproducible
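The interval treatment described in the abstract can be sketched in a few lines. This is an illustrative reconstruction, not the actual EOS code: the function names, the sample data, and the one-standard-deviation interval are all assumptions made for the example.

```python
import statistics

def load_interval(lab_samples_kg_day):
    """Estimate the pollutant load from sparse lab measurements.

    Lab data arrive only every ~14 days, so the load on a given day
    is uncertain; we carry that uncertainty forward as an interval
    (mean +/- one sample standard deviation). Illustrative only.
    """
    mean = statistics.mean(lab_samples_kg_day)
    sd = statistics.stdev(lab_samples_kg_day)
    return (mean - sd, mean + sd)

def energy_kpi_interval(daily_energy_kwh, lab_samples_kg_day):
    """Daily energy KPI (kWh per kg pollutant load) as an interval."""
    lo_load, hi_load = load_interval(lab_samples_kg_day)
    # A higher load estimate gives a lower specific consumption,
    # so the interval bounds swap when dividing.
    return (daily_energy_kwh / hi_load, daily_energy_kwh / lo_load)

# A plant consuming 12,000 kWh/day against four recent load samples:
kpi = energy_kpi_interval(12000.0, [5000.0, 5200.0, 4800.0, 5100.0])
```

A daily benchmark can then compare intervals rather than point values, flagging a plant only when its whole KPI interval lies above a peer's, which keeps the comparison robust to the estimation uncertainty.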

  6. Benchmarking: applications to transfusion medicine.

    PubMed

    Apelseth, Torunn Oveland; Molnar, Laura; Arnold, Emmy; Heddle, Nancy M

    2012-10-01

Benchmarking is a structured continuous collaborative process in which comparisons for selected indicators are used to identify factors that, when implemented, will improve transfusion practices. This study aimed to identify transfusion medicine studies reporting on benchmarking, summarize the benchmarking approaches used, and identify important considerations to move the concept of benchmarking forward in the field of transfusion medicine. A systematic review of published literature was performed to identify transfusion medicine-related studies that compared at least 2 separate institutions or regions with the intention of benchmarking, focusing on 4 areas: blood utilization, safety, operational aspects, and blood donation. Forty-five studies were included: blood utilization (n = 35), safety (n = 5), operational aspects of transfusion medicine (n = 5), and blood donation (n = 0). Based on predefined criteria, 7 publications were classified as benchmarking, 2 as trending, and 36 as single-event studies. Three models of benchmarking are described: (1) a regional benchmarking program that collects and links relevant data from existing electronic sources, (2) a sentinel site model where data from a limited number of sites are collected, and (3) an institution-initiated model where a site identifies indicators of interest and approaches other institutions. Benchmarking approaches are needed in the field of transfusion medicine. Major challenges include defining best practices and developing cost-effective methods of data collection. For those interested in initiating a benchmarking program, the sentinel site model may be most effective and sustainable as a starting point, although the regional model would be the ideal goal.

  7. Nanomagnet Logic: Architectures, design, and benchmarking

    NASA Astrophysics Data System (ADS)

    Kurtz, Steven J.

Nanomagnet Logic (NML) is an emerging technology being studied as a possible replacement or supplementary device for Complementary Metal-Oxide-Semiconductor (CMOS) Field-Effect Transistors (FETs) by the year 2020. NML devices offer numerous potential advantages including: low-energy operation, steady-state non-volatility, radiation hardness and a clear path to fabrication and integration with CMOS. However, maintaining both low-energy operation and non-volatility while scaling from the device to the architectural level is non-trivial as (i) nearest-neighbor interactions within NML circuits complicate the modeling of ensemble nanomagnet behavior and (ii) the energy-intensive clock structures required for re-evaluation and NML's relatively high latency challenge its ability to offer system-level performance wins against other emerging nanotechnologies. Thus, further research efforts are required to model more complex circuits while also identifying circuit design techniques that balance low-energy operation with steady-state non-volatility. In addition, further work is needed to design and model low-power on-chip clocks while simultaneously identifying application spaces where NML systems (including clock overhead) offer sufficient energy savings to merit their inclusion in future processors. This dissertation presents research advancing the understanding and modeling of NML at all levels including devices, circuits, and line clock structures while also benchmarking NML against both scaled CMOS and tunneling FET (TFET) devices. This is accomplished through the development of design tools and methodologies for (i) quantifying both energy and stability in NML circuits and (ii) evaluating line-clocked NML system performance. The application of these newly developed tools improves the understanding of ideal design criteria (i.e., magnet size, clock wire geometry, etc.) for NML architectures. Finally, the system-level performance evaluation tool offers the ability to

  8. Comparative evaluation of 1D and quasi-2D hydraulic models based on benchmark and real-world applications for uncertainty assessment in flood mapping

    NASA Astrophysics Data System (ADS)

    Dimitriadis, Panayiotis; Tegos, Aristoteles; Oikonomou, Athanasios; Pagana, Vassiliki; Koukouvinos, Antonios; Mamassis, Nikos; Koutsoyiannis, Demetris; Efstratiadis, Andreas

    2016-03-01

    One-dimensional and quasi-two-dimensional hydraulic freeware models (HEC-RAS, LISFLOOD-FP and FLO-2d) are widely used for flood inundation mapping. These models are tested on a benchmark test with a mixed rectangular-triangular channel cross section. Using a Monte-Carlo approach, we employ extended sensitivity analysis by simultaneously varying the input discharge, longitudinal and lateral gradients and roughness coefficients, as well as the grid cell size. Based on statistical analysis of three output variables of interest, i.e. water depths at the inflow and outflow locations and total flood volume, we investigate the uncertainty enclosed in different model configurations and flow conditions, without the influence of errors and other assumptions on topography, channel geometry and boundary conditions. Moreover, we estimate the uncertainty associated to each input variable and we compare it to the overall one. The outcomes of the benchmark analysis are further highlighted by applying the three models to real-world flood propagation problems, in the context of two challenging case studies in Greece.
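The Monte-Carlo sensitivity procedure in the abstract above can be illustrated with a toy stand-in for the hydraulic solver. The uniform input ranges and the Manning normal-depth formula below are assumptions made for this sketch; the study itself ran full HEC-RAS, LISFLOOD-FP and FLO-2d simulations.

```python
import random

def normal_depth(discharge, slope, n_manning, width=10.0):
    """Manning normal depth (m) for a wide rectangular channel.

    For depth << width, unit discharge q = (1/n) * h^(5/3) * S^(1/2),
    which inverts to h = (q * n / sqrt(S))^(3/5). This is only a
    stand-in for a full hydraulic-model run, to show the procedure.
    """
    q = discharge / width
    return (q * n_manning / slope ** 0.5) ** 0.6

def sensitivity_run(n_samples=1000, seed=1):
    """Vary all inputs simultaneously; return the output depth spread."""
    rng = random.Random(seed)
    depths = []
    for _ in range(n_samples):
        discharge = rng.uniform(80.0, 120.0)   # inflow, m^3/s
        slope = rng.uniform(0.001, 0.005)      # longitudinal gradient
        roughness = rng.uniform(0.025, 0.045)  # Manning's n
        depths.append(normal_depth(discharge, slope, roughness))
    return min(depths), max(depths)

low, high = sensitivity_run()
```

The spread between `low` and `high` is the uncertainty enclosed in the model configuration; repeating the experiment while holding one input fixed isolates that input's contribution, which is how a per-variable uncertainty can be compared against the overall one.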

  9. Benchmark initiative on coupled multiphase flow and geomechanical processes during CO2 injection

    NASA Astrophysics Data System (ADS)

    Benisch, K.; Annewandter, R.; Olden, P.; Mackay, E.; Bauer, S.; Geiger, S.

    2012-12-01

CO2 injection into deep saline aquifers involves multiple strongly interacting processes, such as multiphase flow and geomechanical deformation, which threaten the seal integrity of CO2 repositories. Coupled simulation codes are required to establish realistic prognoses of the coupled processes during CO2 injection operations. International benchmark initiatives help to evaluate, compare and validate coupled simulation results. However, there is no published code comparison study so far focusing on the impact of coupled multiphase flow and geomechanics on the long-term integrity of repositories, which is required to obtain confidence in the predictive capabilities of reservoir simulators. We address this gap by proposing a benchmark study. A wide participation from academic and industrial institutions is sought, as the aim of building confidence in coupled simulators becomes more attainable with many participants. Most published benchmark studies on coupled multiphase flow and geomechanical processes have been performed within the field of nuclear waste disposal (e.g. the DECOVALEX project), using single-phase formulations only. As regards CO2 injection scenarios, international benchmark studies have been published comparing isothermal and non-isothermal multiphase flow processes, such as the code intercomparison by LBNL, the Stuttgart Benchmark study, the CLEAN benchmark approach and other initiatives. Recently, several codes have been developed or extended to simulate the coupling of hydraulic and geomechanical processes (OpenGeoSys, ECLIPSE-VISAGE, GEM, DuMuX and others), which now enables a comprehensive code comparison. We propose four benchmark tests of increasing complexity, addressing the coupling between multiphase flow and geomechanical processes during CO2 injection. In the first case, a horizontal non-faulted 2D model consisting of one reservoir and one cap rock is considered, focusing on stress and strain regime changes in the storage formation and the

  10. International land Model Benchmarking (ILAMB) Package v001.00

    DOE Data Explorer

    Mu, Mingquan [University of California, Irvine; Randerson, James T. [University of California, Irvine; Riley, William J. [Lawrence Berkeley National Laboratory; Hoffman, Forrest M. [Oak Ridge National Laboratory

    2016-05-02

As a contribution to the International Land Model Benchmarking (ILAMB) Project, we are providing new analysis approaches, benchmarking tools, and science leadership. The goal of ILAMB is to assess and improve the performance of land models through international cooperation and to inform the design of new measurement campaigns and field studies to reduce uncertainties associated with key biogeochemical processes and feedbacks. ILAMB is expected to be a primary analysis tool for CMIP6 and future model-data intercomparison experiments. This team has developed initial prototype benchmarking systems for ILAMB, which will be improved and extended to include ocean model metrics and diagnostics.

  11. International land Model Benchmarking (ILAMB) Package v002.00

    SciTech Connect

    Collier, Nathaniel; Hoffman, Forrest M.; Mu, Mingquan; Randerson, James T.; Riley, William J.

    2016-05-09

As a contribution to the International Land Model Benchmarking (ILAMB) Project, we are providing new analysis approaches, benchmarking tools, and science leadership. The goal of ILAMB is to assess and improve the performance of land models through international cooperation and to inform the design of new measurement campaigns and field studies to reduce uncertainties associated with key biogeochemical processes and feedbacks. ILAMB is expected to be a primary analysis tool for CMIP6 and future model-data intercomparison experiments. This team has developed initial prototype benchmarking systems for ILAMB, which will be improved and extended to include ocean model metrics and diagnostics.

  12. How to achieve and prove performance improvement - 15 years of experience in German wastewater benchmarking.

    PubMed

    Bertzbach, F; Franz, T; Möller, K

    2012-01-01

This paper shows the results of performance improvement achieved in benchmarking projects in the wastewater industry in Germany over the last 15 years. A large number of changes in operational practice, and also in achieved annual savings, can be shown, induced in particular by benchmarking at process level. Investigation of this question produces some general findings for the inclusion of performance improvement in a benchmarking project and for the communication of its results. Thus, we elaborate on the concept of benchmarking at both utility and process level, which is still a necessary distinction for the integration of performance improvement into our benchmarking approach. To achieve performance improvement via benchmarking, it should be made quite clear that this outcome depends, on the one hand, on a well-conducted benchmarking programme and, on the other, on the individual situation within each participating utility.

  13. Core Benchmarks Descriptions

    SciTech Connect

    Pavlovichev, A.M.

    2001-05-24

Current regulations require that the design of new fuel cycles for nuclear power installations be supported by a calculational justification performed with certified computer codes. This guarantees that the calculational results will remain within the limits of the declared uncertainties indicated in the certificate issued for the corresponding computer code by Gosatomnadzor of the Russian Federation (GAN). The formal justification of the declared uncertainties is a comparison of results obtained with a commercial code against experiments, or against calculational tests computed, with a defined uncertainty, by certified precision codes of the MCU type or similar. The present level of international cooperation is enlarging the bank of experimental and calculational benchmarks acceptable for certification of commercial codes used to design fuel loadings with MOX fuel. In particular, work is practically finished on forming a list of calculational benchmarks for certification of the TVS-M code as applied to MOX fuel assembly calculations. The results of these activities are presented.

  14. Design and Application of a Community Land Benchmarking System for Earth System Models

    NASA Astrophysics Data System (ADS)

    Mu, M.; Hoffman, F. M.; Lawrence, D. M.; Riley, W. J.; Keppel-Aleks, G.; Koven, C. D.; Kluzek, E. B.; Mao, J.; Randerson, J. T.

    2015-12-01

Benchmarking has been widely used to assess the ability of climate models to capture the spatial and temporal variability of observations during the historical era. For the carbon cycle and terrestrial ecosystems, the design and development of an open-source community platform has been an important goal as part of the International Land Model Benchmarking (ILAMB) project. Here, we developed a new benchmarking software system that enables the user to specify the models, benchmarks, and scoring metrics, so that results can be tailored to specific model intercomparison projects. Evaluation data sets included soil and aboveground carbon stocks, fluxes of energy, carbon and water, burned area, leaf area, and climate forcing and response variables. We used this system to evaluate simulations from the 5th Phase of the Coupled Model Intercomparison Project (CMIP5) with prognostic atmospheric carbon dioxide levels over the period from 1850 to 2005 (i.e., esmHistorical simulations archived on the Earth System Grid Federation). We found that the multi-model ensemble had a high bias in incoming solar radiation across Asia, likely as a consequence of incomplete representation of aerosol effects in this region, and in South America, primarily as a consequence of a low bias in mean annual precipitation. The reduced precipitation in South America had a larger influence on gross primary production than the high bias in incoming light, and as a consequence gross primary production had a low bias relative to the observations. Although model-to-model variations were large, the multi-model mean had a positive bias in atmospheric carbon dioxide that has been attributed in past work to weak ocean uptake of fossil emissions. In midlatitudes of the Northern Hemisphere, most models overestimate latent heat fluxes in the early part of the growing season and underestimate these fluxes in mid-summer and early fall, whereas sensible heat fluxes show the opposite trend.
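A scoring metric of the kind such a benchmarking system applies can be sketched as follows. The actual ILAMB scores are more elaborate (they combine bias, RMSE, phase, and spatial-distribution components), so this simplified relative-bias score follows the spirit of the system rather than its exact definitions.

```python
import math

def relative_bias_score(model_series, obs_series):
    """Map a model-minus-observation mean bias onto a [0, 1] score.

    A score of 1.0 means no mean bias; the score decays exponentially
    as the bias grows relative to the observed mean. Simplified
    illustration, not the exact ILAMB metric.
    """
    n = len(obs_series)
    bias = sum(m - o for m, o in zip(model_series, obs_series)) / n
    obs_mean = sum(obs_series) / n
    return math.exp(-abs(bias / obs_mean))

# An unbiased model scores 1.0; a one-unit mean bias against a mean
# observation of 2.0 scores exp(-0.5), about 0.61.
perfect = relative_bias_score([1.0, 2.0, 3.0], [1.0, 2.0, 3.0])
biased = relative_bias_score([2.0, 3.0, 4.0], [1.0, 2.0, 3.0])
```

Because every variable is mapped to the same [0, 1] range, scores for carbon stocks, fluxes, and climate responses can be averaged into a single per-model summary, which is what makes cross-model league tables possible.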

  15. A chemical EOR benchmark study of different reservoir simulators

    NASA Astrophysics Data System (ADS)

    Goudarzi, Ali; Delshad, Mojdeh; Sepehrnoori, Kamy

    2016-09-01

Interest in chemical EOR processes has intensified in recent years due to advancements in chemical formulations and injection techniques. Injecting polymer (P), surfactant/polymer (SP), and alkaline/surfactant/polymer (ASP) are techniques for improving sweep and displacement efficiencies with the aim of improving oil production in both secondary and tertiary floods. There has been great interest in chemical flooding recently for different challenging situations. These include high-temperature reservoirs, formations with extreme salinity and hardness, naturally fractured carbonates, and sandstone reservoirs with heavy and viscous crude oils. More oil reservoirs are reaching maturity, where secondary polymer floods and tertiary surfactant methods have become increasingly important. This significance has added to the industry's interest in using reservoir simulators as tools for reservoir evaluation and management to minimize costs and increase process efficiency. Reservoir simulators with special features are needed to represent the coupled chemical and physical processes present in chemical EOR. The simulators first need to be validated against well-controlled lab- and pilot-scale experiments to reliably predict full-field implementations. The available laboratory-scale data include (1) phase behavior and rheological data, and (2) results of secondary and tertiary coreflood experiments for P, SP, and ASP floods under reservoir conditions, i.e., chemical retention, pressure drop, and oil recovery. Data collected from corefloods are used as benchmark tests comparing numerical reservoir simulators with chemical EOR modeling capabilities, such as STARS of CMG, ECLIPSE-100 of Schlumberger, and REVEAL of Petroleum Experts. The research UTCHEM simulator from The University of Texas at Austin is also included since it has been the benchmark for chemical flooding simulation for over 25 years. The results of this benchmark comparison will be utilized to improve

  16. Benchmark specifications for EBR-II shutdown heat removal tests

    SciTech Connect

    Sofu, T.; Briggs, L. L.

    2012-07-01

Argonne National Laboratory (ANL) is hosting an IAEA-coordinated research project on benchmark analyses of sodium-cooled fast reactor passive safety tests performed at the Experimental Breeder Reactor-II (EBR-II). The benchmark project involves analysis of a protected and an unprotected loss-of-flow test conducted during an extensive testing program within the framework of the U.S. Integral Fast Reactor program to demonstrate the inherent safety features of EBR-II as a pool-type, sodium-cooled fast reactor prototype. The project is intended to improve the participants' design and safety analysis capabilities for sodium-cooled fast reactors through validation and qualification of safety analysis codes and methods. This paper provides a description of the EBR-II tests included in the program, and outlines the benchmark specifications being prepared to support the IAEA-coordinated research project. (authors)

  17. Evaluating South Carolina's community cardiovascular disease prevention project.

    PubMed Central

    Wheeler, F C; Lackland, D T; Mace, M L; Reddick, A; Hogelin, G; Remington, P L

    1991-01-01

    A community cardiovascular disease prevention program was undertaken as a cooperative effort of the South Carolina Department of Health and Environmental Control and the Centers for Disease Control of the Public Health Service. As part of the evaluation of the project, a large scale community health survey was conducted by the State and Federal agencies. The successful design and implementation of the survey, which included telephone and in-home interviews as well as clinical assessments of participants, is described. Interview response rates were adequate, although physical assessments were completed on only 61 percent of those interviewed. Households without telephones were difficult and costly to identify, and young adults were difficult to locate for survey participation. The survey produced baseline data for program planning and for measuring the success of ongoing intervention efforts. Survey data also have been used to estimate the prevalence of selected cardiovascular disease risk factors. PMID:1910187

  18. Project SOLWIND: Space radiation exposure. [evaluation of particle fluxes

    NASA Technical Reports Server (NTRS)

    Stassinopoulos, E. G.

    1975-01-01

    A special orbital radiation study was conducted for the SOLWIND project to evaluate mission-encountered energetic particle fluxes. Magnetic field calculations were performed with a current field model, extrapolated to the tentative spacecraft launch epoch with linear time terms. Orbital flux integrations for circular flight paths were performed with the latest proton and electron environment models, using new improved computational methods. Temporal variations in the ambient electron environment are considered and partially accounted for. Estimates of average energetic solar proton fluences are given for a one year mission duration at selected integral energies ranging from E greater than 10 to E greater than 100 MeV; the predicted annual fluence is found to relate to the period of maximum solar activity during the next solar cycle. The results are presented in graphical and tabular form; they are analyzed, explained, and discussed.

  19. Evaluating the impact of decision making during construction on transport project outcome.

    PubMed

    Polydoropoulou, Amalia; Roumboutsos, Athena

    2009-11-01

Decisions made during the project construction phase may have considerable impacts on the success of transport projects and undermine the ex-ante project evaluation. An innovative and holistic approach has been taken to assess and address this issue by (a) examining the decision process and procedures during project construction through a field survey, (b) assessing the impact of decisions made during construction on the respective transport project and, finally, (c) developing a quality-monitoring framework model which links decisions made during the project implementation (construction) phase with the ex-ante and ex-post project evaluations. The framework model is proposed as a guiding and support tool for decision makers.

  20. Evaluation of water quality projects in the Lake Tahoe basin.

    PubMed

    Schuster, S; Grismer, M E

    2004-01-01

Lake Tahoe is a large subalpine lake located in the Sierra Nevada Range in the states of California and Nevada. The Lake Tahoe watershed is relatively small (800 km(2)) and is made up of soils with a very low nutrient content, which, combined with the Lake's enormous volume (156 km(3)), produces water of unparalleled clarity. However, urbanization around the Lake during the past 50 yr has greatly increased nutrient flux into the Lake, resulting in increased algae production and rapidly declining water clarity. The Lake's transition from nitrogen-limited to phosphorus-limited conditions during the last 30 yr suggests the onset of cultural eutrophication of Lake Tahoe. Protecting Lake Tahoe's water quality has become a major public concern, and much time, effort, and money has been, and will be, spent on this undertaking. The effectiveness of remedial actions is the subject of some debate. Local regulatory agencies have mandated implementation of best management practices (BMPs) to mitigate the effects of development, sometimes at great additional expense for developers and homeowners who question their effectiveness. Conclusive studies of BMP effectiveness are also expensive and can be difficult to accomplish, such that very few have been completed. However, several project evaluations have been completed and more are underway. These studies generally support the projects' effectiveness in decreasing nutrient flux to Lake Tahoe. Here, we review the existing state of knowledge of nutrient loading to the Lake and highlight the need for further evaluative investigations of BMPs in order to improve their performance in present and future regulatory actions.

  1. Project Familia. Final Evaluation Report, 1993-94. OER Report.

    ERIC Educational Resources Information Center

    New York City Board of Education, Brooklyn, NY. Office of Educational Research.

    Project Familia was an Elementary and Secondary Education Act Title VII project in its second year in 1993-94 in New York City. Project Familia served 77 children at 3 schools who were identified as limited English proficient, special education students in prekindergarten through fifth grade and their parents. The project provided after-school…

  2. District Facilitator Project, E.C.I.A. Chapter 2. Final Evaluation Report, 1982-83.

    ERIC Educational Resources Information Center

    District of Columbia Public Schools, Washington, DC. Div. of Quality Assurance.

    The District Facilitator Project (DFP) works through the National Diffusion Network (NDN) to assist local schools in improving their programs by linking them with exemplary projects from around the country. Evaluation of the project in operation in the District of Columbia public schools in 1982-83 showed that all the project's objectives were…

  3. Aberdeen Area Final Evaluation Report, ESEA Title I Project, Fiscal Year 1974.

    ERIC Educational Resources Information Center

    Bureau of Indian Affairs (Dept. of Interior), Aberdeen, SD. Aberdeen Area Office.

    Compiled from the final evaluation reports of 36 direct instruction projects and 1 Area Technical Assistance project (94 percent of which were contracted and administered by American Indian tribes or Indian school boards), this report is a summative evaluation of 1974 Title I projects in North and South Dakota. A brief introduction describes the…

  4. Evaluation of Title I ESEA Projects, 1974-75: Technical Reports. Report No. 7606.

    ERIC Educational Resources Information Center

    Philadelphia School District, PA. Office of Research and Evaluation.

    Technical reports of individual Title I project evaluations conducted during the 1974-75 school year are contained in this annual volume. It presents information about each project's rationale, expected outcomes, mode of operation, previous evaluative findings, current implementation, and attainment of its objectives. Projects included are:…

  5. Finding the Forest Amid the Trees: Tools for Evaluating Astronomy Education and Public Outreach Projects

    ERIC Educational Resources Information Center

    Bailey, Janelle M.; Slater, Timothy F.

    2004-01-01

    The effective evaluation of educational projects is becoming increasingly important to funding agencies and to the individuals and organizations involved in the projects. This brief "how-to" guide provides an introductory description of the purpose and basic ideas of project evaluation, and uses authentic examples from four different astronomy and…

  6. Thermal Performance Benchmarking

    SciTech Connect

    Feng, Xuhui; Moreno, Gilbert; Bennion, Kevin

    2016-06-07

    The goal for this project is to thoroughly characterize the thermal performance of state-of-the-art (SOA) in-production automotive power electronics and electric motor thermal management systems. Information obtained from these studies will be used to: evaluate advantages and disadvantages of different thermal management strategies; establish baseline metrics for the thermal management systems; identify methods of improvement to advance the SOA; increase the publicly available information related to automotive traction-drive thermal management systems; help guide future electric drive technologies (EDT) research and development (R&D) efforts. The thermal performance results combined with component efficiency and heat generation information obtained by Oak Ridge National Laboratory (ORNL) may then be used to determine the operating temperatures for the EDT components under drive-cycle conditions. In FY16, the 2012 Nissan LEAF power electronics and 2014 Honda Accord Hybrid power electronics thermal management system were characterized. Comparison of the two power electronics thermal management systems was also conducted to provide insight into the various cooling strategies to understand the current SOA in thermal management for automotive power electronics and electric motors.

  7. Benchmark analysis of MCNP{trademark} ENDF/B-VI iron

    SciTech Connect

    Court, J.D.; Hendricks, J.S.

    1994-12-01

    The MCNP ENDF/B-VI iron cross-section data was subjected to four benchmark studies as part of the Hiroshima/Nagasaki dose re-evaluation for the National Academy of Science and the Defense Nuclear Agency. The four benchmark studies were: (1) the iron sphere benchmarks from the Lawrence Livermore Pulsed Spheres; (2) the Oak Ridge National Laboratory Fusion Reactor Shielding Benchmark; (3) a 76-cm diameter iron sphere benchmark done at the University of Illinois; (4) the Oak Ridge National Laboratory Benchmark for Neutron Transport through Iron. MCNP4A was used to model each benchmark and computational results from the ENDF/B-VI iron evaluations were compared to ENDF/B-IV, ENDF/B-V, the MCNP Recommended Data Set (which includes Los Alamos National Laboratory Group T-2 evaluations), and experimental data. The results show that the ENDF/B-VI iron evaluations are as good as, or better than, previous data sets.

  8. Using Evaluability Assessment to Improve Program Evaluation for the Blue-Throated Macaw Environmental Education Project in Bolivia

    ERIC Educational Resources Information Center

    Salvatierra da Silva, Daniela; Jacobson, Susan K.; Monroe, Martha C.; Israel, Glenn D.

    2016-01-01

    An evaluability assessment of a program to save a critically endangered bird helped prepare the Blue-throated Macaw Environmental Education Project for evaluation and program improvement. The evaluability assessment facilitated agreement among key stakeholders on evaluation criteria and intended uses of evaluation information in order to maximize…

  9. Annual Progress Report Fish Research Project Oregon : Project title, Evaluation of Habitat Improvements -- John Day River.

    SciTech Connect

    Olsen, Erik A.

    1984-01-01

    This report summarizes data collected in 1983 to evaluate habitat improvements in Deer, Camp, and Clear creeks, tributaries of the John Day River. The studies are designed to evaluate changes in abundance of spring chinook and summer steelhead due to habitat improvement projects and to contrast fishery benefits with costs of construction and maintenance of each project. Structure types being evaluated are: (1) log weirs, rock weirs, log deflectors, and instream boulders in Deer Creek; (2) log weirs in Camp Creek; and (3) log weir-boulder combinations and introduced spawning gravel in Clear Creek. Abundance of juvenile steelhead ranged from 16% to 119% higher in the improved (treatment) area than in the unimproved (control) area of Deer Creek. However, abundance of steelhead in Camp Creek was not significantly different between treatment and control areas. Chinook and steelhead abundance in Clear Creek was 50% and 25% lower, respectively, in 1983 than the mean abundance estimated in three previous years. The age structure of steelhead was similar between treatment and control areas in Deer and Clear creeks. The treatment area in Camp Creek, however, had a higher percentage of age 2 and older steelhead than the control. Steelhead redd counts in Camp Creek were 36% lower in 1983 than the previous five-year average. Steelhead redd counts in Deer Creek were not made in 1983 because of high streamflows. Chinook redds counted in Clear Creek were 64% lower than the five-year average. Surface area, volume, cover, and spawning gravel were the same or higher than in the corresponding control in each stream, except in Deer Creek, where there was less available cover and spawning gravel in sections with rock weirs and in those with log deflectors, respectively. Pool:riffle ratios ranged from 57:43 in sections in upper Clear Creek with log weirs to 9:91 in sections in Deer Creek with rock weirs. Smolt production following habitat improvements is estimated for each stream

  10. Thermal Performance Benchmarking; NREL (National Renewable Energy Laboratory)

    SciTech Connect

    Moreno, Gilbert

    2015-06-09

    This project proposes to seek out the SOA power electronics and motor technologies to thermally benchmark their performance. The benchmarking will focus on the thermal aspects of the system. System metrics including the junction-to-coolant thermal resistance and the parasitic power consumption (i.e., coolant flow rates and pressure drop performance) of the heat exchanger will be measured. The type of heat exchanger (i.e., channel flow, brazed, folded-fin) and any enhancement features (i.e., enhanced surfaces) will be identified and evaluated to understand their effect on performance. Additionally, the thermal resistance/conductivity of the power module’s passive stack and motor’s laminations and copper winding bundles will also be measured. The research conducted will allow insight into the various cooling strategies to understand which heat exchangers are most effective in terms of thermal performance and efficiency. Modeling analysis and fluid-flow visualization may also be carried out to better understand the heat transfer and fluid dynamics of the systems.
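
A junction-to-coolant thermal resistance of the kind this benchmarking measures is the temperature rise from semiconductor junction to coolant divided by the dissipated power. A minimal sketch; the numbers are illustrative only, not measurements from the NREL study:

```python
def junction_to_coolant_resistance(t_junction_c, t_coolant_c, power_w):
    # R_th = (T_junction - T_coolant) / P_loss, in degrees C per watt
    return (t_junction_c - t_coolant_c) / power_w

# Illustrative values only -- not data from the benchmarking project.
r_th = junction_to_coolant_resistance(t_junction_c=125.0,
                                      t_coolant_c=65.0,
                                      power_w=300.0)
```

A lower R_th means the heat exchanger moves the same heat with a smaller temperature penalty at the junction.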

  11. Benchmark problems for numerical implementations of phase field models

    SciTech Connect

    Jokisaari, A. M.; Voorhees, P. W.; Guyer, J. E.; Warren, J.; Heinonen, O. G.

    2016-10-01

    Here, we present the first set of benchmark problems for phase field models that are being developed by the Center for Hierarchical Materials Design (CHiMaD) and the National Institute of Standards and Technology (NIST). While many scientific research areas use a limited set of well-established software, the growing phase field community continues to develop a wide variety of codes and lacks benchmark problems to consistently evaluate the numerical performance of new implementations. Phase field modeling has become significantly more popular as computational power has increased and is now becoming mainstream, driving the need for benchmark problems to validate and verify new implementations. We follow the example set by the micromagnetics community to develop an evolving set of benchmark problems that test the usability, computational resources, numerical capabilities and physical scope of phase field simulation codes. In this paper, we propose two benchmark problems that cover the physics of solute diffusion and growth and coarsening of a second phase via a simple spinodal decomposition model and a more complex Ostwald ripening model. We demonstrate the utility of benchmark problems by comparing the results of simulations performed with two different adaptive time stepping techniques, and we discuss the needs of future benchmark problems. The development of benchmark problems will enable the results of quantitative phase field models to be confidently incorporated into integrated computational materials science and engineering (ICME), an important goal of the Materials Genome Initiative.
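
To give a flavor of what such a benchmark problem exercises, here is a minimal 1-D spinodal-decomposition (Cahn-Hilliard) time step with periodic boundaries. The discretization and parameters are illustrative, not the CHiMaD/NIST benchmark specification:

```python
import numpy as np

def laplacian(u, dx):
    # Second-order central difference with periodic boundaries.
    return (np.roll(u, 1) + np.roll(u, -1) - 2.0 * u) / dx**2

def cahn_hilliard_step(c, dt, dx, mobility=1.0, kappa=1.0):
    # dc/dt = M * lap( f'(c) - kappa * lap(c) ), with double-well f'(c) = c^3 - c
    mu = c**3 - c - kappa * laplacian(c, dx)      # chemical potential
    return c + dt * mobility * laplacian(mu, dx)

rng = np.random.default_rng(0)
dx, dt = 1.0, 0.01
c = 0.01 * rng.standard_normal(128)               # small perturbation about c = 0
mass0 = c.sum()
for _ in range(1000):
    c = cahn_hilliard_step(c, dt, dx)
mass_drift = abs(c.sum() - mass0)                 # conserved up to round-off
```

A benchmark problem would pin down the domain, initial condition, free energy, and reported quantities so that results like the mass-conservation error and microstructure evolution can be compared across codes and time-stepping schemes.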

  12. Benchmark problems for numerical implementations of phase field models

    DOE PAGES

    Jokisaari, A. M.; Voorhees, P. W.; Guyer, J. E.; ...

    2016-10-01

    Here, we present the first set of benchmark problems for phase field models that are being developed by the Center for Hierarchical Materials Design (CHiMaD) and the National Institute of Standards and Technology (NIST). While many scientific research areas use a limited set of well-established software, the growing phase field community continues to develop a wide variety of codes and lacks benchmark problems to consistently evaluate the numerical performance of new implementations. Phase field modeling has become significantly more popular as computational power has increased and is now becoming mainstream, driving the need for benchmark problems to validate and verify new implementations. We follow the example set by the micromagnetics community to develop an evolving set of benchmark problems that test the usability, computational resources, numerical capabilities and physical scope of phase field simulation codes. In this paper, we propose two benchmark problems that cover the physics of solute diffusion and growth and coarsening of a second phase via a simple spinodal decomposition model and a more complex Ostwald ripening model. We demonstrate the utility of benchmark problems by comparing the results of simulations performed with two different adaptive time stepping techniques, and we discuss the needs of future benchmark problems. The development of benchmark problems will enable the results of quantitative phase field models to be confidently incorporated into integrated computational materials science and engineering (ICME), an important goal of the Materials Genome Initiative.

  13. Benchmark Generation and Simulation at Extreme Scale

    SciTech Connect

    Lagadapati, Mahesh; Mueller, Frank; Engelmann, Christian

    2016-01-01

    The path to extreme scale high-performance computing (HPC) poses several challenges related to power, performance, resilience, productivity, programmability, data movement, and data management. Investigating the performance of parallel applications at scale on future architectures and the performance impact of different architectural choices is an important component of HPC hardware/software co-design. Simulations using models of future HPC systems and communication traces from applications running on existing HPC systems can offer an insight into the performance of future architectures. This work targets technology developed for scalable application tracing of communication events. It focuses on extreme-scale simulation of HPC applications and their communication behavior via lightweight parallel discrete event simulation for performance estimation and evaluation. Instead of simply replaying a trace within a simulator, this work promotes the generation of a benchmark from traces. This benchmark is subsequently exposed to simulation using models to reflect the performance characteristics of future-generation HPC systems. This technique provides a number of benefits, such as eliminating the data intensive trace replay and enabling simulations at different scales. The presented work features novel software co-design aspects, combining the ScalaTrace tool to generate scalable trace files, the ScalaBenchGen tool to generate the benchmark, and the xSim tool to assess the benchmark characteristics within a simulator.
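
The core idea, generating compact benchmark code from a trace rather than replaying it event by event, can be illustrated with a toy run-length encoder that emits loops. The event names and emitted C-like calls here are hypothetical, not ScalaBenchGen's actual output format:

```python
from itertools import groupby

def generate_benchmark(trace):
    # Run-length encode identical consecutive events and emit loops,
    # so the generated benchmark is compact rather than a datum-by-datum replay.
    lines = []
    for event, group in groupby(trace):
        n = len(list(group))
        call = {"send": "MPI_Send(/*...*/);", "recv": "MPI_Recv(/*...*/);"}[event]
        lines.append(f"for (int i = 0; i < {n}; i++) {call}" if n > 1 else call)
    return "\n".join(lines)

code = generate_benchmark(["send"] * 3 + ["recv"] + ["send"] * 2)
```

Real trace-to-benchmark tools apply far richer compression (across loops, call sites, and ranks), but the benefit is the same: the generated benchmark is small, portable, and can be fed to a simulator at scales the original trace never ran at.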

  14. Challenges and Benchmarks in Bioimage Analysis.

    PubMed

    Kozubek, Michal

    2016-01-01

    Similar to the medical imaging community, the bioimaging community has recently realized the need to benchmark various image analysis methods to compare their performance and assess their suitability for specific applications. Challenges sponsored by prestigious conferences have proven to be an effective means of encouraging benchmarking and new algorithm development for a particular type of image data. Bioimage analysis challenges have recently complemented medical image analysis challenges, especially in the case of the International Symposium on Biomedical Imaging (ISBI). This review summarizes recent progress in this respect and describes the general process of designing a bioimage analysis benchmark or challenge, including the proper selection of datasets and evaluation metrics. It also presents examples of specific target applications and biological research tasks that have benefited from these challenges with respect to the performance of automatic image analysis methods that are crucial for the given task. Finally, available benchmarks and challenges in terms of common features, possible classification and implications drawn from the results are analysed.
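
A typical evaluation metric in such segmentation-oriented challenges is the Jaccard index (intersection over union) between a predicted mask and the ground truth. A minimal sketch with made-up masks:

```python
import numpy as np

def jaccard(pred, truth):
    # Intersection-over-union: |P & T| / |P | T|, a common benchmark metric
    # for comparing a predicted segmentation mask against ground truth.
    pred, truth = pred.astype(bool), truth.astype(bool)
    inter = np.logical_and(pred, truth).sum()
    union = np.logical_or(pred, truth).sum()
    return inter / union if union else 1.0

pred  = np.array([[1, 1, 0], [0, 1, 0]])
truth = np.array([[1, 0, 0], [0, 1, 1]])
score = jaccard(pred, truth)
```

Challenge organizers must fix such metrics (and their edge cases, e.g. empty masks) precisely, since small definitional differences change method rankings.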

  15. Internal Benchmarking for Institutional Effectiveness

    ERIC Educational Resources Information Center

    Ronco, Sharron L.

    2012-01-01

    Internal benchmarking is an established practice in business and industry for identifying best in-house practices and disseminating the knowledge about those practices to other groups in the organization. Internal benchmarking can be done with structures, processes, outcomes, or even individuals. In colleges or universities with multicampuses or a…

  16. The NAS kernel benchmark program

    NASA Technical Reports Server (NTRS)

    Bailey, D. H.; Barton, J. T.

    1985-01-01

    A collection of benchmark test kernels that measure supercomputer performance has been developed for the use of the NAS (Numerical Aerodynamic Simulation) program at the NASA Ames Research Center. This benchmark program is described in detail and the specific ground rules are given for running the program as a performance test.

  17. Benchmarking: A Process for Improvement.

    ERIC Educational Resources Information Center

    Peischl, Thomas M.

    One problem with the outcome-based measures used in higher education is that they measure quantity but not quality. Benchmarking, or the use of some external standard of quality to measure tasks, processes, and outputs, is partially solving that difficulty. Benchmarking allows for the establishment of a systematic process to indicate if outputs…

  18. Teaching Breast and Testicular Self-Exams: Evaluation of a High School Curriculum Pilot Project.

    ERIC Educational Resources Information Center

    Luther, Stephen L.; And Others

    1985-01-01

    A high school curriculum project was developed to teach students about the importance of breast and testicular self-examination. Questionnaires were used to evaluate the project. Results are discussed. (DF)

  19. Production of Working Reference Materials for the Capability Evaluation Project

    SciTech Connect

    Phillip D. Noll, Jr.; Robert S. Marshall

    1999-03-01

    Nondestructive waste assay (NDA) methods are employed to determine the mass and activity of waste-entrained radionuclides as part of the National TRU (Trans-Uranic) Waste Characterization Program. In support of this program, the Idaho National Engineering and Environmental Laboratory Mixed Waste Focus Area developed a plan to acquire capability/performance data on systems proposed for NDA purposes. The Capability Evaluation Project (CEP) was designed to evaluate the NDA systems of commercial contractors by subjecting all participants to identical tests involving 55-gallon drum surrogates containing known quantities and distributions of radioactive materials in the form of sealed-source standards, referred to as working reference materials (WRMs). Although numerous Pu WRMs already exist, the CEP WRM set allows for the evaluation of the capability and performance of systems with respect to waste types/configurations that contain increased amounts of {sup 241}Am relative to weapons-grade Pu, waste that is predominantly {sup 241}Am, as well as wastes containing various proportions of depleted uranium. The CEP WRMs consist of a special mixture of PuO{sub 2}/AmO{sub 2} (IAP) and diatomaceous earth (DE), or depleted uranium (DU) oxide and DE, and were fabricated at Los Alamos National Laboratory. The IAP WRMs are contained inside a pair of welded inner and outer stainless steel containers. The DU WRMs are singly contained within a stainless steel container equivalent to the outer container of the IAP standards. This report gives a general overview and discussion relating to the production and certification of the CEP WRMs.

  20. FireHose Streaming Benchmarks

    SciTech Connect

    Karl Anderson, Steve Plimpton

    2015-01-27

    The FireHose Streaming Benchmarks are a suite of stream-processing benchmarks defined to enable comparison of streaming software and hardware, both quantitatively vis-a-vis the rate at which they can process data, and qualitatively by judging the effort involved to implement and run the benchmarks. Each benchmark has two parts. The first is a generator which produces and outputs datums at a high rate in a specific format. The second is an analytic which reads the stream of datums and is required to perform a well-defined calculation on the collection of datums, typically to find anomalous datums that have been created in the stream by the generator. The FireHose suite provides code for the generators, sample code for the analytics (which users are free to re-implement in their own custom frameworks), and a precise definition of each benchmark calculation.
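
The generator/analytic split can be sketched in miniature: a generator plants rare anomalous datums in a high-rate stream, and the analytic must pick them out. This is a toy illustration of the structure, not one of the actual FireHose benchmarks:

```python
import random

def generator(n, anomaly_rate=0.01, seed=0):
    # Emits (key, value) datums; a small fraction carry a planted anomalous value.
    rng = random.Random(seed)
    for key in range(n):
        if rng.random() < anomaly_rate:
            yield (key, 999)                 # planted anomaly
        else:
            yield (key, rng.randint(0, 100)) # normal datum

def analytic(stream, threshold=500):
    # Flags datums whose value exceeds the threshold, processing one datum
    # at a time as a streaming analytic would.
    return [key for key, value in stream if value > threshold]

anomalies = analytic(generator(10_000))
```

In the real suite the generator runs at a fixed high rate over a socket, and implementers are judged both on whether their analytic keeps up and on how much effort the port to their framework took.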

  1. Benchmarking Software Assurance Implementation

    DTIC Science & Technology

    2011-05-18

    Process Focused Assessment – Management Systems (ISO 9001, ISO 27001, ISO 2000) – Capability Maturity Models (CMMI). How: executive leadership commitment; translate ROI into project-manager vocabulary (cost, schedule, quality); start small and build; use collaboration. Vocabulary/reserved words: Software Acquisition, Information Assurance, Project Management, System Engineering, Software Engineering.

  2. Review of Evaluation Procedures Used in Project POWER.

    ERIC Educational Resources Information Center

    Ohio State Univ., Columbus. Center on Education and Training for Employment.

    Project POWER is a workplace literacy program conducted by Triton College. The project offers courses in English as a Second Language (ESL) and Adult Basic Education (ABE) to employers who are willing to pay their employees for half their class time and for 15 percent of the instructional costs. By the end of January 1990, the project had…

  3. Project HEED. Final Evaluation Report, 1974-1975.

    ERIC Educational Resources Information Center

    Edington, Everett D.; Pettibone, Timothy J.

    Project HEED's (Heed Ethnic Education Deplorization) main emphasis in 1974-75 was to develop reading and cultural awareness skills for kindergarten through 4th grades in the 7 project schools on American Indian reservations in Arizona. In its 4th year of operation, the project (funded under Elementary and Secondary Education Title III) involved…

  4. Project Recurso, 1989-1990. Final Evaluation Report. OREA Report.

    ERIC Educational Resources Information Center

    Rivera, Natasha

    This report presents final (fifth year) results of Project Recurso, a federally funded project which provided 147 Spanish-speaking special education students (grades 3-5) in 12 New York City schools with instruction in English as a Second Language (ESL), Native Language Arts (NLA), and bilingual content area subjects. The project also provided…

  5. Evaluating a "Second Life" Problem-Based Learning (PBL) Demonstrator Project: What Can We Learn?

    ERIC Educational Resources Information Center

    Beaumont, Chris; Savin-Baden, Maggi; Conradi, Emily; Poulton, Terry

    2014-01-01

    This article reports the findings of a demonstrator project to evaluate how effectively Immersive Virtual Worlds (IVWs) could support problem-based learning. The project designed, created and evaluated eight scenarios within "Second Life" (SL) for undergraduate courses in health care management and paramedic training. Evaluation was…

  6. An Analysis of Internally Funded Learning and Teaching Project Evaluation in Higher Education

    ERIC Educational Resources Information Center

    Huber, Elaine; Harvey, Marina

    2016-01-01

    Purpose: In the higher education sector, the evaluation of learning and teaching projects is assuming a role as a quality and accountability indicator. The purpose of this paper is to investigate how learning and teaching project evaluation is approached and critiques alignment between evaluation theory and practice. Design/Methodology/Approach:…

  7. 34 CFR 366.60 - What are the project evaluation standards?

    Code of Federal Regulations, 2010 CFR

    2010-07-01

    ... 34 Education 2 2010-07-01 2010-07-01 false What are the project evaluation standards? 366.60... Evaluation Standards and Compliance Indicators § 366.60 What are the project evaluation standards? To be eligible to receive funds under this part, an applicant must agree to comply with the following...

  8. Evaluation of Title I ESEA Projects, 1975-1976: Technical Reports. Report No. 77124.

    ERIC Educational Resources Information Center

    Philadelphia School District, PA. Office of Research and Evaluation.

    Technical reports of individual Title I project evaluations conducted during the 1975-76 school year are presented. The volume contains extensive information about each project's rationale, expected outcomes, mode of operation, previous evaluative findings, current implementation, and attainment of its objectives. The Title I evaluations contained…

  9. Measurement Analysis When Benchmarking Java Card Platforms

    NASA Astrophysics Data System (ADS)

    Paradinas, Pierre; Cordry, Julien; Bouzefrane, Samia

    The advent of the Java Card standard has been a major turning point in smart card technology. With the growing acceptance of this standard, understanding the performance behaviour of these platforms is becoming crucial. To meet this need, we present in this paper, a benchmark framework that enables performance evaluation at the bytecode level. This paper focuses on the validity of our time measurements on smart cards.
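
A common technique for defending the validity of fine-grained time measurements, on smart cards as elsewhere, is to time a large number of repetitions and subtract an empty-operation baseline so that harness overhead cancels out. A generic sketch of that idea, not the paper's actual framework:

```python
import time

def measure_per_call(op, reps=10_000):
    # Time `reps` invocations of a no-op to estimate loop/call overhead ...
    def baseline():
        pass
    t0 = time.perf_counter()
    for _ in range(reps):
        baseline()
    overhead = time.perf_counter() - t0

    # ... then time the operation under test and subtract the overhead.
    t0 = time.perf_counter()
    for _ in range(reps):
        op()
    total = time.perf_counter() - t0
    return max(total - overhead, 0.0) / reps   # seconds per call

per_call = measure_per_call(lambda: sum(range(100)))
```

At the bytecode level the same subtraction is done between two test applets that differ only in the bytecode being measured.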

  10. Finding a benchmark for monitoring hospital cleanliness.

    PubMed

    Mulvey, D; Redding, P; Robertson, C; Woodall, C; Kingsmore, P; Bedwell, D; Dancer, S J

    2011-01-01

    This study evaluated three methods for monitoring hospital cleanliness. The aim was to find a benchmark that could indicate risk to patients from a contaminated environment. We performed visual monitoring, ATP bioluminescence, and microbiological screening of five clinical surfaces before and after detergent-based cleaning on two wards over a four-week period. Five additional sites that were not featured in the routine domestic specification were also sampled. Measurements from all three methods were integrated and compared in order to choose appropriate levels for routine monitoring. We found that visual assessment reflected neither ATP values nor environmental contamination with microbial flora, including Staphylococcus aureus and meticillin-resistant S. aureus (MRSA). There was a relationship between microbial growth categories and the proportion of ATP values exceeding a chosen benchmark, but neither reliably predicted the presence of S. aureus or MRSA. ATP values were occasionally diverse. Detergent-based cleaning reduced levels of organic soil by 32% (95% confidence interval: 16-44%; P<0.001) but did not necessarily eliminate indicator staphylococci, some of which survived the cleaning process. An ATP benchmark value of 100 relative light units offered the closest correlation with microbial growth levels <2.5 cfu/cm(2) (receiver operating characteristic [ROC] curve sensitivity: 57%; specificity: 57%). In conclusion, microbiological and ATP monitoring confirmed environmental contamination and persistence of hospital pathogens, and measured the effect of current cleaning practices on the environment. This study has provided provisional benchmarks to assist with future assessment of hospital cleanliness. Further work is required to refine the practical sampling strategy and choice of benchmarks.
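
Sensitivity and specificity for a candidate benchmark threshold come directly from the counts of true/false positives and negatives. A sketch using hypothetical (ATP reading, contaminated?) pairs, not the study's data:

```python
# Hypothetical (ATP relative light units, microbiologically contaminated) pairs.
samples = [(80, False), (120, True), (95, False), (150, True),
           (60, False), (110, False), (200, True), (90, True)]

def sens_spec(samples, benchmark=100):
    # A sample is flagged "dirty" when its ATP reading exceeds the benchmark.
    tp = sum(1 for atp, dirty in samples if dirty and atp > benchmark)
    fn = sum(1 for atp, dirty in samples if dirty and atp <= benchmark)
    tn = sum(1 for atp, dirty in samples if not dirty and atp <= benchmark)
    fp = sum(1 for atp, dirty in samples if not dirty and atp > benchmark)
    return tp / (tp + fn), tn / (tn + fp)

sensitivity, specificity = sens_spec(samples)
```

Sweeping the benchmark value and plotting sensitivity against 1 − specificity yields the ROC curve from which a threshold such as 100 RLU is chosen.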

  11. US-VISIT Identity Matching Algorithm Evaluation Program: ADIS Algorithm Evaluation Project Plan Update

    SciTech Connect

    Grant, C W; Lenderman, J S; Gansemer, J D

    2011-02-24

    This document is an update to the 'ADIS Algorithm Evaluation Project Plan' specified in the Statement of Work for the US-VISIT Identity Matching Algorithm Evaluation Program, as deliverable II.D.1. The original plan was delivered in August 2010. This document modifies the plan to reflect modified deliverables reflecting delays in obtaining a database refresh. This document describes the revised schedule of the program deliverables. The detailed description of the processes used, the statistical analysis processes and the results of the statistical analysis will be described fully in the program deliverables. The US-VISIT Identity Matching Algorithm Evaluation Program is work performed by Lawrence Livermore National Laboratory (LLNL) under IAA HSHQVT-07-X-00002 P00004 from the Department of Homeland Security (DHS).

  12. The ZOG Technology Demonstration Project: A System Evaluation of USS CARL VINSON (CVN 70)

    DTIC Science & Technology

    1984-12-01

    NPRDC TR 85-14, December 1984. The ZOG Technology Demonstration Project: A System Evaluation of USS CARL VINSON (CVN 70). Van Matre, Nicholas; Moy, Melvyn C.; McCann, Patrick H. San Diego, California 92152.

  13. Benchmarking in academic pharmacy departments.

    PubMed

    Bosso, John A; Chisholm-Burns, Marie; Nappi, Jean; Gubbins, Paul O; Ross, Leigh Ann

    2010-10-11

    Benchmarking in academic pharmacy and recommendations for its potential uses in academic pharmacy departments are discussed in this paper. Benchmarking is the process by which practices, procedures, and performance metrics are compared to an established standard or best practice. Many businesses and industries use benchmarking to compare processes and outcomes, and ultimately plan for improvement. Institutions of higher learning have embraced benchmarking practices to facilitate measuring the quality of their educational and research programs. Benchmarking is also used internally to justify the allocation of institutional resources or to mediate among competing demands for additional program staff or space. Surveying all chairs of academic pharmacy departments to explore benchmarking issues such as department size and composition, as well as faculty teaching, scholarly, and service productivity, could provide valuable information. To date, attempts to gather these data have had limited success. We believe this information is potentially important, urge that efforts to gather it be continued, and offer suggestions to achieve full participation.

  14. An Assessment of the Ways Local Grant Programs Perceive, Implement, and Utilize Program Evaluation: Local Project Evaluation Through the Looking Glass or Project Directors in Wonderland.

    ERIC Educational Resources Information Center

    Hipps, Jerome A.; Friedman, Sanford I.

    Directors of 39 projects funded by the federal Consumers' Education Program were interviewed about their attitudes toward federally mandated evaluation. The projects were varied, and included activities such as consumer workshops; development of curriculum, materials or policy; inservice training; consumer advocacy/counseling; and television…

  15. A Study of the Norm-Referenced Procedure for Evaluating Project Effectiveness as Applied in the Evaluation of Project Information Packages. Research Memorandum.

    ERIC Educational Resources Information Center

    Kaskowitz, David H.; Norwood, Charles R.

    Project Information Packages (PIPs) are informative kits that describe remedial educational programs and contain instructions for installing the projects in a new site. Six such PIPs were evaluated using a norm-referenced procedure applied to standardized test scores. Pretest scores were compared to posttest scores which were calculated according…

  16. Graphite and Beryllium Reflector Critical Assemblies of UO2 (Benchmark Experiments 2 and 3)

    SciTech Connect

    Margaret A. Marshall; John D. Bess

    2012-11-01

    INTRODUCTION A series of experiments was carried out in 1962-65 at the Oak Ridge National Laboratory Critical Experiments Facility (ORCEF) for use in space reactor research programs. A core containing 93.2 wt% enriched UO2 fuel rods was used in these experiments. The first part of the experimental series consisted of 252 tightly-packed fuel rods (1.27-cm triangular pitch) with graphite reflectors [1], the second part used 252 graphite-reflected fuel rods organized in a 1.506-cm triangular-pitch array [2], and the final part of the experimental series consisted of 253 beryllium-reflected fuel rods in a 1.506-cm triangular-pitch configuration and in a 7-tube-cluster configuration [3]. Fission rate distribution and cadmium ratio measurements were taken for all three parts of the experimental series. Reactivity coefficient measurements were taken for various materials placed in the beryllium-reflected core. All three experiments in the series have been evaluated for inclusion in the International Reactor Physics Experiment Evaluation Project (IRPhEP) [4] and the International Criticality Safety Benchmark Evaluation Project (ICSBEP) Handbooks [5]. The evaluation of the first experiment in the series was discussed at the 2011 ANS Winter Meeting [6]. The evaluations of the second and third experiments are discussed below. These experiments are of interest as benchmarks because they support the validation of compact reactor designs with characteristics similar to the design parameters for space nuclear fission surface power systems [7].

  17. A Heterogeneous Medium Analytical Benchmark

    SciTech Connect

    Ganapol, B.D.

    1999-09-27

    A benchmark, called benchmark BLUE, has been developed for one-group neutral particle (neutron or photon) transport in a one-dimensional sub-critical heterogeneous plane parallel medium with surface illumination. General anisotropic scattering is accommodated through the Green's Function Method (GFM). Numerical Fourier transform inversion is used to generate the required Green's functions, which are kernels to coupled integral equations that give the exiting angular fluxes. The interior scalar flux is then obtained through quadrature. A compound iterative procedure for quadrature order and slab surface source convergence provides highly accurate benchmark-quality results (4 to 5 places of accuracy).

  18. Phase-covariant quantum benchmarks

    SciTech Connect

    Calsamiglia, J.; Aspachs, M.; Munoz-Tapia, R.; Bagan, E.

    2009-05-15

    We give a quantum benchmark for teleportation and quantum storage experiments suited for pure and mixed test states. The benchmark is based on the average fidelity over a family of phase-covariant states and certifies that an experiment cannot be emulated by a classical setup, i.e., by a measure-and-prepare scheme. We give an analytical solution for qubits, which shows important differences with standard state estimation approach, and compute the value of the benchmark for coherent and squeezed states, both pure and mixed.

  19. California commercial building energy benchmarking

    SciTech Connect

    Kinney, Satkartar; Piette, Mary Ann

    2003-07-01

    Building energy benchmarking is the comparison of whole-building energy use relative to a set of similar buildings. It provides a useful starting point for individual energy audits and for targeting buildings for energy-saving measures in multiple-site audits. Benchmarking is of interest and practical use to a number of groups. Energy service companies and performance contractors communicate energy savings potential with ''typical'' and ''best-practice'' benchmarks while control companies and utilities can provide direct tracking of energy use and combine data from multiple buildings. Benchmarking is also useful in the design stage of a new building or retrofit to determine if a design is relatively efficient. Energy managers and building owners have an ongoing interest in comparing energy performance to others. Large corporations, schools, and government agencies with numerous facilities also use benchmarking methods to compare their buildings to each other. The primary goal of Task 2.1.1 Web-based Benchmarking was the development of a web-based benchmarking tool, dubbed Cal-Arch, for benchmarking energy use in California commercial buildings. While there were several other benchmarking tools available to California consumers prior to the development of Cal-Arch, there were none that were based solely on California data. Most available benchmarking information, including the Energy Star performance rating, were developed using DOE's Commercial Building Energy Consumption Survey (CBECS), which does not provide state-level data. Each database and tool has advantages as well as limitations, such as the number of buildings and the coverage by type, climate regions and end uses. There is considerable commercial interest in benchmarking because it provides an inexpensive method of screening buildings for tune-ups and retrofits. However, private companies who collect and manage consumption data are concerned that the identities of building owners might be revealed and

  20. Algebra Project DR K-12 Cohorts--Demonstration Project: Summative Evaluation Report

    ERIC Educational Resources Information Center

    St. John, Mark

    2014-01-01

    The Algebra Project DR K-12, funded by the National Science Foundation as a Research and Development Project, addressed the challenge of offering significant STEM content for students to ensure public literacy and workforce readiness. The project's primary purpose was to test the feasibility and effectiveness of a model for establishing four-year…

  1. Off-Reservation Boarding School Project (ORBS Project). Research and Evaluation Report No. 11.

    ERIC Educational Resources Information Center

    Bureau of Indian Affairs (Dept. of Interior), Albuquerque, NM.

    Pilot projects to experiment with methods of achieving the objectives of the Off-Reservation Boarding School Project (ORBS) were conducted at Sherman Indian High School, Riverside, California, and at Chilocco Indian High School, Chilocco, Oklahoma. The general objectives for the ORBS Project at each school were to review long range goals, to…

  2. An evaluation of meniscal collagenous structure using optical projection tomography

    PubMed Central

    2013-01-01

    Background The collagenous structure of menisci is a complex network of circumferentially oriented fascicles and interwoven radially oriented tie-fibres. To date, examination of this micro-architecture has been limited to two-dimensional imaging techniques. The purpose of this study was to evaluate the ability of the three-dimensional imaging technique optical projection tomography (OPT) to visualize the collagenous structure of the meniscus. If successful, this technique would be the first to visualize the macroscopic orientation of collagen fascicles in 3-D in the meniscus and could further refine load bearing mechanisms in the tissue. OPT is an imaging technique capable of imaging samples on the meso-scale (1-10 mm) at micro-scale resolution. The technique, similar to computed tomography, takes two-dimensional images of objects from incremental angles around the object and reconstructs them using a back projection algorithm to determine three-dimensional structure. Methods Bovine meniscal samples were imaged from four locations (outer main body, femoral surface, tibial surface and inner main body) to determine the variation in collagen orientation throughout the tissue. Bovine stifles (n = 2) were obtained from a local abattoir and the menisci carefully dissected. Menisci were fixed in methanol and subsequently cut using a custom cutting jig (n = 4 samples per meniscus). Samples were then mounted in agarose, dehydrated in methanol and subsequently cleared using benzyl alcohol benzyl benzoate (BABB) and imaged using OPT. Results Results indicate circumferential, radial and oblique collagenous orientations at the contact surfaces and in the inner third of the main body of the meniscus. Imaging identified fascicles ranging from 80-420 μm in diameter. Transition zones where fascicles were found to have a woven or braided appearance were also identified. The outer-third of the main body was composed of fascicles oriented predominantly in the

  3. Parallel Ada benchmarks for the SVMS

    NASA Technical Reports Server (NTRS)

    Collard, Philippe E.

    1990-01-01

    The use of the parallel processing paradigm to design and develop faster and more reliable computers appears to clearly mark the future of information processing. NASA started the development of such an architecture: the Spaceborne VHSIC Multi-processor System (SVMS). Ada will be one of the languages used to program the SVMS. One of the unique characteristics of Ada is that it supports parallel processing at the language level through its tasking constructs. It is important for the SVMS project team to assess how efficiently the SVMS architecture will be implemented, as well as how efficiently the Ada environment will be ported to the SVMS. AUTOCLASS II, a Bayesian classifier written in Common Lisp, was selected as one of the benchmarks for SVMS configurations. The purpose of the R and D effort was to provide the SVMS project team with a version of AUTOCLASS II, written in Ada, that would make use of Ada tasking constructs as much as possible so as to constitute a suitable benchmark. Additionally, a set of programs was developed to measure Ada tasking efficiency on parallel architectures and to determine the critical parameters influencing tasking efficiency. All this was designed to provide the SVMS project team with a set of suitable tools for the development of the SVMS architecture.

  4. Parent Leadership Training Project, October 1, 1970-September 30, 1972. Independent Evaluator's Report.

    ERIC Educational Resources Information Center

    Arter, Rhetta M.

    The Parent Leadership Training Project (PLTP) through Adult Basic Education was established as a two-year demonstration project designed to increase the reading skills of adults (16 and over) through a language-experience approach, using topics selected by the participants. The independent project evaluation covers the entire operational period…

  5. Chesterfield County Public Schools Social Studies Skills Evaluation Project 1976-1977.

    ERIC Educational Resources Information Center

    Weber, Larry; Fleming, Dan

    Virginia sponsored four evaluation projects during the 1976-1977 academic year. Four city school districts conducted projects to upgrade social studies skills of their students. This guide explains the conduct of that project in Chesterfield County Public Schools. Experimental and control groups were established at grades 7-8, grades 9-10, and…

  6. Project Talented and Gifted Second Evaluation Report: ESEA Title III Region II.

    ERIC Educational Resources Information Center

    Khatena, Joe

    Presented in the annual (1974-75) evaluation of Project Talented and Gifted are results of an appraisal of over 50 student participants (10- to 12-years-old) and the project staff and resource personnel. The project is described as a 3-month institute to provide experiences in areas such as learning to use creative thinking and problem-solving…

  7. Evaluating success criteria and project monitoring in river enhancement within an adaptive management framework

    USGS Publications Warehouse

    O'Donnell, T. K.; Galat, D.L.

    2008-01-01

    Objective setting, performance measures, and accountability are important components of an adaptive-management approach to river-enhancement programs. Few lessons learned by river-enhancement practitioners in the United States have been documented and disseminated relative to the number of projects implemented. We conducted scripted telephone surveys with river-enhancement project managers and practitioners within the Upper Mississippi River Basin (UMRB) to determine the extent of setting project success criteria, monitoring, evaluation of monitoring data, and data dissemination. Investigation of these elements enabled a determination of those that inhibited adaptive management. Seventy river enhancement projects were surveyed. Only 34% of projects surveyed incorporated a quantified measure of project success. Managers most often relied on geophysical attributes of rivers when setting project success criteria, followed by biological communities. Ninety-one percent of projects that performed monitoring included biologic variables, but the lack of data collection before and after project completion and lack of field-based reference or control sites will make future assessments of ecologic success difficult. Twenty percent of projects that performed monitoring evaluated ≥1 variable but did not disseminate their evaluations outside their organization. Results suggest greater incentives may be required to advance the science of river enhancement. Future river-enhancement programs within the UMRB and elsewhere can increase knowledge gained from individual projects by offering better guidance on setting success criteria before project initiation and evaluation through established monitoring protocols. © 2007 Springer Science+Business Media, LLC.

  8. Workplace ESL Literacy in Diverse Small Business Contexts: Final Evaluation Report on Project EXCEL.

    ERIC Educational Resources Information Center

    Hemphill, David F.

    Project EXCEL, a workplace literacy project involving four small business enterprises in San Francisco, is evaluated. The project focused on literacy and basic skills training for limited-English-proficient (LEP) workers. The businesses included the following: a communications and mass mailing firm; a dessert wholesale company; a Mexican…

  9. Rationale, design, and methods for process evaluation in the Childhood Obesity Research Demonstration project

    Technology Transfer Automated Retrieval System (TEKTRAN)

    The cross-site process evaluation plan for the Childhood Obesity Research Demonstration (CORD) project is described here. The CORD project comprises 3 unique demonstration projects designed to integrate multi-level, multi-setting health care and public health interventions over a 4-year funding peri...

  10. Incorporating Asymmetric Dependency Patterns in the Evaluation of IS/IT projects Using Real Option Analysis

    ERIC Educational Resources Information Center

    Burke, John C.

    2012-01-01

    The objective of my dissertation is to create a general approach to evaluating IS/IT projects using Real Option Analysis (ROA). This is an important problem because an IT Project Portfolio (ITPP) can represent hundreds of projects, millions of dollars of investment and hundreds of thousands of employee hours. Therefore, any advance in the…

  11. Evaluating the Effectiveness of Collaborative Computer-Intensive Projects in an Undergraduate Psychometrics Course

    ERIC Educational Resources Information Center

    Barchard, Kimberly A.; Pace, Larry A.

    2010-01-01

    Undergraduate psychometrics classes often use computer-intensive active learning projects. However, little research has examined active learning or computer-intensive projects in psychometrics courses. We describe two computer-intensive collaborative learning projects used to teach the design and evaluation of psychological tests. Course…

  12. Follow-Up Evaluation Project. From July 1, 1981 to June 30, 1983. Final Report.

    ERIC Educational Resources Information Center

    Santa Fe Community Coll., Gainesville, FL.

    A project was undertaken to revise a model competency-based trade and industrial education program that had been developed for use in Florida schools in a project that was implemented earlier. During the followup evaluation, the project staff compiled task listings for each of the following trade and industrial education program areas: automotive;…

  13. Evaluating the High School Lunar Research Projects Program

    NASA Astrophysics Data System (ADS)

    Shaner, A. J.; Shipp, S. S.; Allen, J.; Kring, D. A.

    2012-12-01

    The Center for Lunar Science and Exploration (CLSE), a collaboration between the Lunar and Planetary Institute and NASA's Johnson Space Center, is one of seven member teams of the NASA Lunar Science Institute (NLSI). In addition to research and exploration activities, the CLSE team is deeply invested in education and outreach. In support of NASA's and NLSI's objective to train the next generation of scientists, CLSE's High School Lunar Research Projects program is a conduit through which high school students can actively participate in lunar science and learn about pathways into scientific careers. The objectives of the program are to enhance 1) student views of the nature of science; 2) student attitudes toward science and science careers; and 3) student knowledge of lunar science. In its first three years, approximately 140 students and 28 teachers from across the United States have participated in the program. Before beginning their research, students undertake Moon 101, a guided-inquiry activity designed to familiarize them with lunar science and exploration. Following Moon 101, and guided by a lunar scientist mentor, teams choose a research topic, ask their own research question, and design their own research approach to direct their investigation. At the conclusion of their research, teams present their results to a panel of lunar scientists. This panel selects four posters to be presented at the annual Lunar Science Forum held at NASA Ames. The top scoring team travels to the forum to present their research. Three instruments have been developed or modified to evaluate the extent to which the High School Lunar Research Projects meets its objectives. These three instruments measure changes in student views of the nature of science, attitudes towards science and science careers, and knowledge of lunar science. Exit surveys for teachers, students, and mentors were also developed to elicit general feedback about the program and its impact. The nature of science…

  14. Grand Junction Projects Office Remedial Action Project Building 2 public dose evaluation. Final report

    SciTech Connect

    Morris, R.

    1996-05-01

    Building 2 on the U.S. Department of Energy (DOE) Grand Junction Projects Office (GJPO) site, which is operated by Rust Geotech, is part of the GJPO Remedial Action Program. This report describes measurements and modeling efforts to evaluate the radiation dose to members of the public who might someday occupy or tear down Building 2. The assessment of future doses to those occupying or demolishing Building 2 is based on assumptions about future uses of the building, measured data when available, and predictive modeling when necessary. Future use of the building is likely to be as an office facility. The DOE-sponsored program RESRAD-BUILD, Version 1.5, was chosen as the modeling tool. Releasing the building for unrestricted use instead of demolishing it now could save a substantial amount of money compared with the baseline cost estimate because the site telecommunications system, housed in Building 2, would not be disabled and replaced. The information developed in this analysis may be used as part of an as low as reasonably achievable (ALARA) cost/benefit determination regarding disposition of Building 2.

  15. The WACMOS-ET project - Part 2: Evaluation of global terrestrial evaporation data sets

    NASA Astrophysics Data System (ADS)

    Miralles, D. G.; Jiménez, C.; Jung, M.; Michel, D.; Ershadi, A.; McCabe, M. F.; Hirschi, M.; Martens, B.; Dolman, A. J.; Fisher, J. B.; Mu, Q.; Seneviratne, S. I.; Wood, E. F.; Fernández-Prieto, D.

    2015-10-01

    The WACMOS-ET project aims to advance the development of land evaporation estimates at global and regional scales. Its main objective is the derivation, validation and inter-comparison of a group of existing evaporation retrieval algorithms driven by a common forcing data set. Three commonly used process-based evaporation methodologies are evaluated: the Penman-Monteith algorithm behind the official Moderate Resolution Imaging Spectroradiometer (MODIS) evaporation product (PM-MOD), the Global Land Evaporation Amsterdam Model (GLEAM), and the Priestley and Taylor Jet Propulsion Laboratory model (PT-JPL). The resulting global spatiotemporal variability of evaporation, the closure of regional water budgets and the discrete estimation of land evaporation components or sources (i.e. transpiration, interception loss and direct soil evaporation) are investigated using river discharge data, independent global evaporation data sets and results from previous studies. In a companion article (Part 1), Michel et al. (2015) inspect the performance of these three models at local scales using measurements from eddy-covariance towers, and include in the assessment the Surface Energy Balance System (SEBS) model. In agreement with Part 1, our results here indicate that the Priestley and Taylor-based products (PT-JPL and GLEAM) perform overall best for most ecosystems and climate regimes. While all three products adequately represent the expected average geographical patterns and seasonality, there is a tendency for PM-MOD to underestimate the flux in the tropics and subtropics. Overall, results from GLEAM and PT-JPL appear more realistic when compared against surface water balances from 837 globally-distributed catchments, and against separate evaporation estimates from ERA-Interim and the Model Tree Ensemble (MTE). Nonetheless, all products manifest large dissimilarities during conditions of water stress and drought, and deficiencies in the way evaporation is partitioned into its

  16. The WACMOS-ET project - Part 2: Evaluation of global terrestrial evaporation data sets

    NASA Astrophysics Data System (ADS)

    Miralles, D. G.; Jiménez, C.; Jung, M.; Michel, D.; Ershadi, A.; McCabe, M. F.; Hirschi, M.; Martens, B.; Dolman, A. J.; Fisher, J. B.; Mu, Q.; Seneviratne, S. I.; Wood, E. F.; Fernández-Prieto, D.

    2016-02-01

    The WAter Cycle Multi-mission Observation Strategy - EvapoTranspiration (WACMOS-ET) project aims to advance the development of land evaporation estimates on global and regional scales. Its main objective is the derivation, validation, and intercomparison of a group of existing evaporation retrieval algorithms driven by a common forcing data set. Three commonly used process-based evaporation methodologies are evaluated: the Penman-Monteith algorithm behind the official Moderate Resolution Imaging Spectroradiometer (MODIS) evaporation product (PM-MOD), the Global Land Evaporation Amsterdam Model (GLEAM), and the Priestley-Taylor Jet Propulsion Laboratory model (PT-JPL). The resulting global spatiotemporal variability of evaporation, the closure of regional water budgets, and the discrete estimation of land evaporation components or sources (i.e. transpiration, interception loss, and direct soil evaporation) are investigated using river discharge data, independent global evaporation data sets and results from previous studies. In a companion article (Part 1), Michel et al. (2016) inspect the performance of these three models at local scales using measurements from eddy-covariance towers and include in the assessment the Surface Energy Balance System (SEBS) model. In agreement with Part 1, our results indicate that the Priestley and Taylor products (PT-JPL and GLEAM) perform best overall for most ecosystems and climate regimes. While all three evaporation products adequately represent the expected average geographical patterns and seasonality, there is a tendency in PM-MOD to underestimate the flux in the tropics and subtropics. Overall, results from GLEAM and PT-JPL appear more realistic when compared to surface water balances from 837 globally distributed catchments and to separate evaporation estimates from ERA-Interim and the model tree ensemble (MTE). Nonetheless, all products show large dissimilarities during conditions of water stress and drought and

  17. Project CHAMP, 1985-1986. OEA Evaluation Report.

    ERIC Educational Resources Information Center

    New York City Board of Education, Brooklyn. Office of Educational Assessment.

    The Chinese Achievement and Mastery program, Project CHAMP, was a bilingual (Chinese/English) project offered at three high schools in Manhattan. The major goals were to enable Chinese students of limited English proficiency (LEP) to learn English and to master content in mathematics, science, global history, computer mathematics, and native…

  18. Project Aprendizaje, 1988-89. Evaluation Section Report. OREA Report.

    ERIC Educational Resources Information Center

    Berney, Tomi D.; Velasquez, Clara

    In its first year, Project Aprendizaje served 250 students from the Dominican Republic and Puerto Rico at Seward Park High School in Manhattan (New York). Project objectives were to improve participants' language skills in Spanish and English, help participants successfully complete content area courses needed for graduation, and provide career…

  19. Project PROBE, 1985-1986. OEA Evaluation Report.

    ERIC Educational Resources Information Center

    New York City Board of Education, Brooklyn. Office of Educational Assessment.

    In its second year of operation, Project PROBE (Professions Oriented Bilingual Education) experienced difficulty in meeting some of its instructional objectives. The project had sought to provide instructional and supportive services to 200 Spanish-speaking students from Latin America at Louis D. Brandeis High School (Manhattan, New York) and to…

  20. Alberta Education Energy Conservation Project. Phase II: Internal Evaluation.

    ERIC Educational Resources Information Center

    Sundmark, Dana

    This report is based on the Alberta Education Energy Conservation Project - Phase II. The project was a follow-up to an earlier study, extending from June 1980 to June 1983, in which government funding and engineering manpower were used to conduct an energy management program in 52 selected pilot schools in 5 areas of the province. The report…

  1. Project HEED. Final Evaluation Report, 1973-74.

    ERIC Educational Resources Information Center

    Edington, Everett D.; Pettibone, Timothy

    In 1973-74, approximately 1,100 Indian students in grades 1 through 8 participated in Project HEED (Heed Ethnic Educational Depolarization) in Arizona. The project target sites were 59 classrooms at Sacaton, Sells, Peach Springs, San Carlos, Topowa, Many Farms, St. Charles Mission, and Hoteville. Primary objectives were: (1) improvement in reading…

  2. Project HEED, Title III, Section 306. Final Evaluation Report.

    ERIC Educational Resources Information Center

    Hughes, Orval D.

    Project HEED (Heed Ethnic Educational Depolarization) involves over 1,000 Indian children in grades 1-8 in Arizona. The project target sites are 48 classrooms at Sells, Topowa, San Carlos, Many Farms, Hotevilla, Peach Springs, and Sacaton. Objectives are to increase: (1) reading achievement, (2) affective behavior of teachers, (3) motivation by…

  3. Production Workshop Project, DPPF: 1971-72 Evaluation.

    ERIC Educational Resources Information Center

    Kilbane, Marian; Fleming, Margaret

    The Production Workshop Project was designed to promote the educational rehabilitation of selected ninth-grade students. Programs in block-scheduled academic instruction were integrated with vocational training in a Production Workshop setting. The 1971-72 Project activities served a total of 243 students--117 boys and 126 girls. Approximately 68…

  4. Project SAIL: An Evaluation of a Dropout Prevention Program.

    ERIC Educational Resources Information Center

    Thompson, John L.; And Others

    Project SAIL (Student Advocates Inspire Learning) is a Title IV-C Project located in Hopkins, Minnesota, designed to prevent students from dropping out of school by keeping them successfully involved in the mainstream environment. This study presents a review of other dropout prevention approaches, describes the intervention strategies involved in…

  5. Incentives in Education Project, Impact Evaluation Report. Final Report.

    ERIC Educational Resources Information Center

    Planar Corp., Washington, DC.

    This report describes the results of a demonstration project carried out in four cities during 1971-72. The project aimed at exploring the feasibility and impact of two different forms of monetary incentive payments. In one form -- the "Teacher-Only" model -- the teachers in a school were offered a series of bonuses ranging from $150 to $600 per class…

  6. Evaluating Evolution: Naturalistic Inquiry and the Perseus Project.

    ERIC Educational Resources Information Center

    Neuman, Delia

    1991-01-01

    Describes the Perseus Project, a Harvard University-based effort to develop a hypermedia library of text and images concerning classical Greece. Explores the role of naturalistic inquiry (NI) in the Project. Reports that NI has helped researchers uncover unanticipated demands upon instructors, students, and developers in working with hypermedia.…

  7. An Evaluation of the Multivariate Methodology of the Project.

    ERIC Educational Resources Information Center

    Harman, Harry H.

    Presented at a symposium on "The Structure of Concept Attainment Abilities Project: Final Report and Critique," this paper provides the methodological aspects of the project. The discussion centers around a "Guide to the Multivariate Methods," which is provided in the paper. The basic guide-posts are the types of analysis and the types of content.…

  8. Project Head Start: Evaluation and Research Summary 1965-1967.

    ERIC Educational Resources Information Center

    Office of Economic Opportunity, Washington, DC.

    Project Head Start has as its goal the improvement of the child's physical health, intellectual performance, social attitudes, and sense of self. The project involves over half a million children each year, including children in both summer and yearlong programs. About 40 percent of Head Start pupils are Negro, about 30 percent are white, and the…

  9. Copernicus Project: Learning with Laptops: Year 1 Evaluation Report.

    ERIC Educational Resources Information Center

    Fouts, Jeffrey T.; Stuen, Carol

    The Copernicus Project is a multi-district effort designed to incorporate technology, specifically the laptop computer, into the instructional and learning process of the public schools. Participants included six school districts in Washington state, the Toshiba and Microsoft Corporations, and parents. The project called for a 1 to 1…

  10. Project CARIBE, 1985-1986. OEA Evaluation Report.

    ERIC Educational Resources Information Center

    New York City Board of Education, Brooklyn. Office of Educational Assessment.

    In 1985-86, the second year of funding, Project CARIBE proposed to increase career awareness among Spanish-speaking students of limited English proficiency (LEP) through a computer-literacy program. The project operated at two schools in Brooklyn, New York, Eastern District High School and Clara Barton High School, but after the number of…

  11. Evaluation of the Warm Springs Career Exploration Project.

    ERIC Educational Resources Information Center

    Owens, Thomas R.

    The Warm Springs Career Exploration Project (WSCEP) is an adaptation of experience-based career education (EBCE) for Native Americans residing on the Warm Springs Indian Reservation. The project served as a full-time program for American Indian students aged 16-19 who had dropped out of school, and also served as a part-time career development…

  12. Logic system aids in evaluation of project readiness

    NASA Technical Reports Server (NTRS)

    Maris, S. J.; Obrien, T. J.

    1966-01-01

    Measurement Operational Readiness Requirements /MORR/ assignments logic is used for determining the readiness of a complex project to go forward as planned. The system uses a logic network that assigns qualities to all important criteria in a project and establishes a logical sequence of measurements to determine what the conditions are.

  13. A risk evaluation for the fuel retrieval sub-project

    SciTech Connect

    Carlisle, B.S.

    1996-10-11

    This study reviews the technical, schedule and budget baselines of the sub-project to assure all significant issues have been identified on the sub-project issues management list. The issue resolution dates are identified and resolution plans established. Those issues that could adversely impact procurement activities have been uniquely identified on the list and a risk assessment completed.

  14. Kinder Lernen Deutsch Materials Evaluation Project: Grades K-8.

    ERIC Educational Resources Information Center

    American Association of Teachers of German.

    The Kinder Lernen Deutsch (Children Learn German) project, begun in 1987, is designed to promote German as a second language in grades K-8. The project is premised on the idea that the German program will contribute to the total development of the child and the child's personality. Included in this guide are a selection of recommended core…

  15. Benchmarking Measures of Network Influence

    NASA Astrophysics Data System (ADS)

    Bramson, Aaron; Vandermarliere, Benjamin

    2016-09-01

    Identifying key agents for the transmission of diseases (ideas, technology, etc.) across social networks has predominantly relied on measures of centrality on a static base network or a temporally flattened graph of agent interactions. Various measures have been proposed as the best trackers of influence, such as degree centrality, betweenness, and k-shell, depending on the structure of the connectivity. We consider SIR and SIS propagation dynamics on a temporally-extruded network of observed interactions and measure the conditional marginal spread as the change in the magnitude of the infection given the removal of each agent at each time: its temporal knockout (TKO) score. We argue that this TKO score is an effective benchmark measure for evaluating the accuracy of other, often more practical, measures of influence. We find that none of the network measures applied to the induced flat graphs are accurate predictors of network propagation influence on the systems studied; however, temporal networks and the TKO measure provide the requisite targets for the search for effective predictive measures.
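
    The temporal knockout (TKO) idea can be sketched in a few lines: replay the time-ordered contacts with and without a given agent and score the agent by the drop in final outbreak size. The sketch below uses a simplified deterministic SI spread as a stand-in for the paper's stochastic SIR/SIS dynamics; the contact sequence and function names are illustrative.

```python
# Sketch of a temporal-knockout-style score: run a deterministic SI
# spread over a time-ordered list of contacts, then measure how much
# the final infection size drops when one agent is removed.

def si_spread(events, seed, removed=None):
    """events: time-ordered (u, v) contacts; infection passes on contact."""
    infected = {seed}
    for u, v in events:
        if removed in (u, v):
            continue  # a knocked-out agent takes part in no contacts
        if u in infected or v in infected:
            infected |= {u, v}
    return infected

def tko_score(events, seed, agent):
    """Drop in final infection size when `agent` is knocked out."""
    base = len(si_spread(events, seed))
    knocked = len(si_spread(events, seed, removed=agent))
    return base - knocked

# Hypothetical contact sequence: A meets B, then B meets C, then C meets D
events = [("A", "B"), ("B", "C"), ("C", "D")]
print(tko_score(events, seed="A", agent="B"))  # removing B blocks all downstream spread -> 3
```

Because the contacts are ordered in time, an agent's score depends on when its contacts occur, not just on how many it has, which is the distinction the abstract draws against centrality measures on flattened graphs.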

  16. Benchmarking Measures of Network Influence

    PubMed Central

    Bramson, Aaron; Vandermarliere, Benjamin

    2016-01-01

    Identifying key agents for the transmission of diseases (ideas, technology, etc.) across social networks has predominantly relied on measures of centrality on a static base network or a temporally flattened graph of agent interactions. Various measures have been proposed as the best trackers of influence, such as degree centrality, betweenness, and k-shell, depending on the structure of the connectivity. We consider SIR and SIS propagation dynamics on a temporally-extruded network of observed interactions and measure the conditional marginal spread as the change in the magnitude of the infection given the removal of each agent at each time: its temporal knockout (TKO) score. We argue that this TKO score is an effective benchmark measure for evaluating the accuracy of other, often more practical, measures of influence. We find that none of the network measures applied to the induced flat graphs are accurate predictors of network propagation influence on the systems studied; however, temporal networks and the TKO measure provide the requisite targets for the search for effective predictive measures. PMID:27670635

  17. Community Affairs Training Evaluation; Project CATE: DOES Handbook.

    ERIC Educational Resources Information Center

    Texas Univ., Austin. Research and Development Center for Teacher Education.

    Decision Oriented Evaluation System (DOES) for community development training presents a system for training evaluation in prototypic form. This handbook provides a comprehensive overview of training evaluation methodology as well as details on specific functions involved in the training evaluation process. This model for evaluation is broken into…

  18. Data-Intensive Benchmarking Suite

    SciTech Connect

    2008-11-26

    The Data-Intensive Benchmark Suite is a set of programs written for the study of data- or storage-intensive science and engineering problems. The benchmark sets cover: general graph searching (basic and Hadoop Map/Reduce breadth-first search), genome sequence searching, HTTP request classification (basic and Hadoop Map/Reduce), low-level data communication, and storage device micro-benchmarking.
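
    The general graph-searching benchmarks mentioned above are built on breadth-first search. A minimal sketch of that kernel, with a hypothetical adjacency list, looks like this:

```python
from collections import deque

# Minimal breadth-first search, the kernel behind the suite's general
# graph-searching benchmarks. The graph and start node are illustrative.
def bfs(adj, start):
    """Return nodes in the order BFS visits them from `start`."""
    seen, order, queue = {start}, [], deque([start])
    while queue:
        node = queue.popleft()
        order.append(node)
        for nbr in adj.get(node, ()):
            if nbr not in seen:
                seen.add(nbr)
                queue.append(nbr)
    return order

adj = {0: [1, 2], 1: [3], 2: [3], 3: []}
print(bfs(adj, 0))  # visits level by level: [0, 1, 2, 3]
```

A data-intensive variant stresses storage rather than CPU by keeping the frontier and visited set on disk, or by expressing each level expansion as a Map/Reduce pass, as the Hadoop versions in the suite do.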

  19. Benchmarking hypercube hardware and software

    NASA Technical Reports Server (NTRS)

    Grunwald, Dirk C.; Reed, Daniel A.

    1986-01-01

    It was long a truism in computer systems design that balanced systems achieve the best performance. Message passing parallel processors are no different. To quantify the balance of a hypercube design, an experimental methodology was developed and the associated suite of benchmarks was applied to several existing hypercubes. The benchmark suite includes tests of both processor speed in the absence of internode communication and message transmission speed as a function of communication patterns.
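
    The balance the abstract quantifies is the relationship between raw compute rate and communication rate. A toy sketch of that measurement style, using a local data copy as a stand-in for internode message transmission (a real suite would time the machine's native message-passing primitives over varying communication patterns):

```python
# Toy balance measurement: time a pure-compute kernel and a
# message-like data transfer, then report their ratio.
# Kernel contents and sizes are illustrative only.
import time

def time_compute(n=200_000):
    """Processor speed proxy: arithmetic with no communication."""
    t0 = time.perf_counter()
    acc = 0.0
    for i in range(n):
        acc += i * 0.5
    return time.perf_counter() - t0

def time_transfer(n=200_000):
    """Communication proxy: build and copy an n-byte payload."""
    t0 = time.perf_counter()
    payload = bytes(n)             # stand-in for composing a message
    received = bytearray(payload)  # stand-in for receive/copy
    return time.perf_counter() - t0

compute_t, transfer_t = time_compute(), time_transfer()
print(f"balance (compute/transfer time): {compute_t / transfer_t:.1f}")
```

The interesting output of such a suite is not either time alone but their ratio across message sizes and patterns, which reveals whether a design is compute-bound or communication-bound.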

  20. Benchmarks for GADRAS performance validation.

    SciTech Connect

    Mattingly, John K.; Mitchell, Dean James; Rhykerd, Charles L., Jr.

    2009-09-01

    The performance of the Gamma Detector Response and Analysis Software (GADRAS) was validated by comparing GADRAS model results to experimental measurements for a series of benchmark sources. Sources for the benchmark include a plutonium metal sphere, bare and shielded in polyethylene, plutonium oxide in cans, a highly enriched uranium sphere, bare and shielded in polyethylene, a depleted uranium shell and spheres, and a natural uranium sphere. The benchmark experimental data were previously acquired and consist of careful collection of background and calibration source spectra along with the source spectra. The calibration data were fit with GADRAS to determine response functions for the detector in each experiment. A one-dimensional model (pie chart) was constructed for each source based on the dimensions of the benchmark source. The GADRAS code made a forward calculation from each model to predict the radiation spectrum for the detector used in the benchmark experiment. The comparisons between the GADRAS calculation and the experimental measurements are excellent, validating that GADRAS can correctly predict the radiation spectra for these well-defined benchmark sources.

  1. Research on computer systems benchmarking

    NASA Technical Reports Server (NTRS)

    Smith, Alan Jay (Principal Investigator)

    1996-01-01

This grant addresses the topic of research on computer systems benchmarking and is more generally concerned with performance issues in computer systems. This report reviews work in those areas during the period of NASA support under this grant. The bulk of the work performed concerned benchmarking and analysis of CPUs, compilers, caches, and benchmark programs. The first part of this work concerned the issue of benchmark performance prediction. A new approach to benchmarking and machine characterization was reported, using a machine characterizer that measures the performance of a given system in terms of a Fortran abstract machine. Another report focused on analyzing compiler performance, examining the performance impact of optimization in the context of our abstract-machine-based methodology for CPU performance characterization. Benchmark programs are analyzed in another paper. A machine-independent model of program execution was developed to characterize both machine performance and program execution. By merging these machine and program characterizations, execution time can be estimated for arbitrary machine/program combinations. The work was continued into the domain of parallel and vector machines, including the issue of caches in vector processors and multiprocessors. All of the aforementioned accomplishments are more specifically summarized in this report, as well as those smaller in magnitude supported by this grant.
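The merging of machine and program characterizations described above amounts, in its simplest form, to a dot product: predicted runtime is the sum over abstract operations of (count in program) x (cost on machine). The sketch below illustrates that idea only; the operation names and numbers are invented for the example, not taken from the report.

```python
# Hypothetical abstract-machine characterization: per-operation costs for a
# machine (seconds/op) and per-operation counts for a program.
machine_profile = {"fadd": 5e-9, "fmul": 7e-9, "load": 3e-9, "branch": 2e-9}
program_profile = {"fadd": 2_000_000, "fmul": 1_500_000,
                   "load": 4_000_000, "branch": 500_000}

def estimate_runtime(machine, program):
    """Merge the two characterizations: predicted execution time is the
    sum over abstract operations of count x per-operation cost."""
    return sum(count * machine.get(op, 0.0) for op, count in program.items())
```

The same machine profile can then be reused to estimate runtimes for any program profiled in terms of the abstract machine, which is the portability argument the abstract makes.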

  2. VENUS-F: A fast lead critical core for benchmarking

    SciTech Connect

    Kochetkov, A.; Wagemans, J.; Vittiglio, G.

    2011-07-01

The zero-power thermal neutron water-moderated facility VENUS at SCK-CEN has been extensively used for benchmarking in the past. In accordance with GEN-IV design tasks (fast reactor systems and accelerator driven systems), the VENUS facility was modified in 2007-2010 into the fast neutron facility VENUS-F with solid core components. This paper introduces the projects GUINEVERE and FREYA, which are being conducted at the VENUS-F facility, and it presents the measurement results obtained at the first critical core. Other fast lead benchmarks will also be investigated over the course of the projects. The measurement results of the different configurations can all be used as fast neutron benchmarks. (authors)

  3. Extensive Evaluation of Using a Game Project in a Software Architecture Course

    ERIC Educational Resources Information Center

    Wang, Alf Inge

    2011-01-01

    This article describes an extensive evaluation of introducing a game project to a software architecture course. In this project, university students have to construct and design a type of software architecture, evaluate the architecture, implement an application based on the architecture, and test this implementation. In previous years, the domain…

  4. Special Education Music and Dance: An ESEA Title III Project Evaluation.

    ERIC Educational Resources Information Center

    Johnson, Dorothy H.; And Others

    Reported are the evaluation results on the 1969-70 segment (the first project period) of the Special Education Music and Dance Program in Shoreline School District 412 (Seattle, Washington), an ESEA Title III project. The program, which is presented as a pilot attempt to develop functional program objectives and evaluation tools, provides music…

  5. ANNUAL EVALUATION REPORT OF CONNECTICUT TITLE I PROJECTS FOR FISCAL YEAR 1966.

    ERIC Educational Resources Information Center

    ROBY, WALLACE

This evaluation by the Connecticut Department of Education of the Elementary and Secondary Education Act Title I projects cautions about making generalizations about the effectiveness of projects which have been in operation for only a brief period. The report notes, however, that such an evaluation can be useful in establishing baseline data and…

  6. Evaluation of a Locally Developed Social Studies Curriculum Project: Improving Citizenship Education.

    ERIC Educational Resources Information Center

    Napier, John D.; Hepburn, Mary A.

    Evaluation results from the Improving Citizenship Education (ICE) Project are presented. The purpose of the ICE project was to design and test a model for improving the political/citizenship knowledge and attitudes of K-12 students by infusing citizenship education into an existing social studies curriculum. This evaluation examined the…

  7. Model-Based Engineering and Manufacturing CAD/CAM Benchmark

    SciTech Connect

    Domm, T.D.; Underwood, R.S.

    1999-04-26

The Benchmark Project was created from a desire to identify best practices and improve the overall efficiency and performance of the Y-12 Plant's systems and personnel supporting the manufacturing mission. The mission of the benchmark team was to search out industry leaders in manufacturing and evaluate their engineering practices and processes to determine direction and focus for Y-12 modernization efforts. The companies visited included several large established companies and a new, small, high-tech machining firm. As a result of this effort, changes are recommended that will enable Y-12 to become a more responsive, cost-effective manufacturing facility capable of supporting the needs of the Nuclear Weapons Complex and Work For Others into the 21st century. The benchmark team identified key areas of interest, both focused and general. The focus areas included Human Resources, Information Management, Manufacturing Software Tools, and Standards/Policies and Practices. Areas of general interest included Infrastructure, Computer Platforms and Networking, and Organizational Structure. The method for obtaining the desired information in these areas centered on the creation of a benchmark questionnaire. The questionnaire was used throughout each of the visits as the basis for information gathering. The results of this benchmark showed that all companies are moving in the direction of model-based engineering and manufacturing. There was evidence that many companies are trying to grasp how to manage current and legacy data. In terms of engineering design software tools, the companies contacted were using both 3-D solid modeling and surfaced wire-frame models. The manufacturing computer tools were varied, with most companies using more than one software product to generate machining data and none currently performing model-based manufacturing (MBM) from a common model. The majority of companies were closer to identifying or using a single computer-aided design (CAD) system than a…

  8. The TEAM evaluation approach to Project FAMUS, a pan-Canadian risk register for primary care.

    PubMed Central

    Grant, A.; Lussier, Y.; Delisle, E.; Dubois, S.; Bernier, R.

    1992-01-01

    The application of the TEAM--Total Evaluation and Acceptance Methodology--to the development of Project FAMUS--Family Medicine, University of Sherbrooke--is described. Project FAMUS is concerned with the establishment of a pan-Canadian risk register, the data being provided from a network of 800 family physicians distributed across Canada. Emphasis is on the first phase of the project and the overall evaluation strategy. PMID:1482968

  9. Evaluation of the El Dorado Micellar-Polymer Demonstration Project

    SciTech Connect

    VanHorn, L.E.

    1983-06-01

    The El Dorado Micellar-Polymer Demonstration Project has been a cooperative venture between Cities Service Company and the United States Department of Energy. The objective of the project was to determine if it was technically and economically feasible to produce commercial volumes of oil using a micellar-polymer process in the El Dorado field. The project was designed to allow a side-by-side comparison of two distinctly different micellar-polymer processes in the same field in order that the associated benefits and problems of each could be determined.

  10. Evaluation of Representative Smart Grid Investment Project Technologies: Demand Response

    SciTech Connect

    Fuller, Jason C.; Prakash Kumar, Nirupama; Bonebrake, Christopher A.

    2012-02-14

    This document is one of a series of reports estimating the benefits of deploying technologies similar to those implemented on the Smart Grid Investment Grant (SGIG) projects. Four technical reports cover the various types of technologies deployed in the SGIG projects, distribution automation, demand response, energy storage, and renewables integration. A fifth report in the series examines the benefits of deploying these technologies on a national level. This technical report examines the impacts of a limited number of demand response technologies and implementations deployed in the SGIG projects.

  11. Benchmarking: a method for continuous quality improvement in health.

    PubMed

    Ettorchi-Tardy, Amina; Levif, Marie; Michel, Philippe

    2012-05-01

    Benchmarking, a management approach for implementing best practices at best cost, is a recent concept in the healthcare system. The objectives of this paper are to better understand the concept and its evolution in the healthcare sector, to propose an operational definition, and to describe some French and international experiences of benchmarking in the healthcare sector. To this end, we reviewed the literature on this approach's emergence in the industrial sector, its evolution, its fields of application and examples of how it has been used in the healthcare sector. Benchmarking is often thought to consist simply of comparing indicators and is not perceived in its entirety, that is, as a tool based on voluntary and active collaboration among several organizations to create a spirit of competition and to apply best practices. The key feature of benchmarking is its integration within a comprehensive and participatory policy of continuous quality improvement (CQI). Conditions for successful benchmarking focus essentially on careful preparation of the process, monitoring of the relevant indicators, staff involvement and inter-organizational visits. Compared to methods previously implemented in France (CQI and collaborative projects), benchmarking has specific features that set it apart as a healthcare innovation. This is especially true for healthcare or medical-social organizations, as the principle of inter-organizational visiting is not part of their culture. Thus, this approach will need to be assessed for feasibility and acceptability before it is more widely promoted.

  12. Benchmarking: A Method for Continuous Quality Improvement in Health

    PubMed Central

    Ettorchi-Tardy, Amina; Levif, Marie; Michel, Philippe

    2012-01-01

    Benchmarking, a management approach for implementing best practices at best cost, is a recent concept in the healthcare system. The objectives of this paper are to better understand the concept and its evolution in the healthcare sector, to propose an operational definition, and to describe some French and international experiences of benchmarking in the healthcare sector. To this end, we reviewed the literature on this approach's emergence in the industrial sector, its evolution, its fields of application and examples of how it has been used in the healthcare sector. Benchmarking is often thought to consist simply of comparing indicators and is not perceived in its entirety, that is, as a tool based on voluntary and active collaboration among several organizations to create a spirit of competition and to apply best practices. The key feature of benchmarking is its integration within a comprehensive and participatory policy of continuous quality improvement (CQI). Conditions for successful benchmarking focus essentially on careful preparation of the process, monitoring of the relevant indicators, staff involvement and inter-organizational visits. Compared to methods previously implemented in France (CQI and collaborative projects), benchmarking has specific features that set it apart as a healthcare innovation. This is especially true for healthcare or medical–social organizations, as the principle of inter-organizational visiting is not part of their culture. Thus, this approach will need to be assessed for feasibility and acceptability before it is more widely promoted. PMID:23634166

  13. Transparency benchmarking on audio watermarks and steganography

    NASA Astrophysics Data System (ADS)

    Kraetzer, Christian; Dittmann, Jana; Lang, Andreas

    2006-02-01

The evaluation of transparency plays an important role in the context of watermarking and steganography algorithms. This paper introduces a general definition of the term transparency in the context of steganography, digital watermarking and attack based evaluation of digital watermarking algorithms. For this purpose the term transparency is first considered individually for each of the three application fields (steganography, digital watermarking and watermarking algorithm evaluation). From the three results a general definition for the overall context is derived in a second step. The relevance and applicability of the definition given is evaluated in practice using existing audio watermarking and steganography algorithms (which work in time, frequency and wavelet domain) as well as an attack based evaluation suite for audio watermarking benchmarking - StirMark for Audio (SMBA). For this purpose selected attacks from the SMBA suite are modified by adding transparency enhancing measures using a psychoacoustic model. The transparency and robustness of the evaluated audio watermarking algorithms under the original and modified attacks are compared. The results of this paper show that transparency benchmarking will lead to new information regarding the algorithms under observation and their usage. This information can result in concrete recommendations for modification, like the ones resulting from the tests performed here.

  14. Diagnostic Evaluation and Adjustment Facility (Project D. E. A. F.)

    ERIC Educational Resources Information Center

    Hairston, Ernest E.

    1971-01-01

    The project expands the rehabilitation program of Goodwill Industries of Central Ohio with in-depth vocational rehabilitation services to the deaf, particularly the multiply handicapped deaf with poor or no communication skills. (KW)

  15. How to avoid 'death by benchmarking'.

    PubMed

    Wofford, Dave; Libby, Darin

    2015-08-01

    Hospitals and health systems should adopt four key principles and practices when applying benchmarks to determine physician compensation: Acknowledge that a lower percentile may be appropriate. Use the median as the all-in benchmark. Use peer benchmarks when available. Use alternative benchmarks.

  16. Benchmarking for the Learning and Skills Sector.

    ERIC Educational Resources Information Center

    Owen, Jane

    This document is designed to introduce practitioners in the United Kingdom's learning and skills sector to the principles and practice of benchmarking. The first section defines benchmarking and differentiates metric, diagnostic, and process benchmarking. The remainder of the booklet details the following steps of the benchmarking process: (1) get…

  17. Beach Nourishment Project Response and Design Evaluation: Ocean City, Maryland

    DTIC Science & Technology

    1993-08-01

…monitoring fill behavior and their relative positions to the shoreface-attached shoals… understanding the behavior of beach nourishment projects, and the objectives of this report are to document the project from its inception to the present and to… accumulation on the shoal has been irregular, with an overall average rate of 39,000 cu yd/year. This irregular behavior may indicate an approach to…

  18. An Approach to Naturalistic Evaluation: A Study of the Social Implications of an International Development Project.

    ERIC Educational Resources Information Center

    Lee, Rebecca A.; Shute, J. C. M.

    1991-01-01

    A naturalistic approach to evaluation is illustrated through the description of the evaluation of a small-scale agricultural project in a village in Mali, West Africa. The evaluation considered program impact as well as the quality of the conclusions drawn using the illuminative model of evaluation. (SLD)

  19. Testing the robustness of Citizen Science projects: Evaluating the results of pilot project COMBER

    PubMed Central

    Faulwetter, Sarah; Dailianis, Thanos; Smith, Vincent Stuart; Koulouri, Panagiota; Dounas, Costas; Arvanitidis, Christos

    2016-01-01

Abstract Background Citizen Science (CS) as a term covers a great range of approaches and scopes involving many different fields of science. The number of relevant projects globally has increased significantly in recent years. Large-scale ecological questions can be answered only through extended observation networks, and CS projects can support this effort. Although the need for such projects is apparent, an important part of the scientific community casts doubt on the reliability of CS data sets. New information The pilot CS project COMBER was created in order to provide evidence to answer the aforementioned question in coastal marine biodiversity monitoring. The results of the current analysis show that a carefully designed CS project with clear hypotheses, wide participation and data set validation can be a valuable tool for detecting large-scale and long-term changes in marine biodiversity patterns, and therefore for relevant management and conservation issues. PMID:28174507

  20. Issues in Benchmarking Human Reliability Analysis Methods: A Literature Review

    SciTech Connect

    Ronald L. Boring; Stacey M. L. Hendrickson; John A. Forester; Tuan Q. Tran; Erasmia Lois

    2010-06-01

    There is a diversity of human reliability analysis (HRA) methods available for use in assessing human performance within probabilistic risk assessments (PRA). Due to the significant differences in the methods, including the scope, approach, and underlying models, there is a need for an empirical comparison investigating the validity and reliability of the methods. To accomplish this empirical comparison, a benchmarking study comparing and evaluating HRA methods in assessing operator performance in simulator experiments is currently underway. In order to account for as many effects as possible in the construction of this benchmarking study, a literature review was conducted, reviewing past benchmarking studies in the areas of psychology and risk assessment. A number of lessons learned through these studies are presented in order to aid in the design of future HRA benchmarking endeavors.