Science.gov

Sample records for benchmark evaluation project

  1. Criticality safety benchmark evaluation project: Recovering the past

    SciTech Connect

    Trumble, E.F.

    1997-06-01

    A very brief summary of the Criticality Safety Benchmark Evaluation Project of the Westinghouse Savannah River Company is provided in this paper. The purpose of the project is to provide a source of evaluated criticality safety experiments in an easily usable format. Another project goal is to search for any experiments that may have been lost or contain discrepancies, and to determine if they can be used. Results of evaluated experiments are being published as US DOE handbooks.

  2. INTEGRAL BENCHMARKS AVAILABLE THROUGH THE INTERNATIONAL REACTOR PHYSICS EXPERIMENT EVALUATION PROJECT AND THE INTERNATIONAL CRITICALITY SAFETY BENCHMARK EVALUATION PROJECT

    SciTech Connect

    J. Blair Briggs; Lori Scott; Enrico Sartori; Yolanda Rugama

    2008-09-01

    Interest in high-quality integral benchmark data is increasing as efforts to quantify and reduce calculational uncertainties accelerate to meet the demands of next-generation reactor and advanced fuel cycle concepts. The International Reactor Physics Experiment Evaluation Project (IRPhEP) and the International Criticality Safety Benchmark Evaluation Project (ICSBEP) continue to expand their efforts and broaden their scope to identify, evaluate, and provide integral benchmark data for method and data validation. Benchmark model specifications provided by these two projects are used heavily by the international reactor physics, nuclear data, and criticality safety communities. Thus far, 14 countries have contributed to the IRPhEP, and 20 have contributed to the ICSBEP. This paper discusses the status of the IRPhEP and ICSBEP, outlines the future of the two projects, and highlights selected benchmarks that have been added to the IRPhEP and ICSBEP handbooks since PHYSOR’06.

  3. The International Criticality Safety Benchmark Evaluation Project on the Internet

    SciTech Connect

    Briggs, J.B.; Brennan, S.A.; Scott, L.

    2000-07-01

    The International Criticality Safety Benchmark Evaluation Project (ICSBEP) was initiated in October 1992 by the US Department of Energy's (DOE's) defense programs and is documented in the Transactions of numerous American Nuclear Society and International Criticality Safety Conferences. The work of the ICSBEP is documented as an Organization for Economic Cooperation and Development (OECD) handbook, the International Handbook of Evaluated Criticality Safety Benchmark Experiments. The ICSBEP Internet site was established in 1996 and its address is http://icsbep.inel.gov/icsbep. A copy of the ICSBEP home page is shown in Fig. 1. The ICSBEP Internet site contains five primary links. Internal sublinks to other relevant sites are also provided within the ICSBEP Internet site. A brief description of each of the five primary ICSBEP Internet site links is given.

  4. Benchmark Data Through The International Reactor Physics Experiment Evaluation Project (IRPhEP)

    SciTech Connect

    J. Blair Briggs; Dr. Enrico Sartori

    2005-09-01

    The International Reactor Physics Experiment Evaluation Project (IRPhEP) was initiated by the Organization for Economic Cooperation and Development (OECD) Nuclear Energy Agency’s (NEA) Nuclear Science Committee (NSC) in June of 2002. The IRPhEP focus is on the derivation of internationally peer-reviewed benchmark models for several types of integral measurements, in addition to the critical configuration. While the benchmarks produced by the IRPhEP are of primary interest to the Reactor Physics Community, many of the benchmarks can be of significant value to the Criticality Safety and Nuclear Data Communities. Benchmarks that support the Next Generation Nuclear Plant (NGNP), for example, also support fuel manufacture, handling, transportation, and storage activities and could challenge current analytical methods. The IRPhEP is patterned after the International Criticality Safety Benchmark Evaluation Project (ICSBEP) and is closely coordinated with it. This paper highlights the benchmarks currently being prepared by the IRPhEP that are also of interest to the Criticality Safety Community. The different types of measurements and associated benchmarks that can be expected in the first publication and beyond are described. The protocol for inclusion of IRPhEP benchmarks as ICSBEP benchmarks, and vice versa, is detailed. The format for IRPhEP benchmark evaluations is described as an extension of the ICSBEP format. Benchmarks produced by the IRPhEP add a new dimension to criticality safety benchmarking efforts and expand the collection of available integral benchmarks for nuclear data testing. The first publication of the "International Handbook of Evaluated Reactor Physics Benchmark Experiments" is scheduled for January of 2006.

  5. The Activities of the International Criticality Safety Benchmark Evaluation Project (ICSBEP)

    SciTech Connect

    Briggs, Joseph Blair

    2001-10-01

    The International Criticality Safety Benchmark Evaluation Project (ICSBEP) was initiated in 1992 by the United States Department of Energy. The ICSBEP became an official activity of the Organization for Economic Cooperation and Development (OECD) – Nuclear Energy Agency (NEA) in 1995. Representatives from the United States, United Kingdom, France, Japan, the Russian Federation, Hungary, Republic of Korea, Slovenia, Yugoslavia, Kazakhstan, Spain, and Israel are now participating. The purpose of the ICSBEP is to identify, evaluate, verify, and formally document a comprehensive and internationally peer-reviewed set of criticality safety benchmark data. The work of the ICSBEP is published as an OECD handbook entitled “International Handbook of Evaluated Criticality Safety Benchmark Experiments”. The 2001 Edition of the Handbook contains benchmark specifications for 2642 critical or subcritical configurations that are intended for use in validation efforts and for testing basic nuclear data.

  6. Benchmarking in water project analysis

    NASA Astrophysics Data System (ADS)

    Griffin, Ronald C.

    2008-11-01

    The with/without principle of cost-benefit analysis is examined for the possible bias that it brings to water resource planning. Theory and examples for this question are established. Because benchmarking against the demonstrably low without-project hurdle can detract from economic welfare and can fail to promote efficient policy, improvement opportunities are investigated. In lieu of the traditional, without-project benchmark, a second-best-based "difference-making benchmark" is proposed. The project authorizations and modified review processes instituted by the U.S. Water Resources Development Act of 2007 may provide for renewed interest in these findings.

  7. Growth and Expansion of the International Criticality Safety Benchmark Evaluation Project and the Newly Organized International Reactor Physics Experiment Evaluation Project

    SciTech Connect

    J. Blair Briggs; Lori Scott; Yolanda Rugama; Enrico Satori

    2007-05-01

    Since ICNC 2003, the International Criticality Safety Benchmark Evaluation Project (ICSBEP) has continued to expand its efforts and broaden its scope. Criticality-alarm/shielding-type benchmarks and fundamental physics measurements that are relevant to criticality safety applications are not only included in the scope of the project, but benchmark data are also included in the latest version of the handbook. A considerable number of improvements have been made to the searchable database, DICE, and the criticality-alarm/shielding benchmarks and fundamental physics measurements have been included in the database. There were 12 countries participating in the ICSBEP in 2003. That number has increased to 18 with recent contributions of data and/or resources from Brazil, Czech Republic, Poland, India, Canada, and China. South Africa, Germany, Argentina, and Australia have been invited to participate. Since ICNC 2003, the contents of the “International Handbook of Evaluated Criticality Safety Benchmark Experiments” have increased from 350 evaluations (28,000 pages) containing benchmark specifications for 3070 critical or subcritical configurations to 442 evaluations (over 38,000 pages) containing benchmark specifications for 3957 critical or subcritical configurations, 23 criticality-alarm-placement/shielding configurations with multiple dose points for each, and 20 configurations categorized as fundamental physics measurements relevant to criticality safety applications in the 2006 Edition of the ICSBEP Handbook. Approximately 30 new evaluations and 250 additional configurations are expected to be added to the 2007 Edition of the Handbook. Since ICNC 2003, a reactor physics counterpart to the ICSBEP, the International Reactor Physics Experiment Evaluation Project (IRPhEP), was initiated. Beginning in 1999, the IRPhEP was conducted as a pilot activity by the Organization for Economic Cooperation and Development (OECD) Nuclear Energy Agency (NEA).

  8. The Impact Hydrocode Benchmark and Validation Project

    NASA Astrophysics Data System (ADS)

    Pierazzo, E.; Artemieva, N.; Asphaug, E.; Baldwin, E. C.; Cazamias, J.; Coker, R.; Collins, G. S.; Crawford, D. A.; Davison, T.; Elbeshausen, D.; Holsapple, K. A.; Housen, K. R.; Korycansky, D. G.; Wünnemann, K.

    When properly benchmarked and validated against observations, computer models offer a powerful tool for understanding the mechanics of impact crater formation. We present results from a project to benchmark and validate shock physics codes.

  9. National healthcare capital project benchmarking--an owner's perspective.

    PubMed

    Kahn, Noah

    2009-01-01

    Few sectors of the economy have been left unscathed in these economic times. Healthcare construction has been less affected than the residential and nonresidential construction sectors, but driven by re-evaluation of healthcare system capital plans, projects are now being put on hold or canceled. The industry is searching for ways to improve the value proposition for project delivery and process controls. In other industries, benchmarking component costs has led to significant, sustainable reductions in costs and cost variations. Kaiser Permanente and the Construction Industry Institute (CII), a research component of the University of Texas at Austin and an industry leader in benchmarking, have joined with several other organizations to work on a national benchmarking and metrics program to gauge the performance of healthcare facility projects. This initiative will capture cost, schedule, delivery method, change, functional, operational, and best practice metrics. This program is the only one of its kind. The CII Web-based interactive reporting system enables a company to view its information and mine industry data. Benchmarking is a tool for continuous improvement that not only grades outcomes; it can also inform all aspects of the healthcare design and construction process and ultimately help moderate the increasing cost of delivering healthcare.

  10. Providing Nuclear Criticality Safety Analysis Education through Benchmark Experiment Evaluation

    SciTech Connect

    John D. Bess; J. Blair Briggs; David W. Nigg

    2009-11-01

    One of the challenges that today's new workforce of nuclear criticality safety engineers face is the opportunity to provide assessment of nuclear systems and establish safety guidelines without having received significant experience or hands-on training prior to graduation. Participation in the International Criticality Safety Benchmark Evaluation Project (ICSBEP) and/or the International Reactor Physics Experiment Evaluation Project (IRPhEP) provides students and young professionals the opportunity to gain experience and enhance critical engineering skills.

  11. Benchmark Evaluation of Plutonium Nitrate Solution Arrays

    SciTech Connect

    M. A. Marshall; J. D. Bess

    2011-09-01

    In October and November of 1981, thirteen approach-to-critical experiments were performed on a remote split table machine (RSTM) in the Critical Mass Laboratory of Pacific Northwest Laboratory (PNL) in Richland, Washington, using planar arrays of polyethylene bottles filled with plutonium (Pu) nitrate solution. Arrays of up to sixteen bottles were used to measure the critical number of bottles and critical array spacing with a tight-fitting Plexiglas® reflector on all sides of the arrays except the top. Some experiments used Plexiglas shells fitted around each bottle to determine the effect of moderation on criticality. Each bottle contained approximately 2.4 L of Pu(NO3)4 solution with a Pu content of 105 g Pu/L and a free acid molarity (H+) of 5.1. The plutonium was of low 240Pu content (2.9 wt.%). These experiments were performed to fill a gap in experimental data regarding criticality limits for storing and handling arrays of Pu solution in reprocessing facilities. Of the thirteen approach-to-critical experiments, eleven resulted in extrapolations to critical configurations. Four of the approaches were extrapolated to the critical number of bottles; these were not evaluated further due to the large uncertainty associated with the modeling of a fraction of a bottle. The remaining seven approaches were extrapolated to the critical array spacing of 3×4 and 4×4 arrays; these seven critical configurations were evaluated for inclusion as acceptable benchmark experiments in the International Criticality Safety Benchmark Evaluation Project (ICSBEP) Handbook. Detailed and simple models of these configurations were created, and the associated bias of these simplifications was determined to range from 0.00116 to 0.00162 ± 0.00006 Δkeff. Monte Carlo analysis of all models was completed using MCNP5 with ENDF/B-VII.0 neutron cross section libraries. A thorough uncertainty analysis of all critical, geometric, and material parameters was performed using parameter
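
    The approach-to-critical technique used in these experiments amounts to extrapolating the inverse neutron multiplication (1/M) to zero as the array configuration is tightened. A minimal sketch of such an extrapolation, using hypothetical data values rather than the PNL measurements:

      import numpy as np

      # Hypothetical approach-to-critical data: bottle array spacing versus
      # measured inverse multiplication 1/M; criticality corresponds to 1/M -> 0.
      spacing = np.array([10.0, 8.0, 6.0, 5.0])       # surface separation, cm
      inverse_m = np.array([0.42, 0.28, 0.13, 0.06])  # from detector count ratios

      # Fit 1/M as a function of spacing and solve for the spacing at which
      # 1/M vanishes, i.e. the extrapolated critical spacing.
      coeffs = np.polyfit(spacing, inverse_m, 2)
      real_roots = [r.real for r in np.roots(coeffs) if abs(r.imag) < 1e-9]
      critical = min(real_roots, key=lambda r: abs(r - spacing.min()))
      print(f"extrapolated critical spacing: {critical:.2f} cm")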

  12. Benchmark testing of ²³³U evaluations

    SciTech Connect

    Wright, R.Q.; Leal, L.C.

    1997-07-01

    In this paper we investigate the adequacy of available ²³³U cross-section data (ENDF/B-VI and JENDL-3) for calculation of critical experiments. An ad hoc revised ²³³U evaluation is also tested and appears to give results which are improved relative to those obtained with either ENDF/B-VI or JENDL-3 cross sections. Calculations of keff were performed for ten fast benchmarks and six thermal benchmarks using the three cross-section sets. Central reaction-rate-ratio calculations were also performed.

  13. IT-benchmarking of clinical workflows: concept, implementation, and evaluation.

    PubMed

    Thye, Johannes; Straede, Matthias-Christopher; Liebe, Jan-David; Hübner, Ursula

    2014-01-01

    Due to the emerging evidence of health IT as both an opportunity and a risk for clinical workflows, health IT must undergo continuous measurement of its efficacy and efficiency. IT-benchmarks are a proven means of providing this information. The aim of this study was to enhance the methodology of an existing benchmarking procedure by including, in particular, new indicators of clinical workflows and by proposing new types of visualisation. Drawing on the concept of information logistics, we propose four workflow descriptors that were applied to four clinical processes. General and specific indicators were derived from these descriptors and processes. 199 chief information officers (CIOs) took part in the benchmarking. These hospitals were assigned to reference groups of a similar size and ownership from a total of 259 hospitals. Stepwise and comprehensive feedback was given to the CIOs. Most participants who evaluated the benchmark rated the procedure as very good, good, or rather good (98.4%). Benchmark information was used by CIOs for getting a general overview, advancing IT, preparing negotiations with board members, and arguing for a new IT project.

  14. Performance Evaluation and Benchmarking of Intelligent Systems

    SciTech Connect

    Madhavan, Raj; Messina, Elena; Tunstel, Edward

    2009-09-01

    To design and develop capable, dependable, and affordable intelligent systems, their performance must be measurable. Scientific methodologies for standardization and benchmarking are crucial for quantitatively evaluating the performance of emerging robotic and intelligent systems technologies. There is currently no accepted standard for quantitatively measuring the performance of these systems against user-defined requirements; and furthermore, there is no consensus on what objective evaluation procedures need to be followed to understand the performance of these systems. The lack of reproducible and repeatable test methods has precluded researchers working towards a common goal from exchanging and communicating results, inter-comparing system performance, and leveraging previous work that could otherwise avoid duplication and expedite technology transfer. Currently, this lack of cohesion in the community hinders progress in many domains, such as manufacturing, service, healthcare, and security. By providing the research community with access to standardized tools, reference data sets, and open source libraries of solutions, researchers and consumers will be able to evaluate the cost and benefits associated with intelligent systems and associated technologies. In this vein, the edited book volume addresses performance evaluation and metrics for intelligent systems, in general, while emphasizing the need and solutions for standardized methods. To the knowledge of the editors, there is not a single book on the market that is solely dedicated to the subject of performance evaluation and benchmarking of intelligent systems. Even books that address this topic do so only marginally or are out of date. The research work presented in this volume fills this void by drawing from the experiences and insights of experts gained both through theoretical development and practical implementation of intelligent systems in a variety of diverse application domains. The book presents

  15. Ground truth and benchmarks for performance evaluation

    NASA Astrophysics Data System (ADS)

    Takeuchi, Ayako; Shneier, Michael; Hong, Tsai Hong; Chang, Tommy; Scrapper, Christopher; Cheok, Geraldine S.

    2003-09-01

    Progress in algorithm development and transfer of results to practical applications such as military robotics requires the setup of standard tasks and of standard qualitative and quantitative measurements for performance evaluation and validation. Although the evaluation and validation of algorithms have been discussed for over a decade, the research community still faces a lack of well-defined and standardized methodology. The fundamental problems include a lack of quantifiable measures of performance, a lack of data from state-of-the-art sensors in calibrated real-world environments, and a lack of facilities for conducting realistic experiments. In this research, we propose three methods for creating ground truth databases and benchmarks using multiple sensors. The databases and benchmarks will provide researchers with high-quality data from suites of sensors operating in complex environments representing real problems of great relevance to the development of autonomous driving systems. At NIST, we have prototyped a High Mobility Multi-purpose Wheeled Vehicle (HMMWV) system with a suite of sensors including a Riegl ladar, GDRS ladar, stereo CCD, several color cameras, Global Positioning System (GPS), Inertial Navigation System (INS), pan/tilt encoders, and odometry. All sensors are calibrated with respect to each other in space and time. This allows a database of features and terrain elevation to be built. Ground truth for each sensor can then be extracted from the database. The main goal of this research is to provide ground truth databases for researchers and engineers to evaluate algorithms for effectiveness, efficiency, reliability, and robustness, thus advancing the development of algorithms.

  16. Performance Evaluation of Supercomputers using HPCC and IMB Benchmarks

    NASA Technical Reports Server (NTRS)

    Saini, Subhash; Ciotti, Robert; Gunney, Brian T. N.; Spelce, Thomas E.; Koniges, Alice; Dossa, Don; Adamidis, Panagiotis; Rabenseifner, Rolf; Tiyyagura, Sunil R.; Mueller, Matthias; Fatoohi, Rod

    2006-01-01

    The HPC Challenge (HPCC) benchmark suite and the Intel MPI Benchmark (IMB) are used to compare and evaluate the combined performance of processor, memory subsystem, and interconnect fabric of five leading supercomputers - SGI Altix BX2, Cray X1, Cray Opteron Cluster, Dell Xeon cluster, and NEC SX-8. These five systems use five different networks (SGI NUMALINK4, Cray network, Myrinet, InfiniBand, and NEC IXS). The complete set of HPCC benchmarks is run on each of these systems. Additionally, we present Intel MPI Benchmarks (IMB) results to study the performance of 11 MPI communication functions on these systems.
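
    The IMB point-to-point tests are, at bottom, timed message exchanges between MPI ranks. A minimal ping-pong sketch in that spirit, written with mpi4py as an illustrative stand-in for the C MPI code IMB actually uses:

      # Minimal MPI ping-pong in the spirit of the IMB point-to-point tests.
      # Run with two ranks: mpiexec -n 2 python pingpong.py
      from mpi4py import MPI
      import numpy as np

      comm = MPI.COMM_WORLD
      rank = comm.Get_rank()
      nbytes = 1 << 20                         # 1 MiB message
      buf = np.zeros(nbytes, dtype=np.uint8)
      reps = 100

      comm.Barrier()
      t0 = MPI.Wtime()
      for _ in range(reps):
          if rank == 0:
              comm.Send(buf, dest=1)
              comm.Recv(buf, source=1)
          else:
              comm.Recv(buf, source=0)
              comm.Send(buf, dest=0)
      elapsed = MPI.Wtime() - t0

      if rank == 0:
          rtt = elapsed / reps
          print(f"avg round trip: {rtt * 1e6:.1f} us, "
                f"bandwidth: {2 * nbytes / rtt / 1e6:.1f} MB/s")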

  17. Benchmark Evaluation of HTR-PROTEUS Pebble Bed Experimental Program

    SciTech Connect

    Bess, John D.; Montierth, Leland; Köberl, Oliver; Snoj, Luka

    2014-10-09

    Benchmark models were developed to evaluate 11 critical core configurations of the HTR-PROTEUS pebble bed experimental program. Various additional reactor physics measurements were performed as part of this program; currently only a total of 37 absorber rod worth measurements have been evaluated as acceptable benchmark experiments for Cores 4, 9, and 10. Dominant uncertainties in the experimental keff for all core configurations come from uncertainties in the ²³⁵U enrichment of the fuel, impurities in the moderator pebbles, and the density and impurity content of the radial reflector. Calculations of keff with MCNP5 and ENDF/B-VII.0 neutron nuclear data are greater than the benchmark values but within 1% and also within the 3σ uncertainty, except for Core 4, which is the only randomly packed pebble configuration. Repeated calculations of keff with MCNP6.1 and ENDF/B-VII.1 are lower than the benchmark values and within 1% (~3σ) except for Cores 5 and 9, which calculate lower than the benchmark eigenvalues within 4σ. The primary difference between the two nuclear data libraries is the adjustment of the absorption cross section of graphite. Simulations of the absorber rod worth measurements are within 3σ of the benchmark experiment values. The complete benchmark evaluation details are available in the 2014 edition of the International Handbook of Evaluated Reactor Physics Benchmark Experiments.

  18. Hospital nursing benchmarks: the California Nursing Outcomes Coalition project.

    PubMed

    Brown, D S; Donaldson, N; Aydin, C E; Carlson, N

    2001-01-01

    The California Nursing Outcomes Coalition (CalNOC) project is an initiative that has become the largest ongoing nursing quality measurement repository in the nation. Launched in 1996 by California nursing leaders concerned with trends in hospital care, CalNOC has created reliable quality benchmark data to define patient safety thresholds in California. This article describes CalNOC's effort, which aligns with the strategy of the National Quality Forum for measuring and reporting healthcare quality. By tracing the evolution of the CalNOC project and its future potential, we hope to encourage other grassroots efforts to build the database repositories needed for healthcare quality measurement in the 21st century.

  19. Benchmark Evaluation of the NRAD Reactor LEU Core Startup Measurements

    SciTech Connect

    J. D. Bess; T. L. Maddock; M. A. Marshall

    2011-09-01

    The Neutron Radiography (NRAD) reactor is a 250-kW TRIGA (Training, Research, Isotope Production, General Atomics) conversion-type reactor at the Idaho National Laboratory; it is primarily used for neutron radiography analysis of irradiated and unirradiated fuels and materials. The NRAD reactor was converted from HEU to LEU fuel with 60 fuel elements and brought critical on March 31, 2010. This configuration of the NRAD reactor has been evaluated as an acceptable benchmark experiment and is available in the 2011 editions of the International Handbook of Evaluated Criticality Safety Benchmark Experiments (ICSBEP Handbook) and the International Handbook of Evaluated Reactor Physics Benchmark Experiments (IRPhEP Handbook). Significant effort went into precisely characterizing all aspects of the reactor core dimensions and material properties; detailed analyses of reactor parameters minimized experimental uncertainties. The largest contributors to the total benchmark uncertainty were the 234U, 236U, Er, and Hf content in the fuel; the manganese content in the stainless steel cladding; and the unknown level of water saturation in the graphite reflector blocks. A simplified benchmark model of the NRAD reactor was prepared with a keff of 1.0012 ± 0.0029 (1σ). Monte Carlo calculations with MCNP5 and KENO-VI and various neutron cross section libraries were performed and compared with the benchmark eigenvalue for the 60-fuel-element core configuration; all calculated eigenvalues are between 0.3 and 0.8% greater than the benchmark value. Benchmark evaluations of the NRAD reactor are beneficial in understanding biases and uncertainties affecting criticality safety analyses of storage, handling, or transportation applications with LEU-Er-Zr-H fuel.
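
    As a worked illustration of the eigenvalue comparison described above (the benchmark keff and its uncertainty are the values quoted in the abstract; the calculated eigenvalue is a hypothetical stand-in for an MCNP5 or KENO-VI result):

      # Compare a calculated eigenvalue against the benchmark keff and express
      # the bias in percent and pcm (1 pcm = 1e-5 in k).
      k_bench, sigma = 1.0012, 0.0029  # benchmark model keff (from the abstract)
      k_calc = 1.0062                  # hypothetical Monte Carlo result

      bias = k_calc - k_bench
      print(f"bias = {bias / k_bench * 100:.2f}% = {bias * 1e5:.0f} pcm")
      print(f"within 1 sigma of the benchmark value: {abs(bias) <= sigma}")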

  1. NASA Indexing Benchmarks: Evaluating Text Search Engines

    NASA Technical Reports Server (NTRS)

    Esler, Sandra L.; Nelson, Michael L.

    1997-01-01

    The current proliferation of on-line information resources underscores the need for the ability to index collections of information and to search and retrieve them in a convenient manner. This study develops criteria for analytically comparing indexing and search engines and presents results for a number of freely available search engines. A product of this research is a toolkit capable of automatically indexing, searching, and extracting performance statistics from each of the search engines studied. This toolkit is highly configurable and can run these benchmark tests against other engines as well. Results demonstrate that the tested search engines can be grouped into two levels. Level one engines are efficient on small to medium-sized data collections but show weaknesses when used for collections of 100MB or larger. Level two search engines are recommended for data collections up to and beyond 100MB.
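
    At its core, a toolkit of the kind described is a timing harness wrapped around each engine's indexing and query operations. A minimal sketch contrasting two toy "engines", an inverted index and a linear scan, rather than the engines actually tested:

      import time
      from collections import defaultdict

      # Toy corpus; real collections in the study ranged up to 100MB and beyond.
      docs = {i: f"document {i} about benchmark evaluation "
                 f"{'nasa' if i % 7 == 0 else 'data'}" for i in range(20000)}

      def build_inverted_index(corpus):
          index = defaultdict(set)
          for doc_id, text in corpus.items():
              for word in text.split():
                  index[word].add(doc_id)
          return index

      def timed(label, fn):
          t0 = time.perf_counter()
          result = fn()
          print(f"{label}: {time.perf_counter() - t0:.4f} s")
          return result

      index = timed("indexing", lambda: build_inverted_index(docs))
      hits = timed("indexed search", lambda: index["nasa"])
      scan = timed("linear scan", lambda: [i for i, t in docs.items() if "nasa" in t])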

  2. Project W-320 thermal hydraulic model benchmarking and baselining

    SciTech Connect

    Sathyanarayana, K.

    1998-09-28

    Project W-320 will be retrieving waste from Tank 241-C-106 and transferring the waste to Tank 241-AY-102. Waste in both tanks must be maintained below applicable thermal limits during and following the waste transfer. Thermal hydraulic process control models will be used for process control of the thermal limits. This report documents the process control models and presents a benchmarking of the models with data from Tanks 241-C-106 and 241-AY-102. Revision 1 of this report will provide a baselining of the models in preparation for the initiation of sluicing.

  3. Benchmarking Evaluation Results for Prototype Extravehicular Activity Gloves

    NASA Technical Reports Server (NTRS)

    Aitchison, Lindsay; McFarland, Shane

    2012-01-01

    The Space Suit Assembly (SSA) Development Team at NASA Johnson Space Center has invested heavily in the advancement of rear-entry planetary exploration suit design but has largely deferred development of extravehicular activity (EVA) glove designs, accepting the risk of using the current flight gloves, Phase VI, for unique mission scenarios outside the Space Shuttle and International Space Station (ISS) Program realm of experience. However, as design reference missions mature, the risks of using heritage hardware have highlighted the need for developing robust new glove technologies. To address the technology gap, the NASA Game-Changing Technology group provided start-up funding for the High Performance EVA Glove (HPEG) Project in the spring of 2012. The overarching goal of the HPEG Project is to develop a robust glove design that increases human performance during EVA and creates a pathway for future implementation of emergent technologies, with specific aims of increasing pressurized mobility to 60% of barehanded capability, increasing durability by 100%, and decreasing the potential of gloves to cause injury during use. The HPEG Project focused initial efforts on identifying potential new technologies and benchmarking the performance of current state-of-the-art gloves to identify trends in design and fit, leading to standards and metrics against which emerging technologies can be assessed at both the component and assembly levels. The first of the benchmarking tests evaluated the quantitative mobility performance and subjective fit of four prototype gloves developed by Flagsuit LLC, Final Frontier Designs, ILC Dover, and David Clark Company as compared to the Phase VI. All of the companies were asked to design and fabricate gloves to the same set of NASA-provided hand measurements (which corresponded to a single size of Phase VI glove) and to focus their efforts on improving mobility in the metacarpal phalangeal and carpometacarpal joints. Four test

  4. Quality in E-Learning--A Conceptual Framework Based on Experiences from Three International Benchmarking Projects

    ERIC Educational Resources Information Center

    Ossiannilsson, E.; Landgren, L.

    2012-01-01

    Between 2008 and 2010, Lund University took part in three international benchmarking projects, "E-xcellence+," the "eLearning Benchmarking Exercise 2009," and the "First Dual-Mode Distance Learning Benchmarking Club." A comparison of these models revealed a rather high level of correspondence. From this finding and from desktop studies of the…

  5. The Impact Hydrocode Benchmark and Validation Project: Initial Results

    NASA Astrophysics Data System (ADS)

    Pierazzo, E.; Artemieva, N.; Asphaug, E.; Cazamias, J.; Coker, R.; Collins, G. S.; Gisler, G.; Holsapple, K. A.; Housen, K. R.; Ivanov, B.; Johnson, C.; Korycansky, D. G.; Melosh, H. J.; Taylor, E. A.; Turtle, E. P.; Wünnemann, K.

    2007-03-01

    This work presents initial results of a validation and benchmarking effort from the impact cratering and explosion community. Several impact codes routinely used to model impact and explosion events are being compared using simple benchmark tests.

  6. A benchmark for fault tolerant flight control evaluation

    NASA Astrophysics Data System (ADS)

    Smaili, H.; Breeman, J.; Lombaerts, T.; Stroosma, O.

    2013-12-01

    A large transport aircraft simulation benchmark (REconfigurable COntrol for Vehicle Emergency Return - RECOVER) has been developed within the GARTEUR (Group for Aeronautical Research and Technology in Europe) Flight Mechanics Action Group 16 (FM-AG(16)) on Fault Tolerant Control (2004-2008) for the integrated evaluation of fault detection and identification (FDI) and reconfigurable flight control strategies. The benchmark includes a suitable set of assessment criteria and failure cases, based on reconstructed accident scenarios, to assess the potential of new adaptive control strategies to improve aircraft survivability. The application of reconstruction and modeling techniques, based on accident flight data, has resulted in high-fidelity nonlinear aircraft and fault models to evaluate new Fault Tolerant Flight Control (FTFC) concepts and their real-time performance to accommodate in-flight failures.

  7. Performance Evaluation and Benchmarking of Intelligent Systems

    SciTech Connect

    del Pobil, Angel; Madhavan, Raj; Bonsignorio, Fabio

    2009-10-01

    Performance Evaluation and Benchmarking of Intelligent Systems presents research dedicated to the subject of performance evaluation and benchmarking of intelligent systems by drawing from the experiences and insights of leading experts gained both through theoretical development and practical implementation of intelligent systems in a variety of diverse application domains. This contributed volume offers a detailed and coherent picture of state-of-the-art, recent developments, and further research areas in intelligent systems. The chapters cover a broad range of applications, such as assistive robotics, planetary surveying, urban search and rescue, and line tracking for automotive assembly. Subsystems or components described in this book include human-robot interaction, multi-robot coordination, communications, perception, and mapping. Chapters are also devoted to simulation support and open source software for cognitive platforms, providing examples of the type of enabling underlying technologies that can help intelligent systems to propagate and increase in capabilities. Performance Evaluation and Benchmarking of Intelligent Systems serves as a professional reference for researchers and practitioners in the field. This book is also applicable to advanced courses for graduate level students and robotics professionals in a wide range of engineering and related disciplines including computer science, automotive, healthcare, manufacturing, and service robotics.

  8. 239Pu Resonance Evaluation for Thermal Benchmark System Calculations

    NASA Astrophysics Data System (ADS)

    Leal, L. C.; Noguere, G.; de Saint Jean, C.; Kahler, A. C.

    2014-04-01

    Analyses of thermal plutonium solution critical benchmark systems have indicated a deficiency in the 239Pu resonance evaluation. To investigate possible solutions to this issue, the Organisation for Economic Co-operation and Development (OECD) Nuclear Energy Agency (NEA) Working Party for Evaluation Cooperation (WPEC) established Subgroup 34 to focus on the reevaluation of the 239Pu resolved resonance parameters. In addition, the impacts of the prompt neutron multiplicity (ν̄) and the prompt fission neutron spectrum (PFNS) have been investigated. The objective of this paper is to present the results of the 239Pu resolved resonance evaluation effort.

  9. COVE 2A Benchmarking calculations using NORIA; Yucca Mountain Site Characterization Project

    SciTech Connect

    Carrigan, C.R.; Bixler, N.E.; Hopkins, P.L.; Eaton, R.R.

    1991-10-01

    Six steady-state and six transient benchmarking calculations have been performed, using the finite element code NORIA, to simulate one-dimensional infiltration into Yucca Mountain. These calculations were made to support the code verification (COVE 2A) activity for the Yucca Mountain Site Characterization Project. COVE 2A evaluates the usefulness of numerical codes for analyzing the hydrology of the potential Yucca Mountain site. Numerical solutions for all cases were found to be stable. As expected, the difficulties and computer-time requirements associated with obtaining solutions increased with infiltration rate. 10 refs., 128 figs., 5 tabs.

  10. Benchmark characterization

    NASA Technical Reports Server (NTRS)

    Conte, Thomas M.; Hwu, Wen-Mei W.

    1991-01-01

    An abstract system of benchmark characteristics that makes it possible, in the beginning of the design stage, to design with benchmark performance in mind is presented. The benchmark characteristics for a set of commonly used benchmarks are then shown. The benchmark set used includes some benchmarks from the Systems Performance Evaluation Cooperative (SPEC). The SPEC programs are industry-standard applications that use specific inputs. Processor, memory-system, and operating-system characteristics are addressed.

  11. Limitations of Community College Benchmarking and Benchmarks

    ERIC Educational Resources Information Center

    Bers, Trudy H.

    2006-01-01

    This chapter distinguishes between benchmarks and benchmarking, describes a number of data and cultural limitations to benchmarking projects, and suggests that external demands for accountability are the dominant reason for growing interest in benchmarking among community colleges.

  12. Thermal Performance Benchmarking (Presentation)

    SciTech Connect

    Moreno, G.

    2014-11-01

    This project will benchmark the thermal characteristics of automotive power electronics and electric motor thermal management systems. Recent vehicle systems will be benchmarked to establish baseline metrics, evaluate advantages and disadvantages of different thermal management systems, and identify areas of improvement to advance the state-of-the-art.

  13. Middleware Evaluation and Benchmarking for Use in Mission Operations Centers

    NASA Technical Reports Server (NTRS)

    Antonucci, Rob; Waktola, Waka

    2005-01-01

    Middleware technologies have been promoted as timesaving, cost-cutting alternatives to the point-to-point communication used in traditional mission operations systems. However, missions have been slow to adopt the new technology. The lack of existing middleware-based missions has given rise to uncertainty about middleware's ability to perform in an operational setting. Most mission architects are also unfamiliar with the technology and do not know the benefits and detriments to architectural choices - or even what choices are available. We will present the findings of a study that evaluated several middleware options specifically for use in a mission operations system. We will address some common misconceptions regarding the applicability of middleware-based architectures, and we will identify the design decisions and tradeoffs that must be made when choosing a middleware solution. The Middleware Comparison and Benchmark Study was conducted at NASA Goddard Space Flight Center to comprehensively evaluate candidate middleware products, compare and contrast the performance of middleware solutions with the traditional point-to-point socket approach, and assess data delivery and reliability strategies. The study focused on requirements of the Global Precipitation Measurement (GPM) mission, validating the potential use of middleware in the GPM mission ground system. The study was jointly funded by GPM and the Goddard Mission Services Evolution Center (GMSEC), a virtual organization for providing mission enabling solutions and promoting the use of appropriate new technologies for mission support. The study was broken into two phases. To perform the generic middleware benchmarking and performance analysis, a network was created with data producers and consumers passing data between themselves. The benchmark monitored the delay, throughput, and reliability of the data as the characteristics were changed. Measurements were taken under a variety of topologies, data demands
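
    The generic benchmarking phase described, producers and consumers passing data while delay and throughput are monitored, can be sketched with plain sockets, which also represent the traditional point-to-point baseline the study compared against. A minimal sketch with hypothetical message counts and sizes:

      import socket, threading, time

      HOST, PORT, N, SIZE = "127.0.0.1", 50007, 1000, 1024
      payload = b"x" * SIZE

      def consumer():
          # Echo server standing in for the data consumer.
          with socket.create_server((HOST, PORT)) as srv:
              conn, _ = srv.accept()
              with conn:
                  for _ in range(N):
                      data = conn.recv(SIZE, socket.MSG_WAITALL)
                      conn.sendall(data)

      threading.Thread(target=consumer, daemon=True).start()
      time.sleep(0.2)  # give the server time to start listening

      with socket.create_connection((HOST, PORT)) as producer:
          t0 = time.perf_counter()
          for _ in range(N):
              producer.sendall(payload)
              producer.recv(SIZE, socket.MSG_WAITALL)
          dt = time.perf_counter() - t0

      print(f"avg round trip: {dt / N * 1e6:.0f} us, "
            f"throughput: {2 * N * SIZE / dt / 1e6:.2f} MB/s")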

  14. Using Web-Based Peer Benchmarking to Manage the Client-Based Project

    ERIC Educational Resources Information Center

    Raska, David; Keller, Eileen Weisenbach; Shaw, Doris

    2013-01-01

    The complexities of integrating client-based projects into marketing courses provide challenges for the instructor but produce richness of context and active learning for the student. This paper explains the integration of Web-based peer benchmarking as a means of improving student performance on client-based projects within a single semester in…

  15. Improving HEI Productivity and Performance through Project Management: Implications from a Benchmarking Case Study

    ERIC Educational Resources Information Center

    Bryde, David; Leighton, Diana

    2009-01-01

    As higher education institutions (HEIs) look to be more commercial in their outlook they are likely to become more dependent on the successful implementation of projects. This article reports a benchmarking survey of PM maturity in a HEI, with the purpose of assessing its capability to implement projects. Data were collected via questionnaires…

  16. Model evaluation using a community benchmarking system for land surface models

    NASA Astrophysics Data System (ADS)

    Mu, M.; Hoffman, F. M.; Lawrence, D. M.; Riley, W. J.; Keppel-Aleks, G.; Kluzek, E. B.; Koven, C. D.; Randerson, J. T.

    2014-12-01

    Evaluation of atmosphere, ocean, sea ice, and land surface models is an important step in identifying deficiencies in Earth system models and developing improved estimates of future change. For the land surface and carbon cycle, the design of an open-source system has been an important objective of the International Land Model Benchmarking (ILAMB) project. Here we evaluated CMIP5 and CLM models using a benchmarking system that enables users to specify models, data sets, and scoring systems so that results can be tailored to specific model intercomparison projects. Our scoring system used information from four different aspects of global datasets, including climatological mean spatial patterns, seasonal cycle dynamics, interannual variability, and long-term trends. Variable-to-variable comparisons enable investigation of the mechanistic underpinnings of model behavior, and allow for some control of biases in model drivers. Graphics modules allow users to evaluate model performance at local, regional, and global scales. Use of modular structures makes it relatively easy for users to add new variables, diagnostic metrics, benchmarking datasets, or model simulations. Diagnostic results are automatically organized into HTML files, so users can conveniently share results with colleagues. We used this system to evaluate atmospheric carbon dioxide, burned area, global biomass and soil carbon stocks, net ecosystem exchange, gross primary production, ecosystem respiration, terrestrial water storage, evapotranspiration, and surface radiation from CMIP5 historical and ESM historical simulations. We found that the multi-model mean often performed better than many of the individual models for most variables. We plan to publicly release a stable version of the software during fall of 2014 that has land surface, carbon cycle, hydrology, radiation and energy cycle components.
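
    The scoring system described, combining mean-state, seasonal-cycle, interannual-variability, and trend information, can be thought of as a weighted aggregate of normalized error scores. A simplified sketch of that idea, not the actual ILAMB scoring code:

      import numpy as np

      def score(err, scale):
          """Map a normalized error onto (0, 1]; 1 is a perfect match."""
          return float(np.exp(-abs(err) / (scale + 1e-12)))

      def variable_score(model, obs, period=12):
          """Equal-weight combination of the four aspects named above."""
          n = len(obs) - len(obs) % period
          m = model[:n].reshape(-1, period)  # years x months
          o = obs[:n].reshape(-1, period)
          s = o.std()
          years = np.arange(m.shape[0])
          parts = {
              "mean_state": score(m.mean() - o.mean(), s),
              "seasonal_cycle": score(np.sqrt(np.mean((m.mean(0) - o.mean(0)) ** 2)), s),
              "interannual": score(np.sqrt(np.mean((m.mean(1) - o.mean(1)) ** 2)), s),
              "trend": score(np.polyfit(years, m.mean(1), 1)[0]
                             - np.polyfit(years, o.mean(1), 1)[0], s),
          }
          return sum(parts.values()) / len(parts), parts

      # Example with ten years of hypothetical monthly values:
      t = np.arange(120.0)
      obs = 5.0 + 2.0 * np.sin(2 * np.pi * t / 12)
      mod = 5.3 + 1.8 * np.sin(2 * np.pi * t / 12)
      total, parts = variable_score(mod, obs)
      print(f"total score: {total:.2f}")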

  17. BENCHMARK EVALUATION OF THE START-UP CORE REACTOR PHYSICS MEASUREMENTS OF THE HIGH TEMPERATURE ENGINEERING TEST REACTOR

    SciTech Connect

    John Darrell Bess

    2010-05-01

    The benchmark evaluation of the start-up core reactor physics measurements performed with Japan’s High Temperature Engineering Test Reactor, in support of the Next Generation Nuclear Plant Project and Very High Temperature Reactor Program activities at the Idaho National Laboratory, has been completed. The evaluation was performed using MCNP5 with ENDF/B-VII.0 nuclear data libraries and according to guidelines provided for inclusion in the International Reactor Physics Experiment Evaluation Project Handbook. Results provided include updated evaluation of the initial six critical core configurations (five annular and one fully-loaded). The calculated keff eigenvalues agree within 1σ of the benchmark values. Reactor physics measurements that were evaluated include reactivity effects measurements such as excess reactivity during the core loading process and shutdown margins for the fully-loaded core, four isothermal temperature reactivity coefficient measurements for the fully-loaded core, and axial reaction rate measurements in the instrumentation columns of three core configurations. The calculated values agree well with the benchmark experiment measurements. Fully subcritical and warm critical configurations of the fully-loaded core were also assessed. The calculated keff eigenvalues for these two configurations also agree within 1σ of the benchmark values. The reactor physics measurement data can be used in the validation and design development of future High Temperature Gas-cooled Reactor systems.

  18. Benchmark Evaluation of Plutonium Hemispheres Reflected by Steel and Oil

    SciTech Connect

    John Darrell Bess

    2008-06-01

    During the period from June 1967 through September 1969, a series of critical experiments was performed at the Rocky Flats Critical Mass Laboratory with spherical and hemispherical plutonium assemblies built as nested hemishells, part of a Nuclear Safety Facility Experimental Program to evaluate operational safety margins for the Rocky Flats Plant. These assemblies were either bare or fully or partially oil-reflected. Many of these experiments were subcritical with an extrapolation to critical configurations or critical at a particular oil height. Existing records reveal that 167 experiments were performed over the course of 28 months. Unfortunately, much of the data was not recorded. A reevaluation of the experiments has been summarized in a report for future experimental and computational analyses. This report examines only fifteen partially oil-reflected hemispherical assemblies. Fourteen of these assemblies also had close-fitting stainless-steel hemishell reflectors, used to determine the effective critical reflector height of oil with varying steel-reflector thickness. The experiments and the uncertainties in their keff values were evaluated to determine their potential as valid plutonium criticality benchmark experiments.

  19. Benchmarking Data Sets for the Evaluation of Virtual Ligand Screening Methods: Review and Perspectives.

    PubMed

    Lagarde, Nathalie; Zagury, Jean-François; Montes, Matthieu

    2015-07-27

    Virtual screening methods are commonly used nowadays in drug discovery processes. However, to ensure their reliability, they have to be carefully evaluated. The evaluation of these methods is often realized in a retrospective way, notably by studying the enrichment of benchmarking data sets. To this purpose, numerous benchmarking data sets were developed over the years, and the resulting improvements led to the availability of high quality benchmarking data sets. However, some points still have to be considered in the selection of the active compounds, decoys, and protein structures to obtain optimal benchmarking data sets.
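
    Enrichment, the retrospective metric referred to here, is commonly quantified with an enrichment factor: the fraction of actives recovered in the top x% of the ranked list divided by the fraction expected at random. A small worked sketch with hypothetical numbers:

      def enrichment_factor(ranked_labels, fraction=0.01):
          """EF(x) = (fraction of actives found in the top x%) / x.

          ranked_labels: 1 for active, 0 for decoy, best-scored first.
          """
          n_top = max(1, int(len(ranked_labels) * fraction))
          actives_total = sum(ranked_labels)
          actives_top = sum(ranked_labels[:n_top])
          return (actives_top / actives_total) / fraction

      # Hypothetical screen: 10 actives hidden among 990 decoys, 5 of them
      # recovered in the top 1% of the ranking.
      ranked = [1] * 5 + [0] * 5 + [1] * 5 + [0] * 985
      print(f"EF at 1%: {enrichment_factor(ranked, 0.01):.0f}")  # (5/10)/0.01 = 50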

  20. BENCHMARK EVALUATION OF THE INITIAL ISOTHERMAL PHYSICS MEASUREMENTS AT THE FAST FLUX TEST FACILITY

    SciTech Connect

    John Darrell Bess

    2010-05-01

    The benchmark evaluation of the initial isothermal physics tests performed at the Fast Flux Test Facility, in support of Fuel Cycle Research and Development and Generation-IV activities at the Idaho National Laboratory, has been completed. The evaluation was performed using MCNP5 with ENDF/B-VII.0 nuclear data libraries and according to guidelines provided for inclusion in the International Reactor Physics Experiment Evaluation Project Handbook. Results provided include evaluation of the initial fully-loaded core critical, two neutron spectra measurements near the axial core center, 32 reactivity effects measurements (21 control rod worths, two control rod bank worths, six differential control rod worths, two shutdown margins, and one excess reactivity), isothermal temperature coefficient, and low-energy electron and gamma spectra measurements at the core center. All measurements were performed at 400 ºF. There was good agreement between the calculated and benchmark values for the fully-loaded core critical eigenvalue, reactivity effects measurements, and isothermal temperature coefficient. General agreement between benchmark experiment measurements and calculated spectra for neutrons and low-energy gammas at the core midplane exists, but calculations of the neutron spectra below the core and the low-energy gamma spectra at core midplane did not agree well. Homogenization of core components may have had a significant impact upon computational assessment of these effects. Future work includes development of a fully-heterogeneous model for comprehensive evaluation. The reactor physics measurement data can be used in nuclear data adjustment and validation of computational methods for advanced fuel cycle and nuclear reactor systems using Liquid Metal Fast Reactor technology.

  1. Evaluating the Joint Theater Trauma Registry as a data source to benchmark casualty care.

    PubMed

    O'Connell, Karen M; Littleton-Kearney, Marguerite T; Bridges, Elizabeth; Bibb, Sandra C

    2012-05-01

    Just as data from civilian trauma registries have been used to benchmark and evaluate civilian trauma care, data contained within the Joint Theater Trauma Registry (JTTR) present a unique opportunity to benchmark combat care. Using the iterative steps of the benchmarking process, we evaluated data in the JTTR for suitability and established benchmarks for 24-hour mortality in casualties with polytrauma and a moderate or severe blunt traumatic brain injury (TBI). Mortality at 24 hours was greatest in those with polytrauma and a severe blunt TBI. No mortality was seen in casualties with polytrauma and a moderate blunt TBI. Secondary insults after TBI, especially hypothermia and hypoxemia, increased the odds of 24-hour mortality. Data contained in the JTTR were found to be suitable for establishing benchmarks. JTTR data may be useful in establishing benchmarks for other outcomes and types of combat injuries.
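
    The reported effect of secondary insults on 24-hour mortality is the kind of result typically expressed as an odds ratio from a two-by-two table. A worked sketch with hypothetical counts, since the JTTR data themselves are not reproduced in the abstract:

      import math

      # Hypothetical 2x2 table: secondary insult (e.g. hypoxemia) vs. 24-h death.
      #               died  survived
      # insult        a=12      b=48
      # no insult     c=10     d=230
      a, b, c, d = 12, 48, 10, 230

      odds_ratio = (a * d) / (b * c)
      se = math.sqrt(1 / a + 1 / b + 1 / c + 1 / d)  # Woolf's method
      low = math.exp(math.log(odds_ratio) - 1.96 * se)
      high = math.exp(math.log(odds_ratio) + 1.96 * se)
      print(f"OR = {odds_ratio:.2f}, 95% CI [{low:.2f}, {high:.2f}]")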

  2. Learning from Follow Up Surveys of Graduates: The Austin Teacher Program and the Benchmark Project. A Discussion Paper.

    ERIC Educational Resources Information Center

    Baker, Thomas E.

    This paper describes Austin College's (Texas) participation in the Benchmark Project, a collaborative followup study of teacher education graduates and their principals, focusing on the second round of data collection. The Benchmark Project was a collaboration of 11 teacher preparation programs that gathered and analyzed data comparing graduates…

  3. Development of an ICSBEP Benchmark Evaluation, Nearly 20 Years of Experience

    SciTech Connect

    J. Blair Briggs; John D. Bess

    2011-06-01

    The basic structure of all ICSBEP benchmark evaluations is essentially the same and includes (1) a detailed description of the experiment; (2) an evaluation of the experiment, including an exhaustive effort to quantify the effects of uncertainties on measured quantities; (3) a concise presentation of benchmark-model specifications; (4) sample calculation results; and (5) a summary of experimental references. Computer code input listings and other relevant information are generally preserved in appendixes. Details of an ICSBEP evaluation are presented.

  4. Monitoring Based Commissioning: Benchmarking Analysis of 24 UC/CSU/IOU Projects

    SciTech Connect

    Mills, Evan; Mathew, Paul

    2009-04-01

    Buildings rarely perform as intended, resulting in energy use that is higher than anticipated. Building commissioning has emerged as a strategy for remedying this problem in non-residential buildings. Complementing traditional hardware-based energy savings strategies, commissioning is a 'soft' process of verifying performance and design intent and correcting deficiencies. Through an evaluation of a series of field projects, this report explores the efficacy of an emerging refinement of this practice, known as monitoring-based commissioning (MBCx). MBCx can also be thought of as monitoring-enhanced building operation that incorporates three components: (1) Permanent energy information systems (EIS) and diagnostic tools at the whole-building and sub-system level; (2) Retro-commissioning based on the information from these tools and savings accounting emphasizing measurement as opposed to estimation or assumptions; and (3) On-going commissioning to ensure efficient building operations and measurement-based savings accounting. MBCx is thus a measurement-based paradigm which affords improved risk-management by identifying problems and opportunities that are missed with periodic commissioning. The analysis presented in this report is based on in-depth benchmarking of a portfolio of MBCx energy savings for 24 buildings located throughout the University of California and California State University systems. In the course of the analysis, we developed a quality-control/quality-assurance process for gathering and evaluating raw data from project sites and then selected a number of metrics to use for project benchmarking and evaluation, including appropriate normalizations for weather and climate, accounting for variations in central plant performance, and consideration of differences in building types. We performed a cost-benefit analysis of the resulting dataset, and provided comparisons to projects from a larger commissioning 'Meta-analysis' database. A total of 1120

  5. Thermal and mechanical codes first benchmark exercise: Part 1, Thermal analysis; Yucca Mountain Project

    SciTech Connect

    Costin, L.S.; Bauer, S.J.

    1990-06-01

    Thermal and mechanical models for intact and jointed rock mass behavior are being developed, verified, and validated at Sandia National Laboratories for the Yucca Mountain Project. Benchmarking is an essential part of this effort and is the primary tool used to demonstrate verification of engineering software used to solve thermomechanical problems. This report presents the results of the first phase of the first thermomechanical benchmark exercise. In the first phase of this exercise, three finite element codes for nonlinear heat conduction and one coupled thermoelastic boundary element code were used to solve the thermal portion of the benchmark problem. The codes used by the participants in this study were DOT, COYOTE, SPECTROM-41, and HEFF. The problem solved by each code was a two-dimensional idealization of a series of drifts whose dimensions approximate those of the underground layout in the conceptual design of a prospective repository for high-level radioactive waste at Yucca Mountain. 20 refs., 50 figs., 3 tabs.
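
    Benchmark problems of the nonlinear-heat-conduction type solved by these codes can be reproduced in miniature with an explicit finite-difference scheme. A one-dimensional sketch with placeholder material properties, intended only to illustrate the class of problem:

      import numpy as np

      # 1-D transient heat conduction with temperature-dependent conductivity,
      # explicit finite differences. All values are illustrative placeholders.
      L, nx, t_end = 10.0, 101, 3.0e6         # domain (m), nodes, time (s)
      rho_c = 2.0e6                           # volumetric heat capacity, J/m3-K

      def k(T):                               # conductivity, W/m-K (nonlinear)
          return 2.0 + 0.005 * (T - 300.0)

      dx = L / (nx - 1)
      T = np.full(nx, 300.0)                  # initial rock temperature, K
      T[0] = 400.0                            # heated drift wall

      dt = 0.4 * rho_c * dx**2 / k(400.0)     # stable explicit time step
      for _ in range(int(t_end / dt)):
          k_face = k(0.5 * (T[:-1] + T[1:]))            # conductivity at faces
          flux = -k_face * np.diff(T) / dx              # Fourier's law
          T[1:-1] -= dt / rho_c * np.diff(flux) / dx    # energy balance
          T[0], T[-1] = 400.0, 300.0                    # fixed boundaries

      print(f"mid-domain temperature: {T[nx // 2]:.1f} K")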

  6. Putting Data to Work: Interim Recommendations from The Benchmarking Project

    ERIC Educational Resources Information Center

    Miles, Marty; Maguire, Sheila; Woodruff-Bolte, Stacy; Clymer, Carol

    2010-01-01

    As public and private funders have focused on evaluating the effectiveness of workforce development programs, a myriad of data collection systems and reporting processes have taken shape. Navigating these systems takes significant time and energy and often saps frontline providers' capacity to use data internally for program improvement.…

  7. Benchmark for evaluation and validation of reactor simulations (BEAVRS)

    SciTech Connect

    Horelik, N.; Herman, B.; Forget, B.; Smith, K.

    2013-07-01

    Advances in parallel computing have made possible the development of high-fidelity tools for the design and analysis of nuclear reactor cores, and such tools require extensive verification and validation. This paper introduces BEAVRS, a new multi-cycle full-core Pressurized Water Reactor (PWR) depletion benchmark based on two operational cycles of a commercial nuclear power plant that provides a detailed description of fuel assemblies, burnable absorbers, in-core fission detectors, core loading patterns, and numerous in-vessel components. This benchmark enables analysts to develop extremely detailed reactor core models that can be used for testing and validation of coupled neutron transport, thermal-hydraulics, and fuel isotopic depletion. The benchmark also provides measured reactor data for Hot Zero Power (HZP) physics tests, boron letdown curves, and three-dimensional in-core flux maps from fifty-eight instrumented assemblies. Initial comparisons between calculations performed with MIT's OpenMC Monte Carlo neutron transport code and measured cycle 1 HZP test data are presented, and these results display an average deviation of approximately 100 pcm for the various critical configurations and control rod worth measurements. Computed HZP radial fission detector flux maps also agree reasonably well with the available measured data. All results indicate that this benchmark will be extremely useful in validation of coupled-physics codes and uncertainty quantification of in-core physics computational predictions. The detailed BEAVRS specification and its associated data package is hosted online at the MIT Computational Reactor Physics Group web site (http://crpg.mit.edu/), where future revisions and refinements to the benchmark specification will be made publicly available. (authors)

  8. Applications of Integral Benchmark Data

    SciTech Connect

    Giuseppe Palmiotti; Teruhiko Kugo; Fitz Trumble; Albert C. Kahler; Dale Lancaster

    2014-10-09

    The International Reactor Physics Experiment Evaluation Project (IRPhEP) and the International Criticality Safety Benchmark Evaluation Project (ICSBEP) provide evaluated integral benchmark data that may be used for validation of reactor physics / nuclear criticality safety analytical methods and data, nuclear data testing, advanced modeling and simulation, and safety analysis licensing activities. The handbooks produced by these programs are used in over 30 countries. Five example applications are presented in this paper: (1) Use of IRPhEP Data in Uncertainty Analyses and Cross Section Adjustment, (2) Uncertainty Evaluation Methods for Reactor Core Design at JAEA Using Reactor Physics Experimental Data, (3) Application of Benchmarking Data to a Broad Range of Criticality Safety Problems, (4) Cross Section Data Testing with ICSBEP Benchmarks, and (5) Use of the International Handbook of Evaluated Reactor Physics Benchmark Experiments to Support the Power Industry.

  9. Towards a common benchmark for long-term process control and monitoring performance evaluation.

    PubMed

    Rosen, C; Jeppsson, U; Vanrolleghem, P A

    2004-01-01

    The COST/IWA benchmark simulation model has been available for seven years. Its primary purpose has been to create a platform for control strategy benchmarking of biological wastewater treatment processes. The fact that the benchmark has resulted in more than 100 publications, not only in Europe but also worldwide, demonstrates the interest in such a tool within the research community. In this paper, an extension of the benchmark simulation model no. 1 (BSM1) is proposed. It aims at facilitating evaluation of two closely related operational tasks: long-term control strategy performance and process monitoring performance. The motivation for the extension is that these two tasks typically act on longer time scales. The extension proposed here consists of (1) prolonging the evaluation period to one year (including influent files); (2) specifying time-varying process parameters; and (3) including sensor and actuator failures. The prolonged evaluation period is necessary to obtain a relevant and realistic assessment of the effects of such disturbances. Also, a prolonged evaluation period allows for a number of long-term control actions/handles that cannot be evaluated in a realistic fashion in the one-week BSM1 evaluation period. In the paper, models for influent file design, parameter changes and sensor failures, the initialization procedure, and the evaluation criteria are discussed. Important remaining topics, for which consensus is required, are identified. The potential of a long-term benchmark is illustrated with an example of process monitoring algorithm benchmarking.
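
    Of the three proposed extensions, the sensor and actuator failure models lend themselves most directly to code. The BSM1 extension defines its own failure classes; purely as an illustration of the idea (not the benchmark's actual models), a stuck-plus-drift fault overlay on an ideal sensor signal could be sketched in Python as:

```python
import random

def apply_sensor_faults(signal, stuck_windows=(), drift_rate=0.0, noise_sd=0.0):
    """Overlay simple fault modes on an ideal sensor time series: 'stuck'
    intervals hold the last good reading; drift_rate adds a linear bias;
    noise_sd adds Gaussian measurement noise. Purely illustrative."""
    out = []
    last_good = signal[0] if signal else 0.0
    for i, x in enumerate(signal):
        if any(a <= i < b for a, b in stuck_windows):
            out.append(last_good)  # sensor frozen at its last good value
        else:
            last_good = x
            out.append(x + drift_rate * i + random.gauss(0.0, noise_sd))
    return out

# Hypothetical dissolved-oxygen signal with one stuck-sensor episode.
ideal = [2.0 + 0.5 * (i % 7) / 7.0 for i in range(20)]
faulty = apply_sensor_faults(ideal, stuck_windows=[(8, 13)], noise_sd=0.05)
print(faulty[6:14])
```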

  10. Evaluating the Information Power Grid using the NAS Grid Benchmarks

    NASA Technical Reports Server (NTRS)

    VanderWijngaart, Rob F.; Frumkin, Michael A.

    2004-01-01

    The NAS Grid Benchmarks (NGB) are a collection of synthetic distributed applications designed to rate the performance and functionality of computational grids. We compare several implementations of the NGB to determine programmability and efficiency of NASA's Information Power Grid (IPG), whose services are mostly based on the Globus Toolkit. We report on the overheads involved in porting existing NGB reference implementations to the IPG; no changes were made to the component tasks of the NGB themselves.

  11. State Education Agency Communications Process: Benchmark and Best Practices Project. Benchmark and Best Practices Project. Issue No. 01

    ERIC Educational Resources Information Center

    Zavadsky, Heather

    2014-01-01

    The role of state education agencies (SEAs) has shifted significantly from low-profile, compliance activities like managing federal grants to engaging in more complex and politically charged tasks like setting curriculum standards, developing accountability systems, and creating new teacher evaluation systems. The move from compliance-monitoring…

  12. An Overview of the International Reactor Physics Experiment Evaluation Project

    SciTech Connect

    Briggs, J. Blair; Gulliford, Jim

    2014-10-09

    Interest in high-quality integral benchmark data is increasing as efforts to quantify and reduce calculational uncertainties associated with advanced modeling and simulation accelerate to meet the demands of next generation reactor and advanced fuel cycle concepts. Two Organization for Economic Cooperation and Development (OECD) Nuclear Energy Agency (NEA) activities, the International Criticality Safety Benchmark Evaluation Project (ICSBEP), initiated in 1992, and the International Reactor Physics Experiment Evaluation Project (IRPhEP), initiated in 2003, have been identifying existing integral experiment data, evaluating those data, and providing integral benchmark specifications for methods and data validation for nearly two decades. Data provided by those two projects will be of use to the international reactor physics, criticality safety, and nuclear data communities for future decades. An overview of the IRPhEP and a brief update of the ICSBEP are provided in this paper.

  13. Preliminary Benchmark Evaluation of Japan’s High Temperature Engineering Test Reactor

    SciTech Connect

    John Darrell Bess

    2009-05-01

    A benchmark model of the initial fully-loaded start-up core critical of Japan’s High Temperature Engineering Test Reactor (HTTR) was developed to provide data in support of ongoing validation efforts of the Very High Temperature Reactor Program using publicly available resources. The HTTR is a 30 MWt test reactor utilizing graphite moderation, helium coolant, and prismatic TRISO fuel. The benchmark was modeled using MCNP5 with various neutron cross-section libraries. An uncertainty evaluation was performed by perturbing the benchmark model and comparing the resultant eigenvalues. The calculated eigenvalues are approximately 2-3% greater than expected with an uncertainty of ±0.70%. The primary sources of uncertainty are the impurities in the core and reflector graphite. The release of additional HTTR data could effectively reduce the benchmark model uncertainties and bias. Sensitivity of the results to the graphite impurity content might imply that further evaluation of the graphite content could significantly improve calculated results. Proper characterization of graphite for future Next Generation Nuclear Power reactor designs will improve computational modeling capabilities. Current benchmarking activities include evaluation of the annular HTTR cores and assessment of the remaining start-up core physics experiments, including reactivity effects, reactivity coefficient, and reaction-rate distribution measurements. Long term benchmarking goals might include analyses of the hot zero-power critical, rise-to-power tests, and other irradiation, safety, and technical evaluations performed with the HTTR.
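
    The perturbation-based uncertainty evaluation described above is conventionally summarized by combining the eigenvalue shifts from independent one-at-a-time perturbations in quadrature. A sketch with invented Δk components (not the evaluation's actual numbers):

```python
import math

# Hypothetical one-at-a-time eigenvalue shifts (Delta-k) from perturbing
# benchmark model parameters; graphite impurities dominate in this example.
delta_k = {
    "core graphite impurity": 0.0055,
    "reflector graphite impurity": 0.0040,
    "fuel enrichment": 0.0010,
    "TRISO kernel diameter": 0.0005,
}

# Assuming the components are independent, combine them in quadrature.
total = math.sqrt(sum(dk ** 2 for dk in delta_k.values()))
print(f"total benchmark uncertainty: +/- {total * 100:.2f}% in k-eff")
```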

  14. Concept of using a benchmark part to evaluate rapid prototype processes

    NASA Technical Reports Server (NTRS)

    Cariapa, Vikram

    1994-01-01

    A conceptual benchmark part for guiding manufacturers and users of rapid prototyping technologies is proposed. This is based on a need to have some tool to evaluate the development of this technology and to assist the user in judiciously selecting a process. The benchmark part is designed to have unique product details and features. The extent to which a rapid prototyping process can reproduce these features becomes a measure of the capability of the process. Since rapid prototyping is a dynamic technology, this benchmark part should be used to continuously monitor process capability of existing and developing technologies. Development of this benchmark part is, therefore, based on an understanding of the properties required from prototypes and characteristics of various rapid prototyping processes and measuring equipment that is used for evaluation.

  15. Towards a benchmark simulation model for plant-wide control strategy performance evaluation of WWTPs.

    PubMed

    Jeppsson, U; Rosen, C; Alex, J; Copp, J; Gernaey, K V; Pons, M N; Vanrolleghem, P A

    2006-01-01

    The COST/IWA benchmark simulation model has been available for seven years. Its primary purpose has been to create a platform for control strategy benchmarking of activated sludge processes. The fact that the benchmark has resulted in more than 100 publications, not only in Europe but also worldwide, demonstrates the interest in such a tool within the research community. In this paper, an extension of the benchmark simulation model no. 1 (BSM1) is proposed. This extension aims at facilitating control strategy development and performance evaluation at a plant-wide level and, consequently, includes both pre-treatment of wastewater and the processes describing sludge treatment. The motivation for the extension is the increasing interest and need to operate and control wastewater treatment systems not only at an individual process level but also on a plant-wide basis. To facilitate the changes, the evaluation period has been extended to one year. A prolonged evaluation period allows for long-term control strategies to be assessed and enables the use of control handles that cannot be evaluated in a realistic fashion in the one-week BSM1 evaluation period. In the paper, the extended plant layout is proposed and the new suggested process models are described briefly. Models for influent file design, the benchmarking procedure, and the evaluation criteria are also discussed. Finally, some important remaining topics, for which consensus is required, are identified.

  16. Key findings of the US Cystic Fibrosis Foundation's clinical practice benchmarking project.

    PubMed

    Boyle, Michael P; Sabadosa, Kathryn A; Quinton, Hebe B; Marshall, Bruce C; Schechter, Michael S

    2014-04-01

    Benchmarking is the process of using outcome data to identify high-performing centres and determine practices associated with their outstanding performance. The US Cystic Fibrosis Foundation (CFF) Patient Registry contains centre-specific outcomes data for all CFF-certified paediatric and adult cystic fibrosis (CF) care programmes in the USA. The CFF benchmarking project analysed these registry data, adjusting for differences in patient case mix known to influence outcomes, and identified the top-performing US paediatric and adult CF care programmes for pulmonary and nutritional outcomes. Separate multidisciplinary paediatric and adult benchmarking teams each visited 10 CF care programmes, five in the top quintile for pulmonary outcomes and five in the top quintile for nutritional outcomes. Key practice patterns and approaches present in both paediatric and adult programmes with outstanding clinical outcomes were identified and could be summarised as systems, attitudes, practices, patient/family empowerment and projects. These included: (1) the presence of strong leadership and a well-functioning care team working with a systematic approach to providing consistent care; (2) high expectations for outcomes among providers and families; (3) early and aggressive management of clinical declines, avoiding reliance on 'rescues'; and (4) patients/families that were engaged, empowered and well informed on disease management and its rationale. In summary, assessment of practice patterns at CF care centres with top-quintile pulmonary and nutritional outcomes provides insight into characteristic practices that may aid in optimising patient outcomes.

  17. Overview of the 2014 Edition of the International Handbook of Evaluated Reactor Physics Benchmark Experiments (IRPhEP Handbook)

    SciTech Connect

    John D. Bess; J. Blair Briggs; Jim Gulliford; Ian Hill

    2014-10-01

    The International Reactor Physics Experiment Evaluation Project (IRPhEP) is a widely recognized world-class program. The work of the IRPhEP is documented in the International Handbook of Evaluated Reactor Physics Benchmark Experiments (IRPhEP Handbook). Integral data from the IRPhEP Handbook are used by reactor safety and design, nuclear data, criticality safety, and analytical methods development specialists, worldwide, to perform necessary validations of their calculational techniques. The IRPhEP Handbook is among the most frequently cited references in the nuclear industry and is expected to be a valuable resource for future decades.

  18. The ORSphere Benchmark Evaluation and Its Potential Impact on Nuclear Criticality Safety

    SciTech Connect

    John D. Bess; Margaret A. Marshall; J. Blair Briggs

    2013-10-01

    In the early 1970s, critical experiments using an unreflected metal sphere of highly enriched uranium (HEU) were performed with the focus to provide a “very accurate description…as an ideal benchmark for calculational methods and cross-section data files.” Two near-critical configurations of the Oak Ridge Sphere (ORSphere) were evaluated as acceptable benchmark experiments for inclusion in the International Handbook of Evaluated Criticality Safety Benchmark Experiments (ICSBEP Handbook). The results from those benchmark experiments were then compared with additional unmoderated and unreflected HEU metal benchmark experiment configurations currently found in the ICSBEP Handbook. For basic geometries (spheres, cylinders, and slabs) the eigenvalues calculated using MCNP5 and ENDF/B-VII.0 were within 3σ of their respective benchmark values. There appears to be generally good agreement between calculated and benchmark values for spherical and slab geometry systems. Cylindrical geometry configurations tended to calculate low, including more complex bare HEU metal systems containing cylinders. The ORSphere experiments do not calculate within their 1σ uncertainty, and the effect of the measured uncertainties for the GODIVA I benchmark may need to be reevaluated. There is significant scatter in the calculations for the highly-correlated ORCEF cylinder experiments, which are constructed from close-fitting HEU discs and annuli. Selection of a nuclear data library can have a larger impact on calculated eigenvalue results than the variation found within calculations of a given experimental series, such as the ORCEF cylinders, using a single nuclear data set.

  19. Benchmark Testing of a New 56Fe Evaluation for Criticality Safety Applications

    SciTech Connect

    Leal, Luiz C; Ivanov, E.

    2015-01-01

    The SAMMY code was used to evaluate resonance parameters of the 56Fe cross section in the resolved resonance energy range of 0–2 MeV using transmission data together with capture, elastic, inelastic, and double-differential elastic cross sections. The resonance analysis was performed with SAMMY, which fits R-matrix resonance parameters using the generalized least-squares technique (Bayes’ theory). The evaluation yielded a set of resonance parameters that reproduced the experimental data very well, along with a resonance parameter covariance matrix for data uncertainty calculations. Benchmark tests were conducted to assess the evaluation's performance in benchmark calculations.
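
    The generalized least-squares (Bayes) step that SAMMY applies can be written generically as an update of the parameter vector and its covariance. The numpy sketch below shows that standard update with a hypothetical linearized sensitivity matrix; it is the textbook formula, not SAMMY's actual implementation:

```python
import numpy as np

def gls_update(P, M, D, T, G, V):
    """Generalized least-squares (Bayes) update.
    P: prior parameters, M: prior parameter covariance,
    D: measured data, T: theory evaluated at P,
    G: sensitivity matrix dT/dP, V: data covariance."""
    S = G @ M @ G.T + V               # covariance of the residual D - T
    K = M @ G.T @ np.linalg.inv(S)    # gain matrix
    return P + K @ (D - T), M - K @ G @ M

# Tiny hypothetical example: two resonance parameters, three data points.
P = np.array([1.0, 0.5])
M = np.diag([0.1, 0.05])
G = np.array([[1.0, 0.2], [0.5, 1.0], [0.3, 0.7]])
T = G @ P                             # linear model, for illustration only
D = T + np.array([0.02, -0.01, 0.03])
V = 0.01 * np.eye(3)
P_new, M_new = gls_update(P, M, D, T, G, V)
print(P_new)
```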

  20. DICE: Database for the International Criticality Safety Benchmark Evaluation Program Handbook

    SciTech Connect

    Nouri, Ali; Nagel, Pierre; Briggs, J. Blair; Ivanova, Tatiana

    2003-09-15

    The 2002 edition of the 'International Handbook of Evaluated Criticality Safety Benchmark Experiments' (ICSBEP Handbook) spans more than 26 000 pages and contains 330 evaluations with benchmark specifications for 2881 critical or near-critical configurations. With such extensive content, it became evident that users needed more than a broad and qualitative classification of experiments to make efficient use of the ICSBEP Handbook. This paper describes the features of DICE, the Database for the International Handbook of Evaluated Criticality Safety Benchmark Experiments. The DICE program contains a relational database loaded with selected information from each configuration and a users' interface that enables one to query the database and to extract specific parameters. Summary descriptions of each experimental configuration can also be obtained. In addition, plotting capabilities provide the means of comparing neutron spectra and sensitivity coefficients for a set of configurations.
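
    The kind of multiple-criteria relational query DICE supports can be illustrated, loosely, with a miniature sqlite3 stand-in; the schema, column names, and values below are invented for illustration and do not reproduce DICE's actual structure:

```python
import sqlite3

# Invented, drastically simplified stand-in for a benchmark-configuration table.
con = sqlite3.connect(":memory:")
con.execute("""CREATE TABLE config (
    evaluation TEXT, case_num INTEGER, fuel TEXT, spectrum TEXT,
    reflector TEXT, benchmark_keff REAL, keff_unc REAL)""")
con.executemany(
    "INSERT INTO config VALUES (?, ?, ?, ?, ?, ?, ?)",
    [("HEU-MET-FAST-001", 1, "HEU metal", "fast", "none", 1.0000, 0.0010),
     ("HEU-MET-FAST-072", 2, "HEU metal", "fast", "beryllium", 1.0007, 0.0028),
     ("LEU-COMP-THERM-008", 1, "UO2", "thermal", "water", 1.0007, 0.0016)])

# Multiple-criteria search: fast-spectrum HEU metal cases with a reflector.
rows = con.execute("""SELECT evaluation, case_num, benchmark_keff
                      FROM config
                      WHERE spectrum = 'fast' AND fuel = 'HEU metal'
                        AND reflector != 'none'""").fetchall()
print(rows)
```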

  1. Reactor Physics and Criticality Benchmark Evaluations for Advanced Nuclear Fuel - Final Technical Report

    SciTech Connect

    William Anderson; James Tulenko; Bradley Rearden; Gary Harms

    2008-09-11

    The nuclear industry's interest in advanced fuel and reactor design often drives towards fuel with uranium enrichments greater than 5 wt% 235U. Unfortunately, little data exists, in the form of reactor physics and criticality benchmarks, for uranium enrichments ranging between 5 and 10 wt% 235U. The primary purpose of this project is to provide benchmarks for fuel similar to what may be required for advanced light water reactors (LWRs). These experiments will ultimately provide additional information for application to the criticality-safety bases for commercial fuel facilities handling greater than 5 wt% 235U fuel.

  2. Associations between CMS's Clinical Performance Measures project benchmarks, profit structure, and mortality in dialysis units.

    PubMed

    Szczech, L A; Klassen, P S; Chua, B; Hedayati, S S; Flanigan, M; McClellan, W M; Reddan, D N; Rettig, R A; Frankenfield, D L; Owen, W F

    2006-06-01

    Prior studies observing greater mortality in for-profit dialysis units have not captured information about benchmarks of care. This study was undertaken to examine the association between profit status and mortality while accounting for achievement of benchmarks. Utilizing data from the US Renal Data System and the Centers for Medicare & Medicaid Services' end-stage renal disease (ESRD) Clinical Performance Measures project, hemodialysis units were categorized as for-profit or not-for-profit. Associations with mortality at 1 year were estimated using Cox regression. Two thousand six hundred and eighty-five dialysis units (31,515 patients) were designated as for-profit and 1018 (15,085 patients) as not-for-profit. Patients in for-profit facilities were more likely to be older, black, female, diabetic, and have higher urea reduction ratio (URR), hematocrit, serum albumin, and transferrin saturation. Of the patients in for-profit and not-for-profit units, 19.4 and 18.6% died, respectively. In unadjusted analyses, profit status was not associated with mortality (hazard ratio (HR)=1.04, P=0.09). When URR, hematocrit, albumin, and ESRD Network were added to models with profit status, profit status (for-profit vs not-for-profit) became significantly associated with increased mortality risk. In adjusted models, patients in for-profit facilities had a greater death risk (HR 1.09, P=0.004). More patients in for-profit units met clinical benchmarks. Survival among patients in for-profit units was similar to that in not-for-profit units. This suggests that in the contemporary era, interventions in for-profit dialysis units have not impaired their ability to deliver performance benchmarks and do not affect survival.
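
    The adjusted hazard ratios above come from Cox proportional-hazards models. A minimal sketch of such a fit, with hypothetical patient-level columns and using the third-party lifelines package rather than whatever software the authors used:

```python
import pandas as pd
from lifelines import CoxPHFitter

# Hypothetical patient-level data: follow-up time (days), death indicator,
# facility profit status (1 = for-profit), and two case-mix covariates.
df = pd.DataFrame({
    "time":       [365, 200, 365, 120, 365, 310],
    "died":       [0,   1,   0,   1,   0,   1],
    "for_profit": [1,   1,   0,   1,   0,   0],
    "age":        [62,  71,  58,  80,  66,  75],
    "albumin":    [3.8, 3.2, 4.0, 2.9, 3.7, 3.1],
})

cph = CoxPHFitter()
cph.fit(df, duration_col="time", event_col="died")
cph.print_summary()  # the hazard ratio for 'for_profit' is exp(coef)
```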

  3. TOSPAC calculations in support of the COVE 2A benchmarking activity; Yucca Mountain Site Characterization Project

    SciTech Connect

    Gauthier, J.H.; Zieman, N.B.; Miller, W.B.

    1991-10-01

    The purpose of the Code Verification (COVE) 2A benchmarking activity is to assess the numerical accuracy of several computer programs for the Yucca Mountain Site Characterization Project of the Department of Energy. This paper presents a brief description of the computer program TOSPAC and a discussion of the calculational effort and results generated by TOSPAC for the COVE 2A problem set. The calculations were performed twice. The initial calculations provided preliminary results for comparison with the results from other COVE 2A participants. TOSPAC was modified in response to the comparison, and the final calculations included a correction and several enhancements to improve efficiency. 8 refs.

  4. Benchmark Evaluation of Start-Up and Zero-Power Measurements at the High-Temperature Engineering Test Reactor

    SciTech Connect

    Bess, John D.; Fujimoto, Nozomu

    2014-10-09

    Benchmark models were developed to evaluate six cold-critical and two warm-critical, zero-power measurements of the HTTR. Additional measurements of a fully-loaded subcritical configuration, core excess reactivity, shutdown margins, six isothermal temperature coefficients, and axial reaction-rate distributions were also evaluated as acceptable benchmark experiments. Insufficient information is publicly available to develop finely-detailed models of the HTTR as much of the design information is still proprietary. However, the uncertainties in the benchmark models are judged to be of sufficient magnitude to encompass any biases and bias uncertainties incurred through the simplification process used to develop the benchmark models. Dominant uncertainties in the experimental keff for all core configurations come from uncertainties in the impurity content of the various graphite blocks that comprise the HTTR. Monte Carlo calculations of keff are between approximately 0.9% and 2.7% greater than the benchmark values. Reevaluation of the HTTR models as additional information becomes available could improve the quality of this benchmark and possibly reduce the computational biases. High-quality characterization of graphite impurities would significantly improve the quality of the HTTR benchmark assessment. Simulations of the other reactor physics measurements are in good agreement with the benchmark experiment values. The complete benchmark evaluation details are available in the 2014 edition of the International Handbook of Evaluated Reactor Physics Benchmark Experiments.

  6. Large Core Code Evaluation Working Group Benchmark Problem Four: neutronics and burnup analysis of a large heterogeneous fast reactor. Part 1. Analysis of benchmark results. [LMFBR

    SciTech Connect

    Cowan, C.L.; Protsik, R.; Lewellen, J.W.

    1984-01-01

    The Large Core Code Evaluation Working Group Benchmark Problem Four was specified to provide a stringent test of the current methods used in the nuclear design and analysis process. The benchmark specifications provided a base for performing detailed burnup calculations over the first two irradiation cycles for a large heterogeneous fast reactor. Particular emphasis was placed on the techniques for modeling the three-dimensional benchmark geometry, and sensitivity studies were carried out to determine the performance parameter sensitivities to changes in the neutronics and burnup specifications. The results of the Benchmark Four calculations indicated that a linked RZ-XY (Hex) two-dimensional representation of the benchmark model geometry can be used to predict mass balance data, power distributions, regionwise fuel exposure data, and burnup reactivities with good accuracy when compared with the results of direct three-dimensional computations. Most of the small differences in the results of the benchmark analyses by the different participants were attributed to ambiguities in carrying out the regionwise flux renormalization calculations throughout the burnup step.

  7. How Can the eCampus Be Organized and Run To Address Traditional Concerns, but Maintain an Innovative Approach to Providing Educational Access? Project Eagle Evaluation Question #3. Benchmarking St. Petersburg College: A Report to Leadership.

    ERIC Educational Resources Information Center

    Burkhart, Joyce

    This paper discusses the findings of St. Petersburg College's (SPC) (Florida) evaluation question: "How can the eCampus be organized and run to address traditional faculty concerns, but maintain an innovative approach to providing educational access?" In order to evaluate this question, a list was compiled of faculty issues identified by…

  8. Benchmarking and Performance Measurement.

    ERIC Educational Resources Information Center

    Town, J. Stephen

    This paper defines benchmarking and its relationship to quality management, describes a project which applied the technique in a library context, and explores the relationship between performance measurement and benchmarking. Numerous benchmarking methods contain similar elements: deciding what to benchmark; identifying partners; gathering…

  9. Benchmark Evaluation of the HTR-PROTEUS Absorber Rod Worths (Core 4)

    SciTech Connect

    John D. Bess; Leland M. Montierth

    2014-06-01

    PROTEUS was a zero-power research reactor at the Paul Scherrer Institute (PSI) in Switzerland. The critical assembly was constructed from a large graphite annulus surrounding a central cylindrical cavity. Various experimental programs were investigated in PROTEUS; during the years 1992 through 1996, it was configured as a pebble-bed reactor and designated HTR-PROTEUS. Various critical configurations were assembled, each accompanied by an assortment of reactor physics experiments including differential and integral absorber rod measurements, kinetics, reaction rate distributions, water ingress effects, and small sample reactivity effects [1]. Four benchmark reports were previously prepared and included in the March 2013 edition of the International Handbook of Evaluated Reactor Physics Benchmark Experiments (IRPhEP Handbook) [2], evaluating eleven critical configurations. A summary of that effort was previously provided [3], and an analysis of absorber rod worth measurements for Cores 9 and 10 had been performed prior to this analysis and included in PROTEUS-GCR-EXP-004 [4]. In the current benchmark effort, absorber rod worths measured for Core Configuration 4, which was the only core with a randomly-packed pebble loading, have been evaluated for inclusion as a revision to the HTR-PROTEUS benchmark report PROTEUS-GCR-EXP-002.

  10. Evaluation of mobile phone camera benchmarking using objective camera speed and image quality metrics

    NASA Astrophysics Data System (ADS)

    Peltoketo, Veli-Tapani

    2014-11-01

    When a mobile phone camera is tested and benchmarked, the significance of image quality metrics is widely acknowledged. There are also existing methods to evaluate camera speed. However, the speed or rapidity metrics of the mobile phone's camera system have not been used alongside the quality metrics, even though camera speed has become an increasingly important camera performance feature. There are several tasks in this work. First, the most important image quality and speed-related metrics of a mobile phone's camera system are collected from standards and papers, and novel speed metrics are identified. Second, combinations of the quality and speed metrics are validated using mobile phones on the market. The measurements are made against the application programming interfaces of different operating systems. Finally, the results are evaluated and conclusions are made. The paper defines a solution for combining different image quality and speed metrics into a single benchmarking score. A proposal for the combined benchmarking metric is evaluated using measurements of 25 mobile phone cameras on the market. The paper is a continuation of previous benchmarking work, expanded with visual noise measurement and updated with the latest mobile phone versions.
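
    Combining heterogeneous quality and speed metrics into a single score requires normalizing each metric to a common scale and orientation before weighting. The paper's actual metric set and weights are not reproduced here; the sketch below uses invented metric names, ranges, and weights:

```python
def combined_score(metrics, ranges, weights, higher_is_better):
    """Clamp each metric into an expected range, normalize to [0, 1], flip
    metrics where smaller is better, and return a weighted sum."""
    score = 0.0
    for name, value in metrics.items():
        lo, hi = ranges[name]
        norm = min(max((value - lo) / (hi - lo), 0.0), 1.0)
        if not higher_is_better[name]:
            norm = 1.0 - norm
        score += weights[name] * norm
    return score

# Invented measurements for one phone camera.
metrics = {"sharpness": 0.28, "visual_noise": 2.1, "shot_to_shot_s": 1.4}
ranges  = {"sharpness": (0.1, 0.5), "visual_noise": (0.5, 5.0), "shot_to_shot_s": (0.2, 3.0)}
weights = {"sharpness": 0.4, "visual_noise": 0.3, "shot_to_shot_s": 0.3}
higher  = {"sharpness": True, "visual_noise": False, "shot_to_shot_s": False}
print(f"combined benchmark score: {combined_score(metrics, ranges, weights, higher):.2f}")
```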

  11. How Can St. Petersburg College Leverage Technology To Increase Access to Courses and Programs for an Expanded Pool of Learners? Project Eagle Evaluation Question #4. Benchmarking St. Petersburg College: A Report to Leadership.

    ERIC Educational Resources Information Center

    Burkhart, Joyce

    This report discusses St. Petersburg College's (SPC) (Florida) evaluation question, "How can St. Petersburg College leverage technology to increase access to courses and programs for an expanded pool of learners?" The report summarizes both nationwide/worldwide best practices and current SPC efforts related to four strategies: (1) an E-learning…

  12. What Are the Appropriate Models for St. Petersburg College and the University Partnership Center To Expand Access to Bachelor's and Master's Degrees? Project Eagle Evaluation Question #5. Benchmarking St. Petersburg College: A Report to Leadership.

    ERIC Educational Resources Information Center

    Burkhart, Joyce

    St. Petersburg College (SPC) (Florida), formerly a two-year community college, now offers four-year degrees. This paper discusses the findings of SPC's evaluation question focusing on what the appropriate models are for St. Petersburg College and the University Partnership Center (UPC) to increase access to bachelor's and master's programs.…

  13. Benchmarking the Remote-Handled Waste Facility at the West Valley Demonstration Project

    SciTech Connect

    O. P. Mendiratta; D. K. Ploetz

    2000-02-29

    Facility decontamination activities at the West Valley Demonstration Project (WVDP), the site of a former commercial nuclear spent fuel reprocessing facility near Buffalo, New York, have resulted in the removal of radioactive waste. Due to high dose and/or high contamination levels of this waste, it needs to be handled remotely for processing and repackaging into transport/disposal-ready containers. An initial conceptual design for a Remote-Handled Waste Facility (RHWF), completed in June 1998, was estimated to cost $55 million and take 11 years to process the waste. Benchmarking the RHWF with other facilities around the world, completed in November 1998, identified unique facility design features and innovative waste processing methods. Incorporation of the benchmarking effort has led to a smaller yet fully functional, $31 million facility. To distinguish it from the June 1998 version, the revised design is called the Rescoped Remote-Handled Waste Facility (RRHWF) in this topical report. The conceptual design for the RRHWF was completed in June 1999. A design-build contract was approved by the Department of Energy in September 1999.

  14. Identification of Integral Benchmarks for Nuclear Data Testing Using DICE (Database for the International Handbook of Evaluated Criticality Safety Benchmark Experiments)

    SciTech Connect

    J. Blair Briggs; A. Nichole Ellis; Yolanda Rugama; Nicolas Soppera; Manuel Bossant

    2011-08-01

    Typical users of the International Criticality Safety Benchmark Evaluation Project (ICSBEP) Handbook have specific criteria against which they desire to find matching experiments. Depending on the application, those criteria may consist of any combination of physical or chemical characteristics and/or various neutronic parameters. The ICSBEP Handbook has a structured format that helps the user narrow the search for experiments of interest. However, with nearly 4300 different experimental configurations and the ever-increasing addition of experimental data, the necessity to perform multiple-criteria searches has rendered these features insufficient. As a result, a relational database was created with information extracted from the ICSBEP Handbook. A users’ interface was designed by OECD and DOE to allow the interrogation of this database. The database and the corresponding users’ interface are referred to as DICE. DICE currently offers the capability to perform multiple-criteria searches that go beyond simple fuel, physical form, and spectra and include expanded general information, fuel form, moderator/coolant, neutron-absorbing material, cladding, reflector, separator, geometry, benchmark results, spectra, and neutron balance parameters. DICE also includes the capability to display graphical representations of neutron spectra, detailed neutron balance, sensitivity coefficients for capture, fission, elastic scattering, inelastic scattering, nu-bar and mu-bar, as well as several other features.

  15. RESULTS FOR THE INTERMEDIATE-SPECTRUM ZEUS BENCHMARK OBTAINED WITH NEW 63,65Cu CROSS-SECTION EVALUATIONS

    SciTech Connect

    Sobes, Vladimir; Leal, Luiz C

    2014-01-01

    The four HEU, intermediate-spectrum, copper-reflected Zeus experiments have shown discrepant results between measurement and calculation for the last several major releases of the ENDF library. The four benchmarks show a trend in reported C/E values with increasing energy of average lethargy causing fission. Recently, ORNL has made improvements to the evaluations of three key isotopes involved in the benchmark cases in question: namely, an updated evaluation of 235U and new evaluations of 63,65Cu. This paper presents the benchmarking results of the four intermediate-spectrum Zeus cases using the three updated evaluations.
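
    Both quantities behind the reported trend are straightforward to compute once eigenvalues and a fission-rate spectrum are available: C/E is the calculated-over-experimental keff ratio, and the energy of average lethargy causing fission (EALF) converts the fission-rate-weighted average lethargy back into an energy. A sketch with invented group data and an assumed reference energy:

```python
import math

def ealf(energies_eV, fission_rates, e_ref=2.0e7):
    """Energy of average lethargy causing fission. Lethargy u = ln(E_ref/E);
    average u weighted by the fission rate, converted back to an energy.
    The reference energy e_ref (20 MeV here) is an assumption."""
    total = sum(fission_rates)
    u_avg = sum(f * math.log(e_ref / e)
                for e, f in zip(energies_eV, fission_rates)) / total
    return e_ref * math.exp(-u_avg)

# Invented group energies (eV) and fission rates for an intermediate spectrum.
groups = [1.0e2, 1.0e3, 1.0e4, 1.0e5, 1.0e6]
rates = [0.05, 0.15, 0.35, 0.30, 0.15]
print(f"EALF = {ealf(groups, rates):.3e} eV")

# C/E for one benchmark case: calculated k-eff over the experimental value.
k_calc, k_exp = 0.9962, 1.0000
print(f"C/E = {k_calc / k_exp:.4f}")
```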

  16. Evaluation of HEU-Beryllium Benchmark Experiments to Improve Computational Analysis of Space Reactors

    SciTech Connect

    Bess, John; Bledsoe, Keith C; Rearden, Bradley T

    2011-01-01

    An assessment was previously performed to evaluate modeling capabilities and quantify preliminary biases and uncertainties associated with the modeling methods and data utilized in designing a nuclear reactor such as a beryllium-reflected, highly-enriched-uranium (HEU)-O2 fission surface power (FSP) system for space nuclear power. The conclusion of the previous study was that current capabilities could preclude the necessity of a cold critical test of the FSP; however, additional testing would reduce uncertainties in the beryllium and uranium cross-section data and the overall uncertainty in the computational models. A series of critical experiments using HEU metal were performed in the 1960s and 1970s in support of criticality safety operations at the Y-12 Plant. Of the hundreds of experiments, three were identified as fast-fission configurations reflected by beryllium metal. These experiments have been evaluated as benchmarks for inclusion in the International Handbook of Evaluated Criticality Safety Benchmark Experiments (IHECSBE). Further evaluation of the benchmark experiments was performed using the sensitivity and uncertainty analysis capabilities of SCALE 6. The data adjustment methods of SCALE 6 have been employed in the validation of an example FSP design model to reduce the uncertainty due to the beryllium cross section data.

  18. Development of Conceptual Benchmark Models to Evaluate Complex Hydrologic Model Calibration in Managed Basins Using Python

    NASA Astrophysics Data System (ADS)

    Hughes, J. D.; White, J.

    2013-12-01

    For many numerical hydrologic models it is a challenge to quantitatively demonstrate that complex models are preferable to simpler models. Typically, a decision is made to develop and calibrate a complex model at the beginning of a study. The value of selecting a complex model over simpler models is commonly inferred from use of a model with fewer simplifications of the governing equations because it can be time consuming to develop another numerical code with data processing and parameter estimation functionality. High-level programming languages like Python can greatly reduce the effort required to develop and calibrate simple models that can be used to quantitatively demonstrate the increased value of a complex model. We have developed and calibrated a spatially-distributed surface-water/groundwater flow model for managed basins in southeast Florida, USA, to (1) evaluate the effect of municipal groundwater pumpage on surface-water/groundwater exchange, (2) investigate how the study area will respond to sea-level rise, and (3) explore combinations of these forcing functions. To demonstrate the increased value of this complex model, we developed a two-parameter conceptual-benchmark-discharge model for each basin in the study area. The conceptual-benchmark-discharge model includes seasonal scaling and lag parameters and is driven by basin rainfall. The conceptual-benchmark-discharge models were developed in the Python programming language and used weekly rainfall data. Calibration was implemented with the Broyden-Fletcher-Goldfarb-Shanno method available in the Scientific Python (SciPy) library. Normalized benchmark efficiencies calculated using output from the complex model and the corresponding conceptual-benchmark-discharge model indicate that the complex model has more explanatory power than the simple model driven only by rainfall.
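
    The abstract specifies a two-parameter (seasonal scaling and lag) rainfall-driven model calibrated with SciPy's BFGS implementation, but not its exact functional form. The following is therefore only a plausible sketch: it substitutes a smooth linear-reservoir memory for the lag parameter so that BFGS has usable gradients, and it runs entirely on synthetic data:

```python
import numpy as np
from scipy.optimize import minimize

rng = np.random.default_rng(0)
weeks = np.arange(104)
rain = rng.gamma(2.0, 1.0, size=weeks.size)  # hypothetical weekly rainfall

def route(rain, tau):
    """Route rainfall through a linear reservoir; tau is the memory in weeks."""
    alpha = 1.0 - np.exp(-1.0 / max(tau, 1e-3))
    out, store = np.empty_like(rain), 0.0
    for i, r in enumerate(rain):
        store = (1.0 - alpha) * store + alpha * r
        out[i] = store
    return out

def model(params):
    """Two-parameter conceptual discharge model: seasonal scaling + memory."""
    scale, tau = params
    seasonal = 1.0 + 0.5 * np.sin(2.0 * np.pi * weeks / 52.0)
    return scale * seasonal * route(rain, tau)

# Synthetic "observed" discharge so the example is self-contained.
obs = model([0.8, 3.0]) + rng.normal(0.0, 0.05, weeks.size)

res = minimize(lambda p: float(np.sum((model(p) - obs) ** 2)),
               x0=[1.0, 1.0], method="BFGS")
print("calibrated scale and memory (weeks):", res.x)
```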

  19. Benchmark Evaluation of the Medium-Power Reactor Experiment Program Critical Configurations

    SciTech Connect

    Margaret A. Marshall; John D. Bess

    2013-02-01

    A series of small, compact critical assembly (SCCA) experiments were performed in 1962-1965 at the Oak Ridge National Laboratory Critical Experiments Facility (ORCEF) for the Medium-Power Reactor Experiment (MPRE) program. The MPRE was a stainless-steel clad, highly enriched uranium (HEU)-O2 fuelled, BeO reflected reactor design to provide electrical power to space vehicles. Cooling and heat transfer were to be achieved by boiling potassium in the reactor core and passing vapor directly through a turbine. Graphite- and beryllium-reflected assemblies were constructed at ORCEF to verify the critical mass, power distribution, and other reactor physics measurements needed to validate reactor calculations and reactor physics methods. The experimental series was broken into three parts, with the third portion of the experiments representing the beryllium-reflected measurements. The latter experiments are of interest for validating current reactor design efforts for a fission surface power reactor. The entire series has been evaluated as acceptable benchmark experiments and submitted for publication in the International Handbook of Evaluated Criticality Safety Benchmark Experiments and in the International Handbook of Evaluated Reactor Physics Benchmark Experiments.

  20. Windows NT Workstation Performance Evaluation Based on Pro/E 2000i BENCHMARK

    SciTech Connect

    DAVIS,SEAN M.

    2000-08-02

    A performance evaluation of several computers was necessary, so an evaluation program, or benchmark, was run on each computer to determine maximum possible performance. The program was used to test the Computer Aided Drafting (CAD) ability of each computer by monitoring the speed with which several functions were executed. The main objective of the benchmarking program was to record assembly loading times and image regeneration times and then compile a composite score that could be compared with the same tests on other computers. The three computers that were tested were the Compaq AP550, the SGI 230, and the Hewlett-Packard P750C. The Compaq and SGI computers each had a Pentium III 733 MHz processor, while the Hewlett-Packard had a Pentium III 750 MHz processor. The size and speed of Random Access Memory (RAM) in each computer varied, as did the type of graphics card. Each computer that was tested was using Windows NT 4.0 and the Pro/ENGINEER 2000i CAD benchmark software provided by the Standard Performance Evaluation Corporation (SPEC). The benchmarking program came with its own assembly, automatically loaded and ran tests on the assembly, then compiled the time each test took to complete. Due to the automation of the tests, any sort of user error affecting test scores was virtually eliminated. After all the tests were completed, scores were then compiled and compared. The Silicon Graphics 230 was by far the overall winner with a composite score of 8.57. The Compaq AP550 was next with a score of 5.19, while the Hewlett-Packard P750C performed dismally, achieving a score of 3.34. Several factors, including motherboard chipset, graphics card, and the size and speed of RAM, were involved in the differing scores of the three machines. Surprisingly the Hewlett-Packard, which had the fastest processor, came back with the lowest score. The above factors most likely contributed to the poor performance of the Hewlett-Packard. Based on the results of the benchmark test

  1. Automated Generation of Message-Passing Programs: An Evaluation of CAPTools using NAS Benchmarks

    NASA Technical Reports Server (NTRS)

    Hribar, Michelle R.; Jin, Hao-Qiang; Yan, Jerry C.; Bailey, David (Technical Monitor)

    1998-01-01

    Scientists at NASA Ames Research Center have been developing computational aeroscience applications on highly parallel architectures over the past ten years. During the same time period, a steady transition of hardware and system software also occurred, forcing us to expend great effort in migrating and recoding our applications. As applications and machine architectures continue to become increasingly complex, the cost and time required for this process will become prohibitive. Various attempts to exploit software tools to assist and automate the parallelization process have not produced favorable results. In this paper, we evaluate an interactive parallelization tool, CAPTools, for parallelizing serial versions of the NAS Parallel Benchmarks. Finally, we compare the performance of the resulting CAPTools-generated code to the hand-coded benchmarks on the Origin 2000 and IBM SP2. Based on these results, a discussion of the feasibility of automated parallelization of aerospace applications is presented along with suggestions for future work.

  2. Gifted Science Project: Evaluation Report.

    ERIC Educational Resources Information Center

    Ott, Susan L.; Emanuel, Elizabeth, Ed.

    The document contains the evaluation report on the Gifted Science Project in Montgomery County, Maryland, a program to identify resources for students in grades 3-8 who are motivated in science. The Project's primary product is a Project Resource File (PRF) listing people, places, and published materials that can be used by individual students. An…

  3. MPI performance evaluation and characterization using a compact application benchmark code

    SciTech Connect

    Worley, P.H.

    1996-06-01

    In this paper the parallel benchmark code PSTSWM is used to evaluate the performance of the vendor-supplied implementations of the MPI message-passing standard on the Intel Paragon, IBM SP2, and Cray Research T3D. This study is meant to complement the performance evaluation of individual MPI commands by providing information on the practical significance of MPI performance on the execution of a communication-intensive application code. In particular, three performance questions are addressed: how important is the communication protocol in determining performance when using MPI, how does MPI performance compare with that of the native communication library, and how efficient are the collective communication routines.
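
    The point-to-point performance questions the paper probes with PSTSWM can be illustrated at a much smaller scale with a ping-pong timing loop. The mpi4py sketch below is a generic microbenchmark, not the PSTSWM code:

```python
# Run with, e.g.: mpiexec -n 2 python pingpong.py
import numpy as np
from mpi4py import MPI

comm = MPI.COMM_WORLD
rank = comm.Get_rank()

nbytes = 1 << 20                      # 1 MiB messages
buf = np.zeros(nbytes, dtype=np.uint8)
reps = 50

comm.Barrier()
t0 = MPI.Wtime()
for _ in range(reps):
    if rank == 0:
        comm.Send(buf, dest=1)
        comm.Recv(buf, source=1)
    elif rank == 1:
        comm.Recv(buf, source=0)
        comm.Send(buf, dest=0)
t1 = MPI.Wtime()

if rank == 0:
    per_msg = (t1 - t0) / (2 * reps)  # average one-way time
    print(f"one-way time: {per_msg * 1e6:.1f} us, "
          f"bandwidth: {nbytes / per_msg / 1e6:.1f} MB/s")
```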

  4. An evaluation of waste radiotoxicity reduction for a fast burner reactor closed fuel cycle: NEA benchmark results

    SciTech Connect

    Grimm, K.N.; Hill, R.N.; Wase, D.C.

    1995-12-01

    As part of a program proposed by the OECD/NEA Working Party on Physics of Plutonium Recycling (WPPR) to evaluate different scenarios for the use of plutonium, fast reactor physics benchmarks were developed. In this paper, the fuel cycle performance of the metal-fueled benchmark is evaluated in detail. Benchmark results assess the reactor performance and toxicity behavior in a closed nuclear fuel cycle for a parametric variation of the conversion ratio between 0.5 and 1.0. Results indicate that a fast burner reactor closed fuel cycle can be utilized to significantly reduce the radiotoxicity destined for ultimate disposal.

  5. Project Change Evaluation Research Brief.

    ERIC Educational Resources Information Center

    Leiderman, Sally A.; Dupree, David M.

    Project Change is a community-driven anti-racism initiative operating in four communities: Albuquerque, New Mexico; El Paso, Texas; Knoxville, Tennessee; and Valdosta, Georgia. The formative evaluation of Project Change began in 1994 when all of the sites were still in planning or early action phases. Findings from the summative evaluation will be…

  6. Project Adelante. Final Evaluation Report.

    ERIC Educational Resources Information Center

    Hubert, John A.; And Others

    An evaluation was conducted of "Project Adelante," an ESEA Title VII project supporting a Spanish-English bilingual education program in Hartford, Connecticut. The federal funding provided personnel for staff development, parent involvement, and evaluation over 5 years of a bilingual education program serving 600 Hispanic children in 3 elementary…

  7. Team Projects and Peer Evaluations

    ERIC Educational Resources Information Center

    Doyle, John Kevin; Meeker, Ralph D.

    2008-01-01

    The authors assign semester- or quarter-long team-based projects in several Computer Science and Finance courses. This paper reports on our experience in designing, managing, and evaluating such projects. In particular, we discuss the effects of team size and of various peer evaluation schemes on team performance and student learning. We report…

  8. Project financial evaluation

    SciTech Connect

    None, None

    2009-01-18

    The project financial section of the Renewable Energy Technology Characterizations describes structures and models to support the technical and economic status of emerging renewable energy options for electricity supply.

  9. Performance evaluation of tile-based Fisher Ratio analysis using a benchmark yeast metabolome dataset.

    PubMed

    Watson, Nathanial E; Parsons, Brendon A; Synovec, Robert E

    2016-08-12

    Performance of tile-based Fisher Ratio (F-ratio) data analysis, recently developed for discovery-based studies using comprehensive two-dimensional gas chromatography coupled with time-of-flight mass spectrometry (GC×GC-TOFMS), is evaluated with a metabolomics dataset that had been previously analyzed in great detail, but while taking a brute force approach. The previously analyzed data (referred to herein as the benchmark dataset) were intracellular extracts from Saccharomyces cerevisiae (yeast), either metabolizing glucose (repressed) or ethanol (derepressed), which define the two classes in the discovery-based analysis to find metabolites that are statistically different in concentration between the two classes. Beneficially, this previously analyzed dataset provides a concrete means to validate the tile-based F-ratio software. Herein, we demonstrate and validate the significant benefits of applying tile-based F-ratio analysis. The yeast metabolomics data are analyzed more rapidly in about one week versus one year for the prior studies with this dataset. Furthermore, a null distribution analysis is implemented to statistically determine an adequate F-ratio threshold, whereby the variables with F-ratio values below the threshold can be ignored as not class distinguishing, which provides the analyst with confidence when analyzing the hit table. Forty-six of the fifty-four benchmarked changing metabolites were discovered by the new methodology while consistently excluding all but one of the benchmarked nineteen false positive metabolites previously identified.
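
    At its core, the tile-based method computes a Fisher ratio (between-class over within-class variance) for each tile and keeps only tiles exceeding a null-distribution threshold. A simplified single-feature sketch of both pieces, with synthetic data:

```python
import numpy as np

def fisher_ratio(x, labels):
    """One-way ANOVA F statistic: between-class over within-class variance."""
    x, labels = np.asarray(x, float), np.asarray(labels)
    grand = x.mean()
    classes = np.unique(labels)
    between = sum(x[labels == c].size * (x[labels == c].mean() - grand) ** 2
                  for c in classes) / (classes.size - 1)
    within = sum(((x[labels == c] - x[labels == c].mean()) ** 2).sum()
                 for c in classes) / (x.size - classes.size)
    return between / within

rng = np.random.default_rng(1)
labels = np.array([0, 0, 0, 0, 1, 1, 1, 1])  # e.g. repressed vs derepressed
signal = np.where(labels == 0, 1.0, 1.6) + rng.normal(0.0, 0.2, labels.size)

f_obs = fisher_ratio(signal, labels)

# Null distribution: F-ratios under random permutations of the class labels.
null = [fisher_ratio(signal, rng.permutation(labels)) for _ in range(999)]
threshold = np.quantile(null, 0.95)
print(f"F = {f_obs:.2f}, 95% null threshold = {threshold:.2f}")
```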

  11. The DLESE Evaluation Toolkit Project

    NASA Astrophysics Data System (ADS)

    Buhr, S. M.; Barker, L. J.; Marlino, M.

    2002-12-01

    The Evaluation Toolkit and Community project is a new Digital Library for Earth System Education (DLESE) collection designed to raise awareness of project evaluation within the geoscience education community, and to enable principal investigators, teachers, and evaluators to implement project evaluation more readily. This new resource is grounded in the needs of geoscience educators, and will provide a virtual home for a geoscience education evaluation community. The goals of the project are to 1) provide a robust collection of evaluation resources useful for Earth systems educators, 2) establish a forum and community for evaluation dialogue within DLESE, and 3) disseminate the resources through the DLESE infrastructure and through professional society workshops and proceedings. Collaboration and expertise in education, geoscience and evaluation are necessary if we are to conduct the best possible geoscience education. The Toolkit allows users to engage in evaluation at whichever level best suits their needs, get more evaluation professional development if desired, and access the expertise of other segments of the community. To date, a test web site has been built and populated, initial community feedback from the DLESE and broader community is being garnered, and we have begun to heighten awareness of geoscience education evaluation within our community. The web site contains features that allow users to access professional development about evaluation, search and find evaluation resources, submit resources, find or offer evaluation services, sign up for upcoming workshops, take the user survey, and submit calendar items. The evaluation resource matrix currently contains resources that have met our initial review. The resources are currently organized by type; they will become searchable on multiple dimensions of project type, audience, objectives and evaluation resource type as efforts to develop a collection-specific search engine mature. The peer review

  12. Project Proposals Evaluation

    NASA Astrophysics Data System (ADS)

    Encheva, Sylvia; Tumin, Sharil

    2009-08-01

    Collaboration among various firms has traditionally been pursued through single-project joint ventures for bonding purposes. Even though the work performed is usually beneficial to some extent to all participants, the type of collaboration option to be adopted is strongly influenced by the overall purposes and goals that can be achieved. In order to facilitate the choice of the collaboration option best suited to a firm's needs, a computer-based model is proposed.

  13. Incorporating specificity into optimization: evaluation of SPA using CSAR 2014 and CASF 2013 benchmarks.

    PubMed

    Yan, Zhiqiang; Wang, Jin

    2016-03-01

    Scoring functions of protein-ligand interactions are widely used in computational docking software and structure-based drug discovery. Accurate prediction of the binding energy between the protein and the ligand is the main task of the scoring function. The accuracy of a scoring function is normally evaluated by testing it on benchmarks of protein-ligand complexes. In this work, we report the evaluation analysis of an improved version of the scoring function SPecificity and Affinity (SPA). By testing on two independent benchmarks, Community Structure-Activity Resource (CSAR) 2014 and Comparative Assessment of Scoring Functions (CASF) 2013, the assessment shows that SPA is relatively more accurate than the other scoring functions compared in predicting the interactions between the protein and the ligand. We conclude that the inclusion of the specificity in the optimization can effectively suppress the competitive state on the funnel-like binding energy landscape and make SPA more accurate in identifying the "native" conformation and scoring the binding decoys. The evaluation of SPA highlights the importance of binding specificity in improving the accuracy of the scoring functions.
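
    One common way to quantify the specificity of a funnel-like binding energy landscape is a Z-score-style gap between the native pose's energy and the decoy ensemble. The sketch below uses that generic measure with invented energies; it is not SPA's actual functional form:

```python
import statistics

def specificity_z(native_energy, decoy_energies):
    """Gap between the native binding energy and the decoy ensemble, in units
    of the decoys' energy spread (larger = more funnel-like, more specific)."""
    mean = statistics.fmean(decoy_energies)
    sd = statistics.stdev(decoy_energies)
    return (mean - native_energy) / sd

# Invented scores (more negative = more favorable binding).
native = -9.8
decoys = [-6.1, -5.4, -7.0, -6.6, -5.9, -6.3, -7.2, -5.5]
print(f"specificity Z ~ {specificity_z(native, decoys):.2f}")
```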

  14. The Impact Hydrocode Benchmark and Validation Project: Results of Validation Tests

    NASA Astrophysics Data System (ADS)

    Pierazzo, E.; Artemieva, N. A.; Baldwin, E. C.; Cazamias, J.; Coker, R. F.; Collins, G. S.; Crawford, D. A.; Davison, T.; Holsapple, K. A.; Housen, K. R.; Korycansky, D. G.; Wünnemann, K.

    2008-03-01

    We present our first validation tests of a glass sphere impacting water and an aluminum sphere impacting aluminum as part of the collective validation and benchmarking effort from the impact cratering and explosion community.

  15. GEAR UP Aspirations Project Evaluation

    ERIC Educational Resources Information Center

    Trimble, Brad A.

    2013-01-01

    The purpose of this study was to conduct a formative evaluation of the first two years of the Gaining Early Awareness and Readiness for Undergraduate Programs (GEAR UP) Aspirations Project (Aspirations) using a Context, Input, Process, and Product (CIPP) model so as to gain an in-depth understanding of the project during the middle school…

  16. Schoolwide Project Evaluations: Workshop Guide.

    ERIC Educational Resources Information Center

    RMC Research Corp., Denver, CO.

    This publication is a guide with the materials necessary for leading a workshop session on Chapter 1 schoolwide project evaluations aimed at meeting federal accountability requirements. As the packet points out, elementary school, middle school, and secondary school projects differ from the traditional Chapter 1 delivery models and as a…

  17. Conceptual Soundness, Metric Development, Benchmarking, and Targeting for PATH Subprogram Evaluation

    SciTech Connect

    Mosey, G.; Doris, E.; Coggeshall, C.; Antes, M.; Ruch, J.; Mortensen, J.

    2009-01-01

    The objective of this study is to evaluate the conceptual soundness of the U.S. Department of Housing and Urban Development (HUD) Partnership for Advancing Technology in Housing (PATH) program's revised goals and establish and apply a framework to identify and recommend metrics that are the most useful for measuring PATH's progress. This report provides an evaluative review of PATH's revised goals, outlines a structured method for identifying and selecting metrics, proposes metrics and benchmarks for a sampling of individual PATH programs, and discusses other metrics that potentially could be developed that may add value to the evaluation process. The framework and individual program metrics can be used for ongoing management improvement efforts and to inform broader program-level metrics for government reporting requirements.

  18. The NAS parallel benchmarks

    NASA Technical Reports Server (NTRS)

    Bailey, David (Editor); Barton, John (Editor); Lasinski, Thomas (Editor); Simon, Horst (Editor)

    1993-01-01

    A new set of benchmarks was developed for the performance evaluation of highly parallel supercomputers. These benchmarks consist of a set of kernels, the 'Parallel Kernels,' and a simulated application benchmark. Together they mimic the computation and data movement characteristics of large scale computational fluid dynamics (CFD) applications. The principal distinguishing feature of these benchmarks is their 'pencil and paper' specification - all details of these benchmarks are specified only algorithmically. In this way many of the difficulties associated with conventional benchmarking approaches on highly parallel systems are avoided.

  19. NASA PC software evaluation project

    NASA Technical Reports Server (NTRS)

    Dominick, Wayne D. (Editor); Kuan, Julie C.

    1986-01-01

    The USL NASA PC software evaluation project is intended to provide a structured framework for facilitating the development of quality NASA PC software products. The project will assist NASA PC development staff to understand the characteristics and functions of NASA PC software products. Based on the results of the project teams' evaluations and recommendations, users can judge the reliability, usability, acceptability, maintainability and customizability of all the PC software products. The objective here is to provide initial, high-level specifications and guidelines for NASA PC software evaluation. The primary tasks to be addressed in this project are as follows: to gain a strong understanding of what software evaluation entails and how to organize a structured software evaluation process; to define a structured methodology for conducting the software evaluation process; to develop a set of PC software evaluation criteria and evaluation rating scales; and to conduct PC software evaluations in accordance with the identified methodology. The software categories addressed include Communication Packages, Network System Software, Graphics Support Software, Environment Management Software, and General Utilities. This report represents one of the 72 attachment reports to the University of Southwestern Louisiana's Final Report on NASA Grant NGT-19-010-900. Accordingly, appropriate care should be taken in using this report out of context of the full Final Report.

  20. DSM Accuracy Evaluation for the ISPRS Commission I Image Matching Benchmark

    NASA Astrophysics Data System (ADS)

    Kuschk, G.; d'Angelo, P.; Qin, R.; Poli, D.; Reinartz, P.; Cremers, D.

    2014-11-01

    To improve the quality of algorithms for automatic generation of Digital Surface Models (DSM) from optical stereo data in the remote sensing community, the Working Group 4 of Commission I: Geometric and Radiometric Modeling of Optical Airborne and Spaceborne Sensors provides on its website (http://www2.isprs.org/commissions/comm1/wg4/benchmark-test.html) a benchmark dataset for measuring and comparing the accuracy of dense stereo algorithms. The data provided consists of several optical spaceborne stereo images together with ground truth data produced by aerial laser scanning. In this paper we present our latest work on this benchmark, building upon previous work. As a first point, we noticed that providing the abovementioned test data as geo-referenced satellite images together with their corresponding RPC camera model imposes too high a burden for wide use by other researchers, as a considerable effort still has to be made to integrate the test data's camera model into a researcher's local stereo reconstruction framework. To bypass this problem, we now also provide additional rectified input images, which enable stereo algorithms to work out of the box without the need for implementing special camera models. Care was taken to minimize the errors resulting from the rectification transformation and the involved image resampling. We further improved the robustness of the evaluation method against errors in the orientation of the satellite images (with respect to the LiDAR ground truth). To this end we implemented a point cloud alignment of the DSM and the LiDAR reference points using an Iterative Closest Point (ICP) algorithm and an estimation of the best-fitting transformation. This way, we concentrate on the errors from the stereo reconstruction and make sure that the result is not biased by errors in the absolute orientation of the satellite images. The evaluation of
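
    The ICP-based alignment step described above can be sketched in a few lines: match each DSM point to its nearest LiDAR reference point, estimate the best-fitting rigid transform, and iterate. The following Python toy (using numpy and scipy, with randomly generated points) illustrates the principle only; it is not the authors' implementation.

        # Hedged sketch of ICP alignment: nearest-neighbor matching plus a
        # least-squares rigid transform (Kabsch/SVD) at each iteration.
        import numpy as np
        from scipy.spatial import cKDTree

        def best_fit_transform(src, dst):
            """Least-squares rigid transform mapping src onto dst (Kabsch)."""
            cs, cd = src.mean(axis=0), dst.mean(axis=0)
            H = (src - cs).T @ (dst - cd)
            U, _, Vt = np.linalg.svd(H)
            R = Vt.T @ U.T
            if np.linalg.det(R) < 0:        # guard against reflections
                Vt[-1, :] *= -1
                R = Vt.T @ U.T
            t = cd - R @ cs
            return R, t

        def icp(dsm_pts, lidar_pts, iters=20):
            tree = cKDTree(lidar_pts)
            pts = dsm_pts.copy()
            for _ in range(iters):
                _, idx = tree.query(pts)    # nearest LiDAR point per DSM point
                R, t = best_fit_transform(pts, lidar_pts[idx])
                pts = pts @ R.T + t
            return pts

        # toy usage: align a slightly shifted copy of a random point cloud
        ref = np.random.rand(500, 3)
        aligned = icp(ref + np.array([0.05, -0.02, 0.1]), ref)
        print("residual RMS:", np.sqrt(((aligned - ref) ** 2).sum(axis=1).mean()))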

  1. GROWTH OF THE INTERNATIONAL CRITICALITY SAFETY AND REACTOR PHYSICS EXPERIMENT EVALUATION PROJECTS

    SciTech Connect

    J. Blair Briggs; John D. Bess; Jim Gulliford

    2011-09-01

    Since the International Conference on Nuclear Criticality Safety (ICNC) 2007, the International Criticality Safety Benchmark Evaluation Project (ICSBEP) and the International Reactor Physics Experiment Evaluation Project (IRPhEP) have continued to expand their efforts and broaden their scope. Eighteen countries participated in the ICSBEP in 2007. Now, there are 20, with recent contributions from Sweden and Argentina. The IRPhEP has also expanded from eight contributing countries in 2007 to 16 in 2011. Since ICNC 2007, the contents of the 'International Handbook of Evaluated Criticality Safety Benchmark Experiments' have increased from 442 evaluations (38000 pages), containing benchmark specifications for 3955 critical or subcritical configurations, to 516 evaluations (nearly 55000 pages), containing benchmark specifications for 4405 critical or subcritical configurations, in the 2010 Edition of the ICSBEP Handbook. The contents of the Handbook have also increased from 21 to 24 criticality-alarm-placement/shielding configurations with multiple dose points for each, and from 20 to 200 configurations categorized as fundamental physics measurements relevant to criticality safety applications. Approximately 25 new evaluations and 150 additional configurations are expected to be added to the 2011 edition of the Handbook. Since ICNC 2007, the contents of the 'International Handbook of Evaluated Reactor Physics Benchmark Experiments' have increased from 16 experimental series performed at 12 different reactor facilities to 53 experimental series performed at 30 different reactor facilities in the 2011 edition of the Handbook. Considerable effort has also been made to improve the functionality of the searchable database, DICE (Database for the International Criticality Benchmark Evaluation Project), and to verify the accuracy of the data contained therein. DICE will be discussed in separate papers at ICNC 2011. The status of the ICSBEP and the IRPh

  2. 5 CFR 470.317 - Project evaluation.

    Code of Federal Regulations, 2012 CFR

    2012-01-01

    ... 5 Administrative Personnel 1 2012-01-01 2012-01-01 false Project evaluation. 470.317 Section 470... Projects § 470.317 Project evaluation. (a) Compliance evaluation. OPM will review the operation of the...) Results evaluation. All approved project plans will contain an evaluation section to measure the impact...

  3. 5 CFR 470.317 - Project evaluation.

    Code of Federal Regulations, 2013 CFR

    2013-01-01

    ... 5 Administrative Personnel 1 2013-01-01 2013-01-01 false Project evaluation. 470.317 Section 470... Projects § 470.317 Project evaluation. (a) Compliance evaluation. OPM will review the operation of the...) Results evaluation. All approved project plans will contain an evaluation section to measure the impact...

  4. A Quantitative Methodology for Determining the Critical Benchmarks for Project 2061 Strand Maps

    ERIC Educational Resources Information Center

    Kuhn, G.

    2008-01-01

    The American Association for the Advancement of Science (AAAS) was tasked with identifying the key science concepts for science literacy in K-12 students in America (AAAS, 1990, 1993). The AAAS Atlas of Science Literacy (2001) has organized roughly half of these science concepts or benchmarks into fifty flow charts. Each flow chart or strand map…

  5. SU-E-J-30: Benchmark Image-Based TCP Calculation for Evaluation of PTV Margins for Lung SBRT Patients

    SciTech Connect

    Li, M; Chetty, I; Zhong, H

    2014-06-01

    Purpose: Tumor control probability (TCP) calculated with accumulated radiation doses may help design appropriate treatment margins. Image registration errors, however, may compromise the calculated TCP. The purpose of this study is to develop benchmark CT images to quantify registration-induced errors in the accumulated doses and their corresponding TCP. Methods: 4DCT images were registered from end-inhale (EI) to end-exhale (EE) using a “demons” algorithm. The demons DVFs were corrected by an FEM model to get realistic deformation fields. The FEM DVFs were used to warp the EI images to create the FEM-simulated images. The two images combined with the FEM DVF formed a benchmark model. Maximum intensity projection (MIP) images, created from the EI and simulated images, were used to develop IMRT plans. Two plans with 3 and 5 mm margins were developed for each patient. With these plans, radiation doses were recalculated on the simulated images and warped back to the EI images using the FEM DVFs to get the accumulated doses. The Elastix software was used to register the FEM-simulated images to the EI images. TCPs calculated with the Elastix-accumulated doses were compared with those generated by the FEM to get the TCP error of the Elastix registrations. Results: For six lung patients, the mean Elastix registration error ranged from 0.93 to 1.98 mm. Their relative dose errors in PTV were between 0.28% and 6.8% for 3mm margin plans, and between 0.29% and 6.3% for 5mm-margin plans. As the PTV margin reduced from 5 to 3 mm, the mean TCP error of the Elastix-reconstructed doses increased from 2.0% to 2.9%, and the mean NTCP errors decreased from 1.2% to 1.1%. Conclusion: Patient-specific benchmark images can be used to evaluate the impact of registration errors on the computed TCPs, and may help select appropriate PTV margins for lung SBRT patients.
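
    For readers unfamiliar with TCP models, the sketch below shows a generic Poisson/linear-quadratic TCP calculation over voxel doses. All parameter values and dose arrays are invented placeholders; the study's actual model and parameters are not specified here.

        # Hedged sketch of a Poisson/linear-quadratic TCP calculation of the
        # general kind used with accumulated voxel doses. Illustrative only.
        import math

        ALPHA = 0.30            # 1/Gy, hypothetical radiosensitivity
        ALPHA_BETA = 10.0       # Gy, hypothetical alpha/beta ratio
        CLONOGEN_DENSITY = 1e7  # clonogens per cc, hypothetical

        def tcp(voxel_doses_gy, voxel_volumes_cc, n_fractions=3):
            """Poisson TCP over tumor voxels given total accumulated doses."""
            ln_tcp = 0.0
            for D, v in zip(voxel_doses_gy, voxel_volumes_cc):
                d = D / n_fractions                      # dose per fraction
                surviving_fraction = math.exp(-ALPHA * D * (1.0 + d / ALPHA_BETA))
                ln_tcp += -CLONOGEN_DENSITY * v * surviving_fraction
            return math.exp(ln_tcp)

        doses = [54.0, 52.5, 50.1, 48.7]   # Gy, accumulated voxel doses (toy)
        volumes = [0.2, 0.2, 0.2, 0.2]     # cc per voxel
        print("TCP =", round(tcp(doses, volumes), 4))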

  6. Benchmark Data for Evaluation of Aeroacoustic Propagation Codes With Grazing Flow

    NASA Technical Reports Server (NTRS)

    Jones, Michael G.; Watson, Willie R.; Parrott, Tony L.

    2005-01-01

    Increased understanding of the effects of acoustic treatment on the propagation of sound through commercial aircraft engine nacelles is a requirement for more efficient liner design. To this end, one of NASA's goals is to further the development of duct propagation and impedance reduction codes. A number of these codes have been developed over the last three decades. These codes are typically divided into two categories: (1) codes that use the measured complex acoustic pressure field to reduce the acoustic impedance of treatment that is positioned along the wall of the duct, and (2) codes that use the acoustic impedance of the treatment as input and compute the sound field throughout the duct. Clearly, the value of these codes is dependent upon the quality of the data used for their validation. Over the past two decades, data acquired in the NASA Langley Research Center Grazing Incidence Tube have been used by a number of researchers for comparison with their propagation codes. Many of these comparisons have been based upon Grazing Incidence Tube tests that were conducted to study specific liner technology components, and were incomplete for general propagation code validation. Thus, the objective of the current investigation is to provide a quality data set that can be used as a benchmark for evaluation of duct propagation and impedance reduction codes. In order to achieve this objective, two parallel efforts have been undertaken. The first of these is the development of an enhanced impedance eduction code that uses data acquired in the Grazing Incidence Tube. This enhancement is intended to place the benchmark data on as firm a foundation as possible. The second key effort is the acquisition of a comprehensive set of data selected to allow propagation code evaluations over a range of test conditions.

  7. Block Transfer Agreement Evaluation Project

    ERIC Educational Resources Information Center

    Bastedo, Helena

    2010-01-01

    The objective of this project is to evaluate for the British Columbia Council on Admissions and Transfer (BCCAT) the effectiveness of block transfer agreements (BTAs) in the BC Transfer System and recommend steps to be taken to improve their effectiveness. Findings of this study revealed that institutions want to expand block credit transfer;…

  8. BENCHMARK DOSES FOR CHEMICAL MIXTURES: EVALUATION OF A MIXTURE OF 18 PHAHS.

    EPA Science Inventory

    Benchmark doses (BMDs), defined as doses of a substance that are expected to result in a pre-specified level of "benchmark" response (BMR), have been used for quantifying the risk associated with exposure to environmental hazards. The lower confidence limit of the BMD is used as...

  9. Analysis of a benchmark suite to evaluate mixed numeric and symbolic processing

    NASA Technical Reports Server (NTRS)

    Ragharan, Bharathi; Galant, David

    1992-01-01

    The suite of programs that formed the benchmark for a proposed advanced computer is described and analyzed. The features of the processor and its operating system that are tested by the benchmark are discussed. The computer codes and the supporting data for the analysis are given as appendices.

  10. Evaluation of the benchmark dose for point of departure determination for a variety of chemical classes in applied regulatory settings.

    PubMed

    Izadi, Hoda; Grundy, Jean E; Bose, Ranjan

    2012-05-01

    Repeated-dose studies received by the New Substances Assessment and Control Bureau (NSACB) of Health Canada are used to provide hazard information toward risk calculation. These studies provide a point of departure (POD), traditionally the NOAEL or LOAEL, which is used to extrapolate the quantity of substance above which adverse effects can be expected in humans. This project explored the use of benchmark dose (BMD) modeling as an alternative to this approach for studies with few dose groups. Continuous data from oral repeated-dose studies for chemicals previously assessed by NSACB were reanalyzed using U.S. EPA benchmark dose software (BMDS) to determine the BMD and BMD 95% lower confidence limit (BMDL(05)) for each endpoint critical to NOAEL or LOAEL determination for each chemical. Endpoint-specific benchmark dose-response levels, indicative of adversity, were consistently applied. An overall BMD and BMDL(05) were calculated for each chemical using the geometric mean. The POD obtained from benchmark analysis was then compared with the traditional toxicity thresholds originally used for risk assessment. The BMD and BMDL(05) generally were higher than the NOAEL, but lower than the LOAEL. BMDL(05) was generally constant at 57% of the BMD. Benchmark provided a clear advantage in health risk assessment when a LOAEL was the only POD identified, or when dose groups were widely distributed. Although the benchmark method cannot always be applied, in the selected studies with few dose groups it provided a more accurate estimate of the real no-adverse-effect level of a substance.
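
    The aggregation step, an overall BMD and BMDL(05) per chemical taken as the geometric mean over critical endpoints, is simple enough to sketch directly. The endpoint values below are hypothetical, not NSACB data.

        # Hedged sketch of per-chemical aggregation by geometric mean.
        import math

        def geometric_mean(values):
            return math.exp(sum(math.log(v) for v in values) / len(values))

        endpoint_bmds = [12.0, 18.5, 9.8]       # mg/kg-day per endpoint (toy)
        endpoint_bmdl05s = [6.9, 10.2, 5.6]

        bmd = geometric_mean(endpoint_bmds)
        bmdl05 = geometric_mean(endpoint_bmdl05s)
        print(f"overall BMD = {bmd:.1f}, BMDL05 = {bmdl05:.1f}, "
              f"ratio = {bmdl05 / bmd:.0%}")   # the study saw roughly 57%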

  11. Evaluation of anode (electro)catalytic materials for the direct borohydride fuel cell: Methods and benchmarks

    NASA Astrophysics Data System (ADS)

    Olu, Pierre-Yves; Job, Nathalie; Chatenet, Marian

    2016-09-01

    In this paper, different methods are discussed for evaluating the potential of a given catalyst for application as a direct borohydride fuel cell (DBFC) anode material. Characterization results in DBFC configuration are analyzed in light of the important experimental variables that influence DBFC performance. In many practical DBFC-oriented studies, however, these experimental variables prevent one from isolating the influence of the anode catalyst on cell performance. Thus, the electrochemical three-electrode cell is a widely employed and useful tool for isolating the DBFC anode catalyst and investigating its electrocatalytic activity towards the borohydride oxidation reaction (BOR) in the absence of other limitations. This article reviews selected results for different types of catalysts in an electrochemical cell containing a sodium borohydride alkaline electrolyte. In particular, common experimental conditions and benchmarks are proposed for practical evaluation of electrocatalytic activity towards the BOR in the three-electrode cell configuration. The major issue of gaseous hydrogen generation and escape during DBFC operation is also addressed through a comprehensive review of various results depending on the anode composition. Finally, preliminary concerns are raised about the stability of potential anode catalysts during DBFC operation.

  12. Integral Benchmark Data for Nuclear Data Testing Through the ICSBEP & IRPhEP

    SciTech Connect

    J. Blair Briggs; John D. Bess; Jim Gulliford; Ian Hill

    2013-10-01

    The status of the International Criticality Safety Benchmark Evaluation Project (ICSBEP) and International Reactor Physics Experiment Evaluation Project (IRPhEP) was last discussed directly with the nuclear data community at ND2007. Since ND2007, integral benchmark data that are available for nuclear data testing have increased significantly. The status of the ICSBEP and the IRPhEP is discussed and selected benchmark configurations that have been added to the ICSBEP and IRPhEP Handbooks since ND2007 are highlighted.

  13. Integral Benchmark Data for Nuclear Data Testing Through the ICSBEP & IRPhEP

    NASA Astrophysics Data System (ADS)

    Briggs, J. B.; Bess, J. D.; Gulliford, J.

    2014-04-01

    The status of the International Criticality Safety Benchmark Evaluation Project (ICSBEP) and International Reactor Physics Experiment Evaluation Project (IRPhEP) was last discussed directly with the nuclear data community at ND2007. Since ND2007, integral benchmark data that are available for nuclear data testing have increased significantly. The status of the ICSBEP and the IRPhEP is discussed and selected benchmark configurations that have been added to the ICSBEP and IRPhEP Handbooks since ND2007 are highlighted.

  14. An evaluation, comparison, and accurate benchmarking of several publicly available MS/MS search algorithms: sensitivity and specificity analysis.

    PubMed

    Kapp, Eugene A; Schütz, Frédéric; Connolly, Lisa M; Chakel, John A; Meza, Jose E; Miller, Christine A; Fenyo, David; Eng, Jimmy K; Adkins, Joshua N; Omenn, Gilbert S; Simpson, Richard J

    2005-08-01

    MS/MS and associated database search algorithms are essential proteomic tools for identifying peptides. Due to their widespread use, it is now time to perform a systematic analysis of the various algorithms currently in use. Using blood specimens used in the HUPO Plasma Proteome Project, we have evaluated five search algorithms with respect to their sensitivity and specificity, and have also accurately benchmarked them based on specified false-positive (FP) rates. Spectrum Mill and SEQUEST performed well in terms of sensitivity, but were inferior to MASCOT, X!Tandem, and Sonar in terms of specificity. Overall, MASCOT, a probabilistic search algorithm, correctly identified most peptides based on a specified FP rate. The rescoring algorithm, PeptideProphet, enhanced the overall performance of the SEQUEST algorithm, as well as provided predictable FP error rates. Ideally, score thresholds should be calculated for each peptide spectrum or minimally, derived from a reversed-sequence search as demonstrated in this study based on a validated data set. The availability of open-source search algorithms, such as X!Tandem, makes it feasible to further improve the validation process (manual or automatic) on the basis of "consensus scoring", i.e., the use of multiple (at least two) search algorithms to reduce the number of FPs.
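
    A minimal sketch of the "consensus scoring" idea, assuming each engine reports one peptide per spectrum: accept a peptide-spectrum match only when at least two engines agree. The engine outputs here are invented placeholders.

        # Hedged sketch of consensus scoring across multiple search engines.
        from collections import defaultdict

        def consensus(ids_by_engine, min_engines=2):
            """ids_by_engine: {engine: {spectrum_id: peptide}} -> confident matches."""
            votes = defaultdict(set)
            for engine, ids in ids_by_engine.items():
                for spectrum, peptide in ids.items():
                    votes[(spectrum, peptide)].add(engine)
            return {k: v for k, v in votes.items() if len(v) >= min_engines}

        results = {
            "MASCOT":   {"s1": "PEPTIDER", "s2": "SAMPLEK"},
            "X!Tandem": {"s1": "PEPTIDER", "s2": "ELSEK"},
            "SEQUEST":  {"s1": "PEPTIDER"},
        }
        for (spectrum, peptide), engines in consensus(results).items():
            print(spectrum, peptide, "agreed by", sorted(engines))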

  15. An evaluation, comparison, and accurate benchmarking of several publicly available MS/MS search algorithms: Sensitivity and Specificity analysis.

    SciTech Connect

    Kapp, Eugene; Schutz, Frederick; Connolly, Lisa M.; Chakel, John A.; Meza, Jose E.; Miller, Christine A.; Fenyo, David; Eng, Jimmy K.; Adkins, Joshua N.; Omenn, Gilbert; Simpson, Richard

    2005-08-01

    MS/MS and associated database search algorithms are essential proteomic tools for identifying peptides. Due to their widespread use, it is now time to perform a systematic analysis of the various algorithms currently in use. Using blood specimens used in the HUPO Plasma Proteome Project, we have evaluated five search algorithms with respect to their sensitivity and specificity, and have also accurately benchmarked them based on specified false-positive (FP) rates. Spectrum Mill and SEQUEST performed well in terms of sensitivity, but were inferior to MASCOT, X-Tandem, and Sonar in terms of specificity. Overall, MASCOT, a probabilistic search algorithm, correctly identified most peptides based on a specified FP rate. The rescoring algorithm, Peptide Prophet, enhanced the overall performance of the SEQUEST algorithm, as well as provided predictable FP error rates. Ideally, score thresholds should be calculated for each peptide spectrum or minimally, derived from a reversed-sequence search as demonstrated in this study based on a validated data set. The availability of open-source search algorithms, such as X-Tandem, makes it feasible to further improve the validation process (manual or automatic) on the basis of "consensus scoring", i.e., the use of multiple (at least two) search algorithms to reduce the number of FPs.

  16. 5 CFR 470.317 - Project evaluation.

    Code of Federal Regulations, 2014 CFR

    2014-01-01

    ... 5 Administrative Personnel 1 2014-01-01 2014-01-01 false Project evaluation. 470.317 Section 470... MANAGEMENT RESEARCH PROGRAMS AND DEMONSTRATIONS PROJECTS Regulatory Requirements Pertaining to Demonstration Projects § 470.317 Project evaluation. (a) Compliance evaluation. OPM will review the operation of...

  17. Managing for Results in America's Great City Schools 2014: Results from Fiscal Year 2012-13. A Report of the Performance Measurement and Benchmarking Project

    ERIC Educational Resources Information Center

    Council of the Great City Schools, 2014

    2014-01-01

    In 2002 the "Council of the Great City Schools" and its members set out to develop performance measures that could be used to improve business operations in urban public school districts. The Council launched the "Performance Measurement and Benchmarking Project" to achieve these objectives. The purposes of the project were to:…

  18. A Benchmark Data Set to Evaluate the Illumination Robustness of Image Processing Algorithms for Object Segmentation and Classification.

    PubMed

    Khan, Arif Ul Maula; Mikut, Ralf; Reischl, Markus

    2015-01-01

    Developers of image processing routines rely on benchmark data sets to give qualitative comparisons of new image analysis algorithms and pipelines. Such data sets need to include artifacts in order to occlude and distort the required information to be extracted from an image. Robustness, i.e., the quality of an algorithm in relation to the amount of distortion, is often important. However, with available benchmark data sets an evaluation of illumination robustness is difficult or even impossible due to missing ground truth data about object margins and classes, and missing information about the distortion. We present a new framework for robustness evaluation. The key aspect is an image benchmark containing 9 object classes and the required ground truth for segmentation and classification. Varying levels of shading and background noise are integrated to distort the data set. To quantify the illumination robustness, we provide measures for image quality, segmentation and classification success, and robustness. We set a high value on giving users easy access to the new benchmark; therefore, all routines are provided within a software package, but they can easily be replaced to emphasize other aspects.
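
    One plausible way to condense robustness into a single number (not necessarily the measure used by this benchmark) is the mean accuracy retained across distortion levels relative to the undistorted case, as in this sketch with invented accuracy figures.

        # Hedged sketch: robustness as mean retained accuracy over distortions.

        def robustness(acc_by_level):
            """acc_by_level: accuracies ordered from no distortion to worst case."""
            baseline = acc_by_level[0]
            return sum(a / baseline for a in acc_by_level) / len(acc_by_level)

        shading_levels = [0.0, 0.2, 0.4, 0.6, 0.8]     # increasing shading
        accuracies = [0.95, 0.93, 0.88, 0.74, 0.55]    # toy segmentation scores
        print("robustness score =", round(robustness(accuracies), 3))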

  19. Evaluating the 1995 BLS Projections.

    ERIC Educational Resources Information Center

    Rosenthal, Neal H.; Fullerton, Howard N., Jr.; Andreassen, Arthur; Veneri, Carolyn M.

    1997-01-01

    Includes "Introduction" (Neal H. Rosenthal); "Labor Force Projections" (Howard N. Fullerton, Jr.); "Industry Employment Projections" (Arthur Andreassen); and "Occupational Employment Projections" (Carolyn M. Veneri). (JOW)

  20. Yucca Mountain Project thermal and mechanical codes first benchmark exercise: Part 3, Jointed rock mass analysis; Yucca Mountain Site Characterization Project

    SciTech Connect

    Costin, L.S.; Bauer, S.J.

    1991-10-01

    Thermal and mechanical models for intact and jointed rock mass behavior are being developed, verified, and validated at Sandia National Laboratories for the Yucca Mountain Site Characterization Project. Benchmarking is an essential part of this effort and is one of the tools used to demonstrate verification of engineering software used to solve thermomechanical problems. This report presents the results of the third (and final) phase of the first thermomechanical benchmark exercise. In the first phase of this exercise, nonlinear heat conduction code were used to solve the thermal portion of the benchmark problem. The results from the thermal analysis were then used as input to the second and third phases of the exercise, which consisted of solving the structural portion of the benchmark problem. In the second phase of the exercise, a linear elastic rock mass model was used. In the third phase of the exercise, two different nonlinear jointed rock mass models were used to solve the thermostructural problem. Both models, the Sandia compliant joint model and the RE/SPEC joint empirical model, explicitly incorporate the effect of the joints on the response of the continuum. Three different structural codes, JAC, SANCHO, and SPECTROM-31, were used with the above models in the third phase of the study. Each model was implemented in two different codes so that direct comparisons of results from each model could be made. The results submitted by the participants showed that the finite element solutions using each model were in reasonable agreement. Some consistent differences between the solutions using the two different models were noted but are not considered important to verification of the codes. 9 refs., 18 figs., 8 tabs.

  1. Ada compiler evaluation on the Space Station Freedom Software Support Environment project

    NASA Technical Reports Server (NTRS)

    Badal, D. L.

    1989-01-01

    This paper describes the work in progress to select the Ada compilers for the Space Station Freedom Program (SSFP) Software Support Environment (SSE) project. The purpose of the SSE Ada compiler evaluation team is to establish the criteria, test suites, and benchmarks to be used for evaluating Ada compilers for the mainframes, workstations, and the real-time target for flight- and ground-based computers. The combined efforts and cooperation of the customer, subcontractors, vendors, academia and SIGAda groups made it possible to acquire the necessary background information, benchmarks, test suites, and criteria used.

  2. BioBenchmark Toyama 2012: an evaluation of the performance of triple stores on biological data

    PubMed Central

    2014-01-01

    Background Biological databases vary enormously in size and data complexity, from small databases that contain a few million Resource Description Framework (RDF) triples to large databases that contain billions of triples. In this paper, we evaluate whether RDF native stores can be used to meet the needs of a biological database provider. Prior evaluations have used synthetic data of limited database size. For example, the largest BSBM benchmark uses 1 billion synthetic e-commerce RDF triples on a single node. However, real-world biological data differs greatly from such simple synthetic data, and it is difficult to determine whether results obtained on synthetic e-commerce data carry over to biological databases. Therefore, for this evaluation, we used five real data sets from biological databases. Results We evaluated five triple stores, 4store, Bigdata, Mulgara, Virtuoso, and OWLIM-SE, with five biological data sets, Cell Cycle Ontology, Allie, PDBj, UniProt, and DDBJ, ranging in size from approximately 10 million to 8 billion triples. For each database, we loaded all the data into our single node and prepared the database for use in a classical data warehouse scenario. Then, we ran a series of SPARQL queries against each endpoint and recorded the execution time and the accuracy of the query response. Conclusions Our paper shows that, with appropriate configuration, Virtuoso and OWLIM-SE can satisfy the basic requirements for loading and querying biological data sets of up to roughly 8 billion triples on a single node, under simultaneous access by 64 clients. OWLIM-SE performs best for databases with approximately 11 million triples; for data sets that contain 94 million and 590 million triples, OWLIM-SE and Virtuoso perform best, neither showing an overwhelming advantage over the other; for data over 4 billion triples Virtuoso works best. 4store performs well on small data sets with limited features when the number of triples is less than 100 million, and our test shows its
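
    The query-timing loop of such an evaluation can be sketched with the standard SPARQLWrapper client, as below. The endpoint URL and query are placeholders; the actual benchmark queries are dataset-specific.

        # Hedged sketch: time SPARQL queries against an endpoint and count rows.
        import time
        from SPARQLWrapper import SPARQLWrapper, JSON

        ENDPOINT = "http://localhost:8890/sparql"   # hypothetical local endpoint
        QUERIES = {
            "count_triples": "SELECT (COUNT(*) AS ?n) WHERE { ?s ?p ?o }",
        }

        sparql = SPARQLWrapper(ENDPOINT)
        sparql.setReturnFormat(JSON)
        for name, query in QUERIES.items():
            sparql.setQuery(query)
            start = time.perf_counter()
            results = sparql.query().convert()
            elapsed = time.perf_counter() - start
            rows = len(results["results"]["bindings"])
            print(f"{name}: {rows} rows in {elapsed:.2f}s")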

  3. Criticality experiments and benchmarks for cross section evaluation: the neptunium case

    NASA Astrophysics Data System (ADS)

    Leong, L. S.; Tassan-Got, L.; Audouin, L.; Paradela, C.; Wilson, J. N.; Tarrio, D.; Berthier, B.; Duran, I.; Le Naour, C.; Stéphan, C.

    2013-03-01

    The 237Np neutron-induced fission cross section has recently been measured over a large energy range (from eV to GeV) at the n_TOF facility at CERN. When compared to previous measurements, the n_TOF fission cross section appears to be higher by 5-7% beyond the fission threshold. To check the relevance of the n_TOF data, we use a criticality experiment performed at Los Alamos with a 6 kg sphere of 237Np, surrounded by enriched uranium (235U) so as to approach criticality with fast neutrons. The calculated multiplication factor keff is in better agreement with the experiment (the deviation of 750 pcm is reduced to 250 pcm) when we replace the ENDF/B-VII.0 evaluation of the 237Np fission cross section with the n_TOF data. We also explore the hypothesis of deficiencies in the inelastic cross section of 235U, which has been invoked by some authors to explain the deviation of 750 pcm. When compared to calculations assuming a large distortion of the inelastic cross section, this hypothesis is incompatible with existing measurements. We also show that the average neutron multiplicity ν̄ of 237Np can hardly be incriminated, because of the high accuracy of the existing data. Fission rate ratios or averaged fission cross sections measured in several fast neutron fields seem to give contradictory results on the validation of the 237Np cross section, but at least one of the benchmark experiments, where the active deposits have been well calibrated for the number of atoms, favors the n_TOF data set. These outcomes support the hypothesis of a higher fission cross section of 237Np.
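
    For reference, the quoted deviations follow directly from the definition of pcm (1 pcm = 1e-5 in keff), as this tiny sketch shows with illustrative keff values.

        # Hedged sketch: deviation between calculated and benchmark keff in pcm.
        def deviation_pcm(k_calc, k_benchmark=1.0):
            return (k_calc - k_benchmark) * 1e5

        print(deviation_pcm(1.00750))   # ~750 pcm, illustrating the ENDF/B-VII.0 case
        print(deviation_pcm(1.00250))   # ~250 pcm, illustrating the n_TOF case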

  4. Benchmarking Clinical Speech Recognition and Information Extraction: New Data, Methods, and Evaluations

    PubMed Central

    Zhou, Liyuan; Hanlen, Leif; Ferraro, Gabriela

    2015-01-01

    Background Over a tenth of preventable adverse events in health care are caused by failures in information flow. These failures are tangible in clinical handover; regardless of good verbal handover, from two-thirds to all of this information is lost after 3-5 shifts if notes are taken by hand, or not at all. Speech recognition and information extraction provide a way to fill out a handover form for clinical proofing and sign-off. Objective The objective of the study was to provide a recorded spoken handover, annotated verbatim transcriptions, and evaluations to support research in spoken and written natural language processing for filling out a clinical handover form. This dataset is based on synthetic patient profiles, thereby avoiding ethical and legal restrictions, while maintaining efficacy for research in speech-to-text conversion and information extraction, based on realistic clinical scenarios. We also introduce a Web app to demonstrate the system design and workflow. Methods We experiment with Dragon Medical 11.0 for speech recognition and CRF++ for information extraction. To compute features for information extraction, we also apply CoreNLP, MetaMap, and Ontoserver. Our evaluation uses cross-validation techniques to measure processing correctness. Results The data provided were a simulation of nursing handover, as recorded using a mobile device, built from simulated patient records and handover scripts, spoken by an Australian registered nurse. Speech recognition recognized 5276 of 7277 words in our 100 test documents correctly. We considered 50 mutually exclusive categories in information extraction and achieved the F1 (ie, the harmonic mean of Precision and Recall) of 0.86 in the category for irrelevant text and the macro-averaged F1 of 0.70 over the remaining 35 nonempty categories of the form in our 101 test documents. Conclusions The significance of this study hinges on opening our data, together with the related performance benchmarks and some
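
    The reported macro-averaged F1 can be reproduced from per-category counts as in this sketch; the categories and counts below are invented, not the study's.

        # Hedged sketch: per-category F1 and its macro-average.
        def f1(tp, fp, fn):
            p = tp / (tp + fp) if tp + fp else 0.0
            r = tp / (tp + fn) if tp + fn else 0.0
            return 2 * p * r / (p + r) if p + r else 0.0

        def macro_f1(counts):
            """counts: {category: (tp, fp, fn)}; unweighted average F1."""
            scores = [f1(*c) for c in counts.values()]
            return sum(scores) / len(scores)

        counts = {"medication": (40, 8, 10), "allergy": (12, 3, 6), "plan": (25, 10, 9)}
        print("macro-averaged F1 =", round(macro_f1(counts), 2))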

  5. Helical screw expander evaluation project

    NASA Technical Reports Server (NTRS)

    Mckay, R.

    1982-01-01

    A one MW helical rotary screw expander power system for electric power generation from geothermal brine was evaluated. The technology explored in the testing is simple, potentially very efficient, and ideally suited to wellhead installations in moderate to high enthalpy, liquid-dominated fields. A functional one MW geothermal electric power plant featuring a helical screw expander was produced and then tested, with a demonstrated average performance of approximately 45% machine efficiency over a wide range of test conditions in noncondensing operation on two-phase geothermal fluids. The project also produced a computer-equipped data system, an instrumentation and control van, and a 1000 kW variable load bank, all integrated into a test array designed for operation at a variety of remote test sites. Data are presented for the Utah testing and for the noncondensing phases of the testing in Mexico. Test time logged was 437 hours during the Utah tests and 1101 hours during the Mexico tests.

  6. Evaluation of the potential of benchmarking to facilitate the measurement of chemical persistence in lakes.

    PubMed

    Zou, Hongyan; MacLeod, Matthew; McLachlan, Michael S

    2014-01-01

    The persistence of chemicals in the environment is rarely measured in the field due to a paucity of suitable methods. Here we explore the potential of chemical benchmarking to facilitate the measurement of persistence in lake systems using a multimedia chemical fate model. The model results show that persistence in a lake can be assessed by quantifying the ratio of test chemical and benchmark chemical at as few as two locations: the point of emission and the outlet of the lake. Appropriate selection of benchmark chemicals also allows pseudo-first-order rate constants for physical removal processes such as volatilization and sediment burial to be quantified. We use the model to explore how the maximum persistence that can be measured in a particular lake depends on the partitioning properties of the test chemical of interest and the characteristics of the lake. Our model experiments demonstrate that combining benchmarking techniques with good experimental design and sensitive environmental analytical chemistry may open new opportunities for quantifying chemical persistence, particularly for relatively slowly degradable chemicals for which current methods do not perform well.
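
    Under an idealized plug-flow assumption (introduced here for illustration; the paper itself uses a multimedia fate model), the two-point benchmarking idea reduces to a one-line formula: if the test and benchmark chemicals share the same transport, the change in their concentration ratio over the water residence time yields the degradation rate constant.

        # Hedged sketch: co-emitted test and benchmark chemicals share transport,
        # so (C_test/C_bench) changes only through degradation. For residence
        # time tau: k_test = ln(R_in / R_out) / tau. Numbers are invented.
        import math

        def degradation_rate(ratio_at_emission, ratio_at_outlet, residence_time_d):
            return math.log(ratio_at_emission / ratio_at_outlet) / residence_time_d

        k = degradation_rate(ratio_at_emission=1.00, ratio_at_outlet=0.62,
                             residence_time_d=120.0)
        print(f"pseudo-first-order k = {k:.4f} 1/d, half-life = {math.log(2)/k:.0f} d")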

  7. ARL Physics Web Pages: An Evaluation by Established, Transitional and Emerging Benchmarks.

    ERIC Educational Resources Information Center

    Duffy, Jane C.

    2002-01-01

    Provides an overview of characteristics among Association of Research Libraries (ARL) physics Web pages. Examines current academic Web literature and from that develops six benchmarks to measure physics Web pages: ease of navigation; logic of presentation; representation of all forms of information; engagement of the discipline; interactivity of…

  8. The PIE Institute Project: Final Evaluation Report

    ERIC Educational Resources Information Center

    St. John, Mark; Carroll, Becky; Helms, Jen; Smith, Anita

    2008-01-01

    The Playful Invention and Exploration (PIE) Institute project was funded in 2005 by the National Science Foundation (NSF). For the past three years, Inverness Research has served as the external evaluator for the PIE project. The authors' evaluation efforts have included extensive observation and documentation of PIE project activities; ongoing…

  9. ICSBEP Criticality Benchmark Eigenvalues with ENDF/B-VII.1 Cross Sections

    SciTech Connect

    Kahler, Albert C. III; MacFarlane, Robert

    2012-06-28

    We review MCNP eigenvalue calculations from a suite of International Criticality Safety Benchmark Evaluation Project (ICSBEP) Handbook evaluations with the recently distributed ENDF/B-VII.1 cross section library.

  10. The Education North Evaluation Project. Final Report.

    ERIC Educational Resources Information Center

    Ingram, E. J.; McIntosh, R. G.

    The Education North Evaluation Project monitored operation of the Education North Project (a 1978-82 project aimed at encouraging parents, teachers, and other community members in small, isolated northern Alberta communities to work together in improving the quality of education for school-aged children), assessed project outcomes, and developed…

  11. The multifamily building evaluation project

    SciTech Connect

    1995-03-01

    In 1991 the New York State Energy Office embarked on a comprehensive multi-year study of multifamily housing in New York City. The principal objective of the evaluation was to determine the degree to which new windows and boiler/burner retrofits installed in 22 multifamily buildings located in the New York City region save energy and whether the savings persist over a minimum of two years. Window and boiler retrofits were selected because they are popular measures and are frequently implemented with assistance from government and utility energy programs. Approached prospectively, energy consumption monitoring and a series of on-site inspections helped explain why energy savings exceeded or fell short of expectations. In 1993, the scope of the evaluation expanded to include the monitoring of domestic hot water (DHW) consumption in order to better understand the sizing of combined heating/DHW boilers and water consumption patterns. The evaluation was one of ten proposals selected from over 100 candidates in a nationwide competition for a US Department of Energy Building Efficiency Program Grant. The Energy Office managed the project, analyzed the data and prepared the reports, Lawrence Berkeley Laboratory served as technical advisor, and EME Group (New York City) installed meters and dataloggers, collected data, and inspected the retrofits. The New York State Energy Research and Development Authority collaborated with the Energy Office on the DHW monitoring component. Results did not always follow predictable patterns. Some buildings far exceeded energy saving estimates while others experienced an increase in consumption. Persistence patterns were mixed. Some buildings showed a steady decline in energy savings while others demonstrated a continual improvement. A clear advantage of the research design was a frequent ability to explain results.

  12. Validation of mechanical models for reinforced concrete structures: Presentation of the French project “Benchmark des Poutres de la Rance”

    NASA Astrophysics Data System (ADS)

    L'Hostis, V.; Brunet, C.; Poupard, O.; Petre-Lazar, I.

    2006-11-01

    Several ageing models are available for the prediction of the mechanical consequences of rebar corrosion. They are used for service life prediction of reinforced concrete structures. Concerning corrosion diagnosis of reinforced concrete, some Non Destructive Testing (NDT) tools have been developed and have been in use for some years. However, these developments require validation on existing concrete structures. The French project “Benchmark des Poutres de la Rance” contributes to this aspect. It has two main objectives: (i) validation of mechanical models to estimate the influence of rebar corrosion on the load bearing capacity of a structure, (ii) qualification of the use of NDT results to collect information on steel corrosion within reinforced-concrete structures. Ten French and European institutions from both academic research laboratories and industrial companies contributed during the years 2004 and 2005. This paper presents the project, which was divided into several work packages: (i) the reinforced concrete beams were characterized using non-destructive testing tools, (ii) the mechanical behaviour of the beams was experimentally tested, (iii) complementary laboratory analyses were performed, and (iv) finally, numerical simulation results were compared to the experimental results obtained with the mechanical tests.

  13. Neutron Cross Section Processing Methods for Improved Integral Benchmarking of Unresolved Resonance Region Evaluations

    NASA Astrophysics Data System (ADS)

    Walsh, Jonathan A.; Forget, Benoit; Smith, Kord S.; Brown, Forrest B.

    2016-03-01

    In this work we describe the development and application of computational methods for processing neutron cross section data in the unresolved resonance region (URR). These methods are integrated with a continuous-energy Monte Carlo neutron transport code, thereby enabling their use in high-fidelity analyses. Enhanced understanding of the effects of URR evaluation representations on calculated results is then obtained through utilization of the methods in Monte Carlo integral benchmark simulations of fast spectrum critical assemblies. First, we present a so-called on-the-fly (OTF) method for calculating and Doppler broadening URR cross sections. This method proceeds directly from ENDF-6 average unresolved resonance parameters and, thus, eliminates any need for a probability table generation pre-processing step in which tables are constructed at several energies for all desired temperatures. Significant memory reduction may be realized with the OTF method relative to a probability table treatment if many temperatures are needed. Next, we examine the effects of using a multi-level resonance formalism for resonance reconstruction in the URR. A comparison of results obtained by using the same stochastically-generated realization of resonance parameters in both the single-level Breit-Wigner (SLBW) and multi-level Breit-Wigner (MLBW) formalisms allows for the quantification of level-level interference effects on integrated tallies such as keff and energy group reaction rates. Though, as is well-known, cross section values at any given incident energy may differ significantly between single-level and multi-level formulations, the observed effects on integral results are minimal in this investigation. Finally, we demonstrate the calculation of true expected values, and the statistical spread of those values, through independent Monte Carlo simulations, each using an independent realization of URR cross section structure throughout. It is observed that both probability table
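
    To make the SLBW/MLBW distinction concrete, the sketch below evaluates an unnormalized single-level Breit-Wigner capture cross section as a plain sum of Lorentzian resonance terms; multi-level formalisms add interference between levels, which this sum omits. Resonance parameters are invented and units are only illustrative; real URR processing works from ENDF-6 average parameters.

        # Hedged sketch: relative SLBW capture cross section, no interference,
        # no Doppler broadening, no wavenumber/statistical-factor prefactors.
        def slbw_capture(E, resonances):
            """resonances: list of (E0, gamma_n, gamma_g, gamma_total) in eV.
            Each level contributes an independent Lorentzian centered on E0."""
            sigma = 0.0
            for E0, gn, gg, gt in resonances:
                sigma += gn * gg / ((E - E0) ** 2 + (gt / 2.0) ** 2)
            return sigma

        levels = [(1000.0, 0.1, 0.05, 0.16), (1040.0, 0.08, 0.05, 0.14)]
        for E in (990.0, 1000.0, 1020.0, 1040.0):
            print(E, "eV ->", round(slbw_capture(E, levels), 6))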

  14. Linking user and staff perspectives in the evaluation of innovative transition projects for youth with disabilities.

    PubMed

    McAnaney, Donal F; Wynne, Richard F

    2016-06-01

    A key challenge in formative evaluation is to gather appropriate evidence to inform the continuous improvement of initiatives. In the absence of outcome data, the programme evaluator often must rely on the perceptions of beneficiaries and staff in generating insight into what is making a difference. The article describes the approach adopted in an evaluation of 15 innovative projects supporting school-leavers with disabilities in making the transition to education, work and life in community settings. Two complementary processes provided an insight into what project staff and leadership viewed as the key project activities and features that facilitated successful transition as well as the areas of quality of life (QOL) that participants perceived as having been impacted positively by the projects. A comparison was made between participants' perceptions of QOL impact with the views of participants in services normally offered by the wider system. This revealed that project participants were significantly more positive in their views than participants in traditional services. In addition, the processes and activities of the more highly rated projects were benchmarked against less highly rated projects and also with usually available services. Even in the context of a range of intervening variables such as level and complexity of participant needs and variations in the stage of development of individual projects, the benchmarking process indicated a number of project characteristics that were highly valued by participants.

  15. Benchmarking studies for the DESCARTES and CIDER codes. Hanford Environmental Dose Reconstruction Project

    SciTech Connect

    Eslinger, P.W.; Ouderkirk, S.J.; Nichols, W.E.

    1993-01-01

    The Hanford Environmental Dose Reconstruction (HEDR) Project is developing several computer codes to model the airborne release, transport, and environmental accumulation of radionuclides resulting from Hanford operations from 1944 through 1972. In order to calculate the dose of radiation a person may have received at any given location, the geographic area addressed by the HEDR Project will be divided into a grid. The grid size suggested by the draft requirements contains 2091 units called nodes. Two of the codes being developed are DESCARTES and CIDER. The DESCARTES code will be used to estimate the concentration of radionuclides in environmental pathways from the output of the air transport code RATCHET. The CIDER code will use information provided by DESCARTES to estimate the dose received by an individual. The requirements that Battelle (BNW) set for these two codes were released to the HEDR Technical Steering Panel (TSP) in a draft document on November 10, 1992. This document reports on the preliminary work performed by the code development team to determine if the requirements could be met.

  16. Comprehensive Evaluation Project. Final Report.

    ERIC Educational Resources Information Center

    1969

    This project sought to develop a set of tests for the assessment of the basic literacy and occupational cognizance of pupils in those public elementary and secondary schools, including vocational schools, receiving services through Federally supported educational programs and projects. The assessment is to produce generalizable average scores for…

  17. Evaluations of Classroom Observations (ECO). Project Report.

    ERIC Educational Resources Information Center

    Filipczak, James

    The Evaluations of Classroom Observations Project had three major purposes, all related to direct observation of classroom behavior. First, the project was meant to assess the effectiveness of a behavioral treatment program in furthering the acquisition of appropriate classroom participation skills by disruptive students. Second, the project was…

  18. A study on operation efficiency evaluation based on firm's financial index and benchmark selection: take China Unicom as an example

    NASA Astrophysics Data System (ADS)

    Wu, Zu-guang; Tian, Zhan-jun; Liu, Hui; Huang, Rui; Zhu, Guo-hua

    2009-07-01

    As the only telecom operator listed on the A-share market, China Unicom has attracted many institutional investors in recent years under the 3G concept, which itself embodies an expectation of great technical progress. Do institutional investors, or the expectation of technical progress, have a significant effect on the improvement of a firm's operating efficiency? Reviewing the literature on operating efficiency, we find that scholars have studied this problem using regression analysis based on traditional production functions, data envelopment analysis (DEA), financial index analysis, marginal functions, capital-labor ratio coefficients, and so on. All of these methods are mainly based on macro data. In this paper we use company micro data to evaluate operating efficiency. Using factor analysis based on financial indices and comparing the factor scores of the three years from 2005 to 2007, we find that China Unicom's operating efficiency is below the average level of the benchmark corporations and did not improve under the 3G concept from 2005 to 2007. In other words, institutional investors and the expectation of technical progress have had only a faint effect on changes in China Unicom's operating efficiency. Selecting benchmark corporations as reference points for evaluating operating efficiency is a characteristic of this method, which is basically simple and direct. The method is also suitable for evaluating the operating efficiency of listed agriculture companies, because they likewise face technical progress and marketing concepts such as tax exemption.
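
    A minimal sketch of the factor-scoring approach, assuming standardized financial indices as input: extract latent factors with scikit-learn's FactorAnalysis and compare the target firm's scores against the benchmark firms' average. All figures and firm names are invented placeholders, not China Unicom data.

        # Hedged sketch: factor scores from financial indices, target vs benchmarks.
        import numpy as np
        from sklearn.decomposition import FactorAnalysis
        from sklearn.preprocessing import StandardScaler

        firms = ["target", "bench_1", "bench_2", "bench_3", "bench_4"]
        # rows: firms; columns: financial indices (e.g. ROA, turnover, margin, ...)
        X = np.array([[0.04, 0.62, 0.11, 1.3],
                      [0.07, 0.71, 0.15, 1.6],
                      [0.06, 0.65, 0.14, 1.5],
                      [0.08, 0.80, 0.18, 1.7],
                      [0.05, 0.68, 0.12, 1.4]])

        scores = FactorAnalysis(n_components=2, random_state=0).fit_transform(
            StandardScaler().fit_transform(X))
        benchmark_mean = scores[1:].mean(axis=0)
        print("target factor scores:", scores[0].round(3))
        print("benchmark average:   ", benchmark_mean.round(3))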

  19. Competitive Skills Project (CSP). External Evaluator's Report.

    ERIC Educational Resources Information Center

    Wrigley, Heide Spruck

    An external evaluation was made of the Competitive Skills Project, a National Workplace Literacy Program carried out in partnership between El Camino College and BP Chemicals. Among the problems identified were the following: (1) because the original director and his successor left the project, the original evaluation design could not be…

  20. MicroRNA array normalization: an evaluation using a randomized dataset as the benchmark.

    PubMed

    Qin, Li-Xuan; Zhou, Qin

    2014-01-01

    MicroRNA arrays possess a number of unique data features that challenge the assumption key to many normalization methods. We assessed the performance of existing normalization methods using two microRNA array datasets derived from the same set of tumor samples: one dataset was generated using a blocked randomization design when assigning arrays to samples and hence was free of confounding array effects; the second dataset was generated without blocking or randomization and exhibited array effects. The randomized dataset was assessed for differential expression between two tumor groups and treated as the benchmark. The non-randomized dataset was assessed for differential expression after normalization and compared against the benchmark. Normalization improved the true positive rate significantly in the non-randomized data but still possessed a false discovery rate as high as 50%. Adding a batch adjustment step before normalization further reduced the number of false positive markers while maintaining a similar number of true positive markers, which resulted in a false discovery rate of 32% to 48%, depending on the specific normalization method. We concluded the paper with some insights on possible causes of false discoveries to shed light on how to improve normalization for microRNA arrays.
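
    Treating the randomized dataset's markers as truth, the reported true positive and false discovery rates reduce to simple set arithmetic, as sketched below with toy marker sets.

        # Hedged sketch: TPR and FDR of called markers against a benchmark set.
        def tpr_fdr(benchmark_markers, called_markers):
            true_pos = benchmark_markers & called_markers
            tpr = len(true_pos) / len(benchmark_markers)
            fdr = 1.0 - len(true_pos) / len(called_markers)
            return tpr, fdr

        benchmark = {f"miR-{i}" for i in range(1, 41)}   # 40 "true" markers (toy)
        called = {f"miR-{i}" for i in range(11, 71)}     # 60 called markers (toy)
        tpr, fdr = tpr_fdr(benchmark, called)
        print(f"TPR = {tpr:.0%}, FDR = {fdr:.0%}")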

  1. Benchmark Evaluation of Fuel Effect and Material Worth Measurements for a Beryllium-Reflected Space Reactor Mockup

    SciTech Connect

    Marshall, Margaret A.; Bess, John D.

    2015-02-01

    The critical configurations of the small, compact critical assembly (SCCA) experiments performed at the Oak Ridge Critical Experiments Facility (ORCEF) in 1962-1965 have been evaluated as acceptable benchmark experiments for inclusion in the International Handbook of Evaluated Criticality Safety Benchmark Experiments. The initial intent of these experiments was to support the design of the Medium Power Reactor Experiment (MPRE) program, whose purpose was to study “power plants for the production of electrical power in space vehicles.” The third configuration in this series of experiments was a beryllium-reflected assembly of stainless-steel-clad, highly enriched uranium (HEU)-O2 fuel, a mockup of a potassium-cooled space power reactor. Reactivity measurements, cadmium ratio spectral measurements, and fission rate measurements were performed through the core and top reflector. Fuel effect worth measurements and neutron moderating and absorbing material worths were also measured in the assembly fuel region. The cadmium ratios, fission rates, and worth measurements were evaluated for inclusion in the International Handbook of Evaluated Criticality Safety Benchmark Experiments. The fuel tube effect and neutron moderating and absorbing material worth measurements are the focus of this paper. Additionally, a measurement of the worth of potassium filling the core region was performed but has not yet been evaluated. Pellets of 93.15 wt.% enriched uranium dioxide (UO2) were stacked in 30.48 cm tall stainless steel fuel tubes (0.3 cm tall end caps). Each fuel tube held 26 pellets with a total mass of 295.8 g UO2 per tube. 253 tubes were arranged in a 1.506-cm triangular lattice. An additional 7-tube cluster critical configuration was also measured but not used for any physics measurements. The core was surrounded on all sides by a beryllium reflector. The fuel effect worths were measured by removing fuel tubes at various radii. An accident scenario
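
    Material and fuel-tube worths of this kind are conventionally expressed as the change in static reactivity rho = (k - 1)/k between configurations, quoted in pcm; the sketch below uses invented keff values, not the SCCA data.

        # Hedged sketch: reactivity worth between reference and perturbed states.
        def worth_pcm(k_reference, k_perturbed):
            rho_ref = (k_reference - 1.0) / k_reference
            rho_per = (k_perturbed - 1.0) / k_perturbed
            return (rho_per - rho_ref) * 1e5

        print(round(worth_pcm(1.00000, 0.99650), 1), "pcm")  # e.g. removing one tube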

  2. Project HEED. Final Evaluation Report.

    ERIC Educational Resources Information Center

    Hughes, Orval D.

    During 1972-73, Project HEED (Heed Ethnic Educational Depolarization) involved 1,350 Indian students in 60 classrooms at Sells, Topowa, San Carlos, Rice, Many Farms, Hotevilla, Peach Springs, and Sacaton. Primary objectives were: (1) improvement in reading skills, (2) development of cultural awareness, and (3) providing for the Special Education…

  3. Benchmarking foreign electronics technologies

    SciTech Connect

    Bostian, C.W.; Hodges, D.A.; Leachman, R.C.; Sheridan, T.B.; Tsang, W.T.; White, R.M.

    1994-12-01

    This report has been drafted in response to a request from the Japanese Technology Evaluation Center's (JTEC) Panel on Benchmarking Select Technologies. Since April 1991, the Competitive Semiconductor Manufacturing (CSM) Program at the University of California at Berkeley has been engaged in a detailed study of quality, productivity, and competitiveness in semiconductor manufacturing worldwide. The program is a joint activity of the College of Engineering, the Haas School of Business, and the Berkeley Roundtable on the International Economy, under sponsorship of the Alfred P. Sloan Foundation, and with the cooperation of semiconductor producers from Asia, Europe and the United States. Professors David A. Hodges and Robert C. Leachman are the project's Co-Directors. The present report for JTEC is primarily based on data and analysis drawn from that continuing program. The CSM program is being conducted by faculty, graduate students and research staff from UC Berkeley's Schools of Engineering and Business, and Department of Economics. Many of the participating firms are represented on the program's Industry Advisory Board. The Board played an important role in defining the research agenda. A pilot study was conducted in 1991 with the cooperation of three semiconductor plants. The research plan and survey documents were thereby refined. The main phase of the CSM benchmarking study began in mid-1992 and will continue at least through 1997. Reports are presented on the manufacture of integrated circuits; data storage; wireless technology; human-machine interfaces; and optoelectronics. Selected papers are indexed separately for inclusion in the Energy Science and Technology Database.

  4. NHT-1 I/O Benchmarks

    NASA Technical Reports Server (NTRS)

    Carter, Russell; Ciotti, Bob; Fineberg, Sam; Nitzberg, Bill

    1992-01-01

    The NHT-1 benchmarks are a set of three scalable I/O benchmarks suitable for evaluating the I/O subsystems of high performance distributed memory computer systems. The benchmarks test application I/O, maximum sustained disk I/O, and maximum sustained network I/O. Sample codes are available which implement the benchmarks.
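
    The NHT-1 codes themselves are distributed separately; as a rough, hedged illustration of what a "maximum sustained disk I/O" measurement involves on a single node (file name and sizes below are arbitrary):

    ```python
    # Time a sustained sequential write and report throughput. This is only a
    # minimal analogue of the disk test, not the NHT-1 benchmark itself.
    import os
    import time

    path = "scratch.bin"
    block = b"\0" * (4 << 20)   # 4 MiB per write
    blocks = 256                # 1 GiB total

    start = time.perf_counter()
    with open(path, "wb") as f:
        for _ in range(blocks):
            f.write(block)
        f.flush()
        os.fsync(f.fileno())    # force data to disk before stopping the clock
    elapsed = time.perf_counter() - start

    mib = blocks * 4
    print(f"{mib} MiB in {elapsed:.2f} s -> {mib / elapsed:.1f} MiB/s")
    os.remove(path)
    ```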

  5. Evaluating success levels of mega-projects

    NASA Technical Reports Server (NTRS)

    Kumaraswamy, Mohan M.

    1994-01-01

    Today's mega-projects transcend the traditional trajectories traced within national and technological limitations. Powers unleashed by the internationalization of initiatives, for example in space exploration and environmental protection, are arguably only temporarily suppressed by narrower national, economic, and professional disagreements as to how best they should be harnessed. While the world gets its act together, there is time to develop the technologies of supra-mega-project management that will synergize truly diverse resources and smoothly mesh their interfaces. Such mega-projects and their management need to be realistically evaluated when implementing such improvements. This paper examines current approaches to evaluating mega-projects and questions the validity of extrapolations to the supra-mega-projects of the future. Alternatives to improve such evaluations are proposed and described.

  6. INTEGRAL BENCHMARK DATA FOR NUCLEAR DATA TESTING THROUGH THE ICSBEP AND THE NEWLY ORGANIZED IRPHEP

    SciTech Connect

    J. Blair Briggs; Lori Scott; Yolanda Rugama; Enrico Sartori

    2007-04-01

    The status of the International Criticality Safety Benchmark Evaluation Project (ICSBEP) was last reported in a nuclear data conference at the International Conference on Nuclear Data for Science and Technology, ND-2004, in Santa Fe, New Mexico. Since that time the number and type of integral benchmarks have increased significantly. Included in the ICSBEP Handbook are criticality-alarm / shielding and fundamental physics benchmarks in addition to the traditional critical / subcritical benchmark data. Since ND-2004, a reactor physics counterpart to the ICSBEP, the International Reactor Physics Experiment Evaluation Project (IRPhEP), was initiated. The IRPhEP is patterned after the ICSBEP, but focuses on other integral measurements, such as buckling, spectral characteristics, reactivity effects, reactivity coefficients, kinetics measurements, reaction-rate and power distributions, nuclide compositions, and other miscellaneous-type measurements in addition to the critical configuration. The status of these two projects is discussed and selected benchmarks highlighted in this paper.

  7. A Benchmarking Model. Benchmarking Quality Performance in Vocational Technical Education.

    ERIC Educational Resources Information Center

    Losh, Charles

    The Skills Standards Projects have provided further emphasis on the need for benchmarking U.S. vocational-technical education (VTE) against international competition. Benchmarking is an ongoing systematic process designed to identify, as quantitatively as possible, those practices that produce world class performance. Metrics are those things that…

  8. Assessment of the available {sup 233}U cross-section evaluations in the calculation of critical benchmark experiments

    SciTech Connect

    Leal, L.C.; Wright, R.Q.

    1996-10-01

    In this report we investigate the adequacy of the available {sup 233}U cross-section data for calculation of experimental critical systems. The {sup 233}U evaluations provided in two evaluated nuclear data libraries, the U.S. Data Bank [ENDF/B (Evaluated Nuclear Data Files)] and the Japanese Data Bank [JENDL (Japanese Evaluated Nuclear Data Library)] are examined. Calculations were performed for six thermal and ten fast experimental critical systems using the S{sub n} transport XSDRNPM code. To verify the performance of the {sup 233}U cross-section data for nuclear criticality safety application in which the neutron energy spectrum is predominantly in the epithermal energy range, calculations of four numerical benchmark systems with energy spectra in the intermediate energy range were done. These calculations serve only as an indication of the difference in calculated results that may be expected when the two {sup 233}U cross-section evaluations are used for problems with neutron spectra in the intermediate energy range. Additionally, comparisons of experimental and calculated central fission rate ratios were also made. The study has suggested that an ad hoc {sup 233}U evaluation based on the JENDL library provides better overall results for both fast and thermal experimental critical systems.

  9. Assessment of the Available {sup 233}U Cross-Section Evaluations in the Calculation of Critical Benchmark Experiments

    SciTech Connect

    Leal, L.C.

    1993-01-01

    In this report we investigate the adequacy of the available {sup 233}U cross-section data for calculation of experimental critical systems. The {sup 233}U evaluations provided in two evaluated nuclear data libraries, the U.S. Data Bank [ENDF/B (Evaluated Nuclear Data Files)] and the Japanese Data Bank [JENDL (Japanese Evaluated Nuclear Data Library)] are examined. Calculations were performed for six thermal and ten fast experimental critical systems using the S{sub n} transport XSDRNPM code. To verify the performance of the {sup 233}U cross-section data for nuclear criticality safety application in which the neutron energy spectrum is predominantly in the epithermal energy range, calculations of four numerical benchmark systems with energy spectra in the intermediate energy range were done. These calculations serve only as an indication of the difference in calculated results that may be expected when the two {sup 233}U cross-section evaluations are used for problems with neutron spectra in the intermediate energy range. Additionally, comparisons of experimental and calculated central fission rate ratios were also made. The study has suggested that an ad hoc {sup 233}U evaluation based on the JENDL library provides better overall results for both fast and thermal experimental critical systems.

  10. Toxicological Benchmarks for Wildlife

    SciTech Connect

    Sample, B.E.; Opresko, D.M.; Suter, G.W.

    1993-01-01

    Ecological risks of environmental contaminants are evaluated by using a two-tiered process. In the first tier, a screening assessment is performed where concentrations of contaminants in the environment are compared to no observed adverse effects level (NOAEL)-based toxicological benchmarks. These benchmarks represent concentrations of chemicals (i.e., concentrations presumed to be nonhazardous to the biota) in environmental media (water, sediment, soil, food, etc.). While exceedance of these benchmarks does not indicate any particular level or type of risk, concentrations below the benchmarks should not result in significant effects. In practice, when contaminant concentrations in food or water resources are less than these toxicological benchmarks, the contaminants may be excluded from further consideration. However, if the concentration of a contaminant exceeds a benchmark, that contaminant should be retained as a contaminant of potential concern (COPC) and investigated further. The second tier in ecological risk assessment, the baseline ecological risk assessment, may use toxicological benchmarks as part of a weight-of-evidence approach (Suter 1993). Under this approach, toxicological benchmarks are one of several lines of evidence used to support or refute the presence of ecological effects. Other sources of evidence include media toxicity tests, surveys of biota (abundance and diversity), measures of contaminant body burdens, and biomarkers. This report presents NOAEL- and lowest observed adverse effects level (LOAEL)-based toxicological benchmarks for assessment of effects of 85 chemicals on 9 representative mammalian wildlife species (short-tailed shrew, little brown bat, meadow vole, white-footed mouse, cottontail rabbit, mink, red fox, and whitetail deer) or 11 avian wildlife species (American robin, rough-winged swallow, American woodcock, wild turkey, belted kingfisher, great blue heron, barred owl, barn owl, Cooper's hawk, and red-tailed hawk
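
    The tier-1 screening rule described above reduces to a simple comparison; a hedged sketch (the contaminant values below are hypothetical, not from the report):

    ```python
    # Tier-1 screen: retain a contaminant as a COPC when its concentration
    # exceeds the NOAEL-based benchmark; otherwise exclude it.
    def screen(concentration: float, benchmark: float) -> str:
        if concentration > benchmark:
            return "retain as COPC (investigate further)"
        return "exclude from further consideration"

    # Hypothetical example: one contaminant measured in surface water
    print(screen(concentration=0.8, benchmark=0.25))  # -> retain as COPC
    ```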

  11. Curriculum Evaluation Project. Final Report.

    ERIC Educational Resources Information Center

    Collett, Dave

    A longitudinal study was undertaken in Alberta to pilot an evaluation model which was devised by Robert E. Stake and which could provide feedback on the merit and effectiveness of vocational education in the provincial high school curriculum. Using both student records and questionnaires administered to the students, information was gathered over…

  12. Evaluating Housing Revitalization Projects: Critical Lessons for All Evaluators.

    ERIC Educational Resources Information Center

    Renger, Ralph; Passons, Omar; Cimetta, Adriana

    2003-01-01

    Describes the challenges faced by researchers in evaluating a neighborhood revitalization project. Places the challenges in the context of three of the Program Evaluation Standards of the Joint Committee on Standards for Educational Evaluation: Values Identification, Fiscal Responsibility, and Analysis of Quantitative Information. (SLD)

  13. Toward a benchmarking data set able to evaluate ligand- and structure-based virtual screening using public HTS data.

    PubMed

    Lindh, Martin; Svensson, Fredrik; Schaal, Wesley; Zhang, Jin; Sköld, Christian; Brandt, Peter; Karlén, Anders

    2015-02-23

    Virtual screening has the potential to accelerate and reduce costs of probe development and drug discovery. To develop and benchmark virtual screening methods, validation data sets are commonly used. Over the years, such data sets have been constructed to overcome the problems of analogue bias and artificial enrichment. With the rapid growth of public domain databases containing high-throughput screening data, such as the PubChem BioAssay database, there is an increased possibility to use such data for validation. In this study, we identify PubChem data sets suitable for validation of both structure- and ligand-based virtual screening methods. To achieve this, high-throughput screening data for which a crystal structure of the bioassay target was available in the PDB were identified. Thereafter, the data sets were inspected to identify structures and data suitable for use in validation studies. In this work, we present seven data sets (MMP13, DUSP3, PTPN22, EPHX2, CTDSP1, MAPK10, and CDK5) compiled using this method. In the seven data sets, the number of active compounds varies between 19 and 369 and the number of inactive compounds between 59 405 and 337 634. This gives a higher ratio of inactive to active compounds than is found in most benchmark data sets. We have also evaluated the screening performance using docking and 3D shape similarity with default settings. To characterize the data sets, we used physicochemical similarity and 2D fingerprint searches. We envision that these data sets can be a useful complement to current data sets used for method evaluation.
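
    With inactive-to-active ratios this large, early-recognition metrics matter; a hedged sketch of one common evaluation, the enrichment factor at the top 1% of a ranked list (the scores and labels below are synthetic, not from the PubChem sets):

    ```python
    import numpy as np

    rng = np.random.default_rng(0)
    n_active, n_inactive = 50, 100_000
    scores = np.concatenate([rng.normal(1.0, 1.0, n_active),   # actives score higher
                             rng.normal(0.0, 1.0, n_inactive)])
    labels = np.concatenate([np.ones(n_active), np.zeros(n_inactive)])

    order = np.argsort(-scores)          # best-scoring compounds first
    top = int(0.01 * len(scores))        # top 1% of the ranked list
    ef_1pct = labels[order][:top].mean() / labels.mean()
    print(f"EF(1%) = {ef_1pct:.1f}")
    ```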

  14. EVALUATION OF U10MO FUEL PLATE IRRADIATION BEHAVIOR VIA NUMERICAL AND EXPERIMENTAL BENCHMARKING

    SciTech Connect

    Samuel J. Miller; Hakan Ozaltun

    2012-11-01

    This article analyzes dimensional changes due to irradiation of monolithic plate-type nuclear fuel and compares results with finite element analysis of the plates during fabrication and irradiation. Monolithic fuel plates tested in the Advanced Test Reactor (ATR) at Idaho National Lab (INL) are being used to benchmark proposed fuel performance for several high power research reactors. Post-irradiation metallographic images of plates sectioned at the midpoint were analyzed to determine dimensional changes of the fuel and the cladding response. A constitutive model of the fabrication process and irradiation behavior of the tested plates was developed using the general purpose commercial finite element analysis package, Abaqus. Using calculated burn-up profiles of irradiated plates to model the power distribution and including irradiation behaviors such as swelling and irradiation enhanced creep, model simulations allow analysis of plate parameters that are either impossible or infeasible to measure in an experimental setting. The development and progression of fabrication induced stress concentrations at the plate edges was of primary interest, as these locations have a unique stress profile during irradiation. Additionally, comparison between 2D and 3D models was performed to optimize analysis methodology. In particular, the ability of 2D and 3D models to account for out-of-plane stresses, which drive three-dimensional creep behavior, was examined. Results show that assumptions made in 2D models for the out-of-plane stresses and strains cannot capture the 3-dimensional physics accurately, and thus 2D approximations are not computationally accurate. Stress-strain fields are dependent on plate geometry and irradiation conditions; thus, if stress-based criteria are used to predict plate behavior (as opposed to material impurities, fine micro-structural defects, or sharp power gradients), a unique 3D finite element formulation for each plate is required.

  15. Current Reactor Physics Benchmark Activities at the Idaho National Laboratory

    SciTech Connect

    John D. Bess; Margaret A. Marshall; Mackenzie L. Gorham; Joseph Christensen; James C. Turnbull; Kim Clark

    2011-11-01

    The International Reactor Physics Experiment Evaluation Project (IRPhEP) [1] and the International Criticality Safety Benchmark Evaluation Project (ICSBEP) [2] were established to preserve integral reactor physics and criticality experiment data for present and future research. These valuable assets provide the basis for recording, developing, and validating our integral nuclear data, and experimental and computational methods. These projects are managed through the Idaho National Laboratory (INL) and the Organisation for Economic Co-operation and Development Nuclear Energy Agency (OECD-NEA). Staff and students at the Department of Energy - Idaho (DOE-ID) and INL are engaged in the development of benchmarks to support ongoing research activities. These benchmarks include reactors or assemblies that support Next Generation Nuclear Plant (NGNP) research, space nuclear Fission Surface Power System (FSPS) design validation, and currently operational facilities in Southeastern Idaho.

  16. Collaborative Writing Project Product Evaluation 1988-1989. Evaluation Report.

    ERIC Educational Resources Information Center

    Saginaw Public Schools, MI. Dept. of Evaluation Services.

    A study was conducted to evaluate the final outcome of the Section 98 writing project, a 3-year collaboration between the School District of the City of Saginaw and the University of Michigan, and to successfully employ the gap reduction design with the pre- to post-test results stemming from the writing project. Students in six sections of…

  17. Evaluation Project of a Postvention Program.

    ERIC Educational Resources Information Center

    Simon, Robert; And Others

    A student suicide or parasuicide increases the risk that potentially suicidal teenagers see suicide as an enviable option. The "copycat effect" can be reduced by a postvention program. This proposed evaluative research project will provide an implementation and impact evaluation of a school's postvention program following a suicide or parasuicide.…

  18. The BOUT Project: Validation and Benchmark of BOUT Code and Experimental Diagnostic Tools for Fusion Boundary Turbulence

    SciTech Connect

    Xu, X Q

    2001-08-09

    A boundary plasma turbulence code, BOUT, is presented. Preliminary encouraging results have been obtained in comparisons with probe measurements for a typical Ohmic discharge in the HT-7 tokamak. The validation and benchmark of the BOUT code and experimental diagnostic tools for fusion boundary plasma turbulence are proposed.

  19. The BOUT Project; Validation and Benchmark of BOUT Code and Experimental Diagnostic Tools for Fusion Boundary Turbulence

    NASA Astrophysics Data System (ADS)

    Xu, Xue-qiao

    2001-10-01

    A boundary plasma turbulence code, BOUT, is presented. Preliminary encouraging results have been obtained in comparisons with probe measurements for a typical Ohmic discharge in the HT-7 tokamak. The validation and benchmark of the BOUT code and experimental diagnostic tools for fusion boundary plasma turbulence are proposed.

  20. Training Evaluation Based on Cases of Taiwanese Benchmarked High-Tech Companies

    ERIC Educational Resources Information Center

    Lien, Bella Ya Hui; Hung, Richard Yu Yuan; McLean, Gary N.

    2007-01-01

    Although the influence of workplace practices and employees' experiences with training effectiveness has received considerable attention, less is known of the influence of workplace practices on training evaluation methods. The purposes of this study were to: (1) explore and understand the training evaluation methods used by seven Taiwanese…

  1. Evaluation of various LandFlux evapotranspiration algorithms using the LandFlux-EVAL synthesis benchmark products and observational data

    NASA Astrophysics Data System (ADS)

    Michel, Dominik; Hirschi, Martin; Jimenez, Carlos; McCabe, Mathew; Miralles, Diego; Wood, Eric; Seneviratne, Sonia

    2014-05-01

    Research on climate variations and the development of predictive capabilities largely rely on globally available reference data series of the different components of the energy and water cycles. Several efforts have aimed at producing large-scale and long-term reference data sets of these components, e.g. based on in situ observations and remote sensing, to allow for diagnostic analyses of the drivers of temporal variations in the climate system. Evapotranspiration (ET) is an essential component of the energy and water cycle, which cannot be monitored directly on a global scale by remote sensing techniques. In recent years, several global multi-year ET data sets have been derived from remote sensing-based estimates, observation-driven land surface model simulations or atmospheric reanalyses. The LandFlux-EVAL initiative presented an ensemble-evaluation of these data sets over the time periods 1989-1995 and 1989-2005 (Mueller et al. 2013). Currently, a multi-decadal global reference heat flux data set for ET at the land surface is being developed within the LandFlux initiative of the Global Energy and Water Cycle Experiment (GEWEX). This LandFlux v0 ET data set comprises four ET algorithms forced with a common radiation and surface meteorology. In order to estimate the agreement of this LandFlux v0 ET data with existing data sets, it is compared to the recently available LandFlux-EVAL synthesis benchmark product. Additional evaluation of the LandFlux v0 ET data set is based on a comparison to in situ observations of a weighing lysimeter from the hydrological research site Rietholzbach in Switzerland. These analyses serve as a test bed for similar evaluation procedures that are envisaged for ESA's WACMOS-ET initiative (http://wacmoset.estellus.eu). Reference: Mueller, B., Hirschi, M., Jimenez, C., Ciais, P., Dirmeyer, P. A., Dolman, A. J., Fisher, J. B., Jung, M., Ludwig, F., Maignan, F., Miralles, D. G., McCabe, M. F., Reichstein, M., Sheffield, J., Wang, K
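
    A hedged sketch of the kind of agreement statistics such an evaluation rests on, comparing a candidate ET series against a benchmark series (the arrays below are synthetic, not LandFlux data):

    ```python
    import numpy as np

    rng = np.random.default_rng(1)
    benchmark = 2.0 + 1.5 * np.sin(np.linspace(0, 4 * np.pi, 48))  # mm/day, monthly
    candidate = benchmark + rng.normal(0.0, 0.3, benchmark.size)   # noisy estimate

    bias = np.mean(candidate - benchmark)
    rmse = np.sqrt(np.mean((candidate - benchmark) ** 2))
    corr = np.corrcoef(candidate, benchmark)[0, 1]
    print(f"bias = {bias:+.2f} mm/day, RMSE = {rmse:.2f} mm/day, r = {corr:.2f}")
    ```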

  2. Medico-economic evaluation of healthcare products. Methodology for defining a significant impact on French health insurance costs and selection of benchmarks for interpreting results.

    PubMed

    Dervaux, Benoît; Baseilhac, Eric; Fagon, Jean-Yves; Biot, Claire; Blachier, Corinne; Braun, Eric; Debroucker, Frédérique; Detournay, Bruno; Ferretti, Carine; Granger, Muriel; Jouan-Flahault, Chrystel; Lussier, Marie-Dominique; Meyer, Arlette; Muller, Sophie; Pigeon, Martine; De Sahb, Rima; Sannié, Thomas; Sapède, Claudine; Vray, Muriel

    2014-01-01

    Decree No. 2012-1116 of 2 October 2012 on medico-economic assignments of the French National Authority for Health (Haute autorité de santé, HAS) significantly alters the conditions for accessing the health products market in France. This paper presents a theoretical framework for interpreting the results of the economic evaluation of health technologies and summarises the facts available in France for developing benchmarks that will be used to interpret incremental cost-effectiveness ratios. This literature review shows that it is difficult to determine a threshold value, but it is also difficult to interpret the incremental cost-effectiveness ratio (ICER) results without a threshold value. In this context, round table participants favour a pragmatic approach based on "benchmarks" as opposed to a threshold value, based on an interpretative and normative perspective, i.e. benchmarks that can change over time based on feedback.
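
    For readers unfamiliar with the quantity being benchmarked, the ICER is the incremental cost per unit of incremental effect; a hedged sketch with hypothetical numbers (not values from the round table or HAS policy):

    ```python
    # ICER = (C1 - C0) / (E1 - E0): extra cost per extra QALY gained when a
    # new technology replaces a comparator. All figures are hypothetical.
    def icer(cost_new, cost_old, effect_new, effect_old):
        return (cost_new - cost_old) / (effect_new - effect_old)

    ratio = icer(cost_new=45_000, cost_old=30_000, effect_new=6.2, effect_old=5.7)
    benchmark_range = (20_000, 50_000)   # illustrative benchmark interval only
    print(f"ICER = {ratio:,.0f} per QALY; compare against {benchmark_range}")
    ```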

  3. Strategic evaluation central to LNG project formation

    SciTech Connect

    Nissen, D.; DiNapoli, R.N.; Yost, C.C.

    1995-07-03

    An efficient-scale, grassroots LNG facility of about 6 million metric tons/year capacity requires a prestart-up outlay of $5 billion or more for the supply facilities--production, feedgas pipeline, liquefaction, and shipping. The demand side of the LNG chain requires a similar outlay, counting the import-regasification terminal and a combination of 5 gigawatts or more of electric power generation or the equivalent in city gas and industrial gas-using facilities. There exist no well-developed commodity markets for free-on-board (fob) or delivered LNG. A new LNG supply project is dedicated to its buyers. Indeed, the buyers' revenue commitment is the project's only bankable asset. For the buyer to make this commitment, the supply venture's capability and commitment must be credible: to complete the project and to deliver the LNG reliably over the 20+ years required to recover capital committed on both sides. This requirement has technical, economic, and business dimensions. In this article the authors describe an LNG project evaluation system and show its application to typical tasks: project cost of service and participant shares; LNG project competition; alternative project structures; and market competition for LNG-supplied electric power generation.

  4. Implementing Cognitive Behavioral Therapy for Chronic Fatigue Syndrome in a Mental Health Center: A Benchmarking Evaluation

    ERIC Educational Resources Information Center

    Scheeres, Korine; Wensing, Michel; Knoop, Hans; Bleijenberg, Gijs

    2008-01-01

    Objective: This study evaluated the success of implementing cognitive behavioral therapy (CBT) for chronic fatigue syndrome (CFS) in a representative clinical practice setting and compared the patient outcomes with those of previously published randomized controlled trials (RCTs) of CBT for CFS. Method: The implementation interventions were the…

  5. Examining Benchmark Indicator Systems for the Evaluation of Higher Education Institutions

    ERIC Educational Resources Information Center

    Garcia-Aracil, Adela; Palomares-Montero, Davinia

    2010-01-01

    Higher Education Institutions are undergoing important changes involving the development of new roles and missions, with implications for their structure. Governments and institutions are implementing strategies to ensure the proper performance of universities and several studies have investigated evaluation of universities through the development…

  6. HPC Analytics Support. Requirements for Uncertainty Quantification Benchmarks

    SciTech Connect

    Paulson, Patrick R.; Purohit, Sumit; Rodriguez, Luke R.

    2015-05-01

    This report outlines techniques for extending benchmark generation products so they support uncertainty quantification by benchmarked systems. We describe how uncertainty quantification requirements can be presented to candidate analytical tools supporting SPARQL. We describe benchmark data sets for evaluating uncertainty quantification, as well as an approach for using our benchmark generator to produce such data sets.

  7. Dissemination of behavioural activation for depression to mental health nurses: training evaluation and benchmarked clinical outcomes.

    PubMed

    Ekers, D M; Dawson, M S; Bailey, E

    2013-03-01

    Depression causes significant distress, disability and cost within the UK. Behavioural activation (BA) is an effective single-strand psychological approach which may lend itself to brief training programmes for a wide range of clinical staff. No previous research has directly examined outcomes of such dissemination. A 5-day training course for 10 primary care mental health workers aiming to increase knowledge and clinical skills in BA was evaluated using the Training Acceptability Rating Scale. Depression symptom level data collected in a randomized controlled trial using trainees were then compared to results from meta-analysis of studies using experienced therapists. BA training was highly acceptable to trainees (94.4%, SD 6%). The combined effect size of BA was unchanged by the addition of the results of this evaluation to those of studies using specialist therapists. BA offers a promising psychological intervention for depression that appears suitable for delivery by mental health nurses following brief training.

  8. Workforce development and effective evaluation of projects.

    PubMed

    Dickerson, Claire; Green, Tess; Blass, Eddie

    The success of a project or programme is typically determined in relation to outputs. However, there is a commitment among UK public services to spending public funds efficiently and on activities that provide the greatest benefit to society. Skills for Health recognised the need for a tool to manage the complex process of evaluating project benefits. An integrated evaluation framework was developed to help practitioners identify, describe, measure and evaluate the benefits of workforce development projects. Practitioners tested the framework on projects within three NHS trusts and provided valuable feedback to support its development. The prospective approach taken to identify benefits and collect baseline data to support evaluation was positively received and the clarity and completeness of the framework, as well as the relevance of the questions, were commended. Users reported that the framework was difficult to complete; an online version could be developed, which might help to improve usability. Effective implementation of this approach will depend on the quality and usability of the framework, the willingness of organisations to implement it, and the presence or establishment of an effective change management culture.

  9. Federal Workplace Literacy Project. Internal Evaluation Report.

    ERIC Educational Resources Information Center

    Matuszak, David J.

    This report describes the following components of the Nestle Workplace Literacy Project: six job task analyses, curricula for six workplace basic skills training programs, delivery of courses using these curricula, and evaluation of the process. These six job categories were targeted for training: forklift loader/checker, BB's processing systems…

  10. Self-benchmarking Guide for Data Centers: Metrics, Benchmarks, Actions

    SciTech Connect

    Mathew, Paul; Ganguly, Srirupa; Greenberg, Steve; Sartor, Dale

    2009-07-13

    This guide describes energy efficiency metrics and benchmarks that can be used to track the performance of and identify potential opportunities to reduce energy use in data centers. This guide is primarily intended for personnel who have responsibility for managing energy use in existing data centers - including facilities managers, energy managers, and their engineering consultants. Additionally, data center designers may also use the metrics and benchmarks described in this guide for goal-setting in new construction or major renovation. This guide provides the following information: (1) A step-by-step outline of the benchmarking process. (2) A set of performance metrics for the whole building as well as individual systems. For each metric, the guide provides a definition, performance benchmarks, and potential actions that can be inferred from evaluating this metric. (3) A list and descriptions of the data required for computing the metrics. This guide is complemented by spreadsheet templates for data collection and for computing the benchmarking metrics. This guide builds on prior data center benchmarking studies supported by the California Energy Commission. Much of the benchmarking data are drawn from the LBNL data center benchmarking database that was developed from these studies. Additional benchmark data were obtained from engineering experts including facility designers and energy managers. This guide also builds on recent research supported by the U.S. Department of Energy's Save Energy Now program.
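
    As an illustration of the whole-building metrics such guides track, here is a hedged sketch of power usage effectiveness (PUE), a standard data-center industry metric; the guide's exact metric set and benchmark values are not reproduced here:

    ```python
    # PUE = total facility energy / IT equipment energy; closer to 1.0 is better.
    def pue(total_facility_kwh: float, it_equipment_kwh: float) -> float:
        return total_facility_kwh / it_equipment_kwh

    # Hypothetical annual metered values for one data center
    print(f"PUE = {pue(8_500_000, 5_000_000):.2f}")  # -> PUE = 1.70
    ```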

  11. The International Reactor Physics Experiment Evaluation Project (IRPHEP)

    SciTech Connect

    J. Blair Briggs; Enrico Sartori; Lori Scott

    2006-09-01

    Since the beginning of the Nuclear Power industry, numerous experiments concerned with nuclear energy and technology have been performed at different research laboratories, worldwide. These experiments required a large investment in terms of infrastructure, expertise, and cost; however, many were performed without a high degree of attention to archival of results for future use. The degree and quality of documentation varies greatly. There is an urgent need to preserve integral reactor physics experimental data, including measurement methods, techniques, and separate or special effects data for nuclear energy and technology applications and the knowledge and competence contained therein. If the data are compromised, it is unlikely that any of these experiments will be repeated again in the future. The International Reactor Physics Evaluation Project (IRPhEP) was initiated as a pilot activity in 1999 by the Organization for Economic Cooperation and Development (OECD) Nuclear Energy Agency (NEA) Nuclear Science Committee (NSC). The project was endorsed as an official activity of the NSC in June of 2003. The purpose of the IRPhEP is to provide an extensively peer reviewed set of reactor physics related integral benchmark data that can be used by reactor designers and safety analysts to validate the analytical tools used to design next generation reactors and establish the safety basis for operation of these reactors. A short history of the IRPhEP is presented and its purposes are discussed in this paper. Accomplishments of the IRPhEP, including the first publication of the IRPhEP Handbook, are highlighted and the future of the project outlined.

  12. Reactor Physics and Criticality Benchmark Evaluations for Advanced Nuclear Fuel, Progress Report for Work through August 31, 2002, First Annual/4th Quarterly Report

    SciTech Connect

    Anderson, William J.; Ake, Timothy N.; Punatar, Mahendra; Pitts, Michelle L.; Harms, Gary A.; Rearden, Bradley T.; Parks, Cecil V.; Tulenko, James S.; Dugan, Edward; Smith, Robert M.

    2002-09-23

    The objective of this Nuclear Energy Research Initiative (NERI) project is to design, perform, and analyze critical benchmark experiments for validating reactor physics methods and models for fuel enrichments greater than 5-wt% 235U. These experiments will also provide additional information for application to the criticality-safety bases for commercial fuel facilities handling greater than 5-wt% 235U fuel. These experiments are designed as reactor physics benchmarks, to include measurements of critical boron concentration, burnable absorber worth, relative pin powers, and relative average powers. The first year focused primarily on designing the experiments using available fuel, preparing the necessary plans, procedures and authorization basis for performing the experiments, and preparing for the transportation, receipt and storage of the Pathfinder fuel currently stored at Pennsylvania State University. Framatome ANP, Inc. leads the project with the collaboration of Oak Ridge National Laboratory (ORNL), Sandia National Laboratories (SNL), and the University of Florida (UF). The project is organized into 5 tasks. Task 1: Framatome ANP, Inc., ORNL, and SNL will design the specific experiments, establish the safety authorization, and obtain approvals to perform these experiments at the SNL facility. ORNL will apply their sensitivity/uncertainty methodology to verify the need for particular experiments and the parameters that these experiments need to explore. Task 2: Framatome ANP, Inc., ORNL, and UF will analyze the proposed experiments using a variety of reactor-physics methods employed in the nuclear industry. These analyses will support the operation of the experiments by predicting the expected experimental values for the criticality and physics parameters. Task 3: This task encompasses the experiments to be performed. The Pathfinder fuel will be transported from Penn State to SNL for use in the experiments. The experiments will be performed and the

  13. CSAR Benchmark Exercise 2011–2012: Evaluation of Results from Docking and Relative Ranking of Blinded Congeneric Series

    PubMed Central

    2013-01-01

    The Community Structure–Activity Resource (CSAR) recently held its first blinded exercise based on data provided by Abbott, Vertex, and colleagues at the University of Michigan, Ann Arbor. A total of 20 research groups submitted results for the benchmark exercise where the goal was to compare different improvements for pose prediction, enrichment, and relative ranking of congeneric series of compounds. The exercise was built around blinded high-quality experimental data from four protein targets: LpxC, Urokinase, Chk1, and Erk2. Pose prediction proved to be the most straightforward task, and most methods were able to successfully reproduce binding poses when the crystal structure employed was co-crystallized with a ligand from the same chemical series. Multiple evaluation metrics were examined, and we found that RMSD and native contact metrics together provide a robust evaluation of the predicted poses. It was notable that most scoring functions underpredicted contacts between the hetero atoms (i.e., N, O, S, etc.) of the protein and ligand. Relative ranking was found to be the most difficult area for the methods, but many of the scoring functions were able to properly identify Urokinase actives from the inactives in the series. Lastly, we found that minimizing the protein and correcting histidine tautomeric states positively trended with low RMSD for pose prediction but minimizing the ligand negatively trended. Pregenerated ligand conformations performed better than those that were generated on the fly. Optimizing docking parameters and pretraining with the native ligand had a positive effect on the docking performance as did using restraints, substructure fitting, and shape fitting. Lastly, for both sampling and ranking scoring functions, the use of the empirical scoring function appeared to trend positively with the RMSD. Here, by combining the results of many methods, we hope to provide a statistically relevant evaluation and elucidate specific shortcomings
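
    A minimal sketch of the pose-prediction metric discussed above, heavy-atom RMSD between a predicted and a crystallographic ligand pose, assuming identical atom ordering (the coordinates below are synthetic):

    ```python
    import numpy as np

    crystal = np.array([[0.0, 0.0, 0.0], [1.5, 0.0, 0.0], [2.2, 1.2, 0.3]])
    predicted = crystal + np.array([[0.1, -0.2, 0.0], [0.0, 0.1, 0.2], [-0.1, 0.0, 0.1]])

    rmsd = np.sqrt(np.mean(np.sum((predicted - crystal) ** 2, axis=1)))
    print(f"RMSD = {rmsd:.2f} Å")  # poses under ~2 Å are commonly counted as successes
    ```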

  14. Kenya's Radio Language Arts Project: evaluation results.

    PubMed

    Oxford, R L

    1985-01-01

    The Kenya Radio Language Arts Project (RLAP), which has just been completed, documents the effectiveness of interactive radio-based educational instruction. Analyses in the areas of listening, reading, speaking, and writing show that children in radio classrooms consistently scored better than children in nonradio classrooms in every test. An evaluation of the project was conducted with the assistance of the Center for Applied Linguistics (CAL). Evaluation results came from a variety of sources, including language tests, observations, interviews, demographic and administrative records, and an attitude survey. A large proportion of the project's students were considerably transient. Only 22% of the total student population of 3908 were "normal progression" students -- that is, they advanced regularly through their education during the life of the project. Students who moved from the area, failed a standard (grade), dropped out, or were otherwise untrackable, comprised the remaining 78% of the total. 7 districts were included in the project. Tests were developed for listening and reading in Standards 1, 2, and 3 and in speaking and writing in Standards 2 and 3. The achievement tests were based on the official Kenya curriculum for those standards, so as to measure achievement against the curriculum. Nearly all the differences were highly significant statistically, with a probability of less than 1 in 1000 that the findings could have occurred by chance. Standard 1 radio students scored nearly 8 points higher than did their counterparts in the control group. Standard 2 and 3 radio students outperformed the control students by 4 points. The radio group consistently outperformed the control group in reading, writing, and speaking. Unstructured interviews and observations were conducted by the RLAP field staff. Overwhelmingly positive attitudes about the project prevailed among project teachers and headmasters. The data demonstrate that RLAP works. In fact, it works so

  15. HANFORD DST THERMAL & SEISMIC PROJECT ANSYS BENCHMARK ANALYSIS OF SEISMIC INDUCED FLUID STRUCTURE INTERACTION IN A HANFORD DOUBLE SHELL PRIMARY TANK

    SciTech Connect

    MACKEY, T.C.

    2006-03-14

    M&D Professional Services, Inc. (M&D) is under subcontract to Pacific Northwest National Laboratories (PNNL) to perform seismic analysis of the Hanford Site Double-Shell Tanks (DSTs) in support of a project entitled ''Double-Shell Tank (DST) Integrity Project - DST Thermal and Seismic Analyses''. The overall scope of the project is to complete an up-to-date comprehensive analysis of record of the DST System at Hanford in support of Tri-Party Agreement Milestone M-48-14. The work described herein was performed in support of the seismic analysis of the DSTs. The thermal and operating loads analysis of the DSTs is documented in Rinker et al. (2004). The overall seismic analysis of the DSTs is being performed with the general-purpose finite element code ANSYS. The overall model used for the seismic analysis of the DSTs includes the DST structure, the contained waste, and the surrounding soil. The seismic analysis of the DSTs must address the fluid-structure interaction behavior and sloshing response of the primary tank and contained liquid. ANSYS has demonstrated capabilities for structural analysis, but the capabilities and limitations of ANSYS to perform fluid-structure interaction are less well understood. The purpose of this study is to demonstrate the capabilities and investigate the limitations of ANSYS for performing a fluid-structure interaction analysis of the primary tank and contained waste. To this end, the ANSYS solutions are benchmarked against theoretical solutions appearing in BNL 1995, when such theoretical solutions exist. When theoretical solutions were not available, comparisons were made to theoretical solutions of similar problems and to the results from Dytran simulations. The capabilities and limitations of the finite element code Dytran for performing a fluid-structure interaction analysis of the primary tank and contained waste were explored in a parallel investigation (Abatt 2006). In conjunction with the results of the global ANSYS analysis
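
    Where closed-form results exist, benchmarks of this kind compare the code against them. One classical example (hedged: hypothetical dimensions, not the Hanford tank geometry) is the first convective sloshing frequency of liquid in a rigid upright cylindrical tank:

    ```python
    import math

    g = 9.81      # gravitational acceleration, m/s^2
    R = 11.4      # tank radius, m (hypothetical)
    H = 9.0       # liquid depth, m (hypothetical)
    xi1 = 1.8412  # first zero of the Bessel function derivative J1'

    # omega^2 = (g * xi1 / R) * tanh(xi1 * H / R) for the first sloshing mode
    omega_sq = (g * xi1 / R) * math.tanh(xi1 * H / R)
    freq_hz = math.sqrt(omega_sq) / (2 * math.pi)
    print(f"first sloshing mode ~ {freq_hz:.2f} Hz")  # ~0.19 Hz for these inputs
    ```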

  16. Multivariate dynamical systems-based estimation of causal brain interactions in fMRI: Group-level validation using benchmark data, neurophysiological models and human connectome project data

    PubMed Central

    Ryali, Srikanth; Chen, Tianwen; Supekar, Kaustubh; Tu, Tao; Kochalka, John; Cai, Weidong; Menon, Vinod

    2016-01-01

    Background Causal estimation methods are increasingly being used to investigate functional brain networks in fMRI, but there are continuing concerns about the validity of these methods. New Method Multivariate Dynamical Systems (MDS) is a state-space method for estimating dynamic causal interactions in fMRI data. Here we validate MDS using benchmark simulations as well as simulations from a more realistic stochastic neurophysiological model. Finally, we applied MDS to investigate dynamic causal interactions in a fronto-cingulate-parietal control network using Human Connectome Project (HCP) data acquired during performance of a working memory task. Crucially, since the ground truth in experimental data is unknown, we conducted novel stability analysis to determine robust causal interactions within this network. Results MDS accurately recovered dynamic causal interactions with an area under receiver operating characteristic (AUC) above 0.7 for benchmark datasets and AUC above 0.9 for datasets generated using the neurophysiological model. In experimental fMRI data, bootstrap procedures revealed a stable pattern of causal influences from the anterior insula to other nodes of the fronto-cingulate-parietal network. Comparison with Existing Methods MDS is effective in estimating dynamic causal interactions in both the benchmark and neurophysiological model based datasets in terms of AUC, sensitivity and false positive rates. Conclusions Our findings demonstrate that MDS can accurately estimate causal interactions in fMRI data. Neurophysiological models and stability analysis provide a general framework for validating computational methods designed to estimate causal interactions in fMRI. The right anterior insula functions as a causal hub during working memory. PMID:27015792
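
    A hedged sketch of the AUC computation behind the validation results above: estimated edge strengths are scored against the ground-truth presence or absence of each causal connection (the data below are synthetic, not from the study):

    ```python
    import numpy as np

    rng = np.random.default_rng(2)
    truth = rng.integers(0, 2, 100)                  # 1 = causal edge exists
    scores = truth * 0.8 + rng.normal(0, 0.5, 100)   # estimated edge strengths

    # Rank-based AUC: probability that a true edge outscores a non-edge
    pos, neg = scores[truth == 1], scores[truth == 0]
    auc = np.mean(pos[:, None] > neg[None, :])
    print(f"AUC = {auc:.2f}")
    ```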

  17. Testing (Validating?) Cross Sections with ICSBEP Benchmarks

    SciTech Connect

    Kahler, Albert C. III

    2012-06-28

    We discuss how to use critical benchmarks from the International Handbook of Evaluated Criticality Safety Benchmark Experiments to determine the applicability of specific cross sections to the end-user's problem of interest. Particular attention is paid to making sure the selected suite of benchmarks includes the user's range of applicability (ROA).

  18. Self-benchmarking Guide for Cleanrooms: Metrics, Benchmarks, Actions

    SciTech Connect

    Mathew, Paul; Sartor, Dale; Tschudi, William

    2009-07-13

    This guide describes energy efficiency metrics and benchmarks that can be used to track the performance of and identify potential opportunities to reduce energy use in cleanrooms. This guide is primarily intended for personnel who have responsibility for managing energy use in existing cleanroom facilities - including facilities managers, energy managers, and their engineering consultants. Additionally, cleanroom planners and designers may also use the metrics and benchmarks described in this guide for goal-setting in new construction or major renovation. This guide provides the following information: (1) A step-by-step outline of the benchmarking process. (2) A set of performance metrics for the whole building as well as individual systems. For each metric, the guide provides a definition, performance benchmarks, and potential actions that can be inferred from evaluating this metric. (3) A list and descriptions of the data required for computing the metrics. This guide is complemented by spreadsheet templates for data collection and for computing the benchmarking metrics. This guide builds on prior research supported by the national Laboratories for the 21st Century (Labs21) program, supported by the U.S. Department of Energy and the U.S. Environmental Protection Agency. Much of the benchmarking data are drawn from the Labs21 benchmarking database and technical guides. Additional benchmark data were obtained from engineering experts including cleanroom designers and energy managers.

  19. Self-benchmarking Guide for Laboratory Buildings: Metrics, Benchmarks, Actions

    SciTech Connect

    Mathew, Paul; Greenberg, Steve; Sartor, Dale

    2009-07-13

    This guide describes energy efficiency metrics and benchmarks that can be used to track the performance of and identify potential opportunities to reduce energy use in laboratory buildings. This guide is primarily intended for personnel who have responsibility for managing energy use in existing laboratory facilities - including facilities managers, energy managers, and their engineering consultants. Additionally, laboratory planners and designers may also use the metrics and benchmarks described in this guide for goal-setting in new construction or major renovation. This guide provides the following information: (1) A step-by-step outline of the benchmarking process. (2) A set of performance metrics for the whole building as well as individual systems. For each metric, the guide provides a definition, performance benchmarks, and potential actions that can be inferred from evaluating this metric. (3) A list and descriptions of the data required for computing the metrics. This guide is complemented by spreadsheet templates for data collection and for computing the benchmarking metrics. This guide builds on prior research supported by the national Laboratories for the 21st Century (Labs21) program, supported by the U.S. Department of Energy and the U.S. Environmental Protection Agency. Much of the benchmarking data are drawn from the Labs21 benchmarking database and technical guides. Additional benchmark data were obtained from engineering experts including laboratory designers and energy managers.

  20. Benchmarking HIV health care: from individual patient care to health care evaluation. An example from the EuroSIDA study

    PubMed Central

    2012-01-01

    Background State-of-the-art care involving the utilisation of multiple health care interventions is the basis for an optimal long-term clinical prognosis for HIV-patients. We evaluated health care for HIV patients based on four key indicators. Methods Four indicators of health care were assessed: 1) compliance with current guidelines on initiation of combination antiretroviral therapy (cART); 2) compliance with guidelines on initiation of chemoprophylaxis; 3) frequency of laboratory monitoring; and 4) virological response to cART (proportion of patients with HIV-RNA < 500 copies/ml for > 90% of time on cART). Results 7097 EuroSIDA patients were included from Northern (n = 923), Southern (n = 1059), West Central (n = 1290), East Central (n = 1366), and Eastern (n = 1964) Europe, and Argentina (n = 495). Patients in Eastern Europe with a CD4 < 200 cells/mm3 were less likely to initiate cART and Pneumocystis jiroveci-chemoprophylaxis compared to patients from all other regions, and less frequently had a laboratory assessment of their disease status. The proportion of patients with virological response was highest in Northern Europe, 89% vs. 84%, 78%, 78%, 61%, 55% in West Central, Southern, East Central Europe, Argentina and Eastern Europe, respectively (p < 0.0001). Compared to Northern Europe, patients from other regions had significantly lower odds of virological response; the difference was most pronounced for Eastern Europe and Argentina (adjusted OR 0.16 [95%CI 0.11-0.23, p < 0.0001]; 0.20 [0.14-0.28, p < 0.0001], respectively). Conclusions This assessment of HIV health care utilization revealed pronounced regional differences in adherence to guidelines and can help to identify gaps and direct target interventions. It may serve as a tool for the assessment and benchmarking of the clinical management of HIV patients in any setting worldwide. PMID:23009317

  1. GRID-based three-dimensional pharmacophores II: PharmBench, a benchmark data set for evaluating pharmacophore elucidation methods.

    PubMed

    Cross, Simon; Ortuso, Francesco; Baroni, Massimo; Costa, Giosuè; Distinto, Simona; Moraca, Federica; Alcaro, Stefano; Cruciani, Gabriele

    2012-10-22

    To date, published pharmacophore elucidation approaches typically use a handful of data sets for validation: here, we have assembled a data set for 81 targets, containing 960 ligands aligned using their cocrystallized protein targets, to provide the experimental "gold standard". The two-dimensional structures are also assembled to remove conformational bias; an ideal method would be able to take these structures as input, find the common features, and reproduce the bioactive conformations and their alignments to correspond with the X-ray-determined gold standard alignments. Here we present this data set and describe three objective measures to evaluate performance: the ability to identify the bioactive conformation, the ability to identify and correctly align this conformation for 50% of the molecules in each data set, and the pharmacophoric field similarity. We have applied this validation methodology to our pharmacophore elucidation method FLAPpharm, which is published in the first paper of this series, and discuss the limitations of the data set and objective success criteria. Starting from two-dimensional structures and producing unbiased models, FLAPpharm was able to identify the bioactive conformations for 67% of the ligands and also to produce successful models according to the second metric for 67% of the PharmBench data sets. Inspection of the unsuccessful models highlighted the limitation of this root mean square (rms)-derived metric, since many were found to be pharmacophorically reasonable, increasing the overall success rate to 83%. The PharmBench data set is available at http://www.moldiscovery.com/PharmBench, along with a web service to enable users to score model alignments coming from external methods in the same way that we have presented here and, therefore, establishes a pharmacophore elucidation benchmark data set available to be used by the community.

  2. NASA Countermeasures Evaluation and Validation Project

    NASA Technical Reports Server (NTRS)

    Lundquist, Charlie M.; Paloski, William H. (Technical Monitor)

    2000-01-01

    To support its ISS and exploration class mission objectives, NASA has developed a Countermeasure Evaluation and Validation Project (CEVP). The goal of this project is to evaluate and validate the optimal complement of countermeasures required to maintain astronaut health, safety, and functional ability during and after short- and long-duration space flight missions. The CEVP is the final element of the process in which ideas and concepts emerging from basic research evolve into operational countermeasures. The CEVP is accomplishing these objectives by conducting operational/clinical research to evaluate and validate countermeasures to mitigate these maladaptive responses. Evaluation is accomplished by testing in space flight analog facilities, and validation is accomplished by space flight testing. Both will utilize a standardized complement of integrated physiological and psychological tests, termed the Integrated Testing Regimen (ITR) to examine candidate countermeasure efficacy and intersystem effects. The CEVP emphasis is currently placed on validating the initial complement of ISS countermeasures targeting bone, muscle, and aerobic fitness; followed by countermeasures for neurological, psychological, immunological, nutrition and metabolism, and radiation risks associated with space flight. This presentation will review the processes, plans, and procedures that will enable CEVP to play a vital role in transitioning promising research results into operational countermeasures necessary to maintain crew health and performance during long duration space flight.

  3. How Good Is Our School? Hungry for Success: Benchmarks for Self-Evaluation. Self-Evaluation Series

    ERIC Educational Resources Information Center

    Her Majesty's Inspectorate of Education, 2006

    2006-01-01

    This document is intended to build on the advice given in the publication "How good is our school?" It is intended to be of use to staff in local authorities and schools who are involved in implementing the recommendations of "Hungry for Success." This guide can be used to support staff in evaluating effectiveness in implementing "Hungry for Success."…

  4. Implementation of Benchmarking Transportation Logistics Practices and Future Benchmarking Organizations

    SciTech Connect

    Thrower, A.W.; Patric, J.; Keister, M.

    2008-07-01

    The purpose of the Office of Civilian Radioactive Waste Management's (OCRWM) Logistics Benchmarking Project is to identify established government and industry practices for the safe transportation of hazardous materials which can serve as a yardstick for design and operation of OCRWM's national transportation system for shipping spent nuclear fuel and high-level radioactive waste to the proposed repository at Yucca Mountain, Nevada. The project will present logistics and transportation practices and develop implementation recommendations for adaptation by the national transportation system. This paper will describe the process used to perform the initial benchmarking study, highlight interim findings, and explain how these findings are being implemented. It will also provide an overview of the next phase of benchmarking studies. The benchmarking effort will remain a high-priority activity throughout the planning and operational phases of the transportation system. The initial phase of the project focused on government transportation programs to identify those practices which are most clearly applicable to OCRWM. These Federal programs have decades of safe transportation experience, strive for excellence in operations, and implement effective stakeholder involvement, all of which parallel OCRWM's transportation mission and vision. The initial benchmarking project focused on four business processes that are critical to OCRWM's mission success, and can be incorporated into OCRWM planning and preparation in the near term. The processes examined were: transportation business model, contract management/out-sourcing, stakeholder relations, and contingency planning. More recently, OCRWM examined logistics operations of AREVA NC's Business Unit Logistics in France. The next phase of benchmarking will focus on integrated domestic and international commercial radioactive logistic operations. The prospective companies represent large scale shippers and have vast experience in

  5. Wildlife habitat evaluation demonstration project. [Michigan

    NASA Technical Reports Server (NTRS)

    Burgoyne, G. E., Jr.; Visser, L. G.

    1981-01-01

To support the deer range improvement project in Michigan, the capability of LANDSAT data in assessing deer habitat in terms of areas and mixes of species and age classes of vegetation is being examined to determine whether such data could substitute for traditional cover type information sources. A second goal of the demonstration project is to determine whether LANDSAT data can be used to supplement and improve the information normally used for making deer habitat management decisions, either by providing vegetative cover information for private land or by providing information about the interspersion and juxtaposition of valuable vegetative cover types. The procedure to be used for evaluating LANDSAT data of the Lake County test site is described.

  6. Color back projection for fruit maturity evaluation

    NASA Astrophysics Data System (ADS)

    Zhang, Dong; Lee, Dah-Jye; Desai, Alok

    2013-12-01

In general, fruits and vegetables such as tomatoes and dates are harvested before they fully ripen. After harvesting, they continue to ripen and their color changes. Color is a good indicator of fruit maturity. For example, tomatoes change color from dark green to light green and then pink, light red, and dark red. Assessing tomato maturity helps maximize shelf life; color is used to determine the length of time the tomatoes can be transported. Medjool dates change color from green to yellow, and then orange, light red, and dark red. Assessing date maturity helps determine the length of the drying process needed to ripen the dates. Color evaluation is an important step in the processing and inventory control of fruits and vegetables that directly affects profitability. This paper presents an efficient color back-projection and image processing technique designed specifically for real-time maturity evaluation of fruits. The color processing method requires a very simple training procedure to obtain the frequencies of colors that appear in each maturity stage. These color statistics are used to back-project colors to predefined color indexes. Fruit maturity is then evaluated by analyzing the back-projected color indexes. This method has been implemented and used for commercial production.
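As an editorial illustration of the back-projection scheme this abstract describes, the following Python sketch shows one way the per-stage color statistics and the classification step could look. The function names and the assumption of hue-quantized integer images are ours, not the paper's.

```python
import numpy as np

def train_stage_histograms(images_by_stage, n_bins=32):
    """Build a normalized color-index histogram for each maturity stage.

    images_by_stage: dict mapping stage name -> list of 2-D arrays of
    quantized color indexes (0 .. n_bins-1), one per training image.
    """
    histograms = {}
    for stage, images in images_by_stage.items():
        counts = np.zeros(n_bins)
        for img in images:
            counts += np.bincount(img.ravel(), minlength=n_bins)
        histograms[stage] = counts / counts.sum()
    return histograms

def evaluate_maturity(image, histograms):
    """Score an image against each stage by back-projecting its pixels
    onto the trained color-frequency tables; return the best stage."""
    scores = {stage: h[image.ravel()].mean() for stage, h in histograms.items()}
    return max(scores, key=scores.get)
```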

  7. ELAPSE - NASA AMES LISP AND ADA BENCHMARK SUITE: EFFICIENCY OF LISP AND ADA PROCESSING - A SYSTEM EVALUATION

    NASA Technical Reports Server (NTRS)

    Davis, G. J.

    1994-01-01

One area of research of the Information Sciences Division at NASA Ames Research Center is devoted to the analysis and enhancement of processors and advanced computer architectures, specifically in support of automation and robotic systems. To compare systems' abilities to efficiently process Lisp and Ada, scientists at Ames Research Center have developed a suite of non-parallel benchmarks called ELAPSE. The benchmark suite was designed to test a single computer's efficiency as well as to compare alternate machines on the Lisp and/or Ada languages. ELAPSE tests the efficiency with which a machine can execute the various routines in each environment. The sample routines are based on numeric and symbolic manipulations and include two-dimensional fast Fourier transformations, Cholesky decomposition and substitution, Gaussian elimination, high-level data processing, and symbol-list references. Also included is a routine based on a Bayesian classification program sorting data into optimized groups. The ELAPSE benchmarks are available for any computer with a validated Ada compiler and/or Common Lisp system. Of the 18 routines that comprise ELAPSE, 14 were developed or translated at Ames and are provided within this package; the others are readily available through the literature. The benchmark that requires the most memory is CHOLESKY.ADA. Under VAX/VMS, CHOLESKY.ADA requires 760K of main memory. ELAPSE is available on either two 5.25 inch 360K MS-DOS format diskettes (standard distribution) or a 9-track 1600 BPI ASCII CARD IMAGE format magnetic tape. The contents of the diskettes are compressed using the PKWARE archiving tools. The utility to unarchive the files, PKUNZIP.EXE, is included. The ELAPSE benchmarks were written in 1990. VAX and VMS are trademarks of Digital Equipment Corporation. MS-DOS is a registered trademark of Microsoft Corporation.

  8. NASA teleconference pilot project evaluation for 1975

    NASA Technical Reports Server (NTRS)

    Fordyce, S. W.

    1976-01-01

Tabular data were given to summarize the results of the NASA teleconferencing network pilot project for 1975. The 1,241 evaluation reports received indicate that almost 100,000 man-hours of teleconferences took place. Reported travel-fund savings total about $1.44 million, roughly 10% of NASA travel costs. Subtracting the cost of providing the teleconferencing networks, the net savings reported are $1.28 million (about 9% of travel costs). The teleconferencing network has proved successful in conducting many management meetings and reviews within NASA and its contractors. In spite of difficulties caused by inexperience with teleconferencing and some equipment and circuit problems, the evaluation reports indicated the system was satisfactory in an overwhelming majority of cases.
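The reported figures are internally consistent, as a quick arithmetic check shows (the values below are the rounded figures from the abstract):

```python
# Quick consistency check of the savings figures reported above.
gross_savings = 1.44e6   # reported travel-fund savings, USD
net_savings = 1.28e6     # savings after network costs, USD

network_cost = gross_savings - net_savings
implied_travel_budget = gross_savings / 0.10   # gross savings ~= 10% of travel costs

print(f"Implied network cost:   ${network_cost:,.0f}")           # ~$160,000
print(f"Implied travel budget:  ${implied_travel_budget:,.0f}")  # ~$14.4M
print(f"Net share of travel:    {net_savings / implied_travel_budget:.1%}")  # ~8.9%, i.e. about 9%
```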

  9. Evaluation of Title I ESEA Projects: 1975-76.

    ERIC Educational Resources Information Center

    Philadelphia School District, PA. Office of Research and Evaluation.

    Evaluation services to be provided during 1975-76 to projects funded under the Elementary and Secondary Education Act Title I are listed in this annual booklet. For each project, the following information is provided: goals to be assessed, evaluation techniques (design), and evaluation milestones. Regular term and summer term projects reported on…

  10. Toxicological Benchmarks for Screening Potential Contaminants of Concern for Effects on Sediment-Associated Biota

    SciTech Connect

    Hull, R.N.

    1993-01-01

    A hazardous waste site may contain hundreds of chemicals; therefore, it is important to screen contaminants of potential concern for the ecological risk assessment. Often this screening is done as part of a screening assessment, the purpose of which is to evaluate the available data, identify data gaps, and screen contaminants of potential concern. Screening may be accomplished by using a set of toxicological benchmarks. These benchmarks are helpful in determining whether contaminants warrant further assessment or are at a level that requires no further attention. If a chemical concentration or the reported detection limit exceeds a proposed lower benchmark, further analysis is needed to determine the hazards posed by that chemical. If, however, the chemical concentration falls below the lower benchmark value, the chemical may be eliminated from further study. The use of multiple benchmarks is recommended for screening chemicals of concern in sediments. Integrative benchmarks developed for the National Oceanic and Atmospheric Administration and the Florida Department of Environmental Protection are included for inorganic and organic chemicals. Equilibrium partitioning benchmarks are included for screening nonionic organic chemicals. Freshwater sediment effect concentrations developed as part of the U.S. Environmental Protection Agency's (EPA's) Assessment and Remediation of Contaminated Sediment Project are included for inorganic and organic chemicals (EPA 1996). Field survey benchmarks developed for the Ontario Ministry of the Environment are included for inorganic and organic chemicals. In addition, EPA-proposed sediment quality criteria are included along with screening values from EPA Region IV and Ecotox Threshold values from the EPA Office of Solid Waste and Emergency Response. Pore water analysis is recommended for ionic organic compounds; comparisons are then made against water quality benchmarks. This report is an update of three prior reports (Jones et al
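The screening decision rule described above is simple enough to state in code. This is a hedged sketch of the logic with illustrative names, not a reproduction of the report's procedure:

```python
def screen_contaminant(concentration, detection_limit, lower_benchmark):
    """Screening rule paraphrased from the report: a chemical warrants
    further assessment if its measured concentration (or, when it was not
    detected, its reported detection limit) exceeds the lower benchmark;
    otherwise it may be eliminated from further study."""
    observed = concentration if concentration is not None else detection_limit
    return "further analysis" if observed > lower_benchmark else "eliminate"

# Example: an undetected chemical whose detection limit still exceeds
# the benchmark is retained for further assessment.
print(screen_contaminant(None, detection_limit=2.0, lower_benchmark=1.5))
```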

  11. Robust Multivariable Flutter Suppression for the Benchmark Active Control Technology (BACT) Wind-Tunnel Model

    NASA Technical Reports Server (NTRS)

    Waszak, Martin R.

    1997-01-01

The Benchmark Active Controls Technology (BACT) project is part of NASA Langley Research Center's Benchmark Models Program for studying transonic aeroelastic phenomena. In January of 1996 the BACT wind-tunnel model was used to successfully demonstrate the application of robust multivariable control design methods (H∞ and μ-synthesis) to flutter suppression. This paper addresses the design and experimental evaluation of robust multivariable flutter suppression control laws, with particular attention paid to the degree to which stability and performance robustness was achieved.

  12. Mark 4A project training evaluation

    NASA Technical Reports Server (NTRS)

    Stephenson, S. N.

    1985-01-01

A participant evaluation of training for a Deep Space Network (DSN) upgrade is described. The Mark IVA project is an implementation to upgrade the tracking and data acquisition systems of the DSN. Approximately six hundred DSN operations and engineering maintenance personnel were surveyed. The survey obtained a convenience sample of trained people within the population in order to learn what training had taken place and to what effect. The survey questionnaire used modifications of standard rating scales to evaluate over one hundred items in four training dimensions. The scope of the evaluation included Mark IVA vendor training, a systems familiarization training seminar, engineering training classes, and on-the-job training. Measures of central tendency were made from participant rating responses. Chi-square tests of statistical significance were performed on the data. The evaluation results indicated that the effects of different Mark IVA training methods could be measured according to certain ratings of technical training effectiveness, and that the Mark IVA technical training has had positive effects on the abilities of DSN personnel to operate and maintain new Mark IVA equipment systems.

  13. Benchmarking in Student Affairs.

    ERIC Educational Resources Information Center

    Mosier, Robert E.; Schwarzmueller, Gary J.

    2002-01-01

    Discusses the use of benchmarking in student affairs, focusing on issues related to student housing. Provides examples of how benchmarking has influenced administrative practice at many institutions. (EV)

  14. Radiography benchmark 2014

    SciTech Connect

Jaenisch, G.-R.; Deresch, A.; Bellon, C.; Schumm, A.; Lucet-Sanchez, F.; Guerin, P.

    2015-03-31

The purpose of the 2014 WFNDEC RT benchmark study was to compare predictions of various models of radiographic techniques, in particular those that predict the contribution of scattered radiation. All calculations were carried out for homogeneous materials and a mono-energetic X-ray point source in the energy range between 100 keV and 10 MeV. The calculations were to include the best physics approach available considering electron binding effects. Secondary effects like X-ray fluorescence and bremsstrahlung production were to be taken into account if possible. The problem to be considered had two parts. Part I examined the spectrum and the spatial distribution of radiation behind a single iron plate. Part II considered two equally sized plates, made of iron and aluminum respectively, only evaluating the spatial distribution. Here we present the results of the above benchmark study, comparing them to MCNP as the assumed reference model. The possible origins of the observed deviations are discussed.

  15. Radiography benchmark 2014

    NASA Astrophysics Data System (ADS)

    Jaenisch, G.-R.; Deresch, A.; Bellon, C.; Schumm, A.; Lucet-Sanchez, F.; Guerin, P.

    2015-03-01

The purpose of the 2014 WFNDEC RT benchmark study was to compare predictions of various models of radiographic techniques, in particular those that predict the contribution of scattered radiation. All calculations were carried out for homogeneous materials and a mono-energetic X-ray point source in the energy range between 100 keV and 10 MeV. The calculations were to include the best physics approach available considering electron binding effects. Secondary effects like X-ray fluorescence and bremsstrahlung production were to be taken into account if possible. The problem to be considered had two parts. Part I examined the spectrum and the spatial distribution of radiation behind a single iron plate. Part II considered two equally sized plates, made of iron and aluminum respectively, only evaluating the spatial distribution. Here we present the results of the above benchmark study, comparing them to MCNP as the assumed reference model. The possible origins of the observed deviations are discussed.

  16. Managing for Results in America's Great City Schools. A Report of the Performance Measurement and Benchmarking Project

    ERIC Educational Resources Information Center

    Council of the Great City Schools, 2012

    2012-01-01

    "Managing for Results in America's Great City Schools, 2012" is presented by the Council of the Great City Schools to its members and the public. The purpose of the project was and is to develop performance measures that can improve the business operations of urban public school districts nationwide. This year's report includes data from 61 of the…

  17. Evaluation and comparison of benchmark QSAR models to predict a relevant REACH endpoint: The bioconcentration factor (BCF)

    SciTech Connect

    Gissi, Andrea; Lombardo, Anna; Roncaglioni, Alessandra; Gadaleta, Domenico; Mangiatordi, Giuseppe Felice; Nicolotti, Orazio; Benfenati, Emilio

    2015-02-15

…=0.85) and sensitivity (average >0.70) for new compounds in the AD but not present in the training set. However, no single optimal model exists and, thus, a case-by-case assessment would be wise. Yet, integrating the wealth of information from multiple models remains the winning approach. - Highlights: • REACH encourages the use of in silico methods in the assessment of chemicals safety. • The performances of nine BCF models were evaluated on a benchmark database of 851 chemicals. • We compared the models on the basis of both regression and classification performance. • Statistics on chemicals out of the training set and/or within the applicability domain were compiled. • The results show that QSAR models are useful as weight-of-evidence in support of other methods.
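For readers unfamiliar with the classification statistics mentioned in the highlights, here is a minimal Python sketch of sensitivity and specificity for a binary bioaccumulative/non-bioaccumulative call; the names and toy data are illustrative, not from the paper:

```python
def sensitivity_specificity(y_true, y_pred):
    """Sensitivity and specificity for a binary classification
    (e.g., bioaccumulative vs. non-bioaccumulative); assumes both
    classes occur at least once in y_true."""
    tp = sum(t and p for t, p in zip(y_true, y_pred))
    fn = sum(t and not p for t, p in zip(y_true, y_pred))
    tn = sum(not t and not p for t, p in zip(y_true, y_pred))
    fp = sum(not t and p for t, p in zip(y_true, y_pred))
    return tp / (tp + fn), tn / (tn + fp)

# Example with toy labels (True = bioaccumulative).
sens, spec = sensitivity_specificity(
    [True, True, False, False, True], [True, False, False, True, True])
print(f"sensitivity={sens:.2f}, specificity={spec:.2f}")
```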

  18. Benchmarking on Tsunami Currents with ComMIT

    NASA Astrophysics Data System (ADS)

    Sharghi vand, N.; Kanoglu, U.

    2015-12-01

There were no standards for the validation and verification of tsunami numerical models before the 2004 Indian Ocean tsunami. Even so, a number of numerical models had been used for inundation mapping efforts, evaluation of critical structures, etc., without validation and verification. After 2004, the NOAA Center for Tsunami Research (NCTR) established standards for the validation and verification of tsunami numerical models (Synolakis et al. 2008 Pure Appl. Geophys. 165, 2197-2228), which are used in the evaluation of critical structures such as nuclear power plants against tsunami attack. NCTR presented analytical, experimental and field benchmark problems aimed at estimating maximum runup, which are widely accepted by the community. Recently, benchmark problems were suggested by the US National Tsunami Hazard Mitigation Program Mapping & Modeling Benchmarking Workshop: Tsunami Currents, held February 9-10, 2015 at Portland, Oregon, USA (http://nws.weather.gov/nthmp/index.html). These benchmark problems concentrate on the validation and verification of tsunami numerical models for tsunami currents. Three of the benchmark problems were: current measurements of the 2011 Japan tsunami in Hilo Harbor, Hawaii, USA and in Tauranga Harbor, New Zealand, and a single long-period wave propagating onto a small-scale experimental model of the town of Seaside, Oregon, USA. These benchmark problems were implemented in the Community Modeling Interface for Tsunamis (ComMIT) (Titov et al. 2011 Pure Appl. Geophys. 168, 2121-2131), a user-friendly interface developed by NCTR to the validated and verified Method of Splitting Tsunami (MOST) model (Titov and Synolakis 1995 J. Waterw. Port Coastal Ocean Eng. 121, 308-316). The modeling results are compared with the required benchmark data, showing good agreement, and the results are discussed. Acknowledgment: The research leading to these results has received funding from the European Union's Seventh Framework Programme (FP7/2007-2013) under grant

  19. 23 CFR 505.11 - Project evaluation and rating.

    Code of Federal Regulations, 2012 CFR

    2012-04-01

    ... 23 Highways 1 2012-04-01 2012-04-01 false Project evaluation and rating. 505.11 Section 505.11 Highways FEDERAL HIGHWAY ADMINISTRATION, DEPARTMENT OF TRANSPORTATION TRANSPORTATION INFRASTRUCTURE MANAGEMENT PROJECTS OF NATIONAL AND REGIONAL SIGNIFICANCE EVALUATION AND RATING § 505.11 Project...

  20. 23 CFR 505.11 - Project evaluation and rating.

    Code of Federal Regulations, 2013 CFR

    2013-04-01

    ... 23 Highways 1 2013-04-01 2013-04-01 false Project evaluation and rating. 505.11 Section 505.11 Highways FEDERAL HIGHWAY ADMINISTRATION, DEPARTMENT OF TRANSPORTATION TRANSPORTATION INFRASTRUCTURE MANAGEMENT PROJECTS OF NATIONAL AND REGIONAL SIGNIFICANCE EVALUATION AND RATING § 505.11 Project...

  1. 23 CFR 505.11 - Project evaluation and rating.

    Code of Federal Regulations, 2010 CFR

    2010-04-01

    ... 23 Highways 1 2010-04-01 2010-04-01 false Project evaluation and rating. 505.11 Section 505.11 Highways FEDERAL HIGHWAY ADMINISTRATION, DEPARTMENT OF TRANSPORTATION TRANSPORTATION INFRASTRUCTURE MANAGEMENT PROJECTS OF NATIONAL AND REGIONAL SIGNIFICANCE EVALUATION AND RATING § 505.11 Project...

  2. 23 CFR 505.11 - Project evaluation and rating.

    Code of Federal Regulations, 2011 CFR

    2011-04-01

    ... 23 Highways 1 2011-04-01 2011-04-01 false Project evaluation and rating. 505.11 Section 505.11 Highways FEDERAL HIGHWAY ADMINISTRATION, DEPARTMENT OF TRANSPORTATION TRANSPORTATION INFRASTRUCTURE MANAGEMENT PROJECTS OF NATIONAL AND REGIONAL SIGNIFICANCE EVALUATION AND RATING § 505.11 Project...

  3. 23 CFR 505.11 - Project evaluation and rating.

    Code of Federal Regulations, 2014 CFR

    2014-04-01

    ... 23 Highways 1 2014-04-01 2014-04-01 false Project evaluation and rating. 505.11 Section 505.11 Highways FEDERAL HIGHWAY ADMINISTRATION, DEPARTMENT OF TRANSPORTATION TRANSPORTATION INFRASTRUCTURE MANAGEMENT PROJECTS OF NATIONAL AND REGIONAL SIGNIFICANCE EVALUATION AND RATING § 505.11 Project...

  4. Modelling in Evaluating a Working Life Project in Higher Education

    ERIC Educational Resources Information Center

    Sarja, Anneli; Janhonen, Sirpa; Havukainen, Pirjo; Vesterinen, Anne

    2012-01-01

This article describes an evaluation method based on collaboration among higher education, a care home, and a university in an R&D project. The aim of the project was to elaborate modelling as a tool of developmental evaluation for innovation and competence in project cooperation. The approach was based on activity theory. Modelling enabled a…

  5. Evaluating a collaborative IT based research and development project.

    PubMed

    Khan, Zaheer; Ludlow, David; Caceres, Santiago

    2013-10-01

In common with all projects, evaluating an Information Technology (IT) based research and development project is necessary in order to discover whether or not the outcomes of the project are successful. However, evaluating large-scale collaborative projects is especially difficult as: (i) stakeholders from different countries are involved who, almost inevitably, have diverse technological and/or application domain backgrounds and objectives; (ii) multiple and sometimes conflicting application-specific and user-defined requirements exist; and (iii) multiple and often conflicting technological research and development objectives are apparent. In this paper, we share our experiences based on the large-scale integrated research project - the HUMBOLDT project - with a project duration of 54 months, involving contributions from 27 partner organisations plus 4 sub-contractors from 14 different European countries. In the HUMBOLDT project, a specific evaluation methodology was defined and utilised for the user evaluation of the project outcomes. The user evaluation performed on the HUMBOLDT Framework and its associated nine application scenarios from various application domains not only resulted in an evaluation of the integrated project, but also revealed the benefits and disadvantages of the evaluation methodology. This paper presents the evaluation methodology, discusses in detail the process of applying it to the HUMBOLDT project and provides an in-depth analysis of the results, which can be usefully applied to other collaborative research projects in a variety of domains.

  6. Framework for the Evaluation of an IT Project Portfolio

    ERIC Educational Resources Information Center

    Tai, W. T.

    2010-01-01

    The basis for evaluating projects in an organizational IT project portfolio includes complexity factors, arguments/criteria, and procedures, with various implications. The purpose of this research was to develop a conceptual framework for IT project proposal evaluation. The research involved using a heuristic roadmap and the mind-mapping method to…

  7. Towards Systematic Benchmarking of Climate Model Performance

    NASA Astrophysics Data System (ADS)

    Gleckler, P. J.

    2014-12-01

The process by which climate models are evaluated has evolved substantially over the past decade, with the Coupled Model Intercomparison Project (CMIP) serving as a centralizing activity for coordinating model experimentation and enabling research. Scientists with a broad spectrum of expertise have contributed to the CMIP model evaluation process, resulting in many hundreds of publications that have served as a key resource for the IPCC process. For several reasons, efforts are now underway to further systematize some aspects of the model evaluation process. First, some model evaluation can now be considered routine and should not require "re-inventing the wheel" or a journal publication simply to update results with newer models. Second, the benefit of CMIP research to model development has not been optimal because the publication of results generally takes several years and is usually not reproducible for benchmarking newer model versions. And third, there are now hundreds of model versions and many thousands of simulations, but there is no community-based mechanism for routinely monitoring model performance changes. An important change in the design of CMIP6 can help address these limitations. CMIP6 will include a small set of standardized experiments as an ongoing exercise (CMIP "DECK": ongoing Diagnostic, Evaluation and Characterization of Klima), so that modeling groups can submit them at any time and not be overly constrained by deadlines. In this presentation, efforts to establish routine benchmarking of existing and future CMIP simulations will be described. To date, some benchmarking tools have been made available to all CMIP modeling groups to enable them to readily compare with CMIP5 simulations during the model development process. A natural extension of this effort is to make results from all CMIP simulations widely available, including the results from newer models as soon as the simulations become available for research. Making the results from routine

  8. Processor Emulator with Benchmark Applications

    SciTech Connect

    Lloyd, G. Scott; Pearce, Roger; Gokhale, Maya

    2015-11-13

    A processor emulator and a suite of benchmark applications have been developed to assist in characterizing the performance of data-centric workloads on current and future computer architectures. Some of the applications have been collected from other open source projects. For more details on the emulator and an example of its usage, see reference [1].

  9. Benchmark of 3D halo neutral simulation in TRANSP and FIDASIM and application to projected neutral-beam-heated NSTX-U plasmas

    NASA Astrophysics Data System (ADS)

    Liu, D.; Medley, S. S.; Gorelenkova, M. V.; Heidbrink, W. W.; Stagner, L.

    2014-10-01

A cloud of halo neutrals is created in the vicinity of the beam footprint during neutral beam injection, and the halo neutral density can be comparable to the beam neutral density. Proper modeling of halo neutrals is critical to correctly interpreting neutral particle analyzer (NPA) and fast-ion D-alpha (FIDA) signals, since these signals strongly depend on the local beam and halo neutral density. A 3D halo neutral model has recently been developed and implemented in the TRANSP code. The 3D halo neutral code uses a "beam-in-a-box" model that encompasses both injected beam neutrals and resulting halo neutrals. Upon deposition by charge exchange, a subset of the full, one-half and one-third beam energy components produce thermal halo neutrals that are tracked through successive halo neutral generations until an ionization event occurs or a descendant halo exits the box. A benchmark between the 3D halo neutral model in TRANSP and the FIDA/NPA synthetic diagnostic code FIDASIM is carried out. A detailed comparison of halo neutral density profiles from the two codes will be shown. The NPA and FIDA simulations with and without 3D halos are applied to projections of plasma performance for the National Spherical Torus eXperiment-Upgrade (NSTX-U), and the effects of halo neutral density on NPA and FIDA signal amplitude and profile will be presented. Work supported by US DOE.

  10. Village Library Project, Nome, Alaska. An Evaluation, 1979.

    ERIC Educational Resources Information Center

    Dalton, Phyllis I.

    The adequacy of the Village Library Project, headquartered in Nome, Alaska, to meet the library service needs and desires of the people in 18 villages on Seward Peninsula, St. Lawrence Island, and Diomede is evaluated in this report. Sections are devoted to the goals of the evaluation, evaluation procedure, goals and objectives of the project,…

  11. Design Alternatives for Evaluating the Impact of Conservation Projects

    ERIC Educational Resources Information Center

    Margoluis, Richard; Stem, Caroline; Salafsky, Nick; Brown, Marcia

    2009-01-01

    Historically, examples of project evaluation in conservation were rare. In recent years, however, conservation professionals have begun to recognize the importance of evaluation both for accountability and for improving project interventions. Even with this growing interest in evaluation, the conservation community has paid little attention to…

  12. Project GAIN Evaluation: 1969-70.

    ERIC Educational Resources Information Center

    Biller, Julian

    Project GAIN was designed to meet the special needs of the academically retarded junior high school student. This federally funded project has been on-going in Broward County (Florida) since January 1966. The project was conceived of as a means to motivate and educate those students whose "dull normal" intellectual ability might otherwise doom…

  13. RESEARCH DESIGN FOR EVALUATING PROJECT MISSION.

    ERIC Educational Resources Information Center

    FURNO, ORLANDO F.; AND OTHERS

    THIS REPORT OUTLINES DESIGNS FOR 8 POSSIBLE RESEARCH STUDIES WHICH COULD BE UNDERTAKEN WITH REGARD TO PROJECT MISSION, A PROGRAM TO PREPARE TEACHERS FOR ASSIGNMENT TO INNER CITY SCHOOLS. THEY ARE (1) A STUDY OF ATTRITION RATES OF STUDENT-INTERN-TEACHER ENROLLEES IN TRAINING IN PROJECT MISSION, (2) TEACHER CHARACTERISTICS OF PROJECT MISSION INTERNS…

  14. Metal mixtures modeling evaluation project: 1. Background.

    PubMed

    Meyer, Joseph S; Farley, Kevin J; Garman, Emily R

    2015-04-01

    Despite more than 5 decades of aquatic toxicity tests conducted with metal mixtures, there is still a need to understand how metals interact in mixtures and to predict their toxicity more accurately than what is currently done. The present study provides a background for understanding the terminology, regulatory framework, qualitative and quantitative concepts, experimental approaches, and visualization and data-analysis methods for chemical mixtures, with an emphasis on bioavailability and metal-metal interactions in mixtures of waterborne metals. In addition, a Monte Carlo-type randomization statistical approach to test for nonadditive toxicity is presented, and an example with a binary-metal toxicity data set demonstrates the challenge involved in inferring statistically significant nonadditive toxicity. This background sets the stage for the toxicity results, data analyses, and bioavailability models related to metal mixtures that are described in the remaining articles in this special section from the Metal Mixture Modeling Evaluation project and workshop. It is concluded that although qualitative terminology such as additive and nonadditive toxicity can be useful to convey general concepts, failure to expand beyond that limited perspective could impede progress in understanding and predicting metal mixture toxicity. Instead of focusing on whether a given metal mixture causes additive or nonadditive toxicity, effort should be directed to develop models that can accurately predict the toxicity of metal mixtures.
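The Monte Carlo randomization approach mentioned above is straightforward to sketch in general form. The following Python fragment shows one common variant, a sign-flip randomization of residuals between observed mixture toxicity and an additive prediction; it is an assumption-laden illustration, not the authors' exact procedure:

```python
import numpy as np

def nonadditivity_pvalue(observed, additive_pred, n_perm=10_000, seed=0):
    """Monte Carlo sign-flip randomization test: is the mean deviation of
    observed mixture toxicity from the additive prediction larger than
    expected if deviations were symmetric noise around zero?"""
    rng = np.random.default_rng(seed)
    residuals = np.asarray(observed, float) - np.asarray(additive_pred, float)
    stat = abs(residuals.mean())
    null = np.empty(n_perm)
    for i in range(n_perm):
        # Random sign flips simulate the null of no systematic deviation.
        signs = rng.choice([-1.0, 1.0], size=residuals.size)
        null[i] = abs((residuals * signs).mean())
    return (null >= stat).mean()   # Monte Carlo p-value
```

A small p-value suggests the mixture's toxicity deviates systematically from additivity; as the abstract notes, demonstrating statistically significant nonadditivity in practice is challenging.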

  15. The Vulcan Project: Methods, Results, and Evaluation

    NASA Astrophysics Data System (ADS)

    Gurney, K. R.; Mendoza, D.; Miller, C.; Ojima, D.; Knox, S.; Corbin, K.; Denning, S.; Fischer, M.; de La Rue Du Can, S.

    2008-12-01

The Vulcan Project has quantified fossil fuel CO2 for the United States at the sub-county spatial scale, hourly, for the year 2002. It approached quantification of fossil fuel CO2 from a novel perspective: leveraging the information already contained within the National Emissions Inventory for the assessment of nationally regulated air pollution. By utilizing the inventory emissions of carbon monoxide and nitrogen oxides, combined with emission factors specific to combustion device technology, we have calculated CO2 emissions for industrial point sources, power plants, mobile sources, and the residential and commercial sectors, with information on fuel used and source classification. In this presentation, we provide an overview of the Vulcan inventory methods and results, and an evaluation of the Vulcan inventory by comparison to state-level inventories and other independent estimates. The inventory has recently been placed onto Google Earth and we will provide a preview of this capability. Finally, we will present fossil fuel CO2 concentrations as transported by an atmospheric transport model and a comparison to in situ CO2 observations.

  16. Evolving Our Evaluation of Lighting Environments Project

    NASA Technical Reports Server (NTRS)

    Terrier, Douglas; Clayton, Ronald; Clark, Toni Anne

    2016-01-01

Imagine you are an astronaut on the 100th day of your three-year exploration mission. During your daily routine in the small hygiene compartment of the spacecraft, you realize that no matter what you do, your body blocks the light from the lamp. You can clearly see your hands or your toes, but not both! What were those design engineers thinking! It would have been nice if they could have made the walls glow instead! The reason the designers were not more innovative is that their interpretation of the system lighting requirements didn't allow them to be. Currently, our interior spacecraft lighting standards and requirements are written around the concept of a quantity of light illuminating a spacecraft surface. The natural interpretation for the engineer is that a lamp that throws light onto the surface is required. Because of certification costs, only one lamp is designed, and small rooms can wind up with lamps that may be inappropriate for the room architecture. The advances in solid-state light-emitting technologies and optics for lighting and visual communication necessitate a re-evaluation of how NASA envisions spacecraft lighting architectures and how NASA uses industry standards for the design and evaluation of lighting systems. Current NASA lighting standards and requirements for existing architectures focus separately on the ability of a lighting system to throw light against a surface and the ability of a display system to provide appropriate visual contrast. The potential for integrating these systems goes unrecognized. The result is that the systems are developed independently from one another, and potential efficiencies that could be realized by borrowing from the concept of one technology and applying it for the purpose of the other do not occur. This project investigated the possibility of incorporating large luminous surface lamps as an alternative or supplement to overhead lighting. We identified existing industry standards for architectural

  17. Global and local scale flood discharge simulations in the Rhine River basin for flood risk reduction benchmarking in the Flagship Project

    NASA Astrophysics Data System (ADS)

    Gädeke, Anne; Gusyev, Maksym; Magome, Jun; Sugiura, Ai; Cullmann, Johannes; Takeuchi, Kuniyoshi

    2015-04-01

The global flood risk assessment is a prerequisite for setting measurable global targets of the post-Hyogo Framework for Action (HFA) that mobilize international cooperation and national coordination towards disaster risk reduction (DRR), and it requires the establishment of a uniform flood risk assessment methodology on various scales. To address these issues, the International Flood Initiative (IFI) has initiated a Flagship Project, launched in 2013, to support flood risk reduction benchmarking at global, national and local levels. In the Flagship Project road map, it is planned to identify the original risk (1), to identify the reduced risk (2), and to facilitate risk reduction actions (3). In order to achieve this goal at global, regional and local scales, international research collaboration is absolutely necessary, involving domestic and international institutes, academia and research networks such as UNESCO International Centres. The joint collaboration by ICHARM and BfG was the first attempt, producing the first-step (1a) results on flood discharge estimates, with inundation maps under way. As a result of this collaboration, we demonstrate the outcomes of the first step of the IFI Flagship Project to identify flood hazard in the Rhine river basin on the global and local scale. In our assessment, we utilized a distributed hydrological Block-wise TOP (BTOP) model at 20-km and 0.5-km scales with local precipitation and temperature input data between 1980 and 2004. We utilized the existing 20-km BTOP model, which is applied globally, and constructed a local-scale 0.5-km BTOP model for the Rhine River basin. Both the calibrated 20-km and 0.5-km BTOP models had similar statistical performance and represented observed flood river discharges, especially for the 1993 and 1995 floods. From the 20-km and 0.5-km BTOP simulations, the flood discharges of the selected return period were estimated using flood frequency analysis and were comparable to
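The abstract mentions estimating flood discharges for selected return periods via flood frequency analysis. As one illustrative approach (not specified by the source), a method-of-moments Gumbel fit over annual maximum discharges looks like this in Python:

```python
import numpy as np

def gumbel_quantile(annual_max_discharge, return_period):
    """Discharge for a given return period from annual maxima, using a
    method-of-moments Gumbel fit (a common flood-frequency choice)."""
    x = np.asarray(annual_max_discharge, dtype=float)
    beta = np.sqrt(6.0) * x.std(ddof=1) / np.pi   # scale parameter
    mu = x.mean() - 0.5772 * beta                 # location (Euler-Mascheroni constant)
    p = 1.0 - 1.0 / return_period                 # non-exceedance probability
    return mu - beta * np.log(-np.log(p))
```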

  18. Computational Chemistry Comparison and Benchmark Database

    National Institute of Standards and Technology Data Gateway

    SRD 101 NIST Computational Chemistry Comparison and Benchmark Database (Web, free access)   The NIST Computational Chemistry Comparison and Benchmark Database is a collection of experimental and ab initio thermochemical properties for a selected set of molecules. The goals are to provide a benchmark set of molecules for the evaluation of ab initio computational methods and allow the comparison between different ab initio computational methods for the prediction of thermochemical properties.

  19. Social Studies Project Evaluation: Case Study and Recommendations.

    ERIC Educational Resources Information Center

    Napier, John

    1982-01-01

    Describes the development and application of a model for social studies program evaluations. A case study showing how the model's three-step process was used to evaluate the Improving Citizenship Education Project in Fulton County, Georgia is included. (AM)

  20. Human Relations Education Project. Final Evaluation Report.

    ERIC Educational Resources Information Center

    Buffalo Board of Education, NY.

    This project did the planning and pilot phases of an effort to improve the teaching of human relations in grades K-12 of public and private schools in the Buffalo-Niagara Falls metropolitan area. In the pilot phase, the project furnished on-the-job training for approximately 70 schools. The training was given by teams of human relations…

  1. Evaluation of the Matrix Project. Interchange 77.

    ERIC Educational Resources Information Center

    McIvor, Gill; Moodie, Kristina

    The Matrix Project is a program that has been established in central Scotland with the aim of reducing the risk of offending and anti-social behavior among vulnerable children. The project provides a range of services to children between eight and 11 years of age who are at risk in the local authority areas of Clackmannanshire, Falkirk and…

  2. Project "Freestyle": National Sites Evaluation Design.

    ERIC Educational Resources Information Center

    Frost, Frederica; Eastman, Harvey

    Project "Freestyle" involved the development of prototypical television and print materials intended to combat sex-role stereotyping in career-related attitudes of nine to twelve-year-old children. In the first 16 months of the project an assessment was made of the reactions to three pilot shows among students, teachers, administrators, and…

  3. Evaluation of the School Administration Manager Project

    ERIC Educational Resources Information Center

    Turnbull, Brenda J.; Haslam, M. Bruce; Arcaira, Erikson R.; Riley, Derek L.; Sinclair, Beth; Coleman, Stephen

    2009-01-01

    The School Administration Manager (SAM) project, supported by The Wallace Foundation as part of its education initiative, focuses on changing the conditions in schools that prevent principals from devoting more time to instructional leadership. In schools participating in the National SAM Project, principals have made a commitment to increase the…

  4. ELT in Albania: Project Evaluation and Change.

    ERIC Educational Resources Information Center

    Dushku, S.

    1998-01-01

    Discusses the design and implementation of the British Council English-language-teaching (ELT) project at the University of Tirana in Albania. Through analysis of the project and discussion of the appropriateness of its methodology to the Albanian social and professional context, factors are highlighted that account for the ephemeral nature of…

  5. Wisconsin Rural Reading Improvement Project 1987-1988. Evaluation Report.

    ERIC Educational Resources Information Center

    Nowakowski, Jeri; And Others

    Based upon case studies, surveys conducted in 18 participating school districts in fall 1987 and spring 1988, meeting observations, discussions with project staff, and an audit trail, this report evaluates the first year of the Wisconsin Rural Reading Improvement Project (WRRIP), a school improvement project aimed at helping small, rural schools…

  6. PLATO across the Curriculum: An Evaluation of a Project.

    ERIC Educational Resources Information Center

    Freer, David

    1986-01-01

    A project at the University of Witwatersrand examined the implications of introducing a centrally controlled system of computer-based learning in which 13 university departments utilized PLATO to supplement teaching programs and encourage computer literacy. Department project descriptions and project evaluations (which reported positive student…

  7. Small Business Learning through Mentoring: Evaluating a Project

    ERIC Educational Resources Information Center

    Barrett, Rowena

    2006-01-01

    Purpose: The purpose of this paper is to evaluate a small business-mentoring project, which was delivered in regional Australia. Design/methodology/approach: This paper contains a case study of the mentoring project and focuses on the process and the outcomes of that project from different perspectives. Data collected in semi structured telephone…

  8. Outside Evaluation Report for the Arlington Federal Workplace Literacy Project.

    ERIC Educational Resources Information Center

    Wrigley, Heide Spruck

    The successes and challenges of the Arlington Education and Employment Program (REEP) Workplace Literacy Project in Virginia are described in this evaluation report. REEP's federal Workplace Literacy Project Consortium is operated as a special project within the Department of Adult, Career and Vocational Education of the Arlington Public Schools.…

  9. Project Aprendizaje. 1990-91 Final Evaluation Profile. OREA Report.

    ERIC Educational Resources Information Center

    New York City Board of Education, Brooklyn, NY. Office of Research, Evaluation, and Assessment.

    An evaluation was done of New York City Public Schools' Project Aprendizaje, which served disadvantaged, immigrant, Spanish-speaking high school students at Seward Park High School in Manhattan. The Project enrolled 290 students in grades 9 through 12, 93.1 percent of whom were eligible for the Free Lunch Program. The Project provided students of…

  10. Programme for Learning Enrichment. A Van Leer Project: An Evaluation.

    ERIC Educational Resources Information Center

    Ghani, Zainal

    This paper reports the evaluation of a project undertaken by the Sarawak Education Department to improve the quality of education in upper primary classes in rural Sarawak, Malaysia. The project is known officially as the Programme for Learning Enrichment, and commonly as the Van Leer Project, after the international agency which provides the main…

  11. Benchmarking concentrating photovoltaic systems

    NASA Astrophysics Data System (ADS)

    Duerr, Fabian; Muthirayan, Buvaneshwari; Meuret, Youri; Thienpont, Hugo

    2010-08-01

Integral to photovoltaics is the need to provide improved economic viability. To achieve this goal, photovoltaic technology has to be able to harness more light at less cost. A large variety of concentrating photovoltaic concepts is being pursued. To obtain a detailed profitability analysis, a flexible evaluation is crucial for benchmarking the cost-performance of this variety of concentrating photovoltaic concepts. To save time and capital, one way to estimate the cost-performance of a complete solar energy system is computer-aided modeling. In this work a benchmark tool is introduced based on a modular programming concept. The overall implementation is done in MATLAB, whereas the Advanced Systems Analysis Program (ASAP) is used for ray-tracing calculations. This allows for a flexible and extendable structuring of all important modules, namely advanced source modeling including time and location dependence, and advanced optical system analysis of various optical designs, to obtain an evaluation of the figure of merit. An important figure of merit, the energy yield for a given photovoltaic system at a geographical position over a specific period, can be calculated.
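In its simplest form, the energy-yield figure of merit reduces to integrating irradiance through the optical and cell efficiencies over time. A deliberately simplified Python sketch follows (ignoring tracking, spectral and temperature effects, which a full tool like the one described would model):

```python
def energy_yield_kwh(dni_series_w_m2, optical_eff, cell_eff,
                     aperture_m2, dt_hours=1.0):
    """Energy yield over a period: direct normal irradiance samples are
    passed through the concentrator optics and the cell and integrated
    over time. Returns kWh."""
    wh = sum(dni * optical_eff * cell_eff * aperture_m2 * dt_hours
             for dni in dni_series_w_m2)
    return wh / 1000.0

# Example: a constant 800 W/m2 for 2000 hours on a 1 m2 aperture.
print(energy_yield_kwh([800.0] * 2000, optical_eff=0.85, cell_eff=0.35,
                       aperture_m2=1.0))   # ~476 kWh
```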

  12. Kentucky Migrant Technology Project: External Evaluation Report, 1997-98.

    ERIC Educational Resources Information Center

    Popp, Robert J.

    During its first year of operation (1997-98), the Kentucky Migrant Technology Project successfully implemented its model, used internal and external evaluations to inform improvement of the model, and began plans for expansion into new service areas. This evaluation report is organized around five questions that focus on the project model and its…

  13. Project SEARCH UK--Evaluating Its Employment Outcomes

    ERIC Educational Resources Information Center

    Kaehne, Axel

    2016-01-01

    Background: The study reports the findings of an evaluation of Project SEARCH UK. The programme develops internships for young people with intellectual disabilities who are about to leave school or college. The aim of the evaluation was to investigate at what rate Project SEARCH provided employment opportunities to participants. Methods: The…

  14. Student Assistance Program Demonstration Project Evaluation. Final Report.

    ERIC Educational Resources Information Center

    Pollard, John A.; Houle, Denise M.

    This document presents the final report on the evaluation of California's model student assistance program (SAP) demonstration projects implemented in five locations across the state from July 1989 through June 1992. The report provides an overall, integrated review of the evaluation of the SAP demonstration projects, summarizes important findings…

  15. The Program Evaluator's Role in Cross-Project Pollination.

    ERIC Educational Resources Information Center

    Yasgur, Bruce J.

An expanded role for the multiple-program evaluator, as an integral part of the ongoing decision-making process in all projects served, is defended. Assumptions discussed include the need for projects with related objectives to pool resources and avoid duplication of effort, and the evaluator's unique ability to provide an objective…

  16. Caribou Bilingual Project. Final Evaluation Report, 1973-1974.

    ERIC Educational Resources Information Center

    Cox, Lorraine

    This is an evaluative report on the Caribou Exemplary Bilingual Project for 1973-1974, its second year. The English-French program involved two kindergarten, two first grade, and two second grade classes. The report includes a description of the project, a discussion of the procedures used to evaluate it, an assessment of each of the five project…

  17. 40 CFR 57.604 - Evaluation of projects.

    Code of Federal Regulations, 2010 CFR

    2010-07-01

    ... 40 Protection of Environment 5 2010-07-01 2010-07-01 false Evaluation of projects. 57.604 Section 57.604 Protection of Environment ENVIRONMENTAL PROTECTION AGENCY (CONTINUED) AIR PROGRAMS (CONTINUED) PRIMARY NONFERROUS SMELTER ORDERS Research and Development Requirements § 57.604 Evaluation of projects. The research and development proposal...

  18. 40 CFR 57.604 - Evaluation of projects.

    Code of Federal Regulations, 2012 CFR

    2012-07-01

    ... 40 Protection of Environment 6 2012-07-01 2012-07-01 false Evaluation of projects. 57.604 Section 57.604 Protection of Environment ENVIRONMENTAL PROTECTION AGENCY (CONTINUED) AIR PROGRAMS (CONTINUED) PRIMARY NONFERROUS SMELTER ORDERS Research and Development Requirements § 57.604 Evaluation of projects. The research and development proposal...

  19. 40 CFR 57.604 - Evaluation of projects.

    Code of Federal Regulations, 2013 CFR

    2013-07-01

    ... 40 Protection of Environment 6 2013-07-01 2013-07-01 false Evaluation of projects. 57.604 Section 57.604 Protection of Environment ENVIRONMENTAL PROTECTION AGENCY (CONTINUED) AIR PROGRAMS (CONTINUED) PRIMARY NONFERROUS SMELTER ORDERS Research and Development Requirements § 57.604 Evaluation of projects. The research and development proposal...

  20. 40 CFR 57.604 - Evaluation of projects.

    Code of Federal Regulations, 2011 CFR

    2011-07-01

    ... 40 Protection of Environment 5 2011-07-01 2011-07-01 false Evaluation of projects. 57.604 Section 57.604 Protection of Environment ENVIRONMENTAL PROTECTION AGENCY (CONTINUED) AIR PROGRAMS (CONTINUED) PRIMARY NONFERROUS SMELTER ORDERS Research and Development Requirements § 57.604 Evaluation of projects. The research and development proposal...

  1. 40 CFR 57.604 - Evaluation of projects.

    Code of Federal Regulations, 2014 CFR

    2014-07-01

    ... 40 Protection of Environment 6 2014-07-01 2014-07-01 false Evaluation of projects. 57.604 Section 57.604 Protection of Environment ENVIRONMENTAL PROTECTION AGENCY (CONTINUED) AIR PROGRAMS (CONTINUED) PRIMARY NONFERROUS SMELTER ORDERS Research and Development Requirements § 57.604 Evaluation of projects. The research and development proposal...

  2. SPEEDES benchmarking analysis

    NASA Astrophysics Data System (ADS)

    Capella, Sebastian J.; Steinman, Jeffrey S.; McGraw, Robert M.

    2002-07-01

SPEEDES, the Synchronous Parallel Environment for Emulation and Discrete Event Simulation, is a software framework that supports simulation applications across parallel and distributed architectures. SPEEDES is used as a simulation engine in support of numerous defense projects including the Joint Simulation System (JSIMS), the Joint Modeling And Simulation System (JMASS), the High Performance Computing and Modernization Program's (HPCMP) development of a High Performance Computing (HPC) Run-time Infrastructure, and the Defense Modeling and Simulation Office's (DMSO) development of a Human Behavioral Representation (HBR) Testbed. This work documents some of the performance metrics obtained from benchmarking the SPEEDES Simulation Framework with respect to its functionality as of the summer of 2001. Specifically, this paper examines the scalability of SPEEDES with respect to its time management algorithms and simulation-object event queues as the number of objects simulated and events processed grows.

  3. Benchmarks for industrial energy efficiency

    SciTech Connect

    Amarnath, K.R.; Kumana, J.D.; Shah, J.V.

    1996-12-31

What are the standards for improving energy efficiency for industries such as petroleum refining, chemicals, and glass manufacture? How can different industries in emerging markets and developing countries accelerate the pace of improvements? This paper discusses several case studies and experiences relating to this subject, emphasizing the use of energy efficiency benchmarks. Two important benchmarks are discussed. The first is based on the track record of outstanding performers in the related industry segment; the second is based on site-specific factors. Using energy use reduction targets or benchmarks, projects have been implemented in Mexico, Poland, India, Venezuela, Brazil, China, Thailand, Malaysia, the Republic of South Africa and Russia. Improvements identified through these projects include a variety of recommendations: the use of oxy-fuel and electric furnaces in the glass industry in Poland; reconfiguration of process heat recovery systems for refineries in China, Malaysia, and Russia; recycling and reuse of process wastewater in the Republic of South Africa; and a cogeneration plant in Venezuela. The paper will discuss three case studies of efforts undertaken in emerging market countries to improve energy efficiency.

  4. Evaluation of direct-use-project drilling costs

    SciTech Connect

    Dolenc, M.R.; Childs, F.W.; Allman, D.W.; Sanders, R.D.

    1983-01-01

    This study evaluates drilling and completion costs from eleven low-to-moderate temperature geothermal projects carried out under the Program Opportunity Notice (PON) and User-Coupled Confirmation Drilling Programs. Several studies have evaluated geothermal drilling costs, particularly with respect to high-temperature-system drilling costs. This study evaluates drilling costs and individual cost elements for low-to-moderate temperature projects. It considers the effect of drilling depth, rock types, remoteness of location, rig size, and unique operating and subsurface conditions on the total drilling cost. This detailed evaluation should provide the investor in direct-use projects with approximate cost projections by which the economics of such projects can be evaluated.

  5. Container evaluation for microwave solidification project

    SciTech Connect

    Smith, J.A.

    1994-08-01

    This document discusses the development and testing of a suitable waste container and packaging arrangement to be used with the Microwave Solidification System (MSS) and Bagless Posting System (BPS). The project involves the Rocky Flats Plant.

  6. Closed-Loop Neuromorphic Benchmarks.

    PubMed

    Stewart, Terrence C; DeWolf, Travis; Kleinhans, Ashley; Eliasmith, Chris

    2015-01-01

    Evaluating the effectiveness and performance of neuromorphic hardware is difficult. It is even more difficult when the task of interest is a closed-loop task; that is, a task where the output from the neuromorphic hardware affects some environment, which then in turn affects the hardware's future input. However, closed-loop situations are one of the primary potential uses of neuromorphic hardware. To address this, we present a methodology for generating closed-loop benchmarks that makes use of a hybrid of real physical embodiment and a type of "minimal" simulation. Minimal simulation has been shown to lead to robust real-world performance, while still maintaining the practical advantages of simulation, such as making it easy for the same benchmark to be used by many researchers. This method is flexible enough to allow researchers to explicitly modify the benchmarks to identify specific task domains where particular hardware excels. To demonstrate the method, we present a set of novel benchmarks that focus on motor control for an arbitrary system with unknown external forces. Using these benchmarks, we show that an error-driven learning rule can consistently improve motor control performance across a randomly generated family of closed-loop simulations, even when there are up to 15 interacting joints to be controlled. PMID:26696820
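To make the closed-loop, error-driven idea concrete, here is a minimal Python sketch (ours, not the authors' benchmark code) of a single degree-of-freedom controller whose adaptive term is trained online against an unknown external force:

```python
import numpy as np

def run_closed_loop(n_steps=2000, lr=1e-3, seed=0):
    """Minimal closed-loop sketch: a 1-D point mass under an unknown
    constant external force, controlled by PD feedback plus an adaptive
    bias term trained with an error-driven (delta-rule) update."""
    rng = np.random.default_rng(seed)
    pos, vel, target = 0.0, 0.0, 1.0
    unknown_force = rng.uniform(-1.0, 1.0)
    w = 0.0                      # adaptive term learned online
    dt, errors = 0.01, []
    for _ in range(n_steps):
        error = target - pos
        u = 5.0 * error - 2.0 * vel + w   # PD control + learned compensation
        w += lr * error                   # error-driven learning rule
        acc = u + unknown_force           # plant affected by unknown force
        vel += acc * dt
        pos += vel * dt
        errors.append(abs(error))
    return np.mean(errors[-100:])         # steady-state tracking error

print(run_closed_loop())
```

The learned term gradually cancels the unknown force, so steady-state error shrinks; the benchmarks in the paper pose the same kind of problem at much higher dimensionality (up to 15 interacting joints).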

  7. Benchmark simulation models, quo vadis?

    PubMed

    Jeppsson, U; Alex, J; Batstone, D J; Benedetti, L; Comas, J; Copp, J B; Corominas, L; Flores-Alsina, X; Gernaey, K V; Nopens, I; Pons, M-N; Rodríguez-Roda, I; Rosen, C; Steyer, J-P; Vanrolleghem, P A; Volcke, E I P; Vrecko, D

    2013-01-01

    As the work of the IWA Task Group on Benchmarking of Control Strategies for wastewater treatment plants (WWTPs) is coming to an end, it is essential to disseminate the knowledge gained. For this reason, all authors of the IWA Scientific and Technical Report on benchmarking have come together to provide their insights, highlighting areas where knowledge may still be deficient and where new opportunities are emerging, and to propose potential avenues for future development and application of the general benchmarking framework and its associated tools. The paper focuses on the topics of temporal and spatial extension, process modifications within the WWTP, the realism of models, control strategy extensions and the potential for new evaluation tools within the existing benchmark system. We find that there are major opportunities for application within all of these areas, either from existing work already being done within the context of the benchmarking simulation models (BSMs) or applicable work in the wider literature. Of key importance is increasing capability, usability and transparency of the BSM package while avoiding unnecessary complexity.

  8. Closed-Loop Neuromorphic Benchmarks

    PubMed Central

    Stewart, Terrence C.; DeWolf, Travis; Kleinhans, Ashley; Eliasmith, Chris

    2015-01-01

    Evaluating the effectiveness and performance of neuromorphic hardware is difficult. It is even more difficult when the task of interest is a closed-loop task; that is, a task where the output from the neuromorphic hardware affects some environment, which then in turn affects the hardware's future input. However, closed-loop situations are one of the primary potential uses of neuromorphic hardware. To address this, we present a methodology for generating closed-loop benchmarks that makes use of a hybrid of real physical embodiment and a type of “minimal” simulation. Minimal simulation has been shown to lead to robust real-world performance, while still maintaining the practical advantages of simulation, such as making it easy for the same benchmark to be used by many researchers. This method is flexible enough to allow researchers to explicitly modify the benchmarks to identify specific task domains where particular hardware excels. To demonstrate the method, we present a set of novel benchmarks that focus on motor control for an arbitrary system with unknown external forces. Using these benchmarks, we show that an error-driven learning rule can consistently improve motor control performance across a randomly generated family of closed-loop simulations, even when there are up to 15 interacting joints to be controlled. PMID:26696820

  9. International benchmarking of specialty hospitals. A series of case studies on comprehensive cancer centres

    PubMed Central

    2010-01-01

    Background Benchmarking is one of the methods used in business that has been applied to hospitals to improve the management of their operations. International comparison between hospitals can explain performance differences. As there is a trend towards specialization of hospitals, this study examines the benchmarking process and the success factors of benchmarking in international specialized cancer centres. Methods Three independent international benchmarking studies on operations management in cancer centres were conducted. The first study included three comprehensive cancer centres (CCCs), the second involved three chemotherapy day units (CDUs), and the final study included four radiotherapy departments. For each multiple case study, a research protocol was used to structure the benchmarking process. After reviewing the multiple case studies, the resulting descriptions were used to address the research objectives. Results We adapted and evaluated existing benchmarking processes by formalizing stakeholder involvement and verifying the comparability of the partners. We also devised a framework to structure the indicators so as to produce a coherent indicator set and better improvement suggestions. Evaluating the feasibility of benchmarking as a tool to improve hospital processes led to mixed results. Case study 1 resulted in general recommendations for the organizations involved. In case study 2, the combination of benchmarking and lean management led in one CDU to a 24% increase in bed utilization and a 12% increase in productivity. The three radiotherapy departments of case study 3 were considering implementing the recommendations. Additionally, several success factors were identified: a well-defined and small project scope, partner selection based on clear criteria, stakeholder involvement, simple and well-structured indicators, analysis of both the process and its results, and adaptation of the identified better working methods to one's own setting. Conclusions The improved

  10. Decay Data Evaluation Project (DDEP): evaluation of the main 233Pa decay characteristics.

    PubMed

    Chechev, Valery P; Kuzmenko, Nikolay K

    2006-01-01

    The results of a decay data evaluation are presented for 233Pa (beta-) decay to nuclear levels in 233U. These evaluated data have been obtained within the Decay Data Evaluation Project using information published up to 2005.

  11. Decay Data Evaluation Project (DDEP): evaluation of the main 233Pa decay characteristics.

    PubMed

    Chechev, Valery P; Kuzmenko, Nikolay K

    2006-01-01

    The results of a decay data evaluation are presented for 233Pa (beta-) decay to nuclear levels in 233U. These evaluated data have been obtained within the Decay Data Evaluation Project using information published up to 2005. PMID:16574422

  12. Authentic e-Learning in a Multicultural Context: Virtual Benchmarking Cases from Five Countries

    ERIC Educational Resources Information Center

    Leppisaari, Irja; Herrington, Jan; Vainio, Leena; Im, Yeonwook

    2013-01-01

    The implementation of authentic learning elements at education institutions in five countries, eight online courses in total, is examined in this paper. The International Virtual Benchmarking Project (2009-2010) applied the elements of authentic learning developed by Herrington and Oliver (2000) as criteria to evaluate authenticity. Twelve…

  13. Radiation Detection Computational Benchmark Scenarios

    SciTech Connect

    Shaver, Mark W.; Casella, Andrew M.; Wittman, Richard S.; McDonald, Ben S.

    2013-09-24

    Modeling forms an important component of radiation detection development, allowing for testing of new detector designs, evaluation of existing equipment against a wide variety of potential threat sources, and assessment of the operational performance of radiation detection systems. This can, however, result in large and complex scenarios that are time-consuming to model. A variety of approaches to radiation transport modeling exist, with complementary strengths and weaknesses for different problems. This variety of approaches, and the development of promising new tools (such as ORNL’s ADVANTG) that combine the benefits of multiple approaches, illustrates the need for a means of evaluating or comparing different techniques for radiation detection problems. This report presents a set of 9 benchmark problems for comparing different types of radiation transport calculations, identifying appropriate tools for classes of problems, and testing and guiding the development of new methods. The benchmarks were drawn primarily from existing or previous calculations, with a preference for scenarios that include experimental data or otherwise have results with a high level of confidence, are non-sensitive, and represent problem sets of interest to NA-22. From a technical perspective, the benchmarks were chosen to span a range of difficulty, to include gamma transport, neutron transport, or both, and to represent different important physical processes and a range of sensitivity to angular or energy fidelity. Following benchmark identification, existing information about geometry, measurements, and previous calculations was assembled. Monte Carlo results (MCNP decks) were reviewed or created and re-run in order to attain accurate computational times and to verify agreement with experimental data, when present. Benchmark information was then conveyed to ORNL in order to guide testing and development of hybrid calculations. The results of those ADVANTG calculations were then sent to PNNL for

  14. Regional Interstate Planning Project Program . . . Vol. IX. California Program Evaluation Improvement Project. Seminar Report.

    ERIC Educational Resources Information Center

    Dearmin, Evalyn, Ed.; And Others

    Program evaluation strategies and techniques based on materials developed by the California Evaluation Improvement Project were discussed at this meeting of the Regional Interstate Planning Project (RIPP). RIPP members represent the State Departments of Education of ten western states, and have met periodically over the past nine years to discuss…

  15. Teacher Leadership Project 2001: Evaluation Report.

    ERIC Educational Resources Information Center

    Brown, Carol J.; Fouts, Jeffrey T.; Rojan, Amy

    The Teacher Leadership Project (TLP), funded by the Bill & Melinda Gates Foundation, is a program developed to assist teachers in their efforts to integrate technology into the school curriculum. The program also encourages and facilitates teachers in assuming leadership roles to help schools and districts develop and implement technology plans,…

  16. Teacher Leadership Project 2002: Evaluation Report.

    ERIC Educational Resources Information Center

    Brown, Carol J.; Rojan, Amy

    The Teacher Leadership Project (TLP) is a program developed to assist teachers in their efforts to integrate technology into the school curriculum. The program also encourages and facilitates teachers in assuming leadership roles to help schools and districts develop and implement technology plans, and to provide training in using technology.…

  17. Evaluation of Project HAPPIER Survey: Illinois.

    ERIC Educational Resources Information Center

    Haenn, Joseph F.

    As part of Project HAPPIER (Health Awareness Patterns Preventing Illnesses and Encouraging Responsibility), a survey was conducted among teachers and other migrant personnel in Illinois to assess the current health needs of migrants. The availability of educational materials was also investigated in the survey in order to ensure that a proposed…

  18. Implementing and Evaluating Online Service Learning Projects

    ERIC Educational Resources Information Center

    Helms, Marilyn M.; Rutti, Raina M.; Hervani, Aref Agahei; LaBonte, Joanne; Sarkarat, Sy

    2015-01-01

    As online learning proliferates, professors must adapt traditional projects for an asynchronous environment. Service learning is an effective teaching style fostering interactive learning through integration of classroom activities into communities. While prior studies have documented the appropriateness of service learning in online courses,…

  19. Learning with East Aurora Families. Project Evaluation.

    ERIC Educational Resources Information Center

    Bercovitz, Laura

    The Learning with East Aurora Families (LEAF) Project was a 1-year family literacy program developed and implemented by Waubonsee Community College in Sugar Grove, Illinois. It recruited 51 parents and other significant adults of 4- and 5-year-olds enrolled in at-risk programs. Each of the 4-week sessions were divided into 5 components: adult…

  20. Lawrence Livermore plutonium button critical experiment benchmark

    SciTech Connect

    Trumble, E.F.; Justice, J.B.; Frost, R.L.

    1994-12-31

    The end of the Cold War and the subsequent weapons reductions have led to an increased need for the safe storage of large amounts of highly enriched plutonium. In support of code validation required to address this need, a set of critical experiments involving arrays of weapons-grade plutonium metal that were performed at the Lawrence Livermore National Laboratory (LLNL) in the late 1960s has been revisited. Although these experiments are well documented, discrepancies and omissions have been found in the earlier reports. Many of these have been resolved in the current work, and these data have been compiled into benchmark descriptions. In addition, a computational verification has been performed on the benchmarks using multiple computer codes. These benchmark descriptions are also being made available to the US Department of Energy (DOE)-sponsored Nuclear Criticality Safety Benchmark Evaluation Working Group for dissemination in the DOE Handbook on Evaluated Criticality Safety Benchmark Experiments.

  1. Quality framework proposal for Component Material Evaluation (CME) projects.

    SciTech Connect

    Christensen, Naomi G.; Arfman, John F.; Limary, Siviengxay

    2008-09-01

    This report proposes the first stage of a Quality Framework approach that can be used to evaluate and document Component Material Evaluation (CME) projects. The first stage of the Quality Framework defines two tools for evaluating a CME project. The first tool is used to decompose a CME project into its essential elements, which can then be evaluated for inherent quality by examining the subelements that affect their level of quality maturity or rigor. Quality Readiness Levels (QRLs) are used to evaluate project elements for inherent quality. The Framework provides guidance to the Principal Investigator (PI) and stakeholders on CME project prerequisites that help ensure the proper level of confidence in the deliverable given its intended use. The Framework also provides a roadmap that defines when and how the Framework tools should be applied. Use of these tools allows the PI and stakeholders to understand which elements the project will use in its execution, the inherent quality of those elements, which of them are critical to the project and why, and the risks associated with the project's elements.

  2. How is success or failure in river restoration projects evaluated? Feedback from French restoration projects.

    PubMed

    Morandi, Bertrand; Piégay, Hervé; Lamouroux, Nicolas; Vaudor, Lise

    2014-05-01

    Since the 1990s, French operational managers and scientists have been involved in the environmental restoration of rivers. The European Water Framework Directive (2000) highlights the need for feedback from restoration projects and for evidence-based evaluation of success. Based on 44 French pilot projects that included such an evaluation, the present study includes: 1) an introduction to restoration projects based on their general characteristics 2) a description of evaluation strategies and authorities in charge of their implementation, and 3) a focus on the evaluation of results and the links between these results and evaluation strategies. The results show that: 1) the quality of an evaluation strategy often remains too poor to understand well the link between a restoration project and ecological changes; 2) in many cases, the conclusions drawn are contradictory, making it difficult to determine the success or failure of a restoration project; and 3) the projects with the poorest evaluation strategies generally have the most positive conclusions about the effects of restoration. Recommendations are that evaluation strategies should be designed early in the project planning process and be based on clearly-defined objectives.

  3. Evaluation of the concrete shield compositions from the 2010 criticality accident alarm system benchmark experiments at the CEA Valduc SILENE facility

    SciTech Connect

    Miller, Thomas Martin; Celik, Cihangir; Dunn, Michael E; Wagner, John C; McMahan, Kimberly L; Authier, Nicolas; Jacquet, Xavier; Rousseau, Guillaume; Wolff, Herve; Savanier, Laurence; Baclet, Nathalie; Lee, Yi-kang; Trama, Jean-Christophe; Masse, Veronique; Gagnier, Emmanuel; Naury, Sylvie; Blanc-Tranchant, Patrick; Hunter, Richard; Kim, Soon; Dulik, George Michael; Reynolds, Kevin H.

    2015-01-01

    In October 2010, a series of benchmark experiments were conducted at the French Commissariat a l'Energie Atomique et aux Energies Alternatives (CEA) Valduc SILENE facility. These experiments were a joint effort between the United States Department of Energy Nuclear Criticality Safety Program and the CEA. The purpose of these experiments was to create three benchmarks for the verification and validation of radiation transport codes and evaluated nuclear data used in the analysis of criticality accident alarm systems. This series of experiments consisted of three single-pulsed experiments with the SILENE reactor. For the first experiment, the reactor was bare (unshielded), whereas in the second and third experiments, it was shielded by lead and polyethylene, respectively. The polyethylene shield of the third experiment had a cadmium liner on its internal and external surfaces, which vertically was located near the fuel region of SILENE. During each experiment, several neutron activation foils and thermoluminescent dosimeters (TLDs) were placed around the reactor. Nearly half of the foils and TLDs had additional high-density magnetite concrete, high-density barite concrete, standard concrete, and/or BoroBond shields. CEA Saclay provided all the concrete, and the US Y-12 National Security Complex provided the BoroBond. Measurement data from the experiments were published at the 2011 International Conference on Nuclear Criticality (ICNC 2011) and the 2013 Nuclear Criticality Safety Division (NCSD 2013) topical meeting. Preliminary computational results for the first experiment were presented in the ICNC 2011 paper, which showed poor agreement between the computational results and the measured values of the foils shielded by concrete. Recently the hydrogen content, boron content, and density of these concrete shields were further investigated within the constraints of the previously available data. New computational results for the first experiment are now available that

  4. Symbolic manipulation and transport benchmarks

    SciTech Connect

    Ganapol, B.D.

    1986-01-01

    The establishment of reliable benchmark solutions is an integral part of the development of computational algorithms to solve the Boltzmann equation of particle motion. These solutions provide standards by which code developers can assess new numerical algorithms as well as ensure proper programming. A transport benchmark solution, as defined here, is the accurate numerical evaluation (3 to 5 digits) of an analytical solution to the transport equation. The basic elements of such a solution are an analytical representation free from discretization and a numerical evaluation for which an error estimate can be obtained. Symbolic manipulation software such as REDUCE, MACSYMA, and SMP can greatly aid in the generation of benchmark solutions. The benefit of these manipulators lies both in their ability to perform lengthy algebraic calculations and to write a code that can be incorporated directly into existing programs. Using two fundamental problems from particle transport theory, the author explores the advantages and limitations of the application of the REDUCE software package in generating time dependent benchmark solutions.
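
    In the same spirit as the REDUCE workflow described above, a modern symbolic manipulator can produce an analytical representation and then evaluate it to high precision. The sketch below uses Python's sympy on a textbook transport quantity (uncollided slab transmission, the exponential integral E2); it illustrates the workflow only and is not one of the author's two benchmark problems.

    ```python
    import sympy as sp

    t = sp.symbols('t', positive=True)

    # Uncollided transmission through a purely absorbing slab of optical
    # thickness t: E2(t) = integral_0^1 exp(-t/mu) dmu, an exponential
    # integral that sympy knows in closed form.
    E2 = sp.expint(2, t)

    # A benchmark solution pairs the analytical form with an accurate
    # numerical evaluation; evalf() gives arbitrary-precision digits.
    for thickness in (sp.Rational(1, 2), 1, 2):
        print(f"t = {thickness}:  E2 = {E2.subs(t, thickness).evalf(20)}")
    ```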

  5. JUPITER PROJECT - JOINT UNIVERSAL PARAMETER IDENTIFICATION AND EVALUATION OF RELIABILITY

    EPA Science Inventory

    The JUPITER (Joint Universal Parameter IdenTification and Evaluation of Reliability) project builds on the technology of two widely used codes for sensitivity analysis, data assessment, calibration, and uncertainty analysis of environmental models: PEST and UCODE.

  6. Evaluation of the Treatment of Diabetic Retinopathy A Research Project

    ERIC Educational Resources Information Center

    Kupfer, Carl

    1973-01-01

    Evaluated is the treatment of diabetic retinopathy (blindness due to ruptured vessels of the retina as a side effect of diabetes), and described is a research project comparing two types of photocoagulation treatment. (DB)

  7. Wisconsin Rural Reading Improvement Project 1987-1988. Evaluation Summary.

    ERIC Educational Resources Information Center

    Nowakowski, Jeri; And Others

    This evaluation summary synthesizes the results of the first year of the Wisconsin Rural Reading Improvement Project (WRRIP), a project aimed at helping small, rural schools improve reading instruction by teaching reading as thinking (also termed "strategic reading"). The means used is staff development: specifically, a leadership team composed of…

  8. Video-Based Reporting of Evaluation Results in Project SUCCESS

    ERIC Educational Resources Information Center

    Macy, Daniel J.; Wallace, Karla

    2007-01-01

    Project SUCCESS sought to recruit, train, and support paraprofessionals and mid-career adults in high-need teaching fields (math, science, special education, bilingual) in transitioning to teach in high-need schools. A 27-minute video was produced to supplement reporting of project evaluation outcomes. This paper highlights procedures and…

  9. Childhood Obesity Research Demonstration project: Cross-site evaluation method

    Technology Transfer Automated Retrieval System (TEKTRAN)

    The Childhood Obesity Research Demonstration (CORD) project links public health and primary care interventions in three projects described in detail in accompanying articles in this issue of Childhood Obesity. This article describes a comprehensive evaluation plan to determine the extent to which th...

  10. Project Closeout: Guidance for Final Evaluation of Building America Communities

    SciTech Connect

    Norton, P.; Burch, J.; Hendron, B.

    2008-03-01

    This report presents guidelines for Project Closeout, which is used to determine whether the Building America program is successfully facilitating improved design and practices to achieve energy savings goals in production homes. The objective is to use energy simulations, targeted utility bill analysis, and feedback from project stakeholders to evaluate the performance of occupied BA communities.

  11. Project Aprendizaje. Final Evaluation Report 1992-93.

    ERIC Educational Resources Information Center

    Clark, Andrew

    This report provides evaluative information regarding the effectiveness of Project Aprendizaje, a New York City program that served 269 Spanish-speaking students of limited English proficiency (LEP). The project promoted parent and community involvement by sponsoring cultural events, such as a large Latin American festival. Students developed…

  12. An Evaluation of Project Gifted 1971-1972.

    ERIC Educational Resources Information Center

    Renzulli, Joseph S.

    Evaluated was Project Gifted, a tri-city (Cranston, East Providence, and Warwick, Rhode Island) program which focused on the training of gifted children in grades 4-6 in the creative thinking process. Project goals were identification of gifted students, development of differential experiences, and development of innovative programs. Cranston's…

  13. Project Familia. Final Evaluation Report, 1992-93. OREA Report.

    ERIC Educational Resources Information Center

    Clarke, Candice

    Project Familia was an Elementary and Secondary Education Act Title VII funded project that, in the year covered by this evaluation, served 41 special education students of limited English proficiency (LEP) from 5 schools, with the participation of 54 parents and 33 siblings. Participating students received English language enrichment and…

  14. Evaluating Quality in Educational Spaces: OECD/CELE Pilot Project

    ERIC Educational Resources Information Center

    von Ahlefeld, Hannah

    2009-01-01

    CELE's International Pilot Project on Evaluating Quality in Educational Spaces aims to assist education authorities, schools and others to maximise the use of and investment in learning environments. This article provides an update on the pilot project, which is currently being implemented in Brazil, Mexico, New Zealand, Portugal and the United…

  15. Portland Public Schools Project Chrysalis: Year 2 Evaluation Report.

    ERIC Educational Resources Information Center

    Mitchell, Stephanie J.; Gabriel, Roy M.; Hahn, Karen J.; Laws, Katherine E.

    In 1994, the Chrysalis Project in Portland Public Schools received funding to prevent or delay the onset of substance abuse among a special target population: high-risk, female adolescents with a history of childhood abuse. Findings from the evaluation of the project's second year of providing assistance to these students are reported here. During…

  16. Criticality Benchmark Analysis of the HTTR Annular Startup Core Configurations

    SciTech Connect

    John D. Bess

    2009-11-01

    One of the high priority benchmarking activities for corroborating the Next Generation Nuclear Plant (NGNP) Project and Very High Temperature Reactor (VHTR) Program is evaluation of Japan's existing High Temperature Engineering Test Reactor (HTTR). The HTTR is a 30 MWt engineering test reactor utilizing graphite moderation, helium coolant, and prismatic TRISO fuel. A large amount of critical reactor physics data is available for validation efforts of High Temperature Gas-cooled Reactors (HTGRs). Previous international reactor physics benchmarking activities provided a collation of mixed results that inaccurately predicted actual experimental performance [1]. Reevaluations were performed by the Japanese to reduce the discrepancy between actual and computationally-determined critical configurations [2,3]. Current efforts at the Idaho National Laboratory (INL) involve development of reactor physics benchmark models in conjunction with the International Reactor Physics Experiment Evaluation Project (IRPhEP) for use with verification and validation methods in the VHTR Program. Annular cores demonstrate inherent safety characteristics that are of interest in developing future HTGRs.

  17. Evaluation of EUREKA Project, 1978-1979.

    ERIC Educational Resources Information Center

    Burke, Paul J., Ed.

    An evaluation for 1978-79 was conducted of EUREKA, a career information system in California. Personal visits were made to sixteen EUREKA sites throughout the state, accounting for over 75% of the high schools and agencies with active programs. Both the directors of the programs and counselors were interviewed for their reactions. It was found…

  18. Methodology of full-core Monte Carlo calculations with leakage parameter evaluations for benchmark critical experiment analysis

    NASA Astrophysics Data System (ADS)

    Sboev, A. G.; Ilyashenko, A. S.; Vetrova, O. A.

    1997-02-01

    The method of buckling evaluation, implemented in the Monte Carlo code MCS, is described. This method was applied to the calculational analysis of the well-known light-water experiments TRX-1 and TRX-2. The analysis shows no coincidence among the Monte Carlo results obtained in different ways: MCS calculations with the given experimental bucklings; MCS calculations with bucklings evaluated from full-core MCS direct simulations; full-core MCNP and MCS direct simulations; and MCNP and MCS calculations in which the results of cell calculations are corrected by coefficients that take into account the leakage from the core. Moreover, the buckling values evaluated by full-core MCS calculations differed from the experimental ones, especially in the case of TRX-1, where the difference corresponded to a 0.5 percent increase in the keff value.
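
    For context, the textbook one-group relation below (standard background, not a formula quoted from the paper) shows how an error in the evaluated buckling propagates into keff, which is why the reported buckling discrepancy matters:

    ```latex
    % Textbook one-group leakage relation (illustrative background):
    \[
      k_{\mathrm{eff}} = \frac{k_\infty}{1 + M^2 B^2},
      \qquad
      \frac{\Delta k_{\mathrm{eff}}}{k_{\mathrm{eff}}}
        \approx -\frac{M^2 \, \Delta B^2}{1 + M^2 B^2},
    \]
    % where k_infinity comes from the cell calculation, B^2 is the
    % buckling, and M^2 is the migration area; an error in the evaluated
    % buckling therefore shifts k_eff directly, consistent with the
    % 0.5 percent figure quoted above.
    ```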

  19. Evaluation on Collaborative Satisfaction for Project Management Team in Integrated Project Delivery Mode

    NASA Astrophysics Data System (ADS)

    Zhang, L.; Li, Y.; Wu, Q.

    2013-05-01

    Integrated Project Delivery (IPD) is a newly developed project delivery approach for construction projects, and the level of collaboration of the project management team is crucial to the success of its implementation. Existing research has shown that collaborative satisfaction is one of the key indicators of team collaboration. By reviewing the literature on team collaborative satisfaction and taking into consideration the characteristics of IPD projects, this paper summarizes the factors that influence the collaborative satisfaction of an IPD project management team. Based on these factors, this research develops a fuzzy linguistic method to effectively evaluate the level of team collaborative satisfaction, adopting 2-tuple linguistic variables and 2-tuple linguistic hybrid average operators to enhance the objectivity and accuracy of the evaluation. The paper demonstrates the practicality and effectiveness of the method through a case study.
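
    A minimal sketch of the 2-tuple linguistic representation this method builds on may help: an aggregated value on the term-index scale is kept as a (term, alpha) pair so that no information is lost to rounding. The term set, ratings, and weights below are invented for illustration and are not taken from the paper.

    ```python
    # Hedged sketch of 2-tuple linguistic aggregation; the term set,
    # ratings, and expert weights are invented for illustration.
    TERMS = ["very low", "low", "medium", "high", "very high"]  # s_0..s_4

    def to_two_tuple(beta):
        """Represent beta in [0, len(TERMS)-1] as (term, alpha)."""
        i = int(round(beta))
        return TERMS[i], round(beta - i, 3)   # alpha in [-0.5, 0.5)

    def linguistic_weighted_average(term_indices, weights):
        """2-tuple linguistic weighted average of ratings (term indices)."""
        beta = sum(w * i for i, w in zip(term_indices, weights)) / sum(weights)
        return to_two_tuple(beta)

    # Three raters judge collaborative satisfaction as high, medium, and
    # very high, with weights 0.5 / 0.3 / 0.2:
    print(linguistic_weighted_average([3, 2, 4], [0.5, 0.3, 0.2]))
    # -> ('high', -0.1), i.e. one tenth of a grade below "high"
    ```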

  20. Benchmarking monthly homogenization algorithms

    NASA Astrophysics Data System (ADS)

    Venema, V. K. C.; Mestre, O.; Aguilar, E.; Auer, I.; Guijarro, J. A.; Domonkos, P.; Vertacnik, G.; Szentimrey, T.; Stepanek, P.; Zahradnicek, P.; Viarre, J.; Müller-Westermeier, G.; Lakatos, M.; Williams, C. N.; Menne, M.; Lindau, R.; Rasol, D.; Rustemeier, E.; Kolokythas, K.; Marinova, T.; Andresen, L.; Acquaotta, F.; Fratianni, S.; Cheval, S.; Klancar, M.; Brunetti, M.; Gruber, C.; Prohom Duran, M.; Likso, T.; Esteban, P.; Brandsma, T.

    2011-08-01

    The COST (European Cooperation in Science and Technology) Action ES0601: Advances in homogenization methods of climate series: an integrated approach (HOME) has executed a blind intercomparison and validation study for monthly homogenization algorithms. Time series of monthly temperature and precipitation were evaluated because of their importance for climate studies and because they represent two important types of statistics (additive and multiplicative). The algorithms were validated against a realistic benchmark dataset. The benchmark contains real inhomogeneous data as well as simulated data with inserted inhomogeneities. Random break-type inhomogeneities were added to the simulated datasets modeled as a Poisson process with normally distributed breakpoint sizes. To approximate real world conditions, breaks were introduced that occur simultaneously in multiple station series within a simulated network of station data. The simulated time series also contained outliers, missing data periods and local station trends. Further, a stochastic nonlinear global (network-wide) trend was added. Participants provided 25 separate homogenized contributions as part of the blind study as well as 22 additional solutions submitted after the details of the imposed inhomogeneities were revealed. These homogenized datasets were assessed by a number of performance metrics including (i) the centered root mean square error relative to the true homogeneous value at various averaging scales, (ii) the error in linear trend estimates and (iii) traditional contingency skill scores. The metrics were computed both using the individual station series as well as the network average regional series. The performance of the contributions depends significantly on the error metric considered. Contingency scores by themselves are not very informative. Although relative homogenization algorithms typically improve the homogeneity of temperature data, only the best ones improve precipitation data
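
    Two of the performance metrics named above are easy to state concretely. The sketch below computes the centered root mean square error and the linear-trend error for one homogenized series against its true counterpart; the series are synthetic stand-ins, not HOME benchmark data.

    ```python
    import numpy as np

    # Hedged sketch of two HOME-style performance metrics applied to one
    # homogenized series; the series below are synthetic stand-ins.
    rng = np.random.default_rng(1)
    years = np.arange(1900, 2000)
    truth = 0.01 * (years - years[0]) + rng.normal(0.0, 0.3, years.size)
    homogenized = truth + rng.normal(0.0, 0.1, years.size)  # residual errors

    def decadal_trend(series):
        return np.polyfit(years, series, 1)[0] * 10  # units per decade

    # (i) centered root mean square error at the station scale
    crmse = np.sqrt(np.mean(((homogenized - homogenized.mean())
                             - (truth - truth.mean())) ** 2))

    # (ii) error in the linear trend estimate
    trend_error = decadal_trend(homogenized) - decadal_trend(truth)

    print(f"CRMSE = {crmse:.3f}, trend error = {trend_error:+.4f} per decade")
    ```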

  1. Benchmarking short sequence mapping tools

    PubMed Central

    2013-01-01

    Background The development of next-generation sequencing instruments has led to the generation of millions of short sequences in a single run. The process of aligning these reads to a reference genome is time consuming and demands the development of fast and accurate alignment tools. However, currently available tools make different compromises between the accuracy and the speed of mapping. Moreover, many important aspects are overlooked when the performance of a newly developed tool is compared to the state of the art. Therefore, there is a need for an objective evaluation method that covers all of these aspects. In this work, we introduce a benchmarking suite to extensively analyze sequencing tools with respect to various aspects and provide an objective comparison. Results We applied our benchmarking tests to 9 well-known mapping tools, namely Bowtie, Bowtie2, BWA, SOAP2, MAQ, RMAP, GSNAP, Novoalign, and mrsFAST (mrFAST), using synthetic data and real RNA-Seq data. MAQ and RMAP are based on building hash tables for the reads, whereas the remaining tools are based on indexing the reference genome. The benchmarking tests reveal the strengths and weaknesses of each tool. The results show that no single tool outperforms all others in all metrics. However, Bowtie maintained the best throughput for most of the tests, while BWA performed better for longer read lengths. The benchmarking tests are not restricted to the mentioned tools and can be further applied to others. Conclusion The mapping process is still a hard problem that is affected by many factors. In this work, we provided a benchmarking suite that reveals and evaluates the different factors affecting the mapping process. Still, there is no tool that outperforms all of the others in all the tests. Therefore, end users should clearly specify their needs in order to choose the tool that provides the best results. PMID:23758764
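
    On synthetic data the true origin of every read is known, which is what makes an objective accuracy comparison possible. A minimal sketch of that scoring step follows; the read names, positions, and tolerance are invented for illustration.

    ```python
    # Hedged sketch: score a mapper's reported positions against the known
    # true origins of synthetic reads. All data below are illustrative.

    def mapping_accuracy(true_positions, reported, tolerance=5):
        """Fraction of reads mapped within `tolerance` bp of their true origin."""
        correct = sum(
            1 for read, (chrom, pos) in reported.items()
            if read in true_positions
            and true_positions[read][0] == chrom
            and abs(true_positions[read][1] - pos) <= tolerance
        )
        return correct / len(true_positions)

    truth = {"read1": ("chr1", 1000), "read2": ("chr2", 5000), "read3": ("chr1", 42)}
    mapper_output = {"read1": ("chr1", 1002), "read2": ("chr2", 5800)}
    print(mapping_accuracy(truth, mapper_output))  # 1 of 3 within 5 bp -> 0.333...
    ```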

  2. Rasch Model Analysis on the Effectiveness of Early Evaluation Questions as a Benchmark for New Students Ability

    ERIC Educational Resources Information Center

    Arsad, Norhana; Kamal, Noorfazila; Ayob, Afida; Sarbani, Nizaroyani; Tsuey, Chong Sheau; Misran, Norbahiah; Husain, Hafizah

    2013-01-01

    This paper discusses the effectiveness of the early evaluation questions conducted to determine the academic ability of the new students in the Department of Electrical, Electronics and Systems Engineering. Questions designed are knowledge based--on what the students have learned during their pre-university level. The results show students have…

  3. A portfolio evaluation framework for air transportation improvement projects

    NASA Astrophysics Data System (ADS)

    Baik, Hyeoncheol

    This thesis explores the application of portfolio theory to Air Transportation System (ATS) improvement. The ATS relies on complexly related resources and different stakeholder groups. Moreover, demand for air travel is increasing significantly relative to the capacity of air transportation. In this environment, improving the ATS is challenging. Many improvement projects, defined here as technologies or initiatives, have been proposed, and some have been demonstrated in practice. However, there is no clear understanding of how well these projects work in different conditions, nor of how they interact with each other or with existing systems. These limitations make it difficult to develop good project combinations, or portfolios, that maximize improvement. To help address this gap, a framework for identifying good portfolios is proposed. The framework can be applied to individual projects or portfolios of projects. Projects or portfolios are evaluated using four different groups of factors (effectiveness, time-to-implement, scope of applicability, and stakeholder impacts). Portfolios are also evaluated in terms of interaction-determining factors (prerequisites, co-requisites, limiting factors, and amplifying factors) because, while a given project might work well in isolation, interdependencies between projects or with existing systems could result in lower overall performance in combination. Ways to communicate a portfolio to decision makers are also introduced. The framework is unique because (1) it allows the use of a variety of available data, and (2) it covers diverse benefit metrics. To demonstrate the framework, an application to ground delay management projects serves as a case study. The portfolio evaluation approach introduced in this thesis can aid decision makers and researchers at universities and aviation agencies such as the Federal Aviation Administration (FAA), National Aeronautics and Space Administration (NASA), and Department of Defense (DoD), in

  4. PRISMATIC CORE COUPLED TRANSIENT BENCHMARK

    SciTech Connect

    J. Ortensi; M.A. Pope; G. Strydom; R.S. Sen; M.D. DeHart; H.D. Gougar; C. Ellis; A. Baxter; V. Seker; T.J. Downar; K. Vierow; K. Ivanov

    2011-06-01

    The Prismatic Modular Reactor (PMR) is one of the High Temperature Reactor (HTR) design concepts that have existed for some time. Several prismatic units have operated in the world (DRAGON, Fort St. Vrain, Peach Bottom) and one unit is still in operation (HTTR). The deterministic neutronics and thermal-fluids transient analysis tools and methods currently available for the design and analysis of PMRs have lagged behind the state of the art available for LWR technologies. This has motivated the development of more accurate and efficient tools for the design and safety evaluations of the PMR. In addition to the work invested in new methods, it is essential to develop appropriate benchmarks to verify and validate the new methods in computer codes. The purpose of this benchmark is to establish a well-defined problem, based on a common given set of data, to compare methods and tools in core simulation and thermal hydraulics analysis with a specific focus on transient events. The benchmark working group is currently seeking OECD/NEA sponsorship. This benchmark is being pursued and is heavily based on the success of the PBMR-400 exercise.

  5. Healthy city projects in developing countries: the first evaluation.

    PubMed

    Harpham, T; Burton, S; Blue, I

    2001-06-01

    The 'healthy city' concept has only recently been adopted in developing countries. From 1995 to 1999, the World Health Organization (WHO), Geneva, supported healthy city projects (HCPs) in Cox's Bazar (Bangladesh), Dar es Salaam (Tanzania), Fayoum (Egypt), Managua (Nicaragua) and Quetta (Pakistan). The authors evaluated four of these projects, representing the first major evaluation of HCPs in developing countries. Methods used were stakeholder analysis, workshops, document analysis and interviews with 102 managers/implementers and 103 intended beneficiaries. Municipal health plan development (one of the main components of the healthy city strategy) in these cities was limited, which is a similar finding to evaluations of HCPs in Europe. The main activities selected by the projects were awareness raising and environmental improvements, particularly solid waste disposal. Two of the cities effectively used the 'settings' approach of the healthy city concept, whereby places such as markets and schools are targeted. The evaluation found that stakeholder involvement varied in relation to: (i) the level of knowledge of the project; (ii) the project office location; (iii) the project management structure; and (iv) type of activities (ranging from low stakeholder involvement in capital-intensive infrastructure projects, to high in some settings-type activities). There was evidence to suggest that understanding of environment-health links was increased across stakeholders. There was limited political commitment to the healthy city projects, perhaps due to the fact that most of the municipalities had not requested the projects. Consequently, the projects had little influence on written/expressed municipal policies. Some of the projects mobilized considerable resources, and most projects achieved effective intersectoral collaboration. WHO support enabled the project coordinators to network at national and international levels, and the capacity of these individuals (although

  6. Preview: Evaluation of the 1973-1974 Bilingual/Bicultural Project. Formative Evaluation Report.

    ERIC Educational Resources Information Center

    Ligon, Glynn; And Others

    The formative report provided the Austin Independent School District personnel with information useful for planning the remaining activities for the 1973-74 Bilingual/Bicultural Project and the activities for the 1974-75 Project. Emphasis was on what had been done to evaluate the 1973-74 Project, the data which was or would be available for the…

  7. Test Nationally, Benchmark Locally: Using Local DIBELS Benchmarks to Predict Performance on the Pssa

    ERIC Educational Resources Information Center

    Ferchalk, Matthew R.

    2013-01-01

    The Dynamic Indicators of Basic Early Literacy Skills (DIBELS) benchmarks are frequently used to make important decision regarding student performance. More information, however, is needed to understand if the nationally-derived benchmarks created by the DIBELS system provide the most accurate criterion for evaluating reading proficiency. The…

  8. Helical Screw Expander Evaluation Project. Final report

    SciTech Connect

    McKay, R.

    1982-03-01

    A functional 1-MW geothermal electric power plant that featured a helical screw expander was produced and then tested in Utah in 1978 to 1979 with a demonstrated average performance of approximately 45% machine efficiency over a wide range of test conditions in noncondensing operation on two-phase geothermal fluids. The Project also produced a computer-equipped data system, an instrumentation and control van, and a 1000-kW variable load bank, all integrated into a test array designed for operation at a variety of remote test sites. Additional testing was performed in Mexico in 1980 under a cooperative test program using the same test array, and machine efficiency was measured at 62% maximum with the rotors partially coated with scale, compared with approximately 54% maximum in Utah with uncoated rotors, confirming the importance of scale deposits within the machine on performance. Data are presented for the Utah testing and for the noncondensing phases of the testing in Mexico. Test time logged was 437 hours during the Utah tests and 1101 hours during the Mexico tests.

  9. The ASCD Healthy School Communities Project: Formative Evaluation Results

    ERIC Educational Resources Information Center

    Valois, Robert F.; Lewallen, Theresa C.; Slade, Sean; Tasco, Adriane N.

    2015-01-01

    Purpose: The purpose of this paper is to report the formative evaluation results from the Association for Supervision and Curriculum Development Healthy School Communities (HSC) pilot project. Design/methodology/approach: This study utilized 11 HSC pilot sites in the USA (eight sites) and Canada (three sites). The evaluation question was…

  10. Corrections Education Evaluation System Project. Site Visit Report.

    ERIC Educational Resources Information Center

    Nelson, Orville; And Others

    Site visits to five correctional institutions in Wisconsin were conducted as part of the development of an evaluation model for the competency-based vocational education (CBVE) project for the Wisconsin Correctional System. The evaluators' perceptions of the CBVE system are presented with recommendations for improvement. Site visits were conducted…

  11. Evaluating Injury Prevention Programs: The Oklahoma City Smoke Alarm Project.

    ERIC Educational Resources Information Center

    Mallonee, Sue

    2000-01-01

    Illustrates how evaluating the Oklahoma City Smoke Alarm Project increased its success in reducing residential fire-related injuries and deaths. The program distributed and tested smoke alarms in residential dwellings and offered educational materials on fire prevention and safety. Evaluation provided sound data on program processes and outcomes,…

  12. Summative Evaluation of the Manukau Family Literacy Project, 2004

    ERIC Educational Resources Information Center

    Benseman, John Robert; Sutton, Alison Joy

    2005-01-01

    This report covers a summative evaluation of a family literacy project in Auckland, New Zealand. The evaluation covered 70 adults and their children over a two year period. Outcomes for the program included literacy skill gains for both adults and children, increased levels of self-confidence and self-efficacy, greater parental involvement in…

  13. Major Factors Influencing HIV/AIDS Project Evaluation

    ERIC Educational Resources Information Center

    Niba, Mercy Bi; Green, J. Maryann

    2005-01-01

    This article aimed at finding out if participatory processes (group discussions, enactments, and others) do make a valuable contribution in communication-based project implementation/evaluation and the fight against HIV/AIDS. A case study backed by documentary analysis of evaluation reports and occasional insights from interviews stood as the main…

  14. Benchmarking Tool Kit.

    ERIC Educational Resources Information Center

    Canadian Health Libraries Association.

    Nine Canadian health libraries participated in a pilot test of the Benchmarking Tool Kit between January and April, 1998. Although the Tool Kit was designed specifically for health libraries, the content and approach are useful to other types of libraries as well. Used to its full potential, benchmarking can provide a common measuring stick to…

  15. A framework for benchmarking land models

    SciTech Connect

    Luo, Yiqi; Randerson, James T.; Hoffman, Forrest; Norby, Richard J

    2012-01-01

    Land models, which have been developed by the modeling community in the past few decades to predict future states of ecosystems and climate, have to be critically evaluated for their skill in simulating ecosystem responses and feedback to climate change. Benchmarking is an emerging procedure to measure the performance of models against a set of defined standards. This paper proposes a benchmarking framework for evaluating land model performance and highlights major challenges at this early stage of benchmark analysis. The framework includes (1) targeted aspects of model performance to be evaluated, (2) a set of benchmarks as defined references to test model performance, (3) metrics to measure and compare performance skills among models so as to identify model strengths and deficiencies, and (4) model improvement. Land models are required to simulate the exchange of water, energy, carbon and sometimes other trace gases between the atmosphere and land surface, and should be evaluated for their simulations of biophysical processes, biogeochemical cycles, and vegetation dynamics in response to climate change across broad temporal and spatial scales. Thus, one major challenge is to select and define a limited number of benchmarks to effectively evaluate land model performance. The second challenge is to develop metrics for measuring mismatches between models and benchmarks. The metrics may include (1) a priori thresholds of acceptable model performance and (2) a scoring system to combine data-model mismatches for various processes at different temporal and spatial scales. The benchmark analyses should identify clues of weak model performance to guide future development, thus enabling improved predictions of future states of ecosystems and climate. The near-future research effort should be on development of a set of widely acceptable benchmarks that can be used to objectively, effectively, and reliably evaluate fundamental properties of land models
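
    As a concrete illustration of the scoring-system challenge described above, the sketch below combines normalized data-model mismatches for several simulated quantities into a single weighted skill score. The variables, weights, and score definition are invented assumptions, not part of the proposed framework.

    ```python
    import numpy as np

    # Hedged sketch of a scoring system that combines data-model
    # mismatches across processes; all names and weights are illustrative.
    def skill_score(model, benchmark):
        """Map normalized RMSE to a 0-1 skill score (1 = perfect match)."""
        nrmse = np.sqrt(np.mean((model - benchmark) ** 2)) / np.std(benchmark)
        return np.exp(-nrmse)

    rng = np.random.default_rng(2)
    benchmarks = {                          # reference data, monthly values
        "GPP": rng.normal(5, 1, 120),
        "latent_heat": rng.normal(80, 15, 120),
        "soil_moisture": rng.normal(0.3, 0.05, 120),
    }
    weights = {"GPP": 0.5, "latent_heat": 0.3, "soil_moisture": 0.2}

    # Stand-in model output: benchmark plus simulated model error.
    model_out = {k: v + rng.normal(0, v.std() * 0.5, v.size)
                 for k, v in benchmarks.items()}

    overall = sum(weights[k] * skill_score(model_out[k], benchmarks[k])
                  for k in benchmarks)
    print(f"overall skill: {overall:.2f} (1 = matches benchmarks exactly)")
    ```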

  16. Southern Regional Education Board Faculty Evaluation Project: Final Evaluation Report.

    ERIC Educational Resources Information Center

    Wergin, Jon F.; And Others

    A summary is presented of an intensive assessment of the impact of a two-year effort to assist 30 colleges and universities to improve their faculty evaluation procedures. The Southern Regional Education Board (SREB), supported by a grant from the Fund for the Improvement of Postsecondary Education, worked closely with teams of faculty and…

  17. Verification and validation benchmarks.

    SciTech Connect

    Oberkampf, William Louis; Trucano, Timothy Guy

    2007-02-01

    Verification and validation (V&V) are the primary means to assess the accuracy and reliability of computational simulations. V&V methods and procedures have fundamentally improved the credibility of simulations in several high-consequence fields, such as nuclear reactor safety, underground nuclear waste storage, and nuclear weapon safety. Although the terminology is not uniform across engineering disciplines, code verification deals with assessing the reliability of the software coding, and solution verification deals with assessing the numerical accuracy of the solution to a computational model. Validation addresses the physics modeling accuracy of a computational simulation by comparing the computational results with experimental data. Code verification benchmarks and validation benchmarks have been constructed for a number of years in every field of computational simulation. However, no comprehensive guidelines have been proposed for the construction and use of V&V benchmarks. For example, the field of nuclear reactor safety has not focused on code verification benchmarks, but it has placed great emphasis on developing validation benchmarks. Many of these validation benchmarks are closely related to the operations of actual reactors at near-safety-critical conditions, as opposed to being more fundamental-physics benchmarks. This paper presents recommendations for the effective design and use of code verification benchmarks based on manufactured solutions, classical analytical solutions, and highly accurate numerical solutions. In addition, this paper presents recommendations for the design and use of validation benchmarks, highlighting the careful design of building-block experiments, the estimation of experimental measurement uncertainty for both inputs and outputs to the code, validation metrics, and the role of model calibration in validation. It is argued that the understanding of predictive capability of a computational model is built on the level of

  18. Restaurant Energy Use Benchmarking Guideline

    SciTech Connect

    Hedrick, R.; Smith, V.; Field, K.

    2011-07-01

    A significant operational challenge for food service operators is defining energy use benchmark metrics to compare against the performance of individual stores. Without metrics, multiunit operators and managers have difficulty identifying which stores in their portfolios require extra attention to bring their energy performance in line with expectations. This report presents a method whereby multiunit operators may use their own utility data to create suitable metrics for evaluating their operations.
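
    The method's premise, deriving benchmark metrics from an operator's own utility data, can be illustrated in a few lines. The store records, the normalization by sales, and the flagging threshold below are all invented for illustration and are not the report's actual procedure.

    ```python
    # Hedged sketch: build a simple intensity metric from a chain's own
    # utility data and flag outlier stores. All records are illustrative.
    stores = [
        {"id": "A", "kwh_year": 410_000, "sales_k": 1_900},
        {"id": "B", "kwh_year": 530_000, "sales_k": 2_100},
        {"id": "C", "kwh_year": 620_000, "sales_k": 1_800},
    ]

    for s in stores:
        s["eui"] = s["kwh_year"] / s["sales_k"]   # kWh per $1k of sales

    median_eui = sorted(s["eui"] for s in stores)[len(stores) // 2]
    for s in stores:
        flag = "review" if s["eui"] > 1.2 * median_eui else "ok"
        print(f"store {s['id']}: {s['eui']:.0f} kWh/$1k sales -> {flag}")
    ```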

  19. Final report : PATTON Alliance gazetteer evaluation project.

    SciTech Connect

    Bleakly, Denise Rae

    2007-08-01

    In 2005 the National Ground Intelligence Center (NGIC) proposed that the PATTON Alliance provide assistance in evaluating and obtaining the Integrated Gazetteer Database (IGDB), developed for the Naval Space Warfare Command research group (SPAWAR) under Advanced Research and Development Activity (ARDA) funding by MITRE Inc. and fielded in the text-based search tool GeoLocator, currently in use by NGIC. We met with the developers of GeoLocator and identified their requirements for a better gazetteer. We then validated those requirements by reviewing the technical literature, meeting with other members of the intelligence community (IC), and talking with both the United States Geological Survey (USGS) and the National Geospatial-Intelligence Agency (NGA), the authoritative sources for official geographic name information. We thus identified 12 high-level requirements from users and the broader intelligence community. The IGDB satisfies many of these requirements; we identified gaps and proposed ways of closing them. Three important needs have not been addressed, however, and are critical for the broader intelligence community: standardization of gazetteer data; a web feature service for gazetteer information that is maintained by NGA and USGS but accessible to users; and a common forum that brings together IC stakeholders and federal agency representatives to provide input to these activities over the next several years. Establishing a robust gazetteer web feature service that is available to all IC users may go a long way toward resolving the gazetteer needs within the IC. Without a common forum to provide input and feedback, community adoption may take significantly longer than anticipated, with resulting risks to the war fighter.

  20. [Al2O4](-), a Benchmark Gas-Phase Class II Mixed-Valence Radical Anion for the Evaluation of Quantum-Chemical Methods.

    PubMed

    Kaupp, Martin; Karton, Amir; Bischoff, Florian A

    2016-08-01

    The radical anion [Al2O4](-) has been identified as a rare example of a small gas-phase mixed-valence system with partially localized, weakly coupled class II character in the Robin/Day classification. It exhibits a low-lying C2v minimum with one terminal oxyl radical ligand and a high-lying D2h minimum at about 70 kJ/mol relative energy with predominantly bridge-localized-hole character. Two identical C2v minima and the D2h minimum are connected by two C2v-symmetrical transition states, which are only ca. 6-10 kJ/mol above the D2h local minimum. The small size of the system and the absence of environmental effects has for the first time enabled the computation of accurate ab initio benchmark energies, at the CCSDT(Q)/CBS level using W3-F12 theory, for a class-II mixed-valence system. These energies have been used to evaluate wave function-based methods [CCSD(T), CCSD, SCS-MP2, MP2, UHF] and density functionals ranging from semilocal (e.g., BLYP, PBE, M06L, M11L, N12) via global hybrids (B3LYP, PBE0, BLYP35, BMK, M06, M062X, M06HF, PW6B95) and range-separated hybrids (CAM-B3LYP, ωB97, ωB97X-D, LC-BLYP, LC-ωPBE, M11, N12SX), the B2PLYP double hybrid, and some local hybrid functionals. Global hybrids with about 35-43% exact-exchange (EXX) admixture (e.g., BLYP35, BMK), several range hybrids (CAM-B3LYP, ωB97X-D, ω-B97), and a local hybrid provide good to excellent agreement with benchmark energetics. In contrast, too low EXX admixture leads to an incorrect delocalized class III picture, while too large EXX overlocalizes and gives too large energy differences. These results provide support for previous method choices for mixed-valence systems in solution and for the treatment of oxyl defect sites in alumosilicates and SiO2. Vibrational gas-phase spectra at various computational levels have been compared directly to experiment and to CCSD(T)/aug-cc-pV(T+d)Z data. PMID:27434425

  1. Benchmarking without ground truth

    NASA Astrophysics Data System (ADS)

    Santini, Simone

    2006-01-01

    Many evaluation techniques for content-based image retrieval are based on the availability of a ground truth, that is, on a "correct" categorization of images so that, say, if the query image is of category A, only the returned images in category A will be considered as "hits." Based on such a ground truth, standard information retrieval measures such as precision and recall are defined and used to evaluate and compare retrieval algorithms. Coherently, the assemblers of benchmarking databases go to a certain length to have their images categorized. The assumption of the existence of a ground truth is, in many respects, naive. It is well known that the categorization of images depends on the a priori (from the point of view of such categorization) subdivision of the semantic field in which the images are placed (a trivial observation: a plant subdivision for a botanist is very different from that for a layperson). Even within a given semantic field, however, categorization by human subjects is subject to uncertainty, and it makes little statistical sense to consider the categorization given by one person as the unassailable ground truth. In this paper I propose two evaluation techniques that apply to the case in which the ground truth is subject to uncertainty. In this case, obviously, measures such as precision and recall will themselves be subject to uncertainty. The paper explores the relation between the uncertainty in the ground truth and that in the most commonly used evaluation measures, so that the measurements made on a given system can preserve statistical significance.
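
    One way to make the paper's point concrete: if each returned image only has a probability of being judged relevant, precision becomes a distribution rather than a number. The sketch below propagates invented label probabilities through Monte Carlo sampling; it illustrates the general idea, not the author's two specific techniques.

    ```python
    import random

    # Hedged sketch: precision under label uncertainty. For each returned
    # image, p is the (invented) probability that a human judge would call
    # it relevant; disagreement between judges means p is not 0 or 1.
    random.seed(0)
    p_relevant = [0.9, 0.8, 0.95, 0.4, 0.6, 0.3, 0.85, 0.5, 0.7, 0.2]

    def sampled_precision():
        hits = sum(random.random() < p for p in p_relevant)
        return hits / len(p_relevant)

    samples = [sampled_precision() for _ in range(10000)]
    mean = sum(samples) / len(samples)
    var = sum((s - mean) ** 2 for s in samples) / len(samples)
    print(f"precision = {mean:.3f} +/- {var ** 0.5:.3f}")
    ```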

  2. Childhood Obesity Research Demonstration Project: Cross-Site Evaluation Methods

    PubMed Central

    Lee, Rebecca E.; Mehta, Paras; Thompson, Debbe; Bhargava, Alok; Carlson, Coleen; Kao, Dennis; Layne, Charles S.; Ledoux, Tracey; O'Connor, Teresia; Rifai, Hanadi; Gulley, Lauren; Hallett, Allen M.; Kudia, Ousswa; Joseph, Sitara; Modelska, Maria; Ortega, Dana; Parker, Nathan; Stevens, Andria

    2015-01-01

    Abstract Introduction: The Childhood Obesity Research Demonstration (CORD) project links public health and primary care interventions in three projects described in detail in accompanying articles in this issue of Childhood Obesity. This article describes a comprehensive evaluation plan to determine the extent to which the CORD model is associated with changes in behavior, body weight, BMI, quality of life, and healthcare satisfaction in children 2–12 years of age. Design/Methods: The CORD Evaluation Center (EC-CORD) will analyze the pooled data from three independent demonstration projects that each integrate public health and primary care childhood obesity interventions. An extensive set of common measures at the family, facility, and community levels were defined by consensus among the CORD projects and EC-CORD. Process evaluation will assess reach, dose delivered, and fidelity of intervention components. Impact evaluation will use a mixed linear models approach to account for heterogeneity among project-site populations and interventions. Sustainability evaluation will assess the potential for replicability, continuation of benefits beyond the funding period, institutionalization of the intervention activities, and community capacity to support ongoing program delivery. Finally, cost analyses will assess how much benefit can potentially be gained per dollar invested in programs based on the CORD model. Conclusions: The keys to combining and analyzing data across multiple projects include the CORD model framework and common measures for the behavioral and health outcomes along with important covariates at the individual, setting, and community levels. The overall objective of the comprehensive evaluation will develop evidence-based recommendations for replicating and disseminating community-wide, integrated public health and primary care programs based on the CORD model. PMID:25679060
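
    The mixed linear models approach mentioned for the impact evaluation can be sketched briefly. The example below fits a random intercept per site with fixed effects for time and intervention using statsmodels; the data are synthetic and every variable name is an illustrative assumption, not one of CORD's actual measures.

    ```python
    import numpy as np
    import pandas as pd
    import statsmodels.formula.api as smf

    # Hedged sketch: random intercepts by site absorb heterogeneity among
    # site populations; the time-by-treatment term carries the effect of
    # interest. All data and names below are synthetic stand-ins.
    rng = np.random.default_rng(3)
    n = 1000
    sites = [f"site{i}" for i in range(10)]
    df = pd.DataFrame({
        "site": rng.choice(sites, n),
        "time": rng.choice([0.0, 1.0], n),      # baseline vs. follow-up
        "treated": rng.choice([0, 1], n),       # received the intervention
    })
    site_shift = {s: rng.normal(0, 0.2) for s in sites}
    df["bmi_z"] = (0.8 + df["site"].map(site_shift)
                   - 0.15 * df["time"] * df["treated"]   # true effect
                   + rng.normal(0, 0.5, n))

    model = smf.mixedlm("bmi_z ~ time * treated", df, groups=df["site"])
    print(model.fit().summary())
    ```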

  3. Collected notes from the Benchmarks and Metrics Workshop

    NASA Technical Reports Server (NTRS)

    Drummond, Mark E.; Kaelbling, Leslie P.; Rosenschein, Stanley J.

    1991-01-01

    In recent years there has been a proliferation of proposals in the artificial intelligence (AI) literature for integrated agent architectures. Each architecture offers an approach to the general problem of constructing an integrated agent. Unfortunately, the ways in which one architecture might be considered better than another are not always clear. There has been a growing realization that many of the positive and negative aspects of an architecture become apparent only when experimental evaluation is performed and that to progress as a discipline, we must develop rigorous experimental methods. In addition to the intrinsic intellectual interest of experimentation, rigorous performance evaluation of systems is also a crucial practical concern to our research sponsors. DARPA, NASA, and AFOSR (among others) are actively searching for better ways of experimentally evaluating alternative approaches to building intelligent agents. One tool for experimental evaluation involves testing systems on benchmark tasks in order to assess their relative performance. As part of a joint DARPA and NASA funded project, NASA-Ames and Teleos Research are carrying out a research effort to establish a set of benchmark tasks and evaluation metrics by which the performance of agent architectures may be determined. As part of this project, we held a workshop on Benchmarks and Metrics at the NASA Ames Research Center on June 25, 1990. The objective of the workshop was to foster early discussion on this important topic. We did not achieve a consensus, nor did we expect to. Collected here is some of the information that was exchanged at the workshop. Given here is an outline of the workshop, a list of the participants, notes taken on the white-board during open discussions, position papers/notes from some participants, and copies of slides used in the presentations.

  4. TWODANT benchmark. Progress report

    SciTech Connect

    Lee, Sung

    1994-01-11

    The TWODANT (Two-Dimensional, Diffusion-Accelerated, Neutral-Particle Transport) code has been benchmarked against six critical experiments (the Jezebel plutonium critical assembly), and the resulting k-effective values were compared with those of the KENO and MCNP codes.
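
    Code-to-code comparisons of this kind are commonly reported as deviations in pcm from the expected k-effective of a critical assembly. The sketch below shows that arithmetic; the numbers are invented placeholders, not results from this report.

      # Invented values for illustration; not results from the TWODANT study.
      expected = 1.0000                     # critical assembly: k-eff = 1 by design
      keff = {"TWODANT": 0.9987, "KENO": 1.0004, "MCNP": 0.9995}

      for code, k in keff.items():
          pcm = (k - expected) / expected * 1e5   # deviation in pcm
          print(f"{code}: k-eff = {k:.4f} ({pcm:+.0f} pcm)")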

  5. Benchmarking for strategic action.

    PubMed

    Jennings, K; Westfall, F

    1992-01-01

    By focusing on three key elements--customer expectations, competitor strengths and vulnerabilities, and organizational competencies--a company's benchmarking effort can be designed to drive the strategic planning process.

  6. Decay data evaluation project (DDEP): updated evaluations of the 233Th and 241Am decay characteristics.

    PubMed

    Chechev, Valery P; Kuzmenko, Nikolay K

    2010-01-01

    The results of new decay data evaluations are presented for (233)Th (beta(-)) decay to nuclear levels in (233)Pa and (241)Am (alpha) decay to nuclear levels in (237)Np. These evaluated data have been obtained within the Decay Data Evaluation Project using information published up to 2009.

  7. Decay Data Evaluation Project (DDEP): evaluation of the main 243Cm and 245Cm decay characteristics.

    PubMed

    Chechev, Valery P

    2012-09-01

    The results of new decay data evaluations are presented for (243)Cm (α) decay to nuclear levels in (239)Pu and (245)Cm (α) decay to nuclear levels in (241)Pu. These evaluated data have been obtained within the Decay Data Evaluation Project using information published up to 2011.

  8. An Economic Evaluation Framework for Assessing Renewable Energy Projects

    SciTech Connect

    Omitaomu, Olufemi A; Badiru, Adedeji B

    2012-01-01

    It is becoming increasingly imperative to integrate renewable energy, such as solar and wind, into electricity generation due to increased regulations on air and water pollution and a sociopolitical desire to develop more clean energy sources. This increased spotlight on renewable energy requires evaluating competing projects using either conventional economic analysis techniques or other economics-based models and approaches in order to select a subset of the projects to be funded. Even then, there is reason to suspect that such techniques, applied to renewable energy projects, may reject viable projects because they rely on only a limited number of quantifiable, tangible attributes. This paper presents a framework for economic evaluation of renewable energy projects. The framework is based on a systems approach in which the processes within the entire network of the system, from generation to consumption, are accounted for. Furthermore, the framework uses fuzzy-systems concepts to calculate the value of information under conditions of uncertainty.
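
    As a toy illustration of the fuzzy-systems idea, the sketch below scores one hypothetical project attribute with a triangular membership function; the attribute ("capacity factor") and its range are assumptions for illustration, not elements of the framework itself.

      # Toy fuzzy membership; the attribute and its range are hypothetical.
      def triangular(x, a, b, c):
          """Membership in a fuzzy set rising from a, peaking at b, falling to c."""
          if x <= a or x >= c:
              return 0.0
          return (x - a) / (b - a) if x <= b else (c - x) / (c - b)

      # Degree to which a project's capacity factor counts as "high".
      for cf in (0.25, 0.35, 0.45):
          print(f"capacity factor {cf}: membership {triangular(cf, 0.2, 0.35, 0.5):.2f}")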

  9. Technology Education in South Africa: Evaluating an Innovative Pilot Project

    NASA Astrophysics Data System (ADS)

    Stables, Kay; Kimbell, Richard

    2001-02-01

    Researchers from Goldsmiths College were asked to undertake an evaluation of a three-year curriculum initiative introducing technology education through a learner-centred, problem-solving, and collaborative approach. The program was developed in a group of high schools in the North West Province of South Africa. We visited ten schools involved in the project and ten parallel schools not involved, which acted as a control group. We collected data on student capability (demonstrated through an innovative test activity) and on student attitudes towards technology (demonstrated in evaluation questionnaires and in semi-structured interviews). Collectively the data indicate that in areas of knowledge and skill and in certain aspects of procedures (most notably problem solving) the project has had a marked impact. We also illustrate that greater consideration could have been given in the project to developing skills in generating and developing ideas and in graphic communication. Gender differences are noted, particularly in terms of positive attitudes illustrated by both boys and girls from schools involved in the project. Attention is drawn to the critical impact the project has had on transforming the pedagogy of the teachers from a teacher-centred didactic model to a learner-centred, problem-solving model. Some wider implications of the successes of this project are debated.

  10. Benchmark problems and solutions

    NASA Technical Reports Server (NTRS)

    Tam, Christopher K. W.

    1995-01-01

    The scientific committee, after careful consideration, adopted six categories of benchmark problems for the workshop. These problems do not cover all the important computational issues relevant to Computational Aeroacoustics (CAA). The deciding factor to limit the number of categories to six was the amount of effort needed to solve these problems. For reference purposes, the benchmark problems are provided here. They are followed by the exact or approximate analytical solutions. At present, an exact solution for the Category 6 problem is not available.

  11. Developing integrated benchmarks for DOE performance measurement

    SciTech Connect

    Barancik, J.I.; Kramer, C.F.; Thode, Jr. H.C.

    1992-09-30

    The objectives of this task were to describe and evaluate selected existing sources of information on occupational safety and health, with emphasis on hazard and exposure assessment, abatement, training, reporting, and control, identifying exposure and outcome measures in preparation for developing DOE performance benchmarks. Existing resources and methodologies were assessed for their potential use as practical performance benchmarks. Strengths and limitations of current data resources were identified. Guidelines were outlined for developing new or improved performance factors, which then could become the basis for selecting performance benchmarks. Data bases for non-DOE comparison populations were identified so that DOE performance could be assessed relative to non-DOE occupational and industrial groups. Systems approaches were described which can be used to link hazards and exposure, event occurrence, and adverse outcome factors, as needed to generate valid, reliable, and predictive performance benchmarks. Data bases were identified which contain information relevant to one or more performance assessment categories. A list of 72 potential performance benchmarks was prepared to illustrate the kinds of information that can be produced through a benchmark development program. Current information resources which may be used to develop potential performance benchmarks are limited. There is a need to develop an occupational safety and health information and data system in DOE which is capable of incorporating demonstrated and documented performance benchmarks prior to, or concurrent with, the development of hardware and software. A key to the success of this systems approach is rigorous development and demonstration of performance benchmark equivalents to users of such data before system hardware and software commitments are institutionalized.

  12. Instruments and Scoring Guide of the Experiential Education Evaluation Project.

    ERIC Educational Resources Information Center

    Conrad, Dan; Hedin, Diane

    As a result of the Experiential Education Evaluation Project, the publication identifies instruments used to measure and assess experiential learning programs. The following information is given for each instrument: the rationale for its inclusion in the study; the precise issues or outcomes it was designed to measure; validity and reliability data; and…

  13. Developing and Evaluating a Cardiovascular Risk Reduction Project.

    ERIC Educational Resources Information Center

    Brownson, Ross C.; Mayer, Jeffrey P.; Dusseault, Patricia; Dabney, Sue; Wright, Kathleen; Jackson-Thompson, Jeannette; Malone, Bernard; Goodman, Robert

    1997-01-01

    Describes the development and baseline evaluation data from the Ozark Heart Health Project, a community-based cardiovascular disease risk reduction program in rural Missouri that targeted smoking, physical inactivity, and poor diet. Several Ozark counties participated in either intervention or control groups, and researchers conducted surveillance…

  14. Collaborative Partnerships and School Change: Evaluating Project SOBEIT

    ERIC Educational Resources Information Center

    Lacey, Candace H.

    2006-01-01

    This presentation will report on the findings of the evaluation of Project SOBEIT, a multi-school initiative focused on building partnerships between schools, law enforcement, and community mental health agencies. Guided by a process, context, outcomes, and sustainability framework and grounded in the understanding of the impact of change theory on…

  15. Portland Peers Project. 1989-91 Final Evaluation Report.

    ERIC Educational Resources Information Center

    Mitchell, Stephanie

    This evaluation report describes a program designed to reduce substance abuse among students by establishing a comprehensive peer program in the middle schools (grades 6 through 8). The background of the project is reviewed, five important aspects of a peer helper program are listed, and three intervention strategies of peer assistance programs…

  16. Evaluation of Project TREC: Teaching Respect for Every Culture.

    ERIC Educational Resources Information Center

    Mitchell, Stephanie

    The purpose of Teaching Respect for Every Culture (TREC) was to ensure that racial/ethnic, gender, disability, and other circumstances did not bar student access to alcohol/drug education, prevention, and intervention services. This report describes the implementation and evaluation of the TREC Project. Five objectives of TREC were to: (1)…

  17. Human Relations Training for Educators. Final Evaluation. Project Upper Cumberland.

    ERIC Educational Resources Information Center

    Khanna, J. L.

    Project Upper Cumberland was a three year endeavor which served 16 Tennessee counties. The final report and evaluation, in three documents, summarizes the three innovative programs which it engendered: (1) teacher inservice training, emphasizing human relations; (2) a pilot cultural arts program (art, music, drama) for grades 1-12; and (3) a pilot…

  18. Service Learning in Medical Education: Project Description and Evaluation

    ERIC Educational Resources Information Center

    Borges, Nicole J.; Hartung, Paul J.

    2007-01-01

    Although medical education has long recognized the importance of community service, most medical schools have not formally or fully incorporated service learning into their curricula. To address this problem, we describe the initial design, development, implementation, and evaluation of a service-learning project within a first-year medical…

  19. Project Achieve Evaluation Report: Year One, 2001-2002.

    ERIC Educational Resources Information Center

    Speas, Carol

    This report is an evaluation of the pilot year of Project Achieve, a major local instructional initiative at six elementary schools and two middle schools in the Wake County Public School System (WCPSS), North Carolina, that was designed to help reach the WCPSS goal of 95% of students at or above grade level. Participating schools had a higher…

  20. Niagara Falls HEW 309 Project 1974-1975: Evaluation Report.

    ERIC Educational Resources Information Center

    Skeen, Elois M.

    The document reports an outside evaluation of a Niagara Falls Adult Basic Education Program special project entitled "Identification of Preferred Cognitive Styles and Matching Adult Reading Program Alternatives for the 0-4 Grade Levels." It was concerned with (1) research, training in cognitive style mapping, and development of a survey and…

  1. Evaluation of the Universal Design for Learning Projects

    ERIC Educational Resources Information Center

    Cooper-Martin, Elizabeth; Wolanin, Natalie

    2014-01-01

    The Office of Shared Accountability evaluated the "Universal Design for Learning" (UDL) projects during spring 2013. UDL is an instructional framework that seeks to give all students equal opportunities to learn, by providing multiple means of representation, of action and expression, and of engagement. To inform future implementation…

  2. Project "Freestyle": Ad Hoc: Fast-Turn-Around Evaluation.

    ERIC Educational Resources Information Center

    Smith, Karen

    Project "Freestyle" involved the development of prototypical television materials and a comic book intended to combat sex-role stereotyping in career-related attitudes of nine to twelve-year-old children. At various times during the early developmental stages of "Freestyle" materials, "ad hoc fast-turn-around" formative evaluations were conducted.…

  3. Process and Outcome: Evaluation of the Sexual Abuse Treatment Project.

    ERIC Educational Resources Information Center

    Love, Arnold J.

    1989-01-01

    Assesses the feasibility and effectiveness of the Sexual Abuse Treatment Project used in a child welfare setting in Canada. Also evaluates the therapeutic process, which was based on an intensive psychodynamic model, and assesses its effectiveness for child and adult clients. (RJC)

  4. 43 CFR 10005.20 - Project evaluation procedures.

    Code of Federal Regulations, 2011 CFR

    2011-10-01

    43 CFR 10005.20, Project evaluation procedures. Public Lands: Interior, Regulations Relating to Public Lands (Continued): UTAH RECLAMATION MITIGATION AND CONSERVATION COMMISSION, POLICIES AND PROCEDURES FOR DEVELOPING AND IMPLEMENTING THE...

  5. 43 CFR 10005.20 - Project evaluation procedures.

    Code of Federal Regulations, 2012 CFR

    2012-10-01

    43 CFR 10005.20, Project evaluation procedures. Public Lands: Interior, Regulations Relating to Public Lands (Continued): UTAH RECLAMATION MITIGATION AND CONSERVATION COMMISSION, POLICIES AND PROCEDURES FOR DEVELOPING AND IMPLEMENTING THE...

  6. 43 CFR 10005.20 - Project evaluation procedures.

    Code of Federal Regulations, 2013 CFR

    2013-10-01

    43 CFR 10005.20, Project evaluation procedures. Public Lands: Interior, Regulations Relating to Public Lands (Continued): UTAH RECLAMATION MITIGATION AND CONSERVATION COMMISSION, POLICIES AND PROCEDURES FOR DEVELOPING AND IMPLEMENTING THE...

  7. Evaluation of the Attendant Care Pilot Project. Final Report.

    ERIC Educational Resources Information Center

    Clark, Anne; Faragher, Jean

    An Attendant Care Pilot Project, administered by the Home Care Service of New South Wales, Australia, and providing attendant care for 24 adults with permanent, severe physical disabilities for 2 years, was evaluated. The patients were medically stable and intellectually capable of managing their own affairs; all had impairments which required…

  8. Westside Area Career Occupations Project. Evaluation Report 1975-76.

    ERIC Educational Resources Information Center

    Glur, John

    Evaluation of the Westside Area Career Occupations Project (WACOP) focused on (1) examining what aspects of the Arizona career education effort had the most significant impact on students, and (2) measuring specific outcomes related to the students' knowledge about the world of work, using the Arizona Careers Test. System implementation and…

  9. Vocational Education Evaluation Project: Annual Report--Fiscal Year 1973.

    ERIC Educational Resources Information Center

    Oliver, J. Dale; And Others

    The primary objective of the Vocational Education Evaluation Project (VEEP) is to develop a management information system for the planning and programming of vocational education. The work has been divided into a macro-system (primarily concerned with guidelines and systematic procedures at the State level) and a micro-system (emphasizing the…

  10. Evaluation of Fatih Project in the Frame of Digital Divide

    ERIC Educational Resources Information Center

    Karabacak, Kerim

    2016-01-01

    The aim of this research, carried out using a general survey model, is to evaluate the "FATIH Project" in the frame of the digital divide by determining the effects of tablets distributed to students educated at K-12 schools on the digital divide. The sample was drawn from 9th grade students in Sakarya city in the 2013-2014 academic session.…

  11. Evaluation of East Tennessee's Child Health and Development Project.

    ERIC Educational Resources Information Center

    Banta, Trudy W.; And Others

    The Child Health and Development Project (CHDP), a home-based early intervention program operated in six East Tennessee counties, provides well-child clinics, developmental evaluation, individualized early childhood education for disadvantaged children, and training in parenting skills for their parents. The University of Tennessee's Bureau of…

  12. The KMAT: Benchmarking Knowledge Management.

    ERIC Educational Resources Information Center

    de Jager, Martha

    Provides an overview of knowledge management and benchmarking, including the benefits and methods of benchmarking (e.g., competitive, cooperative, collaborative, and internal benchmarking). Arthur Andersen's KMAT (Knowledge Management Assessment Tool) is described. The KMAT is a collaborative benchmarking tool, designed to help organizations make…

  13. Decay Data Evaluation Project: Evaluation of (52)Fe nuclear decay data.

    PubMed

    Luca, Aurelian

    2016-03-01

    Within the Decay Data Evaluation Project (DDEP) and the IAEA Coordinated Research Project no. F41029, the evaluation of the nuclear decay data of (52)Fe, a radionuclide of interest in nuclear medicine, was performed. The main nuclear decay data evaluated are: the half-life, decay energy, energies and probabilities of the electron capture and β(+) transitions, internal conversion coefficients and gamma-ray energies and emission intensities. This new evaluation, made using the DDEP methodology and tools, was included in the DDEP database NUCLEIDE.
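
    A worked example of the arithmetic linking an evaluated half-life to the decay constant and activity is given below; the numeric half-life is a placeholder for illustration, not the evaluated DDEP value.

      import math

      half_life_s = 8.3 * 3600                 # placeholder half-life in seconds
      decay_const = math.log(2) / half_life_s  # lambda = ln(2) / T_1/2
      n_atoms = 1.0e15                         # hypothetical number of atoms
      print(f"activity = {decay_const * n_atoms:.3e} Bq")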

  14. An evaluation approach for research project pilot technological applications

    NASA Astrophysics Data System (ADS)

    Marcelino-Jesus, Elsa; Sarraipa, Joao; Jardim-Goncalves, Ricardo

    2013-10-01

    In an increasingly competitive and constantly developing and growing world, it is important that companies have economic tools, such as frameworks, to help them evaluate and validate technology developments so that they better fit each company's particular needs. The paper presents an evaluation approach for research project pilot applications to stimulate their implementation and deployment, increasing their adequacy and acceptance among stakeholders and consequently providing new business profit and opportunities. The authors used the DECIDE evaluation framework as a major guide for this approach, which was tested in the iSURF project to support the implementation of an interoperability service utility for collaborative supply chain planning across multiple domains supported by RFID devices.

  15. Analysis and Development of a Project Evaluation Process.

    SciTech Connect

    Coutant, Charles C.; Cada Glenn F.

    1985-01-01

    The Bonneville Power Administration has responsibility, assigned by the Pacific Northwest Electric Power Planning and Conservation Act of 1980 (Public Law 96-501; 16 USC 839), for implementing the Columbia River Basin Fish and Wildlife Program of the Northwest Power Planning Council. One aspect of this responsibility is evaluation of project proposals and ongoing and completed projects. This report recommends formalized procedures for conducting this work in an accurate, professional, and widely respected manner. Recommendations and justifications are based largely on interviews with federal and state agencies and Indian tribes in the Northwest and nationally. Organizations were selected that have evaluation systems of their own, interact with the Fish and Wildlife Program, or have similar objectives or obligations. Perspectives on aspects to be considered were obtained from the social science of evaluation planning. Examples of procedures and quantitative criteria are proposed. 1 figure, 2 tables.

  16. How to Conduct Rigorous Evaluations of Mathematics and Science Partnerships (MSP) Projects: A User-Friendly Guide for MSP Project Officials and Evaluators

    ERIC Educational Resources Information Center

    Coalition for Evidence-Based Policy, 2005

    2005-01-01

    The purpose of this Guide is to provide Mathematics and Science Partnership (MSP) project officials and evaluators with clear, practical advice on how to conduct rigorous evaluations of MSP projects at low cost. Specifically, this is a how-to Guide designed to enable MSP grantees and evaluators of MSP projects to answer questions about the…

  17. A Competitive Benchmarking Study of Noncredit Program Administration.

    ERIC Educational Resources Information Center

    Alstete, Jeffrey W.

    1996-01-01

    A benchmarking project to measure administrative processes and financial ratios received 57 usable replies from 300 noncredit continuing education programs. Programs with strong financial surpluses were identified and their processes benchmarked (including response to inquiries, registrants, registrant/staff ratio, new courses, class size,…

  18. An Evaluation of the Project STAR Reading Program Intervention (State Technical Assistance Resources Project). Volume II: Evaluation Final Report.

    ERIC Educational Resources Information Center

    Holowenzak, Stephen P.

    This is the second of three volumes that constitute the final evaluation report of Project STAR (State Technical Assistance Resources), an undertaking of the Maryland State Department of Education designed to help 36 elementary schools improve their reading programs. This volume is divided into five parts. The first part contains a discussion of…

  19. Benchmarking of Graphite Reflected Critical Assemblies of UO2

    SciTech Connect

    Margaret A. Marshall; John D. Bess

    2011-11-01

    A series of experiments was carried out in 1963 at the Oak Ridge National Laboratory Critical Experiments Facility (ORCEF) for use in space reactor research programs. A core containing 93.2% enriched UO2 fuel rods was used in these experiments. The first part of the experimental series consisted of 253 tightly-packed fuel rods (1.27 cm triangular pitch) with graphite reflectors [1], the second part used 253 graphite-reflected fuel rods organized in a 1.506 cm triangular pitch [2], and the final part of the experimental series consisted of 253 beryllium-reflected fuel rods with a 1.506 cm triangular pitch [3]. Fission rate distribution and cadmium ratio measurements were taken for all three parts of the experimental series. Reactivity coefficient measurements were taken for various materials placed in the beryllium-reflected core. The first part of this experimental series has been evaluated for inclusion in the International Reactor Physics Experiment Evaluation Project (IRPhEP) [4] and the International Criticality Safety Benchmark Evaluation Project (ICSBEP) Handbooks [5], and is discussed below. These experiments are of interest as benchmarks because they support the validation of compact reactor designs with characteristics similar to the design parameters of space nuclear fission surface power systems. [6]

  20. Area recommendation report for the crystalline repository project: An evaluation. [Crystalline Repository Project

    SciTech Connect

    Beck, J E; Lowe, H; Yurkovich, S P

    1986-03-28

    An evaluation is given of DOE's recommendation of the Elk River complex in North Carolina for siting the second repository. Twelve recommendations are made, including a strong suggestion that the Cherokee Tribe appeal, through both political and legal avenues, for inclusion as an affected area, primarily due to projected impacts upon economy and public health as a consequence of the potential for reduced tourism.

  1. Benchmarking NNWSI flow and transport codes: COVE 1 results

    SciTech Connect

    Hayden, N.K.

    1985-06-01

    The code verification (COVE) activity of the Nevada Nuclear Waste Storage Investigations (NNWSI) Project is the first step in certification of flow and transport codes used for NNWSI performance assessments of a geologic repository for disposing of high-level radioactive wastes. The goals of the COVE activity are (1) to demonstrate and compare the numerical accuracy and sensitivity of certain codes, (2) to identify and resolve problems in running typical NNWSI performance assessment calculations, and (3) to evaluate computer requirements for running the codes. This report describes the work done for COVE 1, the first step in benchmarking some of the codes. Isothermal calculations for the COVE 1 benchmarking have been completed using the hydrologic flow codes SAGUARO, TRUST, and GWVIP; the radionuclide transport codes FEMTRAN and TRUMP; and the coupled flow and transport code TRACR3D. This report presents the results of three cases of the benchmarking problem solved for COVE 1, a comparison of the results, questions raised regarding sensitivities to modeling techniques, and conclusions drawn regarding the status and numerical sensitivities of the codes. 30 refs.

  2. A Systems Approach to the Development of an Evaluation System for ESEA Title III Projects.

    ERIC Educational Resources Information Center

    Yost, Marlen; Monnin, Frank J.

    A major activity of any ESEA Title III project is evaluation. This paper suggests evaluation methods especially appropriate to such projects by applying a systems approach to the evaluation design. Evaluation as a system is divided into three subsystems: (1) baseline evaluation, which describes conditions as they exist before project treatment;…

  3. Benchmarking in healthcare: selecting and working with partners.

    PubMed

    Benson, H R

    1995-01-01

    The process of selecting a benchmarking partner begins with gathering information to establish industry standards, identifying potential partners and supplying data on the subject to be benchmarked. Suggested sources of information are business and trade publications; investment industry analysts; journalists; trade associations and professional organizations; government research reports; disclosure documents; current and former employees; and product and service providers. Potential partners should be approached only after careful preparation of a project plan that includes information about the benchmarking team's organization and purpose, description of the subject and a statement of benefits for the prospective partner. After obtaining a commitment from the benchmarking partner, relevant comparative data is gathered and analyzed, using some of the following methods: library research, questionnaires, telephone surveys, site visits and consultants. Because benchmarking often involves sharing information with competitors, a code of ethical conduct has been developed by the International Benchmarking Clearinghouse.

  4. Evaluating Statewide Priorities. Improving Community College Evaluation and Planning: Project Working Paper Number Nine.

    ERIC Educational Resources Information Center

    California Community Colleges, Sacramento. Office of the Chancellor.

    One of a series of papers resulting from a Fund for the Improvement of Postsecondary Education (FIPSE) project to improve planning and evaluation in community colleges, this working paper is intended for use by 20 community colleges in California undergoing accreditation self-studies during 1982-83, who were asked to evaluate their performance…

  5. Object-adapted inverse pattern projection: generation, evaluation, and applications

    NASA Astrophysics Data System (ADS)

    Bothe, Thorsten; Li, Wansong; von Kopylow, Christoph; Juptner, Werner P.

    2003-05-01

    Fast and robust 3D quality control, as well as fast deformation measurement, is of particular importance for industrial inspection. Additionally, direct feedback about measured properties is desired. Therefore, robust optical techniques are needed which use as few images as possible per measurement and visualize results efficiently. One promising technique for this aim is inverse pattern projection, which has the following advantages: the technique codes the information of a preceding measurement into the projected inverse pattern, so it is possible to do differential measurements using only one camera frame for each state, and the results are optimized straight fringes for sampling that are independent of the object curvature. The ability to use any image for inverse projection enables augmented-reality uses, i.e., any property can be visualized directly on the object's surface, which makes inspections easier than with a separate indicating device. The hardware needs are low, as just a programmable projector and a standard camera are necessary. The basic idea of inverse pattern projection, the necessary algorithms, and the optimizations found are demonstrated briefly. Evaluation techniques were found that preserve a high-quality phase measurement under imperfect conditions. The different application fields can be sorted by the type of pattern used for inverse projection. We select two main topics for presentation. One is incremental (one image per state) deformation measurement, a promising technique for high-speed deformation measurement; a video series of a wavering flag with projected inverse pattern was evaluated to show the complete deformation series. The other application is optical feature marking (augmented reality), which allows any measured result to be mapped directly onto the object under investigation. The general ability to straighten any kind of information on 3D surfaces is shown while preserving an exact

  6. Maximizing the Impact of the NASA Innovations in Climate Education (NICE) Project: Building a Community of Project Evaluators, Collaborating Across Agencies & Evaluating a 71-Project Portfolio

    NASA Astrophysics Data System (ADS)

    Martin, A. M.; Chambers, L. H.; Pippin, M. R.; Spruill, K.

    2012-12-01

    The NASA Innovations in Climate Education (NICE) project at Langley Research Center in Hampton, VA, has funded 71 climate education initiatives since 2008. An evaluator was added to the team in mid-2011 to undertake an evaluation of the portfolio. The funded initiatives span across the nation and contribute to the development of a climate-literate public and the preparation of a climate-related STEM workforce through research experiences, professional development opportunities, development of data access and modeling tools, and educational opportunities in both K-12 and higher education. The portfolio of projects also represents a wide range of evaluation questions, approaches, and methodologies. The evaluation of the NICE portfolio has encountered context-specific challenges, including the breadth of the portfolio, the need to build up capacity for electronic project monitoring, and government-wide initiatives to align evaluations across Federal agencies. Additionally, we have contended with the difficulties of maintaining compliance with the Paperwork Reduction Act (PRA), which constrains the ability of NICE to gather data and approach interesting evaluative questions. We will discuss these challenges and our approaches to overcoming them. First, we have committed to fostering communication and partnerships among our awardees and evaluators, facilitating the sharing of expertise, resources, lessons learned and practices across the individual project evaluations. Additionally, NICE has worked in collaboration with NOAA's Environmental Literacy Grants (ELG) and NSF's Climate Change Education Partnerships (CCEP) programs to foster synergy, leverage resources, and facilitate communication. NICE projects, and their evaluators, have had the opportunity to work with and benefit from colleagues on projects funded by other agencies, and to orient their work within the context of the broader tri-agency goals

  7. A unified evaluation of iterative projection algorithms for phase retrieval

    SciTech Connect

    Marchesini, S

    2006-03-08

    Iterative projection algorithms are successfully being used as a substitute of lenses to recombine, numerically rather than optically, light scattered by illuminated objects. Images obtained computationally allow aberration-free diffraction-limited imaging and allow new types of imaging using radiation for which no lenses exist. The challenge of this imaging technique is transferred from the lenses to the algorithms. We evaluate these new computational "instruments" developed for the phase retrieval problem, and discuss acceleration strategies.
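
    One member of this algorithm family, error reduction, can be sketched in a few lines of numpy: it alternates projections between the measured Fourier magnitudes and a real-space support constraint. This is a didactic sketch of the general technique, not the paper's implementation.

      # Error-reduction sketch: alternate Fourier-magnitude and support projections.
      import numpy as np

      def error_reduction(magnitudes, support, n_iter=200, seed=0):
          rng = np.random.default_rng(seed)
          phase = rng.uniform(0, 2 * np.pi, magnitudes.shape)
          g = np.fft.ifft2(magnitudes * np.exp(1j * phase))   # random starting guess
          for _ in range(n_iter):
              G = np.fft.fft2(g)
              G = magnitudes * np.exp(1j * np.angle(G))        # impose measured magnitudes
              g = np.fft.ifft2(G)
              g = np.where(support, np.maximum(g.real, 0.0), 0.0)  # impose support, positivity
          return g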

  8. New Fe-56 Evaluation for the CIELO project

    SciTech Connect

    Nobre, G P; Herman, Micheal W; Brown, D A; Capote, R.; Leal, Luiz C; Plompen, A.; Danon, Y.; Qian, Jing; Ge, Zhigang; Liu, Tingjin; Lu, Hnalin; Ruan, Xichao

    2016-01-01

    The Collaborative International Evaluated Library Organisation (CIELO) aims to provide revised and updated evaluations for Pu-239, U-238, U-235, Fe-56, O-16, and H-1 through international collaboration. This work, part of the CIELO project, presents initial results for the evaluation of the Fe-56 isotope, with incident neutron energies ranging from 0 to 20 MeV. The Fe-56(n,p) cross sections were fitted to reproduce those in the IRDFF dosimetry file. Our preliminary file provides good cross-section agreement for the main angle-integrated reactions, as well as reasonable overall agreement for angular distributions and double-differential spectra, when compared to previous evaluations.

  9. Model-Based Engineering and Manufacturing CAD/CAM Benchmark.

    SciTech Connect

    Domm, T.C.; Underwood, R.S.

    1999-10-13

    The Benchmark Project was created from a desire to identify best practices and improve the overall efficiency and performance of the Y-12 Plant's systems and personnel supporting the manufacturing mission. The mission of the benchmark team was to search out industry leaders in manufacturing and evaluate their engineering practices and processes to determine direction and focus for Y-12 modernization efforts. The companies visited included several large established companies and a new, small, high-tech machining firm. As a result of this effort, changes are recommended that will enable Y-12 to become a more modern, responsive, cost-effective manufacturing facility capable of supporting the needs of the Nuclear Weapons Complex (NWC) into the 21st century. The benchmark team identified key areas of interest, both focused and general. The focus areas included Human Resources, Information Management, Manufacturing Software Tools, and Standards/Policies and Practices. Areas of general interest included Infrastructure, Computer Platforms and Networking, and Organizational Structure. The results of this benchmark showed that all companies are moving in the direction of model-based engineering and manufacturing. There was evidence that many companies are trying to grasp how to manage current and legacy data. In terms of engineering design software tools, the companies contacted were somewhere between 3-D solid modeling and surfaced wire-frame models. The manufacturing computer tools were varied, with most companies using more than one software product to generate machining data and none currently performing model-based manufacturing (MBM) from a common model. The majority of companies were closer to identifying or using a single computer-aided design (CAD) system than a single computer-aided manufacturing (CAM) system. All companies were looking to the Internet either to transport information more easily throughout the corporation or as a conduit for

  10. Surveys and Benchmarks

    ERIC Educational Resources Information Center

    Bers, Trudy

    2012-01-01

    Surveys and benchmarks continue to grow in importance for community colleges in response to several factors. One is the press for accountability, that is, for colleges to report the outcomes of their programs and services to demonstrate their quality and prudent use of resources, primarily to external constituents and governing boards at the state…

  11. Python/Lua Benchmarks

    SciTech Connect

    Busby, L.

    2014-08-01

    This is an adaptation of the pre-existing Scimark benchmark code to a variety of Python and Lua implementations. It also measures performance of the Fparser expression parser and C and C++ code on a variety of simple scientific expressions.
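
    The flavor of such micro-benchmarks can be illustrated with Python's standard timeit module; the expression below is a stand-in kernel for illustration, not one of the Scimark kernels.

      import timeit

      stmt = "sum(i * i for i in range(10_000))"   # stand-in kernel
      seconds = timeit.timeit(stmt, number=1_000)
      print(f"{seconds / 1_000 * 1e6:.1f} microseconds per iteration")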

  12. Benchmarking the World's Best

    ERIC Educational Resources Information Center

    Tucker, Marc S.

    2012-01-01

    A century ago, the United States was a world leader in industrial benchmarking. However, after World War II, once no one could compete with the U.S., it became complacent. Many industrialized countries now have higher student achievement and more equitable and efficient education systems. A higher proportion of young people in their workforces…

  13. Monte Carlo Benchmark

    2010-10-20

    The "Monte Carlo Benchmark" (MCB) is intended to model the computatiional performance of Monte Carlo algorithms on parallel architectures. It models the solution of a simple heuristic transport equation using a Monte Carlo technique. The MCB employs typical features of Monte Carlo algorithms such as particle creation, particle tracking, tallying particle information, and particle destruction. Particles are also traded among processors using MPI calls.

  14. Comparison of five benchmarks

    SciTech Connect

    Huss, J. E.; Pennline, J. A.

    1987-02-01

    Five benchmark programs were obtained and run on the NASA Lewis CRAY X-MP/24. A comparison was made between the programs' codes and between the methods for calculating performance figures. Several multitasking jobs were run to gain experience in how parallel performance is measured.

  15. Benchmarks: WICHE Region 2012

    ERIC Educational Resources Information Center

    Western Interstate Commission for Higher Education, 2013

    2013-01-01

    Benchmarks: WICHE Region 2012 presents information on the West's progress in improving access to, success in, and financing of higher education. The information is updated annually to monitor change over time and encourage its use as a tool for informed discussion in policy and education communities. To establish a general context for the…

  16. Evaluating injury prevention programs: the Oklahoma City Smoke Alarm Project.

    PubMed

    Mallonee, S

    2000-01-01

    Evaluation of injury prevention programs is critical for measuring program effects on reducing injury-related morbidity and mortality or on increasing the adoption of safety practices. During the planning and implementation of injury prevention programs, evaluation data also can be used to test program strategies and to measure the program's penetration among the target population. The availability of this early data enables program managers to refine a program, increasing the likelihood of successful outcomes. The Oklahoma City Smoke Alarm Project illustrates how an evaluation was designed to inform program decisions by providing methodologically sound data on program processes and outcomes. This community intervention trial was instituted to reduce residential fire-related injuries and deaths in a geographic area of Oklahoma City that was disproportionately affected by this problem. The distribution of free smoke alarms in targeted neighborhoods was accompanied by written educational pamphlets and home-based follow-up to test whether the alarms were functioning correctly. Early evaluation during the planning and implementation phases of the program allowed for midcourse corrections that increased the program's impact on desired outcomes. During the six years following the project, the residential fire-related injury rate decreased 81% in the target population but only 7% in the rest of Oklahoma City. This dramatic decline in fire-related injuries in the target area is largely attributed to the free smoke alarm distribution as well as to educational efforts promoting awareness about residential fires and their prevention. PMID:10911692

  17. Summary of monitoring station component evaluation project 2009-2011.

    SciTech Connect

    Hart, Darren M.

    2012-02-01

    Sandia National Laboratories (SNL) is regarded as a center for unbiased expertise in testing and evaluation of geophysical sensors and instrumentation for ground-based nuclear explosion monitoring (GNEM) systems. This project will sustain and enhance our component evaluation capabilities. In addition, new sensor technologies that could greatly improve national monitoring system performance will be sought and characterized. This work directly impacts the Ground-based Nuclear Explosion Monitoring mission by verifying that the performance of monitoring station sensors and instrumentation is characterized and suitable to the mission. It enables the operational monitoring agency to deploy instruments of known capability and to have confidence in operational success. This effort will ensure that our evaluation capabilities are maintained for future use.

  18. Evaluating the utility of dynamical downscaling in agricultural impacts projections.

    PubMed

    Glotter, Michael; Elliott, Joshua; McInerney, David; Best, Neil; Foster, Ian; Moyer, Elisabeth J

    2014-06-17

    Interest in estimating the potential socioeconomic costs of climate change has led to the increasing use of dynamical downscaling--nested modeling in which regional climate models (RCMs) are driven with general circulation model (GCM) output--to produce fine-spatial-scale climate projections for impacts assessments. We evaluate here whether this computationally intensive approach significantly alters projections of agricultural yield, one of the greatest concerns under climate change. Our results suggest that it does not. We simulate US maize yields under current and future CO2 concentrations with the widely used Decision Support System for Agrotechnology Transfer crop model, driven by a variety of climate inputs including two GCMs, each in turn downscaled by two RCMs. We find that no climate model output can reproduce yields driven by observed climate unless a bias correction is first applied. Once a bias correction is applied, GCM- and RCM-driven US maize yields are essentially indistinguishable in all scenarios (<10% discrepancy, equivalent to error from observations). Although RCMs correct some GCM biases related to fine-scale geographic features, errors in yield are dominated by broad-scale (100s of kilometers) GCM systematic errors that RCMs cannot compensate for. These results support previous suggestions that the benefits for impacts assessments of dynamically downscaling raw GCM output may not be sufficient to justify its computational demands. Progress on fidelity of yield projections may benefit more from continuing efforts to understand and minimize systematic error in underlying climate projections.
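
    A minimal mean-shift ("delta") correction, one simple instance of the kind of bias correction the authors say must precede yield simulation (the study's own method may differ), can be sketched as follows; the arrays are invented temperature series.

      import numpy as np

      def mean_shift_correct(model_hist, model_future, obs_hist):
          """Subtract the model's historical mean bias from its projection."""
          bias = model_hist.mean() - obs_hist.mean()
          return model_future - bias

      obs_hist     = np.array([20.1, 21.3, 19.8])   # observed, reference period (degC)
      model_hist   = np.array([22.0, 23.1, 21.7])   # model output, same period
      model_future = np.array([24.5, 25.2, 23.9])   # model projection
      print(mean_shift_correct(model_hist, model_future, obs_hist))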

  19. Evaluating the utility of dynamical downscaling in agricultural impacts projections

    PubMed Central

    Glotter, Michael; Elliott, Joshua; McInerney, David; Best, Neil; Foster, Ian; Moyer, Elisabeth J.

    2014-01-01

    Interest in estimating the potential socioeconomic costs of climate change has led to the increasing use of dynamical downscaling—nested modeling in which regional climate models (RCMs) are driven with general circulation model (GCM) output—to produce fine-spatial-scale climate projections for impacts assessments. We evaluate here whether this computationally intensive approach significantly alters projections of agricultural yield, one of the greatest concerns under climate change. Our results suggest that it does not. We simulate US maize yields under current and future CO2 concentrations with the widely used Decision Support System for Agrotechnology Transfer crop model, driven by a variety of climate inputs including two GCMs, each in turn downscaled by two RCMs. We find that no climate model output can reproduce yields driven by observed climate unless a bias correction is first applied. Once a bias correction is applied, GCM- and RCM-driven US maize yields are essentially indistinguishable in all scenarios (<10% discrepancy, equivalent to error from observations). Although RCMs correct some GCM biases related to fine-scale geographic features, errors in yield are dominated by broad-scale (100s of kilometers) GCM systematic errors that RCMs cannot compensate for. These results support previous suggestions that the benefits for impacts assessments of dynamically downscaling raw GCM output may not be sufficient to justify its computational demands. Progress on fidelity of yield projections may benefit more from continuing efforts to understand and minimize systematic error in underlying climate projections. PMID:24872455

  20. NASA Software Engineering Benchmarking Study

    NASA Technical Reports Server (NTRS)

    Rarick, Heather L.; Godfrey, Sara H.; Kelly, John C.; Crumbley, Robert T.; Wifl, Joel M.

    2013-01-01

    To identify best practices for the improvement of software engineering on projects, NASA's Offices of Chief Engineer (OCE) and Safety and Mission Assurance (OSMA) formed a team led by Heather Rarick and Sally Godfrey to conduct this benchmarking study. The primary goals of the study are to identify best practices that: Improve the management and technical development of software intensive systems; Have a track record of successful deployment by aerospace industries, universities [including research and development (R&D) laboratories], and defense services, as well as NASA's own component Centers; and Identify candidate solutions for NASA's software issues. Beginning in the late fall of 2010, focus topics were chosen and interview questions were developed, based on the NASA top software challenges. Between February 2011 and November 2011, the Benchmark Team interviewed a total of 18 organizations, consisting of five NASA Centers, five industry organizations, four defense services organizations, and four university or university R and D laboratory organizations. A software assurance representative also participated in each of the interviews to focus on assurance and software safety best practices. Interviewees provided a wealth of information on each topic area that included: software policy, software acquisition, software assurance, testing, training, maintaining rigor in small projects, metrics, and use of the Capability Maturity Model Integration (CMMI) framework, as well as a number of special topics that came up in the discussions. NASA's software engineering practices compared favorably with the external organizations in most benchmark areas, but in every topic, there were ways in which NASA could improve its practices. Compared to defense services organizations and some of the industry organizations, one of NASA's notable weaknesses involved communication with contractors regarding its policies and requirements for acquired software. One of NASA's strengths

  1. Storage-Intensive Supercomputing Benchmark Study

    SciTech Connect

    Cohen, J; Dossa, D; Gokhale, M; Hysom, D; May, J; Pearce, R; Yoo, A

    2007-10-30

    Critical data science applications requiring frequent access to storage perform poorly on today's computing architectures. This project addresses efficient computation of data-intensive problems in national security and basic science by exploring, advancing, and applying a new form of computing called storage-intensive supercomputing (SISC). Our goal is to enable applications that simply cannot run on current systems, and, for a broad range of data-intensive problems, to deliver an order of magnitude improvement in price/performance over today's data-intensive architectures. This technical report documents much of the work done under LDRD 07-ERD-063 Storage Intensive Supercomputing during the period 05/07-09/07. The following chapters describe: (1) a new file I/O monitoring tool iotrace developed to capture the dynamic I/O profiles of Linux processes; (2) an out-of-core graph benchmark for level-set expansion of scale-free graphs; (3) an entity extraction benchmark consisting of a pipeline of eight components; and (4) an image resampling benchmark drawn from the SWarp program in the LSST data processing pipeline. The performance of the graph and entity extraction benchmarks was measured in three different scenarios: data sets residing on the NFS file server and accessed over the network; data sets stored on local disk; and data sets stored on the Fusion I/O parallel NAND Flash array. The image resampling benchmark compared performance of software-only to GPU-accelerated. In addition to the work reported here, an additional text processing application was developed that used an FPGA to accelerate n-gram profiling for language classification. The n-gram application will be presented at SC07 at the High Performance Reconfigurable Computing Technologies and Applications Workshop. The graph and entity extraction benchmarks were run on a Supermicro server housing the NAND Flash 40GB parallel disk array, the Fusion-io. The Fusion system specs are as follows: SuperMicro X7
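
    Level-set expansion is essentially breadth-first traversal by frontier. A small in-memory analogue of that graph-benchmark kernel is sketched below; the adjacency data are hypothetical, and the real benchmark operates on out-of-core structures far too large for memory.

      from collections import deque

      def level_sets(adj, root):
          """Return vertices grouped by BFS level, starting from root."""
          seen = {root}
          frontier = deque([root])
          levels = [[root]]
          while frontier:
              next_level = []
              for _ in range(len(frontier)):      # expand the current frontier
                  v = frontier.popleft()
                  for w in adj.get(v, ()):
                      if w not in seen:
                          seen.add(w)
                          next_level.append(w)
                          frontier.append(w)
              if next_level:
                  levels.append(next_level)
          return levels

      print(level_sets({0: [1, 2], 1: [3], 2: [3], 3: []}, 0))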

  2. Principles for an ETL Benchmark

    NASA Astrophysics Data System (ADS)

    Wyatt, Len; Caufield, Brian; Pol, Daniel

    Conditions in the marketplace for ETL tools suggest that an industry standard benchmark is needed. The benchmark should provide useful data for comparing the performance of ETL systems, be based on a meaningful scenario, and be scalable over a wide range of data set sizes. This paper gives a general scoping of the proposed benchmark and outlines some key decision points. The Transaction Processing Performance Council (TPC) has formed a development subcommittee to define and produce such a benchmark.
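
    Scalability over a wide range of data set sizes is usually expressed through a single scale factor, as in existing TPC benchmarks; a sketch of that idea with invented base row counts follows.

      # Invented base table sizes; a scale factor multiplies them uniformly.
      BASE_ROWS = {"orders": 1_500_000, "customers": 150_000}

      def rows_at_scale(scale_factor):
          return {table: n * scale_factor for table, n in BASE_ROWS.items()}

      print(rows_at_scale(10))   # data set at scale factor 10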

  3. American Fuel Cell Bus Project Evaluation. Second Report

    SciTech Connect

    Eudy, Leslie; Post, Matthew

    2015-09-01

    This report presents results of the American Fuel Cell Bus (AFCB) Project, a demonstration of fuel cell electric buses operating in the Coachella Valley area of California. The prototype AFCB was developed as part of the Federal Transit Administration's (FTA's) National Fuel Cell Bus Program. Through the non-profit consortium CALSTART, a team led by SunLine Transit Agency and BAE Systems developed a new fuel cell electric bus for demonstration. SunLine added two more AFCBs to its fleet in 2014 and another in 2015. FTA and the AFCB project team are collaborating with the U.S. Department of Energy (DOE) and DOE's National Renewable Energy Laboratory to evaluate the buses in revenue service. This report summarizes the performance results for the buses through June 2015.

  4. Evaluation of observation-driven evaporation algorithms: results of the WACMOS-ET project

    NASA Astrophysics Data System (ADS)

    Miralles, Diego G.; Jimenez, Carlos; Ershadi, Ali; McCabe, Matthew F.; Michel, Dominik; Hirschi, Martin; Seneviratne, Sonia I.; Jung, Martin; Wood, Eric F.; (Bob) Su, Z.; Timmermans, Joris; Chen, Xuelong; Fisher, Joshua B.; Mu, Quiaozen; Fernandez, Diego

    2015-04-01

    …scales for the 2005-2007 reference period will be disclosed. The skill of these algorithms to close the water balance over the continents will be assessed by comparisons to runoff data. The consistency in forcing data will allow us to (a) evaluate the skill of these five algorithms in producing ET over particular ecosystems, (b) facilitate the attribution of the observed differences to either algorithms or driving data, and (c) set up a solid scientific basis for the development of global long-term benchmark ET products. Project progress can be followed on our website http://wacmoset.estellus.eu.
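
    The runoff-based check mentioned above rests on the long-term water balance, ET roughly equal to precipitation minus runoff when storage change is neglected; a sketch of that closure arithmetic with hypothetical annual totals follows.

      # Hypothetical annual basin totals in mm; storage change neglected.
      P, Q = 900.0, 310.0              # precipitation, runoff
      et_balance = P - Q               # water-balance ET estimate
      et_algorithm = 602.0             # hypothetical ET algorithm estimate
      print(f"closure residual = {et_algorithm - et_balance:+.1f} mm/yr")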

  5. Radionuclide Inventory Distribution Project Data Evaluation and Verification White Paper

    SciTech Connect

    NSTec Environmental Restoration

    2010-05-17

    Testing of nuclear explosives caused widespread contamination of surface soils on the Nevada Test Site (NTS). Atmospheric tests produced the majority of this contamination. The Radionuclide Inventory and Distribution Program (RIDP) was developed to determine distribution and total inventory of radionuclides in surface soils at the NTS to evaluate areas that may present long-term health hazards. The RIDP achieved this objective with aerial radiological surveys, soil sample results, and in situ gamma spectroscopy. This white paper presents the justification to support the use of RIDP data as a guide for future evaluation and to support closure of Soils Sub-Project sites under the purview of the Federal Facility Agreement and Consent Order. Use of the RIDP data as part of the Data Quality Objective process is expected to provide considerable cost savings and accelerate site closures. The following steps were completed: - Summarize the RIDP data set and evaluate the quality of the data. - Determine the current uses of the RIDP data and cautions associated with its use. - Provide recommendations for enhancing data use through field verification or other methods. The data quality is sufficient to utilize RIDP data during the planning process for site investigation and closure. Project planning activities may include estimating 25-millirem per industrial access year dose rate boundaries, optimizing characterization efforts, projecting final end states, and planning remedial actions. In addition, RIDP data may be used to identify specific radionuclide distributions, and augment other non-radionuclide dose rate data. Finally, the RIDP data can be used to estimate internal and external dose rates.

  6. A One-group, One-dimensional Transport Benchmark in Cylindrical Geometry

    SciTech Connect

    Barry Ganapol; Abderrafi M. Ougouag

    2006-06-01

    A 1-D, 1-group computational benchmark in cylindrical geometry is described. This neutron transport benchmark is useful for evaluating reactor concepts that possess azimuthal symmetry, such as a pebble-bed reactor.
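
    For reference, a standard textbook form of the one-group, steady-state transport equation with isotropic scattering in 1-D cylindrical geometry (a generic form, not necessarily the exact formulation used in the report) is:

      \[
        \frac{\mu}{r}\,\frac{\partial (r\,\psi)}{\partial r}
        - \frac{1}{r}\,\frac{\partial (\eta\,\psi)}{\partial \omega}
        + \sigma_t\,\psi(r,\xi,\omega)
        = \frac{\sigma_s}{4\pi}\int_{4\pi}\psi\,d\Omega + \frac{Q(r)}{4\pi},
        \qquad \mu=\sqrt{1-\xi^{2}}\cos\omega,\quad \eta=\sqrt{1-\xi^{2}}\sin\omega .
      \]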

  7. Asotin Creek Instream Habitat Alteration Projects: 1998 Habitat Evaluation Surveys.

    SciTech Connect

    Bumgarner, Joseph D.

    1999-03-01

    The Asotin Creek Model Watershed Master Plan was completed 1994. The plan was developed by a landowner steering committee for the Asotin County Conservation District (ACCD), with technical support from the various Federal, State and local entities. Actions identified within the plan to improve the Asotin Creek ecosystem fall into four main categories, (1) Stream and Riparian, (2) Forestland, (3) Rangeland, and (4) Cropland. Specific actions to be carried out within the stream and in the riparian area to improve fish habitat were, (a) create more pools, (b) increase the amount of large organic debris (LOD), (c) increase the riparian buffer zone through tree planting, and (d) increase fencing to limit livestock access; additionally, the actions are intended to stabilize the river channel, reduce sediment input, and protect private property. Fish species of main concern in Asotin Creek are summer steelhead (Oncorhynchus mykiss), spring chinook (Oncorhynchus tshawytscha), and bull trout (Salvelinus confluentus). Spring chinook in Asotin Creek are considered extinct (Bumgarner et al. 1998); bull trout and summer steelhead are below historical levels and are currently as ''threatened'' under the ESA. In 1998, 16 instream habitat projects were planned by ACCD along with local landowners. The ACCD identified the need for a more detailed analysis of these instream projects to fully evaluate their effectiveness at improving fish habitat. The Washington Department of Fish and Wildlife's (WDFW) Snake River Lab (SRL) was contracted by the ACCD to take pre-construction measurements of the existing habitat (pools, LOD, width, depth, etc.) within each identified site, and to eventually evaluate fish use within these sites. All pre-construction habitat measurements were completed between 6 and 14 July, 1998. 1998 was the first year that this sort of evaluation has occurred. Post construction measurements of habitat structures installed in 1998, and fish usage evaluation, will be

  8. Toward a Benchmark for Multi-Threaded Testing Tools

    NASA Technical Reports Server (NTRS)

    Eytani, Yaniv; Stoller, Scott D.; Havelund, Klaus; Ur, Shmuel

    2005-01-01

    Looking for intermittent bugs is a problem that has been gaining prominence in testing. Multi-threaded code is becoming very common, mostly on the server side. As there is no silver-bullet solution, research focuses on a variety of partial solutions. We outline a road map for combining the research on the different disciplines of testing multi-threaded programs and on evaluating its quality. The project goals are to create a benchmark that can be used to evaluate different solutions, to create a framework with open APIs that enables combining techniques in the multithreading domain, and to create a focus for the research in this area around which a community of people who try to solve similar problems with different techniques could congregate. The benchmark, apart from containing programs with documented bugs, includes other artifacts, such as traces, that are used for evaluating some of the technologies. We have started creating such a benchmark and detail the lessons learned in the process. The framework will enable technology developers, for example of race detectors, to concentrate on their components and use other ready-made components (e.g., an instrumentor) to create a testing solution.

  9. Benchmarks for Psychotherapy Efficacy in Adult Major Depression

    ERIC Educational Resources Information Center

    Minami, Takuya; Wampold, Bruce E.; Serlin, Ronald C.; Kircher, John C.; Brown, George S.

    2007-01-01

    This study estimates pretreatment-posttreatment effect size benchmarks for the treatment of major depression in adults that may be useful in evaluating psychotherapy effectiveness in clinical practice. Treatment efficacy benchmarks for major depression were derived for 3 different types of outcome measures: the Hamilton Rating Scale for Depression…

  10. Benchmark Airport Charges

    NASA Technical Reports Server (NTRS)

    de Wit, A.; Cohn, N.

    1999-01-01

    The Netherlands Directorate General of Civil Aviation (DGCA) commissioned Hague Consulting Group (HCG) to complete a benchmark study of airport charges at twenty-eight airports in Europe and around the world, based on 1996 charges. This study followed previous DGCA research on the topic but included more airports in much more detail. The main purpose of this new benchmark study was to provide insight into the levels and types of airport charges worldwide and into recent changes in airport charge policy and structure. This paper describes the 1996 analysis. It is intended that this work be repeated every year in order to follow developing trends and provide the most up-to-date information possible.

  11. Sequoia Messaging Rate Benchmark

    2008-01-22

    The purpose of this benchmark is to measure the maximal message rate of a single compute node. The first num_cores ranks are expected to reside on the 'core' compute node for which message rate is being tested. After that, the next num_nbors ranks are neighbors for the first core rank, the next set of num_nbors ranks are neighbors for the second core rank, and so on. For example, testing an 8-core node (num_cores = 8) with 4 neighbors (num_nbors = 4) requires 8 + 8 * 4 = 40 ranks. The first 8 of those 40 ranks are expected to be on the 'core' node being benchmarked, while the rest of the ranks are on separate nodes.
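
    The rank-layout arithmetic above is easy to make concrete. The following is a minimal sketch (function and variable names are illustrative, not from the benchmark source) that computes which ranks serve as neighbors for each core rank:

    ```python
    # Illustrative sketch of the rank layout described above; names are
    # hypothetical, not taken from the Sequoia benchmark source.
    def rank_layout(num_cores, num_nbors):
        total = num_cores + num_cores * num_nbors
        # Ranks 0..num_cores-1 live on the 'core' node under test.
        core_ranks = list(range(num_cores))
        # Neighbor blocks follow: the first num_nbors ranks after the core
        # ranks serve core rank 0, the next num_nbors serve core rank 1, etc.
        neighbors = {c: list(range(num_cores + c * num_nbors,
                                   num_cores + (c + 1) * num_nbors))
                     for c in core_ranks}
        return total, neighbors

    total, nbrs = rank_layout(8, 4)
    print(total)     # 40, matching 8 + 8 * 4
    print(nbrs[0])   # [8, 9, 10, 11]
    print(nbrs[7])   # [36, 37, 38, 39]
    ```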

  12. MHEC Survey Establishes Midwest Property Insurance Benchmarks.

    ERIC Educational Resources Information Center

    Midwestern Higher Education Commission Risk Management Institute Research Bulletin, 1994

    1994-01-01

    This publication presents the results of a survey of over 200 midwestern colleges and universities on their property insurance programs and establishes benchmarks to help these institutions evaluate their insurance programs. Findings included the following: (1) 51 percent of respondents currently purchase their property insurance as part of a…

  13. MPI Multicore Linktest Benchmark

    2008-01-25

    The MPI Multicore Linktest (LinkTest) measures the aggregate bandwidth from/to a multicore node in a parallel system. It allows the user to specify a variety of different node layout and communication routine variations and reports the maximal observed bandwidth across all specified options. In particular, this benchmark is able to vary the number of tasks on the root node and thereby allows users to study the impact of multicore architectures on MPI communication performance.
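
    The LinkTest source is not reproduced here, but the core measurement it aggregates is a timed message exchange. Below is a minimal mpi4py sketch of one such point-to-point bandwidth probe; the message size and repetition count are arbitrary choices for illustration, not LinkTest parameters:

    ```python
    # Minimal mpi4py ping-pong bandwidth probe; a sketch of the kind of
    # per-link measurement a link test aggregates, not the LinkTest code.
    import time
    import numpy as np
    from mpi4py import MPI

    comm = MPI.COMM_WORLD
    rank = comm.Get_rank()
    nbytes, reps = 1 << 20, 100            # 1 MiB messages, 100 round trips
    buf = np.zeros(nbytes, dtype='b')

    comm.Barrier()
    t0 = time.perf_counter()
    for _ in range(reps):
        if rank == 0:
            comm.Send(buf, dest=1)
            comm.Recv(buf, source=1)
        elif rank == 1:
            comm.Recv(buf, source=0)
            comm.Send(buf, dest=0)
    elapsed = time.perf_counter() - t0

    if rank == 0:
        # Each round trip moves 2 * nbytes across the link.
        print(f"bandwidth: {2 * nbytes * reps / elapsed / 1e9:.2f} GB/s")
    ```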

  14. Algebraic Multigrid Benchmark

    SciTech Connect

    2013-05-06

    AMG2013 is a parallel algebraic multigrid solver for linear systems arising from problems on unstructured grids. It has been derived directly from the Boomer AMG solver in the hypre library, a large linear solvers library that is being developed in the Center for Applied Scientific Computing (CASC) at LLNL. The driver provided in the benchmark can build various test problems. The default problem is a Laplace type problem on an unstructured domain with various jumps and an anisotropy in one part.

  15. Benchmarking HIPAA compliance.

    PubMed

    Wagner, James R; Thoman, Deborah J; Anumalasetty, Karthikeyan; Hardre, Pat; Ross-Lazarov, Tsvetomir

    2002-01-01

    One of the nation's largest academic medical centers is benchmarking its operations using internally developed software to improve privacy/confidentiality of protected health information (PHI) and to enhance data security to comply with HIPAA regulations. It is also coordinating the development of a web-based interactive product that can help hospitals, physician practices, and managed care organizations measure their compliance with HIPAA regulations.

  16. Toxicological benchmarks for wildlife: 1996 Revision

    SciTech Connect

    Sample, B.E.; Opresko, D.M.; Suter, G.W., II

    1996-06-01

    The purpose of this report is to present toxicological benchmarks for assessment of effects of certain chemicals on mammalian and avian wildlife species. Publication of this document meets a milestone for the Environmental Restoration (ER) Risk Assessment Program. This document provides the ER Program with toxicological benchmarks that may be used as comparative tools in screening assessments as well as lines of evidence to support or refute the presence of ecological effects in ecological risk assessments. The chemicals considered in this report are some that occur at US DOE waste sites, and the wildlife species evaluated herein were chosen because they represent a range of body sizes and diets.

  17. Los Alamos National Laboratory computer benchmarking 1982

    SciTech Connect

    Martin, J.L.

    1983-06-01

    Evaluating the performance of computing machinery is a continual effort of the Computer Research and Applications Group of the Los Alamos National Laboratory. This report summarizes the results of the group's benchmarking activities performed between October 1981 and September 1982, presenting compilation and execution times as well as megaflop rates for a set of benchmark codes. Tests were performed on the following computers: Cray Research, Inc. (CRI) Cray-1S; Control Data Corporation (CDC) 7600, 6600, Cyber 73, Cyber 825, Cyber 835, Cyber 855, and Cyber 205; Digital Equipment Corporation (DEC) VAX 11/780 and VAX 11/782; and Apollo Computer, Inc., Apollo.

  18. EVALUATION OF THE WEIGHT-BASED COLLECTION PROJECT IN FARMINGTON, MINNESOTA: A MITE PROGRAM EVALUATION

    EPA Science Inventory

    This project evaluates a test program of a totally automated weight-based refuse disposal rate system. This test program was conducted by the City of Farmington, Minnesota between 1991 and 1993. The intent of the program was to test a mechanism which would automatically assess a fe...

  19. Pescara benchmark: overview of modelling, testing and identification

    NASA Astrophysics Data System (ADS)

    Bellino, A.; Brancaleoni, F.; Bregant, L.; Carminelli, A.; Catania, G.; Di Evangelista, A.; Gabriele, S.; Garibaldi, L.; Marchesiello, S.; Sorrentino, S.; Spina, D.; Valente, C.; Zuccarino, L.

    2011-07-01

    The `Pescara benchmark' is part of the national research project `BriViDi' (BRIdge VIbrations and DIagnosis) supported by the Italian Ministero dell'Università e Ricerca. The project is aimed at developing an integrated methodology for the structural health evaluation of railway r/c and p/c bridges. The methodology should provide for applicability in operating conditions, easy data acquisition through common industrial instrumentation, and robustness and reliability against structural and environmental uncertainties. The Pescara benchmark consisted of lab tests to build a consistent and large experimental data base and subsequent data processing. Special tests were devised to simulate the train transit effects in actual field conditions. Prestressed concrete beams of current industrial production, both sound and damaged at various corrosion severity levels, were tested. The results were collected both in a deterministic setting and in a form suitable to deal with experimental uncertainties. Damage identification was split into two approaches: with or without a reference model. In the first case, finite element models were used in conjunction with non-conventional updating techniques. In the second case, specialized output-only identification techniques capable of dealing with time-variant and possibly nonlinear systems were developed. The lab tests allowed validating the above approaches and the performances of classical modal-based damage indicators.

  20. Toxicological benchmarks for wildlife: 1994 Revision

    SciTech Connect

    Opresko, D.M.; Sample, B.E.; Suter, G.W. II

    1994-09-01

    The process by which ecological risks of environmental contaminants are evaluated is two-tiered. The first tier is a screening assessment where concentrations of contaminants in the environment are compared to toxicological benchmarks which represent concentrations of chemicals in environmental media (water, sediment, soil, food, etc.) that are presumed to be nonhazardous to the surrounding biota. The second tier is a baseline ecological risk assessment where toxicological benchmarks are one of several lines of evidence used to support or refute the presence of ecological effects. The report presents toxicological benchmarks for assessment of effects of 76 chemicals on 8 representative mammalian wildlife species and 31 chemicals on 9 avian wildlife species. The chemicals are some of those that occur at United States Department of Energy waste sites; the wildlife species were chosen because they are widely distributed and provide a representative range of body sizes and diets. Further descriptions of the chosen wildlife species and chemicals are provided in the report. The benchmarks presented in this report represent values believed to be nonhazardous for the listed wildlife species. These benchmarks only consider contaminant exposure through oral ingestion of contaminated media; exposure through inhalation or direct dermal contact is not considered in this report.
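
    The first-tier screening logic described above amounts to a simple comparison of measured media concentrations against the benchmark values. A toy sketch follows; all chemical names and numbers are placeholders, not values from the report:

    ```python
    # Hypothetical tier-1 screening: flag chemicals whose measured media
    # concentration exceeds its wildlife benchmark. Values are placeholders.
    benchmarks = {"Cd": 0.5, "Pb": 2.0, "Hg": 0.05}   # mg/kg, illustrative
    measured   = {"Cd": 0.8, "Pb": 1.1, "Hg": 0.20}   # mg/kg, illustrative

    flagged = {chem: conc for chem, conc in measured.items()
               if conc > benchmarks[chem]}
    print(flagged)  # {'Cd': 0.8, 'Hg': 0.2} -> carried to tier-2 assessment
    ```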

  1. 20 CFR 641.610 - How are pilot, demonstration, and evaluation projects administered?

    Code of Federal Regulations, 2012 CFR

    2012-04-01

    ... 20 Employees' Benefits 3 2012-04-01 2012-04-01 false How are pilot, demonstration, and evaluation projects administered? 641.610 Section 641.610 Employees' Benefits EMPLOYMENT AND TRAINING ADMINISTRATION..., and Evaluation Projects § 641.610 How are pilot, demonstration, and evaluation projects...

  2. 20 CFR 641.610 - How are pilot, demonstration, and evaluation projects administered?

    Code of Federal Regulations, 2013 CFR

    2013-04-01

    ... 20 Employees' Benefits 3 2013-04-01 2013-04-01 false How are pilot, demonstration, and evaluation projects administered? 641.610 Section 641.610 Employees' Benefits EMPLOYMENT AND TRAINING ADMINISTRATION..., and Evaluation Projects § 641.610 How are pilot, demonstration, and evaluation projects...

  3. 20 CFR 641.610 - How are pilot, demonstration, and evaluation projects administered?

    Code of Federal Regulations, 2011 CFR

    2011-04-01

    ... 20 Employees' Benefits 3 2011-04-01 2011-04-01 false How are pilot, demonstration, and evaluation projects administered? 641.610 Section 641.610 Employees' Benefits EMPLOYMENT AND TRAINING ADMINISTRATION..., and Evaluation Projects § 641.610 How are pilot, demonstration, and evaluation projects...

  4. 20 CFR 641.610 - How are pilot, demonstration, and evaluation projects administered?

    Code of Federal Regulations, 2014 CFR

    2014-04-01

    ... 20 Employees' Benefits 3 2014-04-01 2014-04-01 false How are pilot, demonstration, and evaluation projects administered? 641.610 Section 641.610 Employees' Benefits EMPLOYMENT AND TRAINING ADMINISTRATION..., and Evaluation Projects § 641.610 How are pilot, demonstration, and evaluation projects...

  5. The NAS Parallel Benchmarks

    SciTech Connect

    Bailey, David H.

    2009-11-15

    The NAS Parallel Benchmarks (NPB) are a suite of parallel computer performance benchmarks. They were originally developed at the NASA Ames Research Center in 1991 to assess high-end parallel supercomputers. Although they are no longer used as widely as they once were for comparing high-end system performance, they continue to be studied and analyzed a great deal in the high-performance computing community. The acronym 'NAS' originally stood for the Numerical Aerodynamic Simulation Program at NASA Ames. The name of this organization was subsequently changed to the Numerical Aerospace Simulation Program, and more recently to the NASA Advanced Supercomputing Center, although the acronym remains 'NAS.' The developers of the original NPB suite were David H. Bailey, Eric Barszcz, John Barton, David Browning, Russell Carter, Leo Dagum, Rod Fatoohi, Samuel Fineberg, Paul Frederickson, Thomas Lasinski, Rob Schreiber, Horst Simon, V. Venkatakrishnan and Sisira Weeratunga. The original NAS Parallel Benchmarks consisted of eight individual benchmark problems, each of which focused on some aspect of scientific computing. The principal focus was in computational aerophysics, although most of these benchmarks have much broader relevance, since in a much larger sense they are typical of many real-world scientific computing applications. The NPB suite grew out of the need for a more rational procedure to select new supercomputers for acquisition by NASA. The emergence of commercially available highly parallel computer systems in the late 1980s offered an attractive alternative to parallel vector supercomputers that had been the mainstay of high-end scientific computing. However, the introduction of highly parallel systems was accompanied by a regrettable level of hype, not only on the part of the commercial vendors but even, in some cases, by scientists using the systems. As a result, it was difficult to discern whether the new systems offered any fundamental performance advantage

  6. The hydrologic bench-mark program; a standard to evaluate time-series trends in selected water-quality constituents for streams in Georgia

    USGS Publications Warehouse

    Buell, G.R.; Grams, S.C.

    1985-01-01

    Significant temporal trends in monthly pH, specific conductance, total alkalinity, hardness, total nitrite-plus-nitrate nitrogen, and total phosphorus measurements at five stream sites in Georgia were identified using a rank correlation technique, the seasonal Kendall test and slope estimator. These sites include a U.S. Geological Survey Hydrologic Bench-Mark site, Falling Creek near Juliette, and four periodic water-quality monitoring sites. Comparison of raw data trends with streamflow-residual trends and, where applicable, with chemical-discharge trends (instantaneous fluxes) shows that some of these trends are responses to factors other than changing streamflow. Percentages of forested, agricultural, and urban cover within each basin did not change much during the periods of water-quality record, and therefore these non-flow-related trends are not obviously related to changes in land cover or land use. Flow-residual water-quality trends at the Hydrologic Bench-Mark site and at the Chattooga River site probably indicate basin responses to changes in the chemical quality of atmospheric deposition. These two basins are predominantly forested and have received little recent human use. Observed trends at the other three sites probably indicate basin responses to various land uses and water uses associated with agricultural and urban land or to changes in specific uses. (USGS)
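
    For readers unfamiliar with the seasonal Kendall test named above, a hedged sketch of its core computation follows (ties and serial-correlation corrections are omitted, so this is a simplification of the full procedure the study used):

    ```python
    # Simplified seasonal Kendall test: the Mann-Kendall S statistic is
    # computed within each season, summed across seasons, and normalized
    # by the no-ties variance.
    import numpy as np

    def seasonal_kendall(values, seasons):
        """values: numpy array of observations, in chronological order within
        each season; seasons: same-length array of season labels (e.g., month
        numbers 1-12)."""
        S, var = 0.0, 0.0
        for s in np.unique(seasons):
            x = values[seasons == s]
            n = len(x)
            # Sign of every pairwise difference in time order.
            S += sum(np.sign(x[j] - x[i])
                     for i in range(n) for j in range(i + 1, n))
            var += n * (n - 1) * (2 * n + 5) / 18.0
        z = (S - np.sign(S)) / np.sqrt(var) if var > 0 else 0.0
        return S, z  # |z| > 1.96 suggests a significant monotonic trend
    ```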

  7. Comparative evaluation of 1D and quasi-2D hydraulic models based on benchmark and real-world applications for uncertainty assessment in flood mapping

    NASA Astrophysics Data System (ADS)

    Dimitriadis, Panayiotis; Tegos, Aristoteles; Oikonomou, Athanasios; Pagana, Vassiliki; Koukouvinos, Antonios; Mamassis, Nikos; Koutsoyiannis, Demetris; Efstratiadis, Andreas

    2016-03-01

    One-dimensional and quasi-two-dimensional hydraulic freeware models (HEC-RAS, LISFLOOD-FP and FLO-2d) are widely used for flood inundation mapping. These models are tested on a benchmark test with a mixed rectangular-triangular channel cross section. Using a Monte-Carlo approach, we employ extended sensitivity analysis by simultaneously varying the input discharge, longitudinal and lateral gradients and roughness coefficients, as well as the grid cell size. Based on statistical analysis of three output variables of interest, i.e. water depths at the inflow and outflow locations and total flood volume, we investigate the uncertainty enclosed in different model configurations and flow conditions, without the influence of errors and other assumptions on topography, channel geometry and boundary conditions. Moreover, we estimate the uncertainty associated to each input variable and we compare it to the overall one. The outcomes of the benchmark analysis are further highlighted by applying the three models to real-world flood propagation problems, in the context of two challenging case studies in Greece.
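
    The Monte-Carlo sensitivity approach described above can be sketched as follows; the `flood_volume` function is a toy stand-in for illustration, not HEC-RAS, LISFLOOD-FP, or FLO-2d, and the parameter ranges are invented:

    ```python
    # Monte-Carlo input sampling around a placeholder hydraulic response.
    import numpy as np

    rng = np.random.default_rng(0)
    n = 5_000
    discharge = rng.uniform(50, 150, n)           # inflow, m^3/s
    slope     = rng.uniform(0.001, 0.01, n)       # longitudinal gradient
    manning_n = rng.uniform(0.02, 0.08, n)        # roughness coefficient
    cell_size = rng.choice([5.0, 10.0, 20.0], n)  # grid resolution, m

    def flood_volume(q, s, n_man, dx):
        # Toy response surface standing in for the real hydraulic models.
        return q * n_man / np.sqrt(s) * (1 + 0.01 * dx)

    vol = flood_volume(discharge, slope, manning_n, cell_size)
    print(f"mean={vol.mean():.1f}  5-95% range="
          f"({np.quantile(vol, 0.05):.1f}, {np.quantile(vol, 0.95):.1f})")
    ```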

  8. Energy saving in WWTP: Daily benchmarking under uncertainty and data availability limitations.

    PubMed

    Torregrossa, D; Schutz, G; Cornelissen, A; Hernández-Sancho, F; Hansen, J

    2016-07-01

    Efficient management of Waste Water Treatment Plants (WWTPs) can produce significant environmental and economic benefits. Energy benchmarking can be used to compare WWTPs, identify targets and use these to improve their performance. Different authors have performed benchmark analysis on a monthly or yearly basis, but their approaches suffer from a time lag between an event, its detection, interpretation and potential actions. The availability of on-line measurement data on many WWTPs should theoretically enable a decrease of the management response time through daily benchmarking. Unfortunately this approach is often impossible because of limited data availability. This paper proposes a methodology to perform a daily benchmark analysis under database limitations. The methodology has been applied to the Energy Online System (EOS) developed in the framework of the project "INNERS" (INNovative Energy Recovery Strategies in the urban water cycle). EOS calculates a set of Key Performance Indicators (KPIs) for the evaluation of energy and process performances. In EOS, the energy KPIs take the pollutant load into consideration in order to enable the comparison between different plants. For example, EOS does not analyse the raw energy consumption but the energy consumption per unit of pollutant load. This approach enables the comparison of performances for plants with different loads or for a single plant under different load conditions. The energy consumption is measured by on-line sensors, while the pollutant load is measured in the laboratory approximately every 14 days. Consequently, the unavailability of the water quality parameters is the limiting factor in calculating energy KPIs. In this paper, in order to overcome this limitation, the authors have developed a methodology to estimate the required parameters and manage the uncertainty in the estimation. By coupling the parameter estimation with an interval based benchmark approach, the authors propose an effective, fast and reproducible
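
    The estimation step can be pictured with a short sketch (an illustration of the idea, not the EOS code): daily energy is metered on-line, pollutant load is sampled only every ~14 days, so daily loads are estimated between lab measurements before the energy-per-load KPI is computed. All numbers are invented:

    ```python
    # Illustrative daily energy KPI with an interpolated pollutant load.
    import numpy as np

    days = np.arange(28)
    energy_kwh = 1200 + 50 * np.random.default_rng(1).standard_normal(28)
    lab_days = np.array([0, 14, 27])             # sparse laboratory sampling
    lab_load = np.array([950.0, 1010.0, 980.0])  # kg COD/day, illustrative

    load_est = np.interp(days, lab_days, lab_load)  # estimated daily load
    kpi = energy_kwh / load_est                     # kWh per kg COD
    print(kpi.round(3))
    ```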

  9. Collection of Neutronic VVER Reactor Benchmarks.

    2002-01-30

    Version 00 A system of computational neutronic benchmarks has been developed. In this CD-ROM report, the data generated in the course of the project are reproduced in their entirety with minor corrections. The editing that was performed on the various documents comprising this report was primarily meant to facilitate the production of the CD-ROM and to enable electronic retrieval of the information. The files are electronically navigable.

  10. Benchmarking and improving microbial-explicit soil biogeochemistry models

    NASA Astrophysics Data System (ADS)

    Wieder, W. R.; Bonan, G. B.; Hartman, M. D.; Sulman, B. N.; Wang, Y.

    2015-12-01

    Earth system models that are designed to project future carbon (C) cycle - climate feedbacks exhibit notably poor representation of soil biogeochemical processes and generate highly uncertain projections about the fate of the largest terrestrial C pool on Earth. Given these shortcomings there has been intense interest in soil biogeochemical model development, but parallel efforts to create the analytical tools to characterize, improve and benchmark these models have thus far lagged behind. A long-term goal of this work is to develop a framework to compare, evaluate and improve the process-level representation of soil biogeochemical models that could be applied in global land surface models. Here, we present a newly developed global model test bed that is built on the Carnegie Ames Stanford Approach model (CASA-CNP) that can rapidly integrate different soil biogeochemical models that are forced with consistent driver datasets. We focus on evaluation of two microbial explicit soil biogeochemical models that function at global scales: the MIcrobial-MIneral Carbon Stabilization model (MIMICS) and Carbon, Organisms, Rhizosphere, and Protection in the Soil Environment (CORPSE) model. Using the global model test bed coupled to MIMICS and CORPSE we quantify the uncertainty in potential C cycle - climate feedbacks that may be expected with these microbial explicit models, compared with a conventional first-order, linear model. By removing confounding variation of climate and vegetation drivers, our model test bed allows us to isolate key differences among different soil model structure and parameterizations that can be evaluated with further study. Specifically, the global test bed also identifies key parameters that can be estimated using cross-site observations. In global simulations model results are evaluated with steady state litter, microbial biomass, and soil C pools and benchmarked against independent globally gridded data products.

  11. 43 CFR 10005.20 - Project evaluation procedures.

    Code of Federal Regulations, 2014 CFR

    2014-10-01

    ... watershed), project type, and the resource that the project seeks to address. (b) Each project's consistency... a watershed-wide analysis. It will also involve a state-wide analysis. As with the previous...

  12. Benchmarking: applications to transfusion medicine.

    PubMed

    Apelseth, Torunn Oveland; Molnar, Laura; Arnold, Emmy; Heddle, Nancy M

    2012-10-01

    Benchmarking is a structured, continuous, collaborative process in which comparisons for selected indicators are used to identify factors that, when implemented, will improve transfusion practices. This study aimed to identify transfusion medicine studies reporting on benchmarking, summarize the benchmarking approaches used, and identify important considerations to move the concept of benchmarking forward in the field of transfusion medicine. A systematic review of published literature was performed to identify transfusion medicine-related studies that compared at least 2 separate institutions or regions with the intention of benchmarking, focusing on 4 areas: blood utilization, safety, operational aspects, and blood donation. Forty-five studies were included: blood utilization (n = 35), safety (n = 5), operational aspects of transfusion medicine (n = 5), and blood donation (n = 0). Based on predefined criteria, 7 publications were classified as benchmarking, 2 as trending, and 36 as single-event studies. Three models of benchmarking are described: (1) a regional benchmarking program that collects and links relevant data from existing electronic sources, (2) a sentinel site model where data from a limited number of sites are collected, and (3) an institutional-initiated model where a site identifies indicators of interest and approaches other institutions. Benchmarking approaches are needed in the field of transfusion medicine. Major challenges include defining best practices and developing cost-effective methods of data collection. For those interested in initiating a benchmarking program, the sentinel site model may be most effective and sustainable as a starting point, although the regional model would be the ideal goal.

  13. Benchmarking facilities providing care: An international overview of initiatives

    PubMed Central

    Thonon, Frédérique; Watson, Jonathan; Saghatchian, Mahasti

    2015-01-01

    We performed a literature review of existing benchmarking projects of health facilities to explore (1) the rationales for those projects, (2) the motivation for health facilities to participate, (3) the indicators used and (4) the success and threat factors linked to those projects. We studied both peer-reviewed and grey literature. We examined 23 benchmarking projects of different medical specialities. The majority of projects used a mix of structure, process and outcome indicators. For some projects, participants had a direct or indirect financial incentive to participate (such as reimbursement by Medicaid/Medicare or litigation costs related to quality of care). A positive impact was reported for most projects, mainly in terms of improvement of practice and adoption of guidelines and, to a lesser extent, improvement in communication. Only 1 project reported positive impact in terms of clinical outcomes. Success factors and threats are linked to both the benchmarking process (such as organisation of meetings, link with existing projects) and indicators used (such as adjustment for diagnostic-related groups). The results of this review will help coordinators of a benchmarking project to set it up successfully. PMID:26770800

  14. Nanomagnet Logic: Architectures, design, and benchmarking

    NASA Astrophysics Data System (ADS)

    Kurtz, Steven J.

    Nanomagnet Logic (NML) is an emerging technology being studied as a possible replacement or supplementary device for Complementary Metal-Oxide-Semiconductor (CMOS) Field-Effect Transistors (FET) by the year 2020. NML devices offer numerous potential advantages, including low energy operation, steady state non-volatility, radiation hardness and a clear path to fabrication and integration with CMOS. However, maintaining both low-energy operation and non-volatility while scaling from the device to the architectural level is non-trivial as (i) nearest neighbor interactions within NML circuits complicate the modeling of ensemble nanomagnet behavior and (ii) the energy-intensive clock structures required for re-evaluation and NML's relatively high latency challenge its ability to offer system-level performance wins against other emerging nanotechnologies. Thus, further research efforts are required to model more complex circuits while also identifying circuit design techniques that balance low-energy operation with steady state non-volatility. In addition, further work is needed to design and model low-power on-chip clocks while simultaneously identifying application spaces where NML systems (including clock overhead) offer sufficient energy savings to merit their inclusion in future processors. This dissertation presents research advancing the understanding and modeling of NML at all levels including devices, circuits, and line clock structures while also benchmarking NML against both scaled CMOS and tunneling FETs (TFET) devices. This is accomplished through the development of design tools and methodologies for (i) quantifying both energy and stability in NML circuits and (ii) evaluating line-clocked NML system performance. The application of these newly developed tools improves the understanding of ideal design criteria (i.e., magnet size, clock wire geometry, etc.) for NML architectures. Finally, the system-level performance evaluation tool offers the ability to

  15. Benchmark initiative on coupled multiphase flow and geomechanical processes during CO2 injection

    NASA Astrophysics Data System (ADS)

    Benisch, K.; Annewandter, R.; Olden, P.; Mackay, E.; Bauer, S.; Geiger, S.

    2012-12-01

    CO2 injection into deep saline aquifers involves multiple strongly interacting processes such as multiphase flow and geomechanical deformation, which threaten the seal integrity of CO2 repositories. Coupled simulation codes are required to establish realistic prognoses of the coupled processes during CO2 injection operations. International benchmark initiatives help to evaluate, to compare and to validate coupled simulation results. However, there is no published code comparison study so far focusing on the impact of coupled multiphase flow and geomechanics on the long-term integrity of repositories, which is required to obtain confidence in the predictive capabilities of reservoir simulators. We address this gap by proposing a benchmark study. A wide participation from academic and industrial institutions is sought, as the aim of building confidence in coupled simulators becomes more plausible with many participants. Most published benchmark studies on coupled multiphase flow and geomechanical processes have been performed within the field of nuclear waste disposal (e.g. the DECOVALEX project), using single-phase formulations only. As regards CO2 injection scenarios, international benchmark studies have been published comparing isothermal and non-isothermal multiphase flow processes, such as the code intercomparison by LBNL, the Stuttgart Benchmark study, the CLEAN benchmark approach and other initiatives. Recently, several codes have been developed or extended to simulate the coupling of hydraulic and geomechanical processes (OpenGeoSys, ECLIPSE-Visage, GEM, DuMuX and others), which now enables a comprehensive code comparison. We propose four benchmark tests of increasing complexity, addressing the coupling between multiphase flow and geomechanical processes during CO2 injection. In the first case, a horizontal non-faulted 2D model consisting of one reservoir and one cap rock is considered, focusing on stress and strain regime changes in the storage formation and the

  16. Benchmark on outdoor scenes

    NASA Astrophysics Data System (ADS)

    Zhang, Hairong; Wang, Cheng; Chen, Yiping; Jia, Fukai; Li, Jonathan

    2016-03-01

    Depth super-resolution is becoming popular in computer vision, and most test data are based on indoor data sets with ground-truth measurements, such as Middlebury. However, indoor data sets are mainly acquired from structured-light techniques under ideal conditions, which cannot represent the objective world under natural light. Unlike indoor scenes, the uncontrolled outdoor environment is much more complicated and is rich in both visual and depth texture. For that reason, we develop a more challenging and meaningful outdoor benchmark for depth super-resolution using a state-of-the-art active laser scanning system.

  17. Benchmarking emerging logic devices

    NASA Astrophysics Data System (ADS)

    Nikonov, Dmitri

    2014-03-01

    As complementary metal-oxide-semiconductor field-effect transistors (CMOS FET) are being scaled to ever smaller sizes by the semiconductor industry, the demand is growing for emerging logic devices to supplement CMOS in various special functions. Research directions and concepts of such devices are overviewed. They include tunneling, graphene based, spintronic devices etc. The methodology to estimate future performance of emerging (beyond CMOS) devices and simple logic circuits based on them is explained. Results of benchmarking are used to identify more promising concepts and to map pathways for improvement of beyond CMOS computing.

  18. Algebraic Multigrid Benchmark

    2013-05-06

    AMG2013 is a parallel algebraic multigrid solver for linear systems arising from problems on unstructured grids. It has been derived directly from the Boomer AMG solver in the hypre library, a large linear solvers library that is being developed in the Center for Applied Scientific Computing (CASC) at LLNL. The driver provided in the benchmark can build various test problems. The default problem is a Laplace type problem on an unstructured domain with various jumps and an anisotropy in one part.

  19. 2001 benchmarking guide.

    PubMed

    Hoppszallern, S

    2001-01-01

    Our fifth annual guide to benchmarking under managed care presents data that is a study in market dynamics and adaptation. New this year are financial indicators on HMOs exiting the market and those remaining. Hospital financial ratios and details on department performance are included. The physician group practice numbers show why physicians are scrutinizing capitated payments. Overall, hospitals in markets with high managed care penetration are more successful in managing labor costs and show productivity gains in imaging services, physical therapy and materials management.

  20. Evaluating South Carolina's community cardiovascular disease prevention project.

    PubMed Central

    Wheeler, F C; Lackland, D T; Mace, M L; Reddick, A; Hogelin, G; Remington, P L

    1991-01-01

    A community cardiovascular disease prevention program was undertaken as a cooperative effort of the South Carolina Department of Health and Environmental Control and the Centers for Disease Control of the Public Health Service. As part of the evaluation of the project, a large scale community health survey was conducted by the State and Federal agencies. The successful design and implementation of the survey, which included telephone and in-home interviews as well as clinical assessments of participants, is described. Interview response rates were adequate, although physical assessments were completed on only 61 percent of those interviewed. Households without telephones were difficult and costly to identify, and young adults were difficult to locate for survey participation. The survey produced baseline data for program planning and for measuring the success of ongoing intervention efforts. Survey data also have been used to estimate the prevalence of selected cardiovascular disease risk factors. PMID:1910187

  1. Project SOLWIND: Space radiation exposure. [evaluation of particle fluxes

    NASA Technical Reports Server (NTRS)

    Stassinopoulos, E. G.

    1975-01-01

    A special orbital radiation study was conducted for the SOLWIND project to evaluate mission-encountered energetic particle fluxes. Magnetic field calculations were performed with a current field model, extrapolated to the tentative spacecraft launch epoch with linear time terms. Orbital flux integrations for circular flight paths were performed with the latest proton and electron environment models, using new improved computational methods. Temporal variations in the ambient electron environment are considered and partially accounted for. Estimates of average energetic solar proton fluences are given for a one year mission duration at selected integral energies ranging from E greater than 10 to E greater than 100 MeV; the predicted annual fluence is found to relate to the period of maximum solar activity during the next solar cycle. The results are presented in graphical and tabular form; they are analyzed, explained, and discussed.

  2. USGS Blind Sample Project: monitoring and evaluating laboratory analytical quality

    USGS Publications Warehouse

    Ludtke, Amy S.; Woodworth, Mark T.

    1997-01-01

    The U.S. Geological Survey (USGS) collects and disseminates information about the Nation's water resources. Surface- and ground-water samples are collected and sent to USGS laboratories for chemical analyses. The laboratories identify and quantify the constituents in the water samples. Random and systematic errors occur during sample handling, chemical analysis, and data processing. Although all errors cannot be eliminated from measurements, the magnitude of their uncertainty can be estimated and tracked over time. Since 1981, the USGS has operated an independent, external, quality-assurance project called the Blind Sample Project (BSP). The purpose of the BSP is to monitor and evaluate the quality of laboratory analytical results through the use of double-blind quality-control (QC) samples. The information provided by the BSP assists the laboratories in detecting and correcting problems in the analytical procedures. The information also can aid laboratory users in estimating the extent that laboratory errors contribute to the overall errors in their environmental data.
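
    The monitoring idea lends itself to a compact illustration (a sketch of the general approach, not BSP code): each double-blind result is compared with the reference value, and the standardized difference is tracked over time to flag drift in laboratory bias. All numbers below are placeholders:

    ```python
    # Standardized differences for double-blind QC samples; values invented.
    import numpy as np

    results   = np.array([5.1, 4.8, 5.4, 5.9, 6.2])  # lab-reported, mg/L
    reference = 5.0                                   # prepared value, mg/L
    sigma     = 0.3                                   # method precision, mg/L

    z = (results - reference) / sigma
    print(z.round(2))  # a sustained run of |z| > 2 suggests analytical bias
    ```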

  3. Evaluating the utility of dynamical downscaling in agricultural impacts projections

    NASA Astrophysics Data System (ADS)

    Glotter, M.; Elliott, J. W.; McInerney, D. J.; Moyer, E. J.

    2013-12-01

    The need to understand the future impacts of climate change has driven the increasing use of dynamical downscaling to produce fine-spatial-scale climate projections for impacts models. We evaluate here whether this computationally intensive approach significantly alters projections of agricultural yield. Our results suggest that it does not. We simulate U.S. maize yields under current and future CO2 concentrations with the widely-used DSSAT crop model, driven by a variety of climate inputs including two general circulation models (GCMs), each in turn downscaled by two regional climate models (RCMs). We find that no climate model output can reproduce yields driven by observed climate unless a bias correction is first applied. Once a bias correction is applied, GCM- and RCM-driven yields are essentially indistinguishable in all scenarios (<10% discrepancy in national yield, equivalent to error from observations). While RCMs correct some GCM biases related to fine-scale geographic features, errors in yield are dominated by broad-scale (100s of kms) GCM systematic errors that RCMs cannot compensate for. These results support previous suggestions that the added value of dynamically downscaling raw GCM output for impacts assessments may not justify its computational demands, and that some rethinking of downscaling methods is warranted.
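
    As one concrete (and deliberately simple) example of the kind of bias correction the abstract refers to, a monthly "delta" correction shifts each model month by its historical bias against observations; the paper's actual method may differ:

    ```python
    # Monthly delta bias correction: subtract the historical-period bias
    # (model minus observations) from each future month. A minimal sketch.
    import numpy as np

    def bias_correct(model_hist, obs_hist, months_hist,
                     model_future, months_future):
        """All inputs are 1-D numpy arrays of monthly values; the months_*
        arrays give the calendar month (1-12) of each entry."""
        corrected = np.asarray(model_future, dtype=float).copy()
        for m in range(1, 13):
            bias = (model_hist[months_hist == m].mean()
                    - obs_hist[months_hist == m].mean())
            corrected[months_future == m] -= bias
        return corrected
    ```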

  4. The Employment Impact of the Des Moines Occupational Upgrading Project and Model Cities High School Equivalency Project: Project Year One Evaluation.

    ERIC Educational Resources Information Center

    Palomba, Neil A.; And Others

    This study was conducted to: (1) evaluate the Occupational Upgrading Project (OUP) and the Model Neighborhood High School Equivalency (HSE) Project's first year of operation, and (2) create baseline data from which future and more conclusive evaluation can be undertaken. Data were gathered by conducting open-ended interviews with the…

  5. Simple benchmark for complex dose finding studies.

    PubMed

    Cheung, Ying Kuen

    2014-06-01

    While a general goal of early phase clinical studies is to identify an acceptable dose for further investigation, modern dose finding studies and designs are highly specific to individual clinical settings. In addition, as outcome-adaptive dose finding methods often involve complex algorithms, it is crucial to have diagnostic tools to evaluate the plausibility of a method's simulated performance and the adequacy of the algorithm. In this article, we propose a simple technique that provides an upper limit, or a benchmark, of accuracy for dose finding methods for a given design objective. The proposed benchmark is nonparametric optimal in the sense of O'Quigley et al. (2002, Biostatistics 3, 51-56), and is demonstrated by examples to be a practical accuracy upper bound for model-based dose finding methods. We illustrate the implementation of the technique in the context of phase I trials that consider multiple toxicities and phase I/II trials where dosing decisions are based on both toxicity and efficacy, and apply the benchmark to several clinical examples considered in the literature. By comparing the operating characteristics of a dose finding method to that of the benchmark, we can form quick initial assessments of whether the method is adequately calibrated and evaluate its sensitivity to the dose-outcome relationships.
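
    The benchmark's logic can be sketched for the simplest toxicity-only setting (a hedged reading of the O'Quigley et al. construction; the dose-toxicity probabilities below are invented): each simulated patient's latent tolerance determines their outcome at every dose, so a "complete information" estimate of the toxicity rate at each dose is available, and the dose closest to the target is selected.

    ```python
    # Nonparametric complete-information benchmark sketch for dose finding.
    import numpy as np

    def benchmark_accuracy(true_tox, target=0.25, n_patients=30,
                           n_sims=10_000, seed=1):
        rng = np.random.default_rng(seed)
        true_tox = np.asarray(true_tox)            # monotone toxicity curve
        correct = np.argmin(np.abs(true_tox - target))
        hits = 0
        for _ in range(n_sims):
            u = rng.uniform(size=(n_patients, 1))  # latent tolerances
            profiles = (u < true_tox)              # outcome at *every* dose
            est = profiles.mean(axis=0)            # complete-info rates
            hits += np.argmin(np.abs(est - target)) == correct
        return hits / n_sims

    print(benchmark_accuracy([0.05, 0.12, 0.25, 0.40, 0.55]))
    ```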

  6. International land Model Benchmarking (ILAMB) Package v001.00

    DOE Data Explorer

    Mu, Mingquan [University of California, Irvine; Randerson, James T. [University of California, Irvine; Riley, William J. [Lawrence Berkeley National Laboratory; Hoffman, Forrest M. [Oak Ridge National Laboratory

    2016-05-02

    As a contribution to the International Land Model Benchmarking (ILAMB) Project, we are providing new analysis approaches, benchmarking tools, and science leadership. The goal of ILAMB is to assess and improve the performance of land models through international cooperation and to inform the design of new measurement campaigns and field studies to reduce uncertainties associated with key biogeochemical processes and feedbacks. ILAMB is expected to be a primary analysis tool for CMIP6 and future model-data intercomparison experiments. This team has developed initial prototype benchmarking systems for ILAMB, which will be improved and extended to include ocean model metrics and diagnostics.

  7. International land Model Benchmarking (ILAMB) Package v002.00

    DOE Data Explorer

    Collier, Nathaniel [Oak Ridge National Laboratory; Hoffman, Forrest M. [Oak Ridge National Laboratory; Mu, Mingquan [University of California, Irvine; Randerson, James T. [University of California, Irvine; Riley, William J. [Lawrence Berkeley National Laboratory

    2016-05-09

    As a contribution to the International Land Model Benchmarking (ILAMB) Project, we are providing new analysis approaches, benchmarking tools, and science leadership. The goal of ILAMB is to assess and improve the performance of land models through international cooperation and to inform the design of new measurement campaigns and field studies to reduce uncertainties associated with key biogeochemical processes and feedbacks. ILAMB is expected to be a primary analysis tool for CMIP6 and future model-data intercomparison experiments. This team has developed initial prototype benchmarking systems for ILAMB, which will be improved and extended to include ocean model metrics and diagnostics.

  8. Project Familia. Final Evaluation Report, 1993-94. OER Report.

    ERIC Educational Resources Information Center

    New York City Board of Education, Brooklyn, NY. Office of Educational Research.

    Project Familia was an Elementary and Secondary Education Act Title VII project in its second year in 1993-94 in New York City. Project Familia served 77 children at 3 schools who were identified as limited English proficient, special education students in prekindergarten through fifth grade and their parents. The project provided after-school…

  9. Design and Application of a Community Land Benchmarking System for Earth System Models

    NASA Astrophysics Data System (ADS)

    Mu, M.; Hoffman, F. M.; Lawrence, D. M.; Riley, W. J.; Keppel-Aleks, G.; Koven, C. D.; Kluzek, E. B.; Mao, J.; Randerson, J. T.

    2015-12-01

    Benchmarking has been widely used to assess the ability of climate models to capture the spatial and temporal variability of observations during the historical era. For the carbon cycle and terrestrial ecosystems, the design and development of an open-source community platform has been an important goal as part of the International Land Model Benchmarking (ILAMB) project. Here we developed a new benchmarking software system that enables the user to specify the models, benchmarks, and scoring metrics, so that results can be tailored to specific model intercomparison projects. Evaluation data sets included soil and aboveground carbon stocks, fluxes of energy, carbon and water, burned area, leaf area, and climate forcing and response variables. We used this system to evaluate simulations from the 5th Phase of the Coupled Model Intercomparison Project (CMIP5) with prognostic atmospheric carbon dioxide levels over the period from 1850 to 2005 (i.e., esmHistorical simulations archived on the Earth System Grid Federation). We found that the multi-model ensemble had a high bias in incoming solar radiation across Asia, likely as a consequence of incomplete representation of aerosol effects in this region, and in South America, primarily as a consequence of a low bias in mean annual precipitation. The reduced precipitation in South America had a larger influence on gross primary production than the high bias in incoming light, and as a consequence gross primary production had a low bias relative to the observations. Although model to model variations were large, the multi-model mean had a positive bias in atmospheric carbon dioxide that has been attributed in past work to weak ocean uptake of fossil emissions. In mid latitudes of the northern hemisphere, most models overestimate latent heat fluxes in the early part of the growing season, and underestimate these fluxes in mid-summer and early fall, whereas sensible heat fluxes show the opposite trend.

  10. Evaluating the impact of decision making during construction on transport project outcome.

    PubMed

    Polydoropoulou, Amalia; Roumboutsos, Athena

    2009-11-01

    Decisions made during the project construction phase may bear considerable impacts on the success of transport projects and undermine the ex-ante project evaluation. An innovative and holistic approach has been taken to assess and address this issue by (a) examining the decision process and procedure during project construction, through a field survey, (b) assessing the impact of decisions made during construction on respective transport project and, finally, (c) developing a quality monitoring framework model which links decisions made during the project implementation (construction) phase with the ex-ante and ex-post project evaluations. The framework model is proposed as a guiding and support tool for decision makers.

  12. A chemical EOR benchmark study of different reservoir simulators

    NASA Astrophysics Data System (ADS)

    Goudarzi, Ali; Delshad, Mojdeh; Sepehrnoori, Kamy

    2016-09-01

    Interest in chemical EOR processes has intensified in recent years due to the advancements in chemical formulations and injection techniques. Polymer (P), surfactant/polymer (SP), and alkaline/surfactant/polymer (ASP) injection processes are techniques for improving sweep and displacement efficiencies with the aim of improving oil production in both secondary and tertiary floods. There has been great interest in chemical flooding recently for different challenging situations. These include high temperature reservoirs, formations with extreme salinity and hardness, naturally fractured carbonates, and sandstone reservoirs with heavy and viscous crude oils. More oil reservoirs are reaching maturity where secondary polymer floods and tertiary surfactant methods have become increasingly important. This significance has added to the industry's interest in using reservoir simulators as tools for reservoir evaluation and management to minimize costs and increase the process efficiency. Reservoir simulators with special features are needed to represent coupled chemical and physical processes present in chemical EOR processes. The simulators need to be first validated against well controlled lab and pilot scale experiments to reliably predict the full field implementations. The available data from laboratory scale include 1) phase behavior and rheological data; and 2) results of secondary and tertiary coreflood experiments for P, SP, and ASP floods under reservoir conditions, i.e. chemical retentions, pressure drop, and oil recovery. Data collected from corefloods are used as benchmark tests comparing numerical reservoir simulators with chemical EOR modeling capabilities such as STARS of CMG, ECLIPSE-100 of Schlumberger, REVEAL of Petroleum Experts. The research UTCHEM simulator from The University of Texas at Austin is also included since it has been the benchmark for chemical flooding simulation for over 25 years. The results of this benchmark comparison will be utilized to improve

  13. A novel and well-defined benchmarking method for second generation read mapping

    PubMed Central

    2011-01-01

    Background: Second generation sequencing technologies yield DNA sequence data at ultra high-throughput. Common to most biological applications is a mapping of the reads to an almost identical or highly similar reference genome. The assessment of the quality of read mapping results is not straightforward and has not been formalized so far. Hence, it has not been easy to compare different read mapping approaches in a unified way and to determine which program is the best for what task. Results: We present a new benchmark method, called Rabema (Read Alignment BEnchMArk), for read mappers. It consists of a strict definition of the read mapping problem and of tools to evaluate the result of arbitrary read mappers supporting the SAM output format. Conclusions: We show the usefulness of the benchmark program by performing a comparison of popular read mappers. The tools supporting the benchmark are licensed under the GPL and available from http://www.seqan.de/projects/rabema.html. PMID:21615913

  14. Evaluation of Title I ESEA Projects, 1974-75: Technical Reports. Report No. 7606.

    ERIC Educational Resources Information Center

    Philadelphia School District, PA. Office of Research and Evaluation.

    Technical reports of individual Title I project evaluations conducted during the 1974-75 school year are contained in this annual volume. It presents information about each project's rationale, expected outcomes, mode of operation, previous evaluative findings, current implementation, and attainment of its objectives. Projects included are:…

  15. An Evaluation of the CERES Model Project--Career Education Responsive to Every Student, Ceres, California.

    ERIC Educational Resources Information Center

    Aslanian, Carol B.; Paul, Regina H.

    The CERES (Career Education Responsive to Every Student) Model Project for grades K-12 was evaluated by an outside party as well as internally by project staff (see CE 017 740). The external summative evaluation was limited to assessing project effectiveness based on pre- and posttests for the following objectives: (1) career education knowledge…

  16. Finding the Forest Amid the Trees: Tools for Evaluating Astronomy Education and Public Outreach Projects

    ERIC Educational Resources Information Center

    Bailey, Janelle M.; Slater, Timothy F.

    2004-01-01

    The effective evaluation of educational projects is becoming increasingly important to funding agencies and to the individuals and organizations involved in the projects. This brief "how-to" guide provides an introductory description of the purpose and basic ideas of project evaluation, and uses authentic examples from four different astronomy and…

  17. A Re-Evaluation of Project PRIDE, a Redesigned School-Based Drug Abuse Prevention Program

    ERIC Educational Resources Information Center

    LoSciuto, Leonard; Steinman, Ross B.

    2004-01-01

    The present study examined the effectiveness of Project PRIDE, a school-based, counselor-administered, drug and alcohol prevention program. The study is presented in the context of Project PRIDE's efforts to keep itself current and effective via continual evaluation-based development. In this outcome evaluation, Project PRIDE participants…

  18. Design and development of a community carbon cycle benchmarking system for CMIP5 models

    NASA Astrophysics Data System (ADS)

    Mu, M.; Hoffman, F. M.; Lawrence, D. M.; Riley, W. J.; Keppel-Aleks, G.; Randerson, J. T.

    2013-12-01

    Benchmarking has been widely used to assess the ability of atmosphere, ocean, sea ice, and land surface models to capture the spatial and temporal variability of observations during the historical period. For the carbon cycle and terrestrial ecosystems, the design and development of an open-source community platform has been an important goal as part of the International Land Model Benchmarking (ILAMB) project. Here we designed and developed a software system that enables the user to specify the models, benchmarks, and scoring systems so that results can be tailored to specific model intercomparison projects. We used this system to evaluate the performance of CMIP5 Earth system models (ESMs). Our scoring system used information from four different aspects of climate, including the climatological mean spatial pattern of gridded surface variables, seasonal cycle dynamics, the amplitude of interannual variability, and long-term decadal trends. We used this system to evaluate burned area, global biomass stocks, net ecosystem exchange, gross primary production, and ecosystem respiration from CMIP5 historical simulations. Initial results indicated that the multi-model mean often performed better than many of the individual models for most of the observational constraints.
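
    One plausible way to combine the four aspects into a single score (a sketch of the general idea, not the project's published metric) is to map each model-observation mismatch onto [0, 1] with an exponential decay and average across aspects:

    ```python
    # Normalized aspect scores averaged into an overall model score.
    import numpy as np

    def aspect_score(model, obs):
        """Relative-error score in [0, 1]; 1.0 means a perfect match."""
        rel_err = np.abs(model - obs).mean() / (np.abs(obs).mean() + 1e-12)
        return float(np.exp(-rel_err))

    def overall_score(aspect_pairs):
        """aspect_pairs: (model, obs) array pairs, one per aspect: mean
        state, seasonal cycle, interannual variability, long-term trend."""
        return float(np.mean([aspect_score(m, o) for m, o in aspect_pairs]))
    ```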

  19. Faculty Attitudes Toward Required Evaluative Projects for Doctor of Pharmacy Candidates.

    ERIC Educational Resources Information Center

    Murphy, John E.

    1997-01-01

    Describes evaluative projects required of seniors in an entry-level pharmacy doctoral program, and faculty (n=57) attitudes toward them. Findings indicated 32 publications and 68 professional presentations resulted from the projects. Respondents agreed the research-related courses and projects are important and that the project improves analytical…

  20. Benchmark analysis of MCNP™ ENDF/B-VI iron

    SciTech Connect

    Court, J.D.; Hendricks, J.S.

    1994-12-01

    The MCNP ENDF/B-VI iron cross-section data was subjected to four benchmark studies as part of the Hiroshima/Nagasaki dose re-evaluation for the National Academy of Science and the Defense Nuclear Agency. The four benchmark studies were: (1) the iron sphere benchmarks from the Lawrence Livermore Pulsed Spheres; (2) the Oak Ridge National Laboratory Fusion Reactor Shielding Benchmark; (3) a 76-cm diameter iron sphere benchmark done at the University of Illinois; (4) the Oak Ridge National Laboratory Benchmark for Neutron Transport through Iron. MCNP4A was used to model each benchmark and computational results from the ENDF/B-VI iron evaluations were compared to ENDF/B-IV, ENDF/B-V, the MCNP Recommended Data Set (which includes Los Alamos National Laboratory Group T-2 evaluations), and experimental data. The results show that the ENDF/B-VI iron evaluations are as good as, or better than, previous data sets.

  1. Benchmark specifications for EBR-II shutdown heat removal tests

    SciTech Connect

    Sofu, T.; Briggs, L. L.

    2012-07-01

    Argonne National Laboratory (ANL) is hosting an IAEA-coordinated research project on benchmark analyses of sodium-cooled fast reactor passive safety tests performed at the Experimental Breeder Reactor-II (EBR-II). The benchmark project involves analysis of a protected and an unprotected loss-of-flow test conducted during an extensive testing program within the framework of the U.S. Integral Fast Reactor program to demonstrate the inherent safety features of EBR-II as a pool-type, sodium-cooled fast reactor prototype. The project is intended to improve the participants' design and safety analysis capabilities for sodium-cooled fast reactors through validation and qualification of safety analysis codes and methods. This paper provides a description of the EBR-II tests included in the program and outlines the benchmark specifications being prepared to support the IAEA-coordinated research project. (authors)

  2. Annual Progress Report Fish Research Project Oregon : Project title, Evaluation of Habitat Improvements -- John Day River.

    SciTech Connect

    Olsen, Erik A.

    1984-01-01

    This report summarizes data collected in 1983 to evaluate habitat improvements in Deer, Camp, and Clear creeks, tributaries of the John Day River. The studies are designed to evaluate changes in abundance of spring chinook and summer steelhead due to habitat improvement projects and to contrast fishery benefits with costs of construction and maintenance of each project. Structure types being evaluated are: (1) log weirs, rock weirs, log deflectors, and instream boulders in Deer Creek; (2) log weirs in Camp Creek; and (3) log weir-boulder combinations and introduced spawning gravel in Clear Creek. Abundance of juvenile steelhead ranged from 16% to 119% higher in the improved (treatment) area than in the unimproved (control) area of Deer Creek. However, abundance of steelhead in Camp Creek was not significantly different between treatment and control areas. Chinook and steelhead abundance in Clear Creek was 50% and 25% lower, respectively, in 1983 than the mean abundance estimated in three previous years. The age structure of steelhead was similar between treatment and control areas in Deer and Clear creeks. The treatment area in Camp Creek, however, had a higher percentage of age 2 and older steelhead than the control. Steelhead redd counts in Camp Creek were 36% lower in 1983 than the previous five-year average. Steelhead redd counts in Deer Creek were not made in 1983 because of high streamflows. Chinook redds counted in Clear Creek were 64% lower than the five-year average. Surface area, volume, cover, and spawning gravel were the same or higher than the corresponding control in each stream except in Deer Creek, where there was less available cover and spawning gravel in sections with rock weirs and in those with log deflectors, respectively. Pool:riffle ratios ranged from 57:43 in sections in upper Clear Creek with log weirs to 9:91 in sections in Deer Creek with rock weirs. Smolt production following habitat improvements is estimated for each stream.

  3. Production of Working Reference Materials for the Capability Evaluation Project

    SciTech Connect

    Phillip D. Noll, Jr.; Robert S. Marshall

    1999-03-01

    Nondestructive waste assay (NDA) methods are employed to determine the mass and activity of waste-entrained radionuclides as part of the National TRU (Trans-Uranic) Waste Characterization Program. In support of this program the Idaho National Engineering and Environmental Laboratory Mixed Waste Focus Area developed a plan to acquire capability/performance data on systems proposed for NDA purposes. The Capability Evaluation Project (CEP) was designed to evaluate the NDA systems of commercial contractors by subjecting all participants to identical tests involving 55-gallon drum surrogates containing known quantities and distributions of radioactive materials in the form of sealed-source standards, referred to as working reference materials (WRMs). Although numerous Pu WRMs already exist, the CEP WRM set allows for the evaluation of the capability and performance of systems with respect to waste types/configurations which contain increased amounts of ²⁴¹Am relative to weapons-grade Pu, waste that is dominantly ²⁴¹Am, as well as wastes containing various proportions of depleted uranium. The CEP WRMs consist of a special mixture of PuO₂/AmO₂ (IAP) and diatomaceous earth (DE), or depleted uranium (DU) oxide and DE, and were fabricated at Los Alamos National Laboratory. The IAP WRMs are contained inside a pair of welded inner and outer stainless steel containers. The DU WRMs are singly contained within a stainless steel container equivalent to the outer container of the IAP standards. This report gives a general overview and discussion relating to the production and certification of the CEP WRMs.

  4. Safety in numbers 5: Evaluation of computer-based authentic assessment and high fidelity simulated OSCE environments as a framework for articulating a point of registration medication dosage calculation benchmark.

    PubMed

    Sabin, Mike; Weeks, Keith W; Rowe, David A; Hutton, B Meriel; Coben, Diana; Hall, Carol; Woolley, Norman

    2013-03-01

    This paper reports a key educational initiative undertaken by NHS Education for Scotland (NES), based upon recommendations from a 'Numeracy in Healthcare' consultation. We report here the design of a web-based technical measurement authentic assessment environment evolved from the safeMedicate suite of programs that provided a model for an environment within which a medication dosage calculation problem-solving (MDC-PS) benchmark could be articulated. A sample of 63 third-year pre-registration nursing students was recruited from four participating universities in the UK. A counterbalanced design was employed where the virtual authentic assessment environment was evaluated for internal consistency reliability and criterion-related validity against an objective structured clinical examination (OSCE) undertaken in high-fidelity simulated clinical environments. Outcome measures indicated an extremely high internal consistency of the web-based environment. It was concluded that the combination of a web-based authentic assessment environment and further assessment of safe technical measurement interpretation and dexterity in a practice/practice simulation setting, populated with a benchmark and a criterion referenced rubric validated by the profession, is an innovative, viable, valid and reliable assessment method for the safe administration of medicines. As a result, the rubric for assessment of the appropriate range of calculation type and complexity informed the NMC's revised Essential Skills Clusters for Medicines Management (NMC, 2010a; NMC, 2010b). This inclusion provides a particularly strong example of both research directly influencing policy and of evidence-based regulation.

  5. Benchmark Standards for Youth Apprenticeship Programs in Georgia.

    ERIC Educational Resources Information Center

    Smith, Clifton L.

    A project was conducted in Georgia to improve the quality of youth apprenticeship programs by identifying and validating a benchmarking system leading toward the establishment of a set of common, valued quality components and indicators for use by local educational agencies. Project activities were undertaken to accomplish the following: (1)…

  6. Thermal Performance Benchmarking; NREL (National Renewable Energy Laboratory)

    SciTech Connect

    Moreno, Gilbert

    2015-06-09

    This project proposes to seek out state-of-the-art (SOA) power electronics and motor technologies and to benchmark their thermal performance. The benchmarking will focus on the thermal aspects of the system. System metrics, including the junction-to-coolant thermal resistance and the parasitic power consumption (i.e., coolant flow rates and pressure drop performance) of the heat exchanger, will be measured. The type of heat exchanger (i.e., channel flow, brazed, folded-fin) and any enhancement features (i.e., enhanced surfaces) will be identified and evaluated to understand their effect on performance. Additionally, the thermal resistance/conductivity of the power module's passive stack and the motor's laminations and copper winding bundles will also be measured. The research conducted will provide insight into the various cooling strategies to understand which heat exchangers are most effective in terms of thermal performance and efficiency. Modeling analysis and fluid-flow visualization may also be carried out to better understand the heat transfer and fluid dynamics of the systems.
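
    The junction-to-coolant thermal resistance metric named above has a standard definition: the junction-to-coolant temperature rise divided by the dissipated power. A minimal sketch; the example values are illustrative, not NREL measurements:

        def junction_to_coolant_resistance(t_junction_c, t_coolant_c, power_w):
            """Thermal resistance in K/W: temperature rise per watt dissipated."""
            return (t_junction_c - t_coolant_c) / power_w

        # e.g. a module at 125 C with 65 C coolant while dissipating 300 W:
        print(junction_to_coolant_resistance(125.0, 65.0, 300.0))  # 0.2 K/W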

  7. US-VISIT Identity Matching Algorithm Evaluation Program: ADIS Algorithm Evaluation Project Plan Update

    SciTech Connect

    Grant, C W; Lenderman, J S; Gansemer, J D

    2011-02-24

    This document is an update to the 'ADIS Algorithm Evaluation Project Plan' specified in the Statement of Work for the US-VISIT Identity Matching Algorithm Evaluation Program as deliverable II.D.1. The original plan was delivered in August 2010. This document revises the plan to reflect modified deliverables resulting from delays in obtaining a database refresh, and describes the revised schedule of the program deliverables. The detailed description of the processes used, the statistical analysis processes, and the results of the statistical analysis will be described fully in the program deliverables. The US-VISIT Identity Matching Algorithm Evaluation Program is work performed by Lawrence Livermore National Laboratory (LLNL) under IAA HSHQVT-07-X-00002 P00004 from the Department of Homeland Security (DHS).

  8. Benchmark Generation and Simulation at Extreme Scale

    SciTech Connect

    Lagadapati, Mahesh; Mueller, Frank; Engelmann, Christian

    2016-01-01

    The path to extreme scale high-performance computing (HPC) poses several challenges related to power, performance, resilience, productivity, programmability, data movement, and data management. Investigating the performance of parallel applications at scale on future architectures and the performance impact of different architectural choices is an important component of HPC hardware/software co-design. Simulations using models of future HPC systems and communication traces from applications running on existing HPC systems can offer an insight into the performance of future architectures. This work targets technology developed for scalable application tracing of communication events. It focuses on extreme-scale simulation of HPC applications and their communication behavior via lightweight parallel discrete event simulation for performance estimation and evaluation. Instead of simply replaying a trace within a simulator, this work promotes the generation of a benchmark from traces. This benchmark is subsequently exposed to simulation using models to reflect the performance characteristics of future-generation HPC systems. This technique provides a number of benefits, such as eliminating the data intensive trace replay and enabling simulations at different scales. The presented work features novel software co-design aspects, combining the ScalaTrace tool to generate scalable trace files, the ScalaBenchGen tool to generate the benchmark, and the xSim tool to assess the benchmark characteristics within a simulator.

  9. Evaluation of Title I ESEA Projects, 1975-1976: Technical Reports. Report No. 77124.

    ERIC Educational Resources Information Center

    Philadelphia School District, PA. Office of Research and Evaluation.

    Technical reports of individual Title I project evaluations conducted during the 1975-76 school year are presented. The volume contains extensive information about each project's rationale, expected outcomes, mode of operation, previous evaluative findings, current implementation, and attainment of its objectives. The Title I evaluations contained…

  10. Methodological Report: Transnational European Evaluation Project II (TEEP II). ENQA Occasional Papers 9

    ERIC Educational Resources Information Center

    ENQA (European Association for Quality Assurance in Higher Education), 2006

    2006-01-01

    The second Transnational European Evaluation Project (TEEP II) was undertaken between August 2004 and June 2006. A methodology for evaluating transnational programmes had previously been tested during 2002-2003 by ENQA (European Association for Quality Assurance in Higher Education) in the first Transnational European Evaluation Project (TEEP I).…

  11. Student Self-Evaluations of Open-Ended Projects in a Grade 9 Science Classroom.

    ERIC Educational Resources Information Center

    Surry, Clint; Roth, Wolff-Michael

    1999-01-01

    Describes one teacher's first attempt to understand the role of students' self-evaluations of their process and products in a science unit centered upon open-ended projects. Describes the social structure in student group self-evaluation, and explains the important role students' self-evaluations can play as part of an open-ended project learning…

  12. Evaluation of Title IV-C ESEA Projects, 1977-1978. Annual Report. Report #7909.

    ERIC Educational Resources Information Center

    Philadelphia School District, PA. Office of Research and Evaluation.

    Reports of fourteen program descriptions and evaluations are presented. All but two were produced by the Department of Federal Evaluation Resource Services, a model state evaluation project. The projects varied in purpose; budget; grades served; and number of students, teachers, and administrators participating. Reports vary in detail from one to…

  13. Evaluating a "Second Life" Problem-Based Learning (PBL) Demonstrator Project: What Can We Learn?

    ERIC Educational Resources Information Center

    Beaumont, Chris; Savin-Baden, Maggi; Conradi, Emily; Poulton, Terry

    2014-01-01

    This article reports the findings of a demonstrator project to evaluate how effectively Immersive Virtual Worlds (IVWs) could support problem-based learning. The project designed, created and evaluated eight scenarios within "Second Life" (SL) for undergraduate courses in health care management and paramedic training. Evaluation was…

  14. An Analysis of Internally Funded Learning and Teaching Project Evaluation in Higher Education

    ERIC Educational Resources Information Center

    Huber, Elaine; Harvey, Marina

    2016-01-01

    Purpose: In the higher education sector, the evaluation of learning and teaching projects is assuming a role as a quality and accountability indicator. The purpose of this paper is to investigate how learning and teaching project evaluation is approached and critiques alignment between evaluation theory and practice. Design/Methodology/Approach:…

  15. Review of Evaluation Procedures Used in Project POWER.

    ERIC Educational Resources Information Center

    Ohio State Univ., Columbus. Center on Education and Training for Employment.

    Project POWER is a workplace literacy program conducted by Triton College. The project offers courses in English as a Second Language (ESL) and Adult Basic Education (ABE) to employers who are willing to pay their employees for half their class time and for 15 percent of the instructional costs. By the end of January 1990, the project had…

  16. Maths in the Kimberley Project: Evaluating the Pedagogical Model

    ERIC Educational Resources Information Center

    Mathematics Education Research Group of Australasia, 2010

    2010-01-01

    The Mathematics in the Kimberley Project is a three-year research and development project that focuses on mathematical pedagogy in remote Aboriginal community schools. The research team has regularly reported on the project at MERGA (Mathematics Education Research Group of Australasia) conferences, and in this symposium the participants evaluate…

  17. Project HEED. Final Evaluation Report, 1974-1975.

    ERIC Educational Resources Information Center

    Edington, Everett D.; Pettibone, Timothy J.

    Project HEED's (Heed Ethnic Educational Depolarization) main emphasis in 1974-75 was to develop reading and cultural awareness skills for kindergarten through 4th grades in the 7 project schools on American Indian reservations in Arizona. In its 4th year of operation, the project (funded under Elementary and Secondary Education Title III) involved…

  18. Project Recurso, 1989-1990. Final Evaluation Report. OREA Report.

    ERIC Educational Resources Information Center

    Rivera, Natasha

    This report presents final (fifth year) results of Project Recurso, a federally funded project which provided 147 Spanish-speaking special education students (grades 3-5) in 12 New York City schools with instruction in English as a Second Language (ESL), Native Language Arts (NLA), and bilingual content area subjects. The project also provided…

  19. Internal Benchmarking for Institutional Effectiveness

    ERIC Educational Resources Information Center

    Ronco, Sharron L.

    2012-01-01

    Internal benchmarking is an established practice in business and industry for identifying best in-house practices and disseminating the knowledge about those practices to other groups in the organization. Internal benchmarking can be done with structures, processes, outcomes, or even individuals. In colleges or universities with multicampuses or a…

  20. Benchmark for Strategic Performance Improvement.

    ERIC Educational Resources Information Center

    Gohlke, Annette

    1997-01-01

    Explains benchmarking, a total quality management tool used to measure and compare the work processes in a library with those in other libraries to increase library performance. Topics include the main groups of upper management, clients, and staff; critical success factors for each group; and benefits of benchmarking. (Author/LRW)

  1. The NAS kernel benchmark program

    NASA Technical Reports Server (NTRS)

    Bailey, D. H.; Barton, J. T.

    1985-01-01

    A collection of benchmark test kernels that measure supercomputer performance has been developed for the use of the NAS (Numerical Aerodynamic Simulation) program at the NASA Ames Research Center. This benchmark program is described in detail and the specific ground rules are given for running the program as a performance test.

  2. Evaluation of a novel molecular vibration-based descriptor (EVA) for QSAR studies: 2. Model validation using a benchmark steroid dataset.

    PubMed

    Turner, D B; Willett, P; Ferguson, A M; Heritage, T W

    1999-05-01

    The EVA molecular descriptor derived from calculated molecular vibrational frequencies is validated for use in QSAR studies. EVA provides a conformationally sensitive but, unlike 3D-QSAR methods such as CoMFA, superposition-free descriptor that has been shown to perform well with a wide range of datasets and biological endpoints. A detailed study is made using a benchmark steroid dataset with a training/test set division of structures. Intensive statistical validation tests are undertaken including various forms of crossvalidation and repeated random permutation testing. Latent variable score plots show that the distribution of structures in reduced dimensional space can be rationalized in terms of activity classes and that EVA is sensitive to structural inconsistencies. Together, the findings indicate that EVA is a statistically robust means of detecting structure-activity correlations with performance entirely comparable to that of analogous CoMFAs. The EVA descriptor is shown to be conformationally sensitive and as such can be considered to be a 3D descriptor but with the advantage over CoMFA that structural superposition is not required. EVA has the property that in certain situations the conformational sensitivity can be altered through the appropriate choice of the EVA sigma parameter. PMID:10216834
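
    The descriptor construction can be sketched as follows: place a Gaussian of width sigma at each calculated vibrational frequency and sample the summed profile on a fixed grid. The grid bounds, step, and sigma below are illustrative choices, not the published parameterization:

        import numpy as np

        def eva_descriptor(frequencies, sigma=10.0, fmin=0.0, fmax=4000.0, step=5.0):
            """EVA-style descriptor over a wavenumber grid (cm^-1).

            Summing Gaussians centered on each frequency gives a fixed-length,
            superposition-free vector; sigma controls how strongly small
            frequency shifts (e.g. conformational changes) alter the profile.
            """
            grid = np.arange(fmin, fmax + step, step)
            profile = np.zeros_like(grid)
            for f in frequencies:
                profile += np.exp(-((grid - f) ** 2) / (2.0 * sigma ** 2))
            return profile

        x = eva_descriptor([1650.0, 2900.0, 3400.0])  # illustrative frequencies
        print(x.shape)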

  3. FireHose Streaming Benchmarks

    2015-01-27

    The FireHose Streaming Benchmarks are a suite of stream-processing benchmarks defined to enable comparison of streaming software and hardware, both quantitatively vis-a-vis the rate at which they can process data, and qualitatively by judging the effort involved to implement and run the benchmarks. Each benchmark has two parts. The first is a generator which produces and outputs datums at a high rate in a specific format. The second is an analytic which reads the stream of datums and is required to perform a well-defined calculation on the collection of datums, typically to find anomalous datums that have been created in the stream by the generator. The FireHose suite provides code for the generators, sample code for the analytics (which users are free to re-implement in their own custom frameworks), and a precise definition of each benchmark calculation.

  4. FireHose Streaming Benchmarks

    SciTech Connect

    Karl Anderson, Steve Plimpton

    2015-01-27

    The FireHose Streaming Benchmarks are a suite of stream-processing benchmarks defined to enable comparison of streaming software and hardware, both quantitatively vis-a-vis the rate at which they can process data, and qualitatively by judging the effort involved to implement and run the benchmarks. Each benchmark has two parts. The first is a generator which produces and outputs datums at a high rate in a specific format. The second is an analytic which reads the stream of datums and is required to perform a well-defined calculation on the collection of datums, typically to find anomalous datums that have been created in the stream by the generator. The FireHose suite provides code for the generators, sample code for the analytics (which users are free to re-implement in their own custom frameworks), and a precise definition of each benchmark calculation.
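
    The two-part structure described above is easy to picture with a toy generator/analytic pair. This is a hedged sketch of the concept only; the datum format, rate handling, and anomaly rule of the real FireHose benchmarks differ:

        import random

        def generator(n, anomaly_rate=0.01):
            """Emit (key, value) datums, a small fraction marked anomalous."""
            for _ in range(n):
                key = random.randrange(10_000)
                value = 'ANOMALY' if random.random() < anomaly_rate else 'OK'
                yield key, value

        def analytic(stream):
            """Read the stream and collect keys that produced anomalous datums."""
            flagged = set()
            for key, value in stream:
                if value == 'ANOMALY':
                    flagged.add(key)
            return flagged

        print(len(analytic(generator(100_000))))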

  5. Finding a benchmark for monitoring hospital cleanliness.

    PubMed

    Mulvey, D; Redding, P; Robertson, C; Woodall, C; Kingsmore, P; Bedwell, D; Dancer, S J

    2011-01-01

    This study evaluated three methods for monitoring hospital cleanliness. The aim was to find a benchmark that could indicate risk to patients from a contaminated environment. We performed visual monitoring, ATP bioluminescence and microbiological screening of five clinical surfaces before and after detergent-based cleaning on two wards over a four-week period. Five additional sites that were not featured in the routine domestic specification were also sampled. Measurements from all three methods were integrated and compared in order to choose appropriate levels for routine monitoring. We found that visual assessment reflected neither ATP values nor environmental contamination with microbial flora including Staphylococcus aureus and meticillin-resistant S. aureus (MRSA). There was a relationship between microbial growth categories and the proportion of ATP values exceeding a chosen benchmark, but neither reliably predicted the presence of S. aureus or MRSA. ATP values were occasionally diverse. Detergent-based cleaning reduced levels of organic soil by 32% (95% confidence interval: 16-44%; P<0.001) but did not necessarily eliminate indicator staphylococci, some of which survived the cleaning process. An ATP benchmark value of 100 relative light units offered the closest correlation with microbial growth levels <2.5 cfu/cm(2) (receiver operating characteristic (ROC) curve sensitivity: 57%; specificity: 57%). In conclusion, microbiological and ATP monitoring confirmed environmental contamination and persistence of hospital pathogens, and measured the effect of current cleaning practices on the environment. This study has provided provisional benchmarks to assist with future assessment of hospital cleanliness. Further work is required to refine practical sampling strategy and choice of benchmarks. PMID:21129820
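
    Evaluating a candidate benchmark such as the 100 relative-light-unit cut-off amounts to a sensitivity/specificity calculation against the microbial criterion. A sketch, with the pairing of ATP and growth readings assumed for illustration:

        def benchmark_performance(atp_values, growths,
                                  atp_cutoff=100.0, growth_cutoff=2.5):
            """Sensitivity/specificity of an ATP benchmark (RLU) against a
            microbial growth criterion (cfu/cm^2). Illustrative sketch."""
            tp = fp = tn = fn = 0
            for atp, cfu in zip(atp_values, growths):
                contaminated = cfu >= growth_cutoff
                flagged = atp >= atp_cutoff
                if contaminated and flagged:
                    tp += 1
                elif contaminated:
                    fn += 1
                elif flagged:
                    fp += 1
                else:
                    tn += 1
            sensitivity = tp / (tp + fn) if tp + fn else float('nan')
            specificity = tn / (tn + fp) if tn + fp else float('nan')
            return sensitivity, specificity

        print(benchmark_performance([50, 150, 220, 80], [1.0, 3.0, 2.0, 4.0]))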

  6. Benchmarks for Science Literacy.

    ERIC Educational Resources Information Center

    American Association for the Advancement of Science, Washington, DC.

    Project 2061, begun in 1985, is a long-term effort of scientists and educators on behalf of all children, the purpose of which is to help transform the nation's school system so that all students become well educated in science, mathematics, and technology. Science For All Americans, the first Project 2061 publication, answered the question of…

  7. Graphite and Beryllium Reflector Critical Assemblies of UO2 (Benchmark Experiments 2 and 3)

    SciTech Connect

    Margaret A. Marshall; John D. Bess

    2012-11-01

    INTRODUCTION A series of experiments was carried out in 1962-65 at the Oak Ridge National Laboratory Critical Experiments Facility (ORCEF) for use in space reactor research programs. A core containing 93.2 wt% enriched UO2 fuel rods was used in these experiments. The first part of the experimental series consisted of 252 tightly packed fuel rods (1.27-cm triangular pitch) with graphite reflectors [1], the second part used 252 graphite-reflected fuel rods organized in a 1.506-cm triangular-pitch array [2], and the final part of the experimental series consisted of 253 beryllium-reflected fuel rods in a 1.506-cm triangular-pitch configuration and in a 7-tube-cluster configuration [3]. Fission rate distribution and cadmium ratio measurements were taken for all three parts of the experimental series. Reactivity coefficient measurements were taken for various materials placed in the beryllium-reflected core. All three experiments in the series have been evaluated for inclusion in the International Reactor Physics Experiment Evaluation Project (IRPhEP) [4] and the International Criticality Safety Benchmark Evaluation Project (ICSBEP) Handbooks [5]. The evaluation of the first experiment in the series was discussed at the 2011 ANS Winter Meeting [6]. The evaluations of the second and third experiments are discussed below. These experiments are of interest as benchmarks because they support the validation of compact reactor designs with characteristics similar to the design parameters for space nuclear fission surface power systems [7].

  8. Developing scheduling benchmark tests for the Space Network

    NASA Technical Reports Server (NTRS)

    Moe, Karen L.; Happell, Nadine; Brady, Sean

    1993-01-01

    A set of benchmark tests was developed to analyze and measure Space Network scheduling characteristics and to assess the potential benefits of a proposed flexible scheduling concept. This paper discusses the role of the benchmark tests in evaluating alternative flexible scheduling approaches and defines a set of performance measurements. The paper describes the rationale for the benchmark tests as well as the benchmark components, which include models of the Tracking and Data Relay Satellite (TDRS), mission spacecraft, their orbital data, and flexible requests for communication services. Parameters which vary in the tests address the degree of request flexibility, the request resource load, and the number of events to schedule. Test results are evaluated based on time to process and schedule quality. Preliminary results and lessons learned are addressed.
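
    The varied parameters and outcome measures described above can be captured in a small data structure; the field names here are hypothetical, not taken from the Space Network study:

        from dataclasses import dataclass

        @dataclass
        class SchedulingBenchmark:
            """One benchmark test case (illustrative fields)."""
            request_flexibility: float   # 0 = fixed start time, 1 = fully flexible
            resource_load: float         # requested time / available TDRS time
            n_events: int                # number of events to schedule

        @dataclass
        class BenchmarkResult:
            """Outcome measures named in the abstract (illustrative fields)."""
            processing_time_s: float     # time to produce a schedule
            fraction_scheduled: float    # a simple schedule-quality proxy

        case = SchedulingBenchmark(request_flexibility=0.5,
                                   resource_load=0.8, n_events=200)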

  9. Benchmark problems and results for verifying resonance calculation methodologies

    SciTech Connect

    Wu, H.; Yang, W.; Qin, Y.; He, L.; Cao, L.; Zheng, Y.; Liu, Q.

    2012-07-01

    Resonance calculation is one of the most important procedures for the multi-group neutron transport calculation. With the development of nuclear reactor concepts, many new types of fuel assembly are raised. Compared to the traditional designs, most of the new fuel assemblies have different fuel types either with complex isotopes or with complicated geometry. This makes the traditional resonance calculation method invalid. Recently, many advanced resonance calculation methods are proposed. However, there are few benchmark problems for evaluating those methods with a comprehensive comparison. In this paper, we design 5 groups of benchmark problems including 21 typical cases of different geometries and fuel contents. The reference results of the benchmark problems are generated based on the sub-group method, ultra-fine group method, function expanding method and Monte Carlo method. It is shown that those benchmark problems and their results could be helpful to evaluate the validity of the newly developed resonance calculation method in the future work. (authors)

  10. Evaluating the High School Lunar Research Projects Program

    NASA Astrophysics Data System (ADS)

    Shaner, A. J.; Shipp, S. S.; Allen, J.; Kring, D. A.

    2012-12-01

    The Center for Lunar Science and Exploration (CLSE), a collaboration between the Lunar and Planetary Institute and NASA's Johnson Space Center, is one of seven member teams of the NASA Lunar Science Institute (NLSI). In addition to research and exploration activities, the CLSE team is deeply invested in education and outreach. In support of NASA's and NLSI's objective to train the next generation of scientists, CLSE's High School Lunar Research Projects program is a conduit through which high school students can actively participate in lunar science and learn about pathways into scientific careers. The objectives of the program are to enhance 1) student views of the nature of science; 2) student attitudes toward science and science careers; and 3) student knowledge of lunar science. In its first three years, approximately 140 students and 28 teachers from across the United States have participated in the program. Before beginning their research, students undertake Moon 101, a guided-inquiry activity designed to familiarize them with lunar science and exploration. Following Moon 101, and guided by a lunar scientist mentor, teams choose a research topic, ask their own research question, and design their own research approach to direct their investigation. At the conclusion of their research, teams present their results to a panel of lunar scientists. This panel selects four posters to be presented at the annual Lunar Science Forum held at NASA Ames. The top scoring team travels to the forum to present their research. Three instruments have been developed or modified to evaluate the extent to which the High School Lunar Research Projects meets its objectives. These three instruments measure changes in student views of the nature of science, attitudes towards science and science careers, and knowledge of lunar science. Exit surveys for teachers, students, and mentors were also developed to elicit general feedback about the program and its impact. The nature of science

  11. Algebra Project DR K-12 Cohorts--Demonstration Project: Summative Evaluation Report

    ERIC Educational Resources Information Center

    St. John, Mark

    2014-01-01

    The Algebra Project DR K-12, funded by the National Science Foundation as a Research and Development Project, addressed the challenge of offering significant STEM content for students to ensure public literacy and workforce readiness. The project's primary purpose was to test the feasibility and effectiveness of a model for establishing four-year…

  12. A Built-In System of Evaluation for Reform Projects and Programmes in Education.

    ERIC Educational Resources Information Center

    Dave, Ravindra H.

    1980-01-01

    An EIPOL grid which combines five major dimensions of a broad-based evaluation system with different steps of a project cycle provides a basic operational framework for designing and adopting a more functional system of reform evaluation. (Editor)

  13. Combining Phase Identification and Statistic Modeling for Automated Parallel Benchmark Generation

    SciTech Connect

    Jin, Ye; Ma, Xiaosong; Liu, Qing Gary; Liu, Mingliang; Logan, Jeremy S; Podhorszki, Norbert; Choi, Jong Youl; Klasky, Scott A

    2015-01-01

    Parallel application benchmarks are indispensable for evaluating/optimizing HPC software and hardware. However, it is very challenging and costly to obtain high-fidelity benchmarks reflecting the scale and complexity of state-of-the-art parallel applications. Hand-extracted synthetic benchmarks are time- and labor-intensive to create. Real applications themselves, while offering the most accurate performance evaluation, are expensive to compile, port, reconfigure, and often plainly inaccessible due to security or ownership concerns. This work contributes APPRIME, a novel tool for trace-based automatic parallel benchmark generation. Taking as input standard communication-I/O traces of an application's execution, it couples accurate automatic phase identification with statistical regeneration of event parameters to create compact, portable, and to some degree reconfigurable parallel application benchmarks. Experiments with four NAS Parallel Benchmarks (NPB) and three real scientific simulation codes confirm the fidelity of APPRIME benchmarks. They retain the original applications' performance characteristics, in particular the relative performance across platforms.
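
    The "statistical regeneration of event parameters" step can be sketched as fitting a distribution to traced values per phase and sampling from it when the benchmark runs. A hedged toy version, not the actual APPRIME code:

        import random
        import statistics

        def fit_phase_model(message_sizes):
            """Summarize traced message sizes for one communication phase."""
            return statistics.mean(message_sizes), statistics.stdev(message_sizes)

        def regenerate_events(model, n_events):
            """Sample synthetic message sizes from the fitted phase model."""
            mean, stdev = model
            return [max(1, int(random.gauss(mean, stdev))) for _ in range(n_events)]

        trace = [1024, 2048, 1024, 4096, 2048, 1024]   # illustrative traced sizes
        print(regenerate_events(fit_phase_model(trace), 5))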

  14. The WACMOS-ET project - Part 2: Evaluation of global terrestrial evaporation data sets

    NASA Astrophysics Data System (ADS)

    Miralles, D. G.; Jiménez, C.; Jung, M.; Michel, D.; Ershadi, A.; McCabe, M. F.; Hirschi, M.; Martens, B.; Dolman, A. J.; Fisher, J. B.; Mu, Q.; Seneviratne, S. I.; Wood, E. F.; Fernández-Prieto, D.

    2016-02-01

    The WAter Cycle Multi-mission Observation Strategy - EvapoTranspiration (WACMOS-ET) project aims to advance the development of land evaporation estimates on global and regional scales. Its main objective is the derivation, validation, and intercomparison of a group of existing evaporation retrieval algorithms driven by a common forcing data set. Three commonly used process-based evaporation methodologies are evaluated: the Penman-Monteith algorithm behind the official Moderate Resolution Imaging Spectroradiometer (MODIS) evaporation product (PM-MOD), the Global Land Evaporation Amsterdam Model (GLEAM), and the Priestley-Taylor Jet Propulsion Laboratory model (PT-JPL). The resulting global spatiotemporal variability of evaporation, the closure of regional water budgets, and the discrete estimation of land evaporation components or sources (i.e. transpiration, interception loss, and direct soil evaporation) are investigated using river discharge data, independent global evaporation data sets and results from previous studies. In a companion article (Part 1), Michel et al. (2016) inspect the performance of these three models at local scales using measurements from eddy-covariance towers and include in the assessment the Surface Energy Balance System (SEBS) model. In agreement with Part 1, our results indicate that the Priestley and Taylor products (PT-JPL and GLEAM) perform best overall for most ecosystems and climate regimes. While all three evaporation products adequately represent the expected average geographical patterns and seasonality, there is a tendency in PM-MOD to underestimate the flux in the tropics and subtropics. Overall, results from GLEAM and PT-JPL appear more realistic when compared to surface water balances from 837 globally distributed catchments and to separate evaporation estimates from ERA-Interim and the model tree ensemble (MTE). Nonetheless, all products show large dissimilarities during conditions of water stress and drought and

  15. The WACMOS-ET project - Part 2: Evaluation of global terrestrial evaporation data sets

    NASA Astrophysics Data System (ADS)

    Miralles, D. G.; Jiménez, C.; Jung, M.; Michel, D.; Ershadi, A.; McCabe, M. F.; Hirschi, M.; Martens, B.; Dolman, A. J.; Fisher, J. B.; Mu, Q.; Seneviratne, S. I.; Wood, E. F.; Fernández-Prieto, D.

    2015-10-01

    The WACMOS-ET project aims to advance the development of land evaporation estimates at global and regional scales. Its main objective is the derivation, validation and inter-comparison of a group of existing evaporation retrieval algorithms driven by a common forcing data set. Three commonly used process-based evaporation methodologies are evaluated: the Penman-Monteith algorithm behind the official Moderate Resolution Imaging Spectroradiometer (MODIS) evaporation product (PM-MOD), the Global Land Evaporation Amsterdam Model (GLEAM), and the Priestley and Taylor Jet Propulsion Laboratory model (PT-JPL). The resulting global spatiotemporal variability of evaporation, the closure of regional water budgets and the discrete estimation of land evaporation components or sources (i.e. transpiration, interception loss and direct soil evaporation) are investigated using river discharge data, independent global evaporation data sets and results from previous studies. In a companion article (Part 1), Michel et al. (2015) inspect the performance of these three models at local scales using measurements from eddy-covariance towers, and include in the assessment the Surface Energy Balance System (SEBS) model. In agreement with Part 1, our results here indicate that the Priestley and Taylor-based products (PT-JPL and GLEAM) perform overall best for most ecosystems and climate regimes. While all three products adequately represent the expected average geographical patterns and seasonality, there is a tendency in PM-MOD to underestimate the flux in the tropics and subtropics. Overall, results from GLEAM and PT-JPL appear more realistic when compared against surface water balances from 837 globally-distributed catchments, and against separate evaporation estimates from ERA-Interim and the Model Tree Ensemble (MTE). Nonetheless, all products manifest large dissimilarities during conditions of water stress and drought, and deficiencies in the way evaporation is partitioned into its

  16. Phase-covariant quantum benchmarks

    NASA Astrophysics Data System (ADS)

    Calsamiglia, J.; Aspachs, M.; Muñoz-Tapia, R.; Bagan, E.

    2009-05-01

    We give a quantum benchmark for teleportation and quantum storage experiments suited for pure and mixed test states. The benchmark is based on the average fidelity over a family of phase-covariant states and certifies that an experiment cannot be emulated by a classical setup, i.e., by a measure-and-prepare scheme. We give an analytical solution for qubits, which shows important differences with standard state estimation approach, and compute the value of the benchmark for coherent and squeezed states, both pure and mixed.
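
    As a rough sketch of the quantity involved (the notation here is assumed, not taken from the paper), the benchmark for equatorial qubit states is the fidelity averaged over the phase φ:

        \bar{F} \;=\; \int_0^{2\pi} \frac{d\phi}{2\pi}\,
        \langle \psi_\phi |\, \rho_{\mathrm{out}}(\phi)\, | \psi_\phi \rangle,
        \qquad
        |\psi_\phi\rangle \;=\; \tfrac{1}{\sqrt{2}}\bigl(|0\rangle + e^{i\phi}|1\rangle\bigr),

    and the experiment certifies genuinely quantum operation if \bar{F} exceeds the best value attainable by any measure-and-prepare (classical) strategy.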

  17. Issues in Benchmark Metric Selection

    NASA Astrophysics Data System (ADS)

    Crolotte, Alain

    It is true that a metric can influence a benchmark but will esoteric metrics create more problems than they will solve? We answer this question affirmatively by examining the case of the TPC-D metric which used the much debated geometric mean for the single-stream test. We will show how a simple choice influenced the benchmark and its conduct and, to some extent, DBMS development. After examining other alternatives our conclusion is that the “real” measure for a decision-support benchmark is the arithmetic mean.
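
    The distortion at issue is easy to reproduce numerically: with one slow query among fast ones, the two means diverge sharply. Illustrative numbers only:

        import math

        times = [1.0, 1.0, 1.0, 100.0]        # one slow query dominates
        arithmetic = sum(times) / len(times)   # 25.75
        geometric = math.exp(sum(map(math.log, times)) / len(times))  # ~3.16

        # The geometric mean rewards speeding up an already-fast query as much
        # as the slow one -- the behavior debated for the TPC-D metric.
        print(arithmetic, geometric)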

  18. A Heterogeneous Medium Analytical Benchmark

    SciTech Connect

    Ganapol, B.D.

    1999-09-27

    A benchmark, called benchmark BLUE, has been developed for one-group neutral particle (neutron or photon) transport in a one-dimensional sub-critical heterogeneous plane parallel medium with surface illumination. General anisotropic scattering is accommodated through the Green's Function Method (GFM). Numerical Fourier transform inversion is used to generate the required Green's functions, which are kernels to coupled integral equations that give the exiting angular fluxes. The interior scalar flux is then obtained through quadrature. A compound iterative procedure for quadrature order and slab surface source convergence provides highly accurate benchmark-quality results (4 to 5 places of accuracy).

  19. California commercial building energy benchmarking

    SciTech Connect

    Kinney, Satkartar; Piette, Mary Ann

    2003-07-01

    Building energy benchmarking is the comparison of whole-building energy use relative to a set of similar buildings. It provides a useful starting point for individual energy audits and for targeting buildings for energy-saving measures in multiple-site audits. Benchmarking is of interest and practical use to a number of groups. Energy service companies and performance contractors communicate energy savings potential with ''typical'' and ''best-practice'' benchmarks while control companies and utilities can provide direct tracking of energy use and combine data from multiple buildings. Benchmarking is also useful in the design stage of a new building or retrofit to determine if a design is relatively efficient. Energy managers and building owners have an ongoing interest in comparing energy performance to others. Large corporations, schools, and government agencies with numerous facilities also use benchmarking methods to compare their buildings to each other. The primary goal of Task 2.1.1 Web-based Benchmarking was the development of a web-based benchmarking tool, dubbed Cal-Arch, for benchmarking energy use in California commercial buildings. While there were several other benchmarking tools available to California consumers prior to the development of Cal-Arch, there were none that were based solely on California data. Most available benchmarking information, including the Energy Star performance rating, were developed using DOE's Commercial Building Energy Consumption Survey (CBECS), which does not provide state-level data. Each database and tool has advantages as well as limitations, such as the number of buildings and the coverage by type, climate regions and end uses. There is considerable commercial interest in benchmarking because it provides an inexpensive method of screening buildings for tune-ups and retrofits. However, private companies who collect and manage consumption data are concerned that the identities of building owners might be revealed and
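
    At its core, whole-building benchmarking places a building's energy use intensity (EUI) within a peer distribution. A minimal sketch, with hypothetical data and function names:

        def benchmark_eui(annual_kwh, floor_area_m2, peer_euis):
            """Return the building's EUI (kWh/m2/yr) and its percentile rank
            among peer buildings of similar type and climate. Illustrative."""
            eui = annual_kwh / floor_area_m2
            rank = sum(1 for p in peer_euis if p <= eui)
            return eui, 100.0 * rank / len(peer_euis)

        eui, pct = benchmark_eui(1_200_000, 10_000, [90, 110, 120, 150, 200])
        print(f"EUI = {eui:.0f}; at or above {pct:.0f}% of peers")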

  20. Phase-covariant quantum benchmarks

    SciTech Connect

    Calsamiglia, J.; Aspachs, M.; Munoz-Tapia, R.; Bagan, E.

    2009-05-15

    We give a quantum benchmark for teleportation and quantum storage experiments suited for pure and mixed test states. The benchmark is based on the average fidelity over a family of phase-covariant states and certifies that an experiment cannot be emulated by a classical setup, i.e., by a measure-and-prepare scheme. We give an analytical solution for qubits, which shows important differences with standard state estimation approach, and compute the value of the benchmark for coherent and squeezed states, both pure and mixed.

  1. Project Emerge, Dayton, Ohio. 1972-73 Final Evaluation Report.

    ERIC Educational Resources Information Center

    Dayton City School District, OH.

    Project Emerge, funded under Title VIII of the 1965 Elementary Secondary Education Act, is located in the Model Cities Area, a black-inhabited west side section of Dayton, Ohio. The target school student population is 2,300 of which 20 percent come from families with low incomes. Project Emerge's major objectives are to reduce the dropout rate in…

  2. Alberta Education Energy Conservation Project. Phase II: Internal Evaluation.

    ERIC Educational Resources Information Center

    Sundmark, Dana

    This report is based on the Alberta Education Energy Conservation Project - Phase II. The project was a follow-up to an earlier study, extending from June 1980 to June 1983, in which government funding and engineering manpower were used to conduct an energy management program in 52 selected pilot schools in 5 areas of the province. The report…

  3. Project CHAMP, 1985-1986. OEA Evaluation Report.

    ERIC Educational Resources Information Center

    New York City Board of Education, Brooklyn. Office of Educational Assessment.

    The Chinese Achievement and Mastery program, Project CHAMP, was a bilingual (Chinese/English) project offered at three high schools in Manhattan. The major goals were to enable Chinese students of limited English proficiency (LEP) to learn English and to master content in mathematics, science, global history, computer mathematics, and native…

  4. Project PROBE, 1985-1986. OEA Evaluation Report.

    ERIC Educational Resources Information Center

    New York City Board of Education, Brooklyn. Office of Educational Assessment.

    In its second year of operation, Project PROBE (Professions Oriented Bilingual Education) experienced difficulty in meeting some of its instructional objectives. The project had sought to provide instructional and supportive services to 200 Spanish-speaking students from Latin America at Louis D. Brandeis High School (Manhattan, New York) and to…

  5. An Evaluation of Techniques for Projecting Grade-Progression Ratios.

    ERIC Educational Resources Information Center

    Spar, Michael A.

    1994-01-01

    Several different methods for projecting grade-progression ratios were tested with 12 years of grade-specific membership data for Virginia. In nearly all cases, projections made by exponential smoothing produced more accurate results than series produced by averages, moving averages, or regression methods. (SLD)
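
    Exponential smoothing of a grade-progression series is straightforward; a minimal sketch with an illustrative smoothing constant (not the study's value):

        def smooth_progression_ratio(ratios, alpha=0.3):
            """Exponentially smooth a grade-progression ratio series
            (oldest first) and return the value used as the projection."""
            estimate = ratios[0]
            for r in ratios[1:]:
                estimate = alpha * r + (1 - alpha) * estimate
            return estimate

        history = [0.98, 0.97, 0.99, 0.96, 0.98]   # illustrative ratios
        print(smooth_progression_ratio(history))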

  6. Project BRIDGES. 1986-1987. OEA Evaluation Report.

    ERIC Educational Resources Information Center

    Scorza, Margaret H.; And Others

    In its first year under Title VII funding, Project BRIDGES (Bilingual Resource Instruction for the Development of Gainful Employment Skills) provided instructional and support services to 346 limited-English-speaking students in three Brooklyn (New York) high schools (South Shore, Sheepshead Bay, Franklin D. Roosevelt). The project's aim was to…

  7. A risk evaluation for the fuel retrieval sub-project

    SciTech Connect

    Carlisle, B.S.

    1996-10-11

    This study reviews the technical, schedule and budget baselines of the sub-project to assure all significant issues have been identified on the sub-project issues management list. The issue resolution dates are identified and resolution plans established. Those issues that could adversely impact procurement activities have been uniquely identified on the list and a risk assessment completed.

  8. Project Aprendizaje, 1988-89. Evaluation Section Report. OREA Report.

    ERIC Educational Resources Information Center

    Berney, Tomi D.; Velasquez, Clara

    In it's first year, Project Aprendizaje served 250 students from the Dominican Republic and Puerto Rico at Seward Park High School in Manhattan (New York). Project objectives were to improve participants' language skills in Spanish and English, help participants successfully complete content area courses needed for graduation, and provide career…

  9. Logic system aids in evaluation of project readiness

    NASA Technical Reports Server (NTRS)

    Maris, S. J.; Obrien, T. J.

    1966-01-01

    Measurement Operational Readiness Requirements (MORR) assignment logic is used for determining the readiness of a complex project to go forward as planned. The system uses a logic network which assigns qualities to all important criteria in a project and establishes a logical sequence of measurements to determine what the conditions are.

  10. Project HEED. Final Evaluation Report, 1973-74.

    ERIC Educational Resources Information Center

    Edington, Everett D.; Pettibone, Timothy

    In 1973-74, approximately 1,100 Indian students in grades 1 through 8 participated in Project HEED (Heed Ethnic Educational Depolarization) in Arizona. The project target sites were 59 classrooms at Sacaton, Sells, Peach Springs, San Carlos, Topowa, Many Farms, St. Charles Mission, and Hotevilla. Primary objectives were: (1) improvement in reading…

  11. Project HEED, Title III, Section 306. Final Evaluation Report.

    ERIC Educational Resources Information Center

    Hughes, Orval D.

    Project HEED (Heed Ethnic Educational Depolarization) involves over 1,000 Indian children in grades 1-8 in Arizona. The project target sites are 48 classrooms at Sells, Topowa, San Carlos, Many Farms, Hotevilla, Peach Springs, and Sacaton. Objectives are to increase: (1) reading achievement, (2) affective behavior of teachers, (3) motivation by…

  12. Project Head Start: Evaluation and Research Summary 1965-1967.

    ERIC Educational Resources Information Center

    Office of Economic Opportunity, Washington, DC.

    Project Head Start has as its goal the improvement of the child's physical health, intellectual performance, social attitudes, and sense of self. The project involves over half a million children each year, including children in both summer and yearlong programs. About 40 percent of Head Start pupils are Negro, about 30 percent are white, and the…

  13. Copernicus Project: Learning with Laptops: Year 1 Evaluation Report.

    ERIC Educational Resources Information Center

    Fouts, Jeffrey T.; Stuen, Carol

    The Copernicus Project is a multi-district effort designed to incorporate technology, specifically the laptop computer, into the instructional and learning process of the public schools. Participants included six school districts in Washington state, the Toshiba and Microsoft Corporations, and parents. The project called for a 1 to 1…

  14. Incentives in Education Project, Impact Evaluation Report. Final Report.

    ERIC Educational Resources Information Center

    Planar Corp., Washington, DC.

    This report describes results of a demonstration project carried out in four cities during 1971-72. The project aimed at exploring the feasibility and impact of two different forms of money incentives payments. In one form -- the "Teacher-Only" model -- the teachers in a school were offered a series of bonuses ranging from $150 to $600 per class…

  15. Incorporating Asymmetric Dependency Patterns in the Evaluation of IS/IT projects Using Real Option Analysis

    ERIC Educational Resources Information Center

    Burke, John C.

    2012-01-01

    The objective of my dissertation is to create a general approach to evaluating IS/IT projects using Real Option Analysis (ROA). This is an important problem because an IT Project Portfolio (ITPP) can represent hundreds of projects, millions of dollars of investment and hundreds of thousands of employee hours. Therefore, any advance in the…

  16. Crisis Intervention Project, Boston Public Schools, December 1, 1972-May 1, 1973. Evaluation.

    ERIC Educational Resources Information Center

    Marion, David J.

    The Crisis Prevention-Intervention Project (CPI) of the Boston Public Schools is described in two parts: a six-month evaluation report and an interim report by the project director. The goals of this pilot project for the five Boston schools (three public, two parochial) were: (1) to develop an operational program of crisis intervention and…

  17. Rationale, design, and methods for process evaluation in the Childhood Obesity Research Demonstration project

    Technology Transfer Automated Retrieval System (TEKTRAN)

    The cross-site process evaluation plan for the Childhood Obesity Research Demonstration (CORD) project is described here. The CORD project comprises 3 unique demonstration projects designed to integrate multi-level, multi-setting health care and public health interventions over a 4-year funding peri...

  18. Evaluating success criteria and project monitoring in river enhancement within an adaptive management framework

    USGS Publications Warehouse

    O'Donnell, T. K.; Galat, D.L.

    2008-01-01

    Objective setting, performance measures, and accountability are important components of an adaptive-management approach to river-enhancement programs. Few lessons learned by river-enhancement practitioners in the United States have been documented and disseminated relative to the number of projects implemented. We conducted scripted telephone surveys with river-enhancement project managers and practitioners within the Upper Mississippi River Basin (UMRB) to determine the extent of setting project success criteria, monitoring, evaluation of monitoring data, and data dissemination. Investigation of these elements enabled a determination of those that inhibited adaptive management. Seventy river enhancement projects were surveyed. Only 34% of projects surveyed incorporated a quantified measure of project success. Managers most often relied on geophysical attributes of rivers when setting project success criteria, followed by biological communities. Ninety-one percent of projects that performed monitoring included biologic variables, but the lack of data collection before and after project completion and lack of field-based reference or control sites will make future assessments of ecologic success difficult. Twenty percent of projects that performed monitoring evaluated ≥1 variable but did not disseminate their evaluations outside their organization. Results suggest greater incentives may be required to advance the science of river enhancement. Future river-enhancement programs within the UMRB and elsewhere can increase knowledge gained from individual projects by offering better guidance on setting success criteria before project initiation and evaluation through established monitoring protocols. © 2007 Springer Science+Business Media, LLC.

  19. Athena, Andrew and Stanford: A Look at Implementation and Evaluation in Three Large Projects.

    ERIC Educational Resources Information Center

    Isaacs, Geoff

    1989-01-01

    Describes implementation, support, and evaluation of computer assisted learning (CAL) projects at three universities: Project Athena at Massachusetts Institute of Technology; the Andrew network at Carnegie Mellon University; and a project at Stanford University. Topics discussed include work stations, microcomputers, computer networks, graphics,…

  20. Parent Leadership Training Project, October 1, 1970-September 30, 1972. Independent Evaluator's Report.

    ERIC Educational Resources Information Center

    Arter, Rhetta M.

    The Parent Leadership Training Project (PLTP) through Adult Basic Education was established as a two-year demonstration project designed to increase the reading skills of adults (16 and over) through a language-experience approach, using topics selected by the participants. The independent project evaluation covers the entire operational period…

  1. Follow-Up Evaluation Project. From July 1, 1981 to June 30, 1983. Final Report.

    ERIC Educational Resources Information Center

    Santa Fe Community Coll., Gainesville, FL.

    A project was undertaken to revise a model competency-based trade and industrial education program that had been developed for use in Florida schools in a project that was implemented earlier. During the followup evaluation, the project staff compiled task listings for each of the following trade and industrial education program areas: automotive;…

  2. Evaluating the Effectiveness of Collaborative Computer-Intensive Projects in an Undergraduate Psychometrics Course

    ERIC Educational Resources Information Center

    Barchard, Kimberly A.; Pace, Larry A.

    2010-01-01

    Undergraduate psychometrics classes often use computer-intensive active learning projects. However, little research has examined active learning or computer-intensive projects in psychometrics courses. We describe two computer-intensive collaborative learning projects used to teach the design and evaluation of psychological tests. Course…

  3. Workplace ESL Literacy in Diverse Small Business Contexts: Final Evaluation Report on Project EXCEL.

    ERIC Educational Resources Information Center

    Hemphill, David F.

    Project EXCEL, a workplace literacy project involving four small business enterprises in San Francisco, is evaluated. The project focused on literacy and basic skills training for limited-English-proficient (LEP) workers. The businesses included the following: a communications and mass mailing firm; a dessert wholesale company; a Mexican…

  4. Moving Stories: Evaluation of a BSW Oral History Project with Older Adults with Diverse Immigration Histories

    ERIC Educational Resources Information Center

    Maschi, Tina; MacMillan, Thalia; Pardasani, Manoj; Lee, Ji Seon; Moreno, Claudia

    2012-01-01

    The purpose of this study was to evaluate an experiential learning project with BSW students to see if their perceptions of older adults have changed. The project consisted of an oral history project and presentation that matched BSW students with older adults from diverse ethnic backgrounds to gather their immigration narratives. The study used a…

  5. Development of an automated platform for the verification, testing, processing and benchmarking of Evaluated Nuclear Data at the NEA Data Bank. Status of the NDEC system

    NASA Astrophysics Data System (ADS)

    Michel-Sendis, F.; Díez, C. J.; Cabellos, O.

    2016-03-01

    Modern nuclear data Quality Assurance (QA) is, in practice, a multistage process that aims at establishing a thorough assessment of the validity of the physical information contained in an evaluated nuclear data file as compared to our best knowledge of available experimental data and theoretical models. It should systematically address the performance of the evaluated file against available pertinent integral experiments, with proper and prior verification that the information encoded in the evaluation is accurately processed and reconstructed for the application conditions. The aim of the NDEC (Nuclear Data Evaluation Cycle) platform currently being developed by the Data Bank is to provide a correct and automated handling of these diverse QA steps in order to facilitate the expert human assessment of evaluated nuclear data files, both by the evaluators and by the end users of nuclear data.
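
    A minimal sketch of how such a multistage QA cycle could be orchestrated is shown below. The stage names, checks, and example file name are illustrative assumptions, not the actual NDEC implementation.

    ```python
    # Hypothetical sketch of a multistage nuclear-data QA pipeline in the
    # spirit described above: verify -> process -> benchmark, with results
    # gathered for expert review. All stage logic is a placeholder assumption.
    from dataclasses import dataclass, field

    @dataclass
    class QAReport:
        file_name: str
        results: dict = field(default_factory=dict)

    def verify_syntax(evaluation: str) -> bool:
        # Placeholder: a real pipeline runs format/consistency checkers here.
        return evaluation.endswith(".endf")

    def process_file(evaluation: str) -> bool:
        # Placeholder: reconstruct cross sections for application conditions
        # (e.g., via a processing code) and confirm the step succeeds.
        return True

    def run_integral_benchmarks(evaluation: str) -> dict:
        # Placeholder: compare calculated vs. experimental results (C/E).
        return {"benchmark_suite": "example", "c_over_e": 1.002}

    def qa_cycle(evaluation: str) -> QAReport:
        report = QAReport(file_name=evaluation)
        report.results["verified"] = verify_syntax(evaluation)
        if report.results["verified"]:
            report.results["processed"] = process_file(evaluation)
            if report.results["processed"]:
                report.results["benchmarks"] = run_integral_benchmarks(evaluation)
        return report

    print(qa_cycle("n-092_U_235.endf").results)
    ```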

  6. Grand Junction Projects Office Remedial Action Project Building 2 public dose evaluation. Final report

    SciTech Connect

    Morris, R.

    1996-05-01

    Building 2 on the U.S. Department of Energy (DOE) Grand Junction Projects Office (GJPO) site, which is operated by Rust Geotech, is part of the GJPO Remedial Action Program. This report describes measurements and modeling efforts to evaluate the radiation dose to members of the public who might someday occupy or tear down Building 2. The assessment of future doses to those occupying or demolishing Building 2 is based on assumptions about future uses of the building, measured data when available, and predictive modeling when necessary. Future use of the building is likely to be as an office facility. The DOE-sponsored program RESRAD-BUILD, Version 1.5, was chosen as the modeling tool. Releasing the building for unrestricted use instead of demolishing it now could save a substantial amount of money compared with the baseline cost estimate because the site telecommunications system, housed in Building 2, would not need to be disabled and replaced. The information developed in this analysis may be used as part of an as low as reasonably achievable (ALARA) cost/benefit determination regarding disposition of Building 2.
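
    A back-of-the-envelope analogue of the occupancy-dose evaluation described above is sketched below. The numbers are invented placeholders, not the report's data, and this is far simpler than the RESRAD-BUILD model, which handles many more exposure pathways.

    ```python
    # Hypothetical annual external dose to an office occupant from a measured
    # exposure rate. All inputs are invented for illustration.
    dose_rate_uR_per_h = 12.0      # assumed measured exposure rate (uR/h)
    background_uR_per_h = 8.0      # assumed local background (uR/h)
    occupancy_h_per_year = 2000    # standard office-occupancy assumption

    net_uR_per_h = dose_rate_uR_per_h - background_uR_per_h
    # Rough conversion: 1 uR/h exposure ~ 0.01 uSv/h effective dose.
    annual_dose_mSv = net_uR_per_h * 0.01 * occupancy_h_per_year / 1000
    print(f"Incremental occupant dose: {annual_dose_mSv:.2f} mSv/yr")
    ```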

  7. PREFERENCE EVALUATION AND DECISION SUPPORT FOR MULTIPLE UTILITIES OF HEAT MITIGATION PROJECTS

    NASA Astrophysics Data System (ADS)

    Nakagawa, Hideharu; Nakatani, Jun; Kurisu, Kiyo; Hanaki, Keisuke

    Heat-mitigation projects, such as green roofs, waterfronts, mist spraying, and water-retentive pavement, are mainly intended to decrease outdoor temperature, but some offer multiple utilities beyond cooling, including increased species diversity, flood mitigation, improved spatial design, and environmental awareness. This paper proposes and demonstrates a decision-support method for designing alternatives based on prioritization and preference evaluation of the multiple utilities of heat-mitigation projects. First, applying the analytic hierarchy process (AHP), the priority order of project implementation was decided from the subjective evaluations that multiple stakeholders, such as benefit recipients, experts, and project implementers, gave to the projects' utilities. Then, the preference structure of office workers, as benefit recipients of the projects, was identified using conjoint analysis; each utility was valued in monetary terms, and the aspects that should be emphasized in detailed project planning were discussed.
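
    As a hedged illustration of the AHP step described in the abstract, the sketch below derives priority weights for four hypothetical utilities from an invented pairwise-comparison matrix; the matrix values and utility names are assumptions for demonstration only.

    ```python
    # AHP priority weights from a Saaty-scale pairwise-comparison matrix,
    # taken as the normalized principal eigenvector. Matrix values are toy
    # judgments for four hypothetical utilities:
    # [cooling, biodiversity, flood mitigation, spatial design]
    import numpy as np

    A = np.array([
        [1.0, 3.0, 5.0, 7.0],
        [1/3, 1.0, 3.0, 5.0],
        [1/5, 1/3, 1.0, 3.0],
        [1/7, 1/5, 1/3, 1.0],
    ])

    eigvals, eigvecs = np.linalg.eig(A)
    k = np.argmax(eigvals.real)          # principal eigenvalue index
    w = np.abs(eigvecs[:, k].real)
    w /= w.sum()                         # normalized priority weights

    # Consistency ratio (CR < 0.1 is conventionally acceptable);
    # Saaty's random index RI for n=4 is 0.90.
    n = A.shape[0]
    ci = (eigvals[k].real - n) / (n - 1)
    cr = ci / 0.90
    print("weights:", w.round(3), "CR:", round(cr, 3))
    ```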

  8. Perspective: Selected benchmarks from commercial CFD codes

    SciTech Connect

    Freitas, C.J.

    1995-06-01

    This paper summarizes the results of a series of five benchmark simulations which were completed using commercial Computational Fluid Dynamics (CFD) codes. These simulations were performed by the vendors themselves and then reported by them in ASME's CFD Triathlon Forum and CFD Biathlon Forum. The first group of benchmarks consisted of three laminar flow problems. These were the steady, two-dimensional flow over a backward-facing step, the low Reynolds number flow around a circular cylinder, and the unsteady, three-dimensional flow in a shear-driven cubical cavity. The second group of benchmarks consisted of two turbulent flow problems. These were the two-dimensional flow around a square cylinder with periodic separated flow phenomena, and the steady, three-dimensional flow in a 180-degree square bend. All simulation results were evaluated against existing experimental data and thereby satisfied item 10 of the Journal's policy statement for numerical accuracy. The objective of this exercise was to provide the engineering and scientific community with a common reference point for the evaluation of commercial CFD codes.

  9. Benchmarking novel approaches for modelling species range dynamics

    PubMed Central

    Zurell, Damaris; Thuiller, Wilfried; Pagel, Jörn; Cabral, Juliano S; Münkemüller, Tamara; Gravel, Dominique; Dullinger, Stefan; Normand, Signe; Schiffers, Katja H.; Moore, Kara A.; Zimmermann, Niklaus E.

    2016-01-01

    Increasing biodiversity loss due to climate change is one of the most vital challenges of the 21st century. To anticipate and mitigate biodiversity loss, models are needed that reliably project species’ range dynamics and extinction risks. Recently, several new approaches to model range dynamics have been developed to supplement correlative species distribution models (SDMs), but applications clearly lag behind model development. Indeed, no comparative analysis has been performed to evaluate their performance. Here, we build on process-based, simulated data for benchmarking five range (dynamic) models of varying complexity including classical SDMs, SDMs coupled with simple dispersal or more complex population dynamic models (SDM hybrids), and a hierarchical Bayesian process-based dynamic range model (DRM). We specifically test the effects of demographic and community processes on model predictive performance. Under current climate, DRMs performed best, although only marginally. Under climate change, predictive performance varied considerably, with no clear winners. Yet, all range dynamic models improved predictions under climate change substantially compared to purely correlative SDMs, and the population dynamic models also predicted reasonable extinction risks for most scenarios. When benchmarking data were simulated with more complex demographic and community processes, simple SDM hybrids including only dispersal often proved most reliable. Finally, we found that structural decisions during model building can have great impact on model accuracy, but prior system knowledge on important processes can reduce these uncertainties considerably. Our results reaffirm the clear merit of using dynamic approaches for modelling species’ response to climate change but also emphasise several needs for further model and data improvement. We propose and discuss perspectives for improving range projections through combination of multiple models and for making these approaches

  10. Benchmarking novel approaches for modelling species range dynamics.

    PubMed

    Zurell, Damaris; Thuiller, Wilfried; Pagel, Jörn; Cabral, Juliano S; Münkemüller, Tamara; Gravel, Dominique; Dullinger, Stefan; Normand, Signe; Schiffers, Katja H; Moore, Kara A; Zimmermann, Niklaus E

    2016-08-01

    Increasing biodiversity loss due to climate change is one of the most vital challenges of the 21st century. To anticipate and mitigate biodiversity loss, models are needed that reliably project species' range dynamics and extinction risks. Recently, several new approaches to model range dynamics have been developed to supplement correlative species distribution models (SDMs), but applications clearly lag behind model development. Indeed, no comparative analysis has been performed to evaluate their performance. Here, we build on process-based, simulated data for benchmarking five range (dynamic) models of varying complexity including classical SDMs, SDMs coupled with simple dispersal or more complex population dynamic models (SDM hybrids), and a hierarchical Bayesian process-based dynamic range model (DRM). We specifically test the effects of demographic and community processes on model predictive performance. Under current climate, DRMs performed best, although only marginally. Under climate change, predictive performance varied considerably, with no clear winners. Yet, all range dynamic models improved predictions under climate change substantially compared to purely correlative SDMs, and the population dynamic models also predicted reasonable extinction risks for most scenarios. When benchmarking data were simulated with more complex demographic and community processes, simple SDM hybrids including only dispersal often proved most reliable. Finally, we found that structural decisions during model building can have great impact on model accuracy, but prior system knowledge on important processes can reduce these uncertainties considerably. Our results reaffirm the clear merit of using dynamic approaches for modelling species' response to climate change but also emphasize several needs for further model and data improvement. We propose and discuss perspectives for improving range projections through combination of multiple models and for making these approaches
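
    The benchmarking logic of this study, reduced to a toy form: simulate "truth" occupancy from a process-based rule, then score competing predictors with a common metric (here a rank-based AUC). The toy "SDM" and "hybrid" predictors and all data below are invented for illustration and are not the authors' models.

    ```python
    # Score two toy range predictors against process-based simulated "truth":
    # a climate-only predictor (correlative-SDM analogue) vs. one that also
    # uses neighborhood occupancy (dispersal-hybrid analogue).
    import numpy as np

    rng = np.random.default_rng(42)
    n = 1000
    climate = rng.normal(size=n)            # simulated climate gradient
    neighbors = rng.random(n)               # fraction of occupied neighbors
    truth = ((climate > 0) & (neighbors > 0.3)).astype(int)

    def auc(scores, labels):
        # Rank-based AUC (Mann-Whitney U statistic).
        order = np.argsort(scores)
        ranks = np.empty(len(scores))
        ranks[order] = np.arange(1, len(scores) + 1)
        n_pos = labels.sum()
        n_neg = len(labels) - n_pos
        u = ranks[labels == 1].sum() - n_pos * (n_pos + 1) / 2
        return u / (n_pos * n_neg)

    sdm_score = climate                     # correlative SDM analogue
    hybrid_score = climate + 2.0 * neighbors  # SDM + dispersal analogue
    print("SDM AUC:   ", round(auc(sdm_score, truth), 3))
    print("Hybrid AUC:", round(auc(hybrid_score, truth), 3))
    ```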

  11. Evaluation of a Locally Developed Social Studies Curriculum Project: Improving Citizenship Education.

    ERIC Educational Resources Information Center

    Napier, John D.; Hepburn, Mary A.

    Evaluation results from the Improving Citizenship Education (ICE) Project are presented. The purpose of the ICE project was to design and test a model for improving the political/citizenship knowledge and attitudes of K-12 students by infusing citizenship education into an existing social studies curriculum. This evaluation examined the…

  12. Extensive Evaluation of Using a Game Project in a Software Architecture Course

    ERIC Educational Resources Information Center

    Wang, Alf Inge

    2011-01-01

    This article describes an extensive evaluation of introducing a game project to a software architecture course. In this project, university students have to construct and design a type of software architecture, evaluate the architecture, implement an application based on the architecture, and test this implementation. In previous years, the domain…

  13. Law-Related Education Evaluation Project (United States), 1979-1984 [machine-readable data file].

    ERIC Educational Resources Information Center

    Social Science Education Consortium, Inc., Boulder, CO.

    The "Law-Related Education Evaluation Project" evaluated the degree of awareness of and receptivity to law-related education (LRE) among selected relevant professional groups, the progress toward institutionalization of LRE at certain sites, and the impact of LRE on students, especially in terms of delinquency rates. The project ran from 1979 to…

  14. Evaluation of the 1983-84 ECIA, Chapter II Computer Education Project.

    ERIC Educational Resources Information Center

    Morris, Donald R.

    Following an overview of acquisition, distribution, maintenance, and support activities from 1981-83, the Dade County (Florida) 1983-84 Education Consolidation Improvement Act (ECIA) Chapter 2 Computer Education Project is described and evaluated. Evaluation is based on success in meeting several project objectives: (1) maintenance of existing…

  15. Parallel Ada benchmarks for the SVMS

    NASA Technical Reports Server (NTRS)

    Collard, Philippe E.

    1990-01-01

    The use of the parallel processing paradigm to design and develop faster and more reliable computers appears to clearly mark the future of information processing. NASA started the development of such an architecture: the Spaceborne VHSIC Multi-processor System (SVMS). Ada will be one of the languages used to program the SVMS. One of the unique characteristics of Ada is that it supports parallel processing at the language level through its tasking constructs. It is important for the SVMS project team to assess how efficiently the SVMS architecture will be implemented, as well as how efficiently the Ada environment will be ported to the SVMS. AUTOCLASS II, a Bayesian classifier written in Common Lisp, was selected as one of the benchmarks for SVMS configurations. The purpose of the R and D effort was to provide the SVMS project team with a version of AUTOCLASS II, written in Ada, that would make use of Ada tasking constructs as much as possible so as to constitute a suitable benchmark. Additionally, a set of programs was developed to measure Ada tasking efficiency on parallel architectures and to determine the critical parameters influencing tasking efficiency. All this was designed to provide the SVMS project team with a set of suitable tools for the development of the SVMS architecture.
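
    The tasking-efficiency measurement described above, sketched analogously with Python threads below. The original work used Ada tasks on the SVMS; this only shows the shape of such a micro-benchmark and is not the project's code.

    ```python
    # Measure the mean overhead of creating, starting, and joining one
    # trivial task, over increasing task counts (analogue of an Ada
    # tasking-efficiency micro-benchmark).
    import threading
    import time

    def spawn_join_overhead(n_tasks: int) -> float:
        """Mean seconds to create, start, and join one no-op task."""
        def noop():
            pass
        t0 = time.perf_counter()
        threads = [threading.Thread(target=noop) for _ in range(n_tasks)]
        for t in threads:
            t.start()
        for t in threads:
            t.join()
        return (time.perf_counter() - t0) / n_tasks

    for n in (10, 100, 1000):
        print(f"{n:5d} tasks: {spawn_join_overhead(n) * 1e6:8.1f} us/task")
    ```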

  16. Evaluation of Representative Smart Grid Investment Project Technologies: Demand Response

    SciTech Connect

    Fuller, Jason C.; Prakash Kumar, Nirupama; Bonebrake, Christopher A.

    2012-02-14

    This document is one of a series of reports estimating the benefits of deploying technologies similar to those implemented in the Smart Grid Investment Grant (SGIG) projects. Four technical reports cover the various types of technologies deployed in the SGIG projects: distribution automation, demand response, energy storage, and renewables integration. A fifth report in the series examines the benefits of deploying these technologies on a national level. This technical report examines the impacts of a limited number of demand response technologies and implementations deployed in the SGIG projects.

  17. Benchmarking Measures of Network Influence

    PubMed Central

    Bramson, Aaron; Vandermarliere, Benjamin

    2016-01-01

    Identifying key agents for the transmission of diseases (ideas, technology, etc.) across social networks has predominantly relied on measures of centrality on a static base network or a temporally flattened graph of agent interactions. Various measures have been proposed as the best trackers of influence, such as degree centrality, betweenness, and k-shell, depending on the structure of the connectivity. We consider SIR and SIS propagation dynamics on a temporally-extruded network of observed interactions and measure the conditional marginal spread as the change in the magnitude of the infection given the removal of each agent at each time: its temporal knockout (TKO) score. We argue that this TKO score is an effective benchmark measure for evaluating the accuracy of other, often more practical, measures of influence. We find that none of the network measures applied to the induced flat graphs are accurate predictors of network propagation influence on the systems studied; however, temporal networks and the TKO measure provide the requisite targets for the search for effective predictive measures. PMID:27670635
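
    A compact, hedged sketch of the TKO idea: simulate spread over a time-ordered edge list, then re-run with each agent removed and record the drop in mean outbreak size. For brevity it uses a simplified SI-style spread without recovery and removes agents for all time steps, whereas the paper's TKO is computed per (agent, time) under SIR/SIS dynamics; the network and rates are toy assumptions.

    ```python
    # Temporal-knockout-style influence scores on a toy temporal network.
    import random

    # (u, v, t) edges, processed in time order.
    edges = [(0, 1, 0), (1, 2, 1), (1, 3, 1), (2, 4, 2), (3, 4, 2), (4, 5, 3)]
    agents = {u for e in edges for u in e[:2]}

    def mean_outbreak(edges, seed, removed=frozenset(), beta=0.8, runs=500):
        total = 0
        for _ in range(runs):
            infected = {seed}
            for u, v, _t in sorted(edges, key=lambda e: e[2]):  # time order
                if u in removed or v in removed:
                    continue
                # Transmit when exactly one endpoint is infected.
                if (u in infected) != (v in infected) and random.random() < beta:
                    infected.update((u, v))
            total += len(infected)
        return total / runs

    baseline = mean_outbreak(edges, seed=0)
    tko = {a: baseline - mean_outbreak(edges, seed=0, removed={a})
           for a in agents if a != 0}
    print(sorted(tko.items(), key=lambda kv: -kv[1]))  # most influential first
    ```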

  18. Benchmarking Measures of Network Influence

    NASA Astrophysics Data System (ADS)

    Bramson, Aaron; Vandermarliere, Benjamin

    2016-09-01

    Identifying key agents for the transmission of diseases (ideas, technology, etc.) across social networks has predominantly relied on measures of centrality on a static base network or a temporally flattened graph of agent interactions. Various measures have been proposed as the best trackers of influence, such as degree centrality, betweenness, and k-shell, depending on the structure of the connectivity. We consider SIR and SIS propagation dynamics on a temporally-extruded network of observed interactions and measure the conditional marginal spread as the change in the magnitude of the infection given the removal of each agent at each time: its temporal knockout (TKO) score. We argue that this TKO score is an effective benchmark measure for evaluating the accuracy of other, often more practical, measures of influence. We find that none of the network measures applied to the induced flat graphs are accurate predictors of network propagation influence on the systems studied; however, temporal networks and the TKO measure provide the requisite targets for the search for effective predictive measures.

  19. Benchmarking hypercube hardware and software

    NASA Technical Reports Server (NTRS)

    Grunwald, Dirk C.; Reed, Daniel A.

    1986-01-01

    It was long a truism in computer systems design that balanced systems achieve the best performance. Message passing parallel processors are no different. To quantify the balance of a hypercube design, an experimental methodology was developed and the associated suite of benchmarks was applied to several existing hypercubes. The benchmark suite includes tests of both processor speed in the absence of internode communication and message transmission speed as a function of communication patterns.
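
    The message-transmission half of such a suite, sketched below as a two-process ping-pong latency test; the two processes stand in for a pair of hypercube nodes, and the payload sizes and iteration counts are arbitrary illustration values rather than the original benchmarks.

    ```python
    # Ping-pong one-way message latency between two processes as a function
    # of payload size.
    import time
    from multiprocessing import Pipe, Process

    def pong(conn):
        while True:
            msg = conn.recv()
            if msg is None:
                break
            conn.send(msg)                    # echo back immediately

    def ping(size: int, iters: int = 200) -> float:
        parent, child = Pipe()
        p = Process(target=pong, args=(child,))
        p.start()
        payload = b"x" * size
        t0 = time.perf_counter()
        for _ in range(iters):
            parent.send(payload)
            parent.recv()
        elapsed = time.perf_counter() - t0
        parent.send(None)                     # shut down the echo process
        p.join()
        return elapsed / (2 * iters)          # one-way time per message

    if __name__ == "__main__":
        for size in (1, 1024, 65536):
            print(f"{size:6d} B: {ping(size) * 1e6:8.1f} us one-way")
    ```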

  20. Data-Intensive Benchmarking Suite

    2008-11-26

    The Data-Intensive Benchmark Suite is a set of programs written for the study of data- or storage-intensive science and engineering problems. The benchmark sets cover: general graph searching (basic and Hadoop Map/Reduce breadth-first search), genome sequence searching, HTTP request classification (basic and Hadoop Map/Reduce), low-level data communication, and storage device micro-benchmarking.
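
    One of the kernel types named above (basic breadth-first graph search), sketched as a timed traversal of a random graph. The graph size and degree are arbitrary, and this is a generic illustration rather than code from the suite itself.

    ```python
    # Timed breadth-first search over a random undirected graph, reporting a
    # simple traversal-rate figure of merit.
    import random
    import time
    from collections import deque

    def random_graph(n: int, avg_degree: int) -> list[list[int]]:
        adj = [[] for _ in range(n)]
        for u in range(n):
            for v in random.sample(range(n), avg_degree):
                adj[u].append(v)
                adj[v].append(u)
        return adj

    def bfs(adj, source=0):
        seen = [False] * len(adj)
        seen[source] = True
        queue, visited = deque([source]), 0
        while queue:
            u = queue.popleft()
            visited += 1
            for v in adj[u]:
                if not seen[v]:
                    seen[v] = True
                    queue.append(v)
        return visited

    adj = random_graph(100_000, 8)
    t0 = time.perf_counter()
    reached = bfs(adj)
    dt = time.perf_counter() - t0
    print(f"visited {reached} vertices in {dt:.3f}s "
          f"({reached / dt / 1e6:.2f} M vertices/s)")
    ```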