Science.gov

Sample records for metric development benchmarking

  1. Issues in Benchmark Metric Selection

    NASA Astrophysics Data System (ADS)

    Crolotte, Alain

    It is true that a metric can influence a benchmark but will esoteric metrics create more problems than they will solve? We answer this question affirmatively by examining the case of the TPC-D metric which used the much debated geometric mean for the single-stream test. We will show how a simple choice influenced the benchmark and its conduct and, to some extent, DBMS development. After examining other alternatives our conclusion is that the “real” measure for a decision-support benchmark is the arithmetic mean.
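
    As a quick illustration of why the choice matters, the sketch below (not drawn from the paper; the query timings are invented) contrasts the two means on a set of per-query response times: the arithmetic mean is dominated by the slowest query, while the geometric mean can be lowered substantially by tuning the already-fast queries.

```python
# Hypothetical per-query timings; not TPC-D data.
import math

query_times = [2.0, 3.0, 5.0, 400.0]  # seconds; one slow, decision-support-style query

arithmetic_mean = sum(query_times) / len(query_times)
geometric_mean = math.exp(sum(math.log(t) for t in query_times) / len(query_times))

print(f"arithmetic mean: {arithmetic_mean:.1f} s")  # 102.5 s, driven by the 400 s query
print(f"geometric mean:  {geometric_mean:.1f} s")   # ~10.5 s, rewards shaving the fast queries
```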

  2. Conceptual Soundness, Metric Development, Benchmarking, and Targeting for PATH Subprogram Evaluation

    SciTech Connect

    Mosey, G.; Doris, E.; Coggeshall, C.; Antes, M.; Ruch, J.; Mortensen, J.

    2009-01-01

    The objective of this study is to evaluate the conceptual soundness of the U.S. Department of Housing and Urban Development (HUD) Partnership for Advancing Technology in Housing (PATH) program's revised goals and establish and apply a framework to identify and recommend metrics that are the most useful for measuring PATH's progress. This report provides an evaluative review of PATH's revised goals, outlines a structured method for identifying and selecting metrics, proposes metrics and benchmarks for a sampling of individual PATH programs, and discusses other metrics that potentially could be developed that may add value to the evaluation process. The framework and individual program metrics can be used for ongoing management improvement efforts and to inform broader program-level metrics for government reporting requirements.

  3. Self-benchmarking Guide for Data Centers: Metrics, Benchmarks, Actions

    SciTech Connect

    Mathew, Paul; Ganguly, Srirupa; Greenberg, Steve; Sartor, Dale

    2009-07-13

    This guide describes energy efficiency metrics and benchmarks that can be used to track the performance of and identify potential opportunities to reduce energy use in data centers. This guide is primarily intended for personnel who have responsibility for managing energy use in existing data centers - including facilities managers, energy managers, and their engineering consultants. Additionally, data center designers may also use the metrics and benchmarks described in this guide for goal-setting in new construction or major renovation. This guide provides the following information: (1) A step-by-step outline of the benchmarking process. (2) A set of performance metrics for the whole building as well as individual systems. For each metric, the guide provides a definition, performance benchmarks, and potential actions that can be inferred from evaluating this metric. (3) A list and descriptions of the data required for computing the metrics. This guide is complemented by spreadsheet templates for data collection and for computing the benchmarking metrics. This guide builds on prior data center benchmarking studies supported by the California Energy Commission. Much of the benchmarking data are drawn from the LBNL data center benchmarking database that was developed from these studies. Additional benchmark data were obtained from engineering experts including facility designers and energy managers. This guide also builds on recent research supported by the U.S. Department of Energy's Save Energy Now program.
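
    A minimal sketch of the kind of whole-facility metric such a guide describes, assuming the widely used PUE definition (total facility energy over IT equipment energy); the annual energy figures are hypothetical, not LBNL benchmark data.

```python
# Hypothetical annual energy figures for one data center.
total_facility_kwh = 5_200_000   # everything entering the facility meter
it_equipment_kwh   = 3_100_000   # energy delivered to the IT loads
hvac_kwh           = 1_500_000   # cooling and air-handling energy

pue = total_facility_kwh / it_equipment_kwh   # power usage effectiveness; closer to 1.0 is better
hvac_ratio = hvac_kwh / it_equipment_kwh      # a system-level metric: HVAC energy per unit IT energy

print(f"PUE:           {pue:.2f}")
print(f"HVAC/IT ratio: {hvac_ratio:.2f}")
```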

  4. Self-benchmarking Guide for Laboratory Buildings: Metrics, Benchmarks, Actions

    SciTech Connect

    Mathew, Paul; Greenberg, Steve; Sartor, Dale

    2009-07-13

    This guide describes energy efficiency metrics and benchmarks that can be used to track the performance of and identify potential opportunities to reduce energy use in laboratory buildings. This guide is primarily intended for personnel who have responsibility for managing energy use in existing laboratory facilities - including facilities managers, energy managers, and their engineering consultants. Additionally, laboratory planners and designers may also use the metrics and benchmarks described in this guide for goal-setting in new construction or major renovation. This guide provides the following information: (1) A step-by-step outline of the benchmarking process. (2) A set of performance metrics for the whole building as well as individual systems. For each metric, the guide provides a definition, performance benchmarks, and potential actions that can be inferred from evaluating this metric. (3) A list and descriptions of the data required for computing the metrics. This guide is complemented by spreadsheet templates for data collection and for computing the benchmarking metrics. This guide builds on prior research supported by the national Laboratories for the 21st Century (Labs21) program, supported by the U.S. Department of Energy and the U.S. Environmental Protection Agency. Much of the benchmarking data are drawn from the Labs21 benchmarking database and technical guides. Additional benchmark data were obtained from engineering experts including laboratory designers and energy managers.

  5. Self-benchmarking Guide for Cleanrooms: Metrics, Benchmarks, Actions

    SciTech Connect

    Mathew, Paul; Sartor, Dale; Tschudi, William

    2009-07-13

    This guide describes energy efficiency metrics and benchmarks that can be used to track the performance of and identify potential opportunities to reduce energy use in cleanrooms. This guide is primarily intended for personnel who have responsibility for managing energy use in existing cleanroom facilities - including facilities managers, energy managers, and their engineering consultants. Additionally, cleanroom planners and designers may also use the metrics and benchmarks described in this guide for goal-setting in new construction or major renovation. This guide provides the following information: (1) A step-by-step outline of the benchmarking process. (2) A set of performance metrics for the whole building as well as individual systems. For each metric, the guide provides a definition, performance benchmarks, and potential actions that can be inferred from evaluating this metric. (3) A list and descriptions of the data required for computing the metrics. This guide is complemented by spreadsheet templates for data collection and for computing the benchmarking metrics. This guide builds on prior research supported by the national Laboratories for the 21st Century (Labs21) program, supported by the U.S. Department of Energy and the U.S. Environmental Protection Agency. Much of the benchmarking data are drawn from the Labs21 benchmarking database and technical guides. Additional benchmark data were obtained from engineering experts including cleanroom designers and energy managers.

  6. Metrics and Benchmarks for Visualization

    NASA Technical Reports Server (NTRS)

    Uselton, Samuel P.; Lasinski, T. A. (Technical Monitor)

    1995-01-01

    What is a "good" visualization? How can the quality of a visualization be measured? How can one tell whether one visualization is "better" than another? I claim that the true quality of a visualization can only be measured in the context of a particular purpose. The same image generated from the same data may be excellent for one purpose and abysmal for another. A good measure of visualization quality will correspond to the performance of users in accomplishing the intended purpose, so the "gold standard" is user testing. As a user of visualization software (or at least a consultant to such users) I don't expect visualization software to have been tested in this way for every possible use. In fact, scientific visualization (as distinct from more "production oriented" uses of visualization) will continually encounter new data, new questions and new purposes; user testing can never keep up. Users need software they can trust, and advice on appropriate visualizations for particular purposes. Considering the following four processes, and their impact on visualization trustworthiness, reveals important work needed to create worthwhile metrics and benchmarks for visualization. These four processes are (1) complete system testing (user-in-loop), (2) software testing, (3) software design and (4) information dissemination. Additional information is contained in the original extended abstract.

  7. Beyond Benchmarking: Value-Adding Metrics

    ERIC Educational Resources Information Center

    Fitz-enz, Jac

    2007-01-01

    HR metrics has grown up a bit over the past two decades, moving away from simple benchmarking practices and toward a more inclusive approach to measuring institutional performance and progress. In this article, the acknowledged "father" of human capital performance benchmarking provides an overview of several aspects of today's HR metrics…

  8. Cleanroom Energy Efficiency: Metrics and Benchmarks

    SciTech Connect

    International SEMATECH Manufacturing Initiative; Mathew, Paul A.; Tschudi, William; Sartor, Dale; Beasley, James

    2010-07-07

    Cleanrooms are among the most energy-intensive types of facilities. This is primarily due to the cleanliness requirements that result in high airflow rates and system static pressures, as well as process requirements that result in high cooling loads. Various studies have shown that there is a wide range of cleanroom energy efficiencies and that facility managers may not be aware of how energy efficient their cleanroom facility can be relative to other cleanroom facilities with the same cleanliness requirements. Metrics and benchmarks are an effective way to compare one facility to another and to track the performance of a given facility over time. This article presents the key metrics and benchmarks that facility managers can use to assess, track, and manage their cleanroom energy efficiency or to set energy efficiency targets for new construction. These include system-level metrics such as air change rates, air handling W/cfm, and filter pressure drops. Operational data are presented from over 20 different cleanrooms that were benchmarked with these metrics and that are part of the cleanroom benchmark dataset maintained by Lawrence Berkeley National Laboratory (LBNL). Overall production efficiency metrics for cleanrooms in 28 semiconductor manufacturing facilities in the United States and recorded in the Fabs21 database are also presented.
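
    The system-level metrics named above reduce to simple ratios; the sketch below computes two of them for a hypothetical cleanroom (airflow, volume, and fan power are invented values, not entries from the LBNL dataset).

```python
# Hypothetical cleanroom recirculation system.
supply_airflow_cfm   = 120_000          # recirculation air handler flow
cleanroom_volume_ft3 = 12_000 * 10      # 12,000 ft2 of floor area with a 10 ft ceiling
fan_power_w          = 90_000

air_changes_per_hour = supply_airflow_cfm * 60 / cleanroom_volume_ft3
watts_per_cfm        = fan_power_w / supply_airflow_cfm

print(f"air change rate: {air_changes_per_hour:.0f} ACH")
print(f"air handling:    {watts_per_cfm:.2f} W/cfm")
```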

  9. Metrics and Benchmarks for Energy Efficiency in Laboratories

    SciTech Connect

    Rumsey Engineers; Mathew, Paul; Greenberg, Steve; Sartor, Dale; Rumsey, Peter; Weale, John

    2008-04-10

    A wide spectrum of laboratory owners, ranging from universities to federal agencies, have explicit goals for energy efficiency in their facilities. For example, the Energy Policy Act of 2005 (EPACT 2005) requires all new federal buildings to exceed ASHRAE 90.1-2004 [1] by at least 30%. A new laboratory is much more likely to meet energy efficiency goals if quantitative metrics and targets are specified in programming documents and tracked during the course of the delivery process. If not, any additional capital costs or design time associated with attaining higher efficiencies can be difficult to justify. This article describes key energy efficiency metrics and benchmarks for laboratories, which have been developed and applied to several laboratory buildings--both for design and operation. In addition to traditional whole building energy use metrics (e.g. BTU/ft²·yr, kWh/m²·yr), the article describes HVAC system metrics (e.g. ventilation W/cfm, W/(L·s⁻¹)), which can be used to identify the presence or absence of energy features and opportunities during design and operation.
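
    For concreteness, a small sketch computing one whole-building metric and one HVAC system metric of the kinds listed above; the building data are hypothetical.

```python
# Hypothetical laboratory building data.
annual_energy_kbtu = 28_000_000   # all fuels, site energy
floor_area_ft2     = 100_000
supply_cfm         = 150_000      # total supply airflow
supply_fan_kw      = 110
exhaust_fan_kw     = 70

eui_btu_per_ft2_yr    = annual_energy_kbtu * 1000 / floor_area_ft2
ventilation_w_per_cfm = (supply_fan_kw + exhaust_fan_kw) * 1000 / supply_cfm

print(f"EUI:         {eui_btu_per_ft2_yr:,.0f} BTU/ft2-yr")
print(f"ventilation: {ventilation_w_per_cfm:.2f} W/cfm")
```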

  10. Collected notes from the Benchmarks and Metrics Workshop

    NASA Technical Reports Server (NTRS)

    Drummond, Mark E.; Kaelbling, Leslie P.; Rosenschein, Stanley J.

    1991-01-01

    In recent years there has been a proliferation of proposals in the artificial intelligence (AI) literature for integrated agent architectures. Each architecture offers an approach to the general problem of constructing an integrated agent. Unfortunately, the ways in which one architecture might be considered better than another are not always clear. There has been a growing realization that many of the positive and negative aspects of an architecture become apparent only when experimental evaluation is performed and that to progress as a discipline, we must develop rigorous experimental methods. In addition to the intrinsic intellectual interest of experimentation, rigorous performance evaluation of systems is also a crucial practical concern to our research sponsors. DARPA, NASA, and AFOSR (among others) are actively searching for better ways of experimentally evaluating alternative approaches to building intelligent agents. One tool for experimental evaluation involves testing systems on benchmark tasks in order to assess their relative performance. As part of a joint DARPA and NASA funded project, NASA-Ames and Teleos Research are carrying out a research effort to establish a set of benchmark tasks and evaluation metrics by which the performance of agent architectures may be determined. As part of this project, we held a workshop on Benchmarks and Metrics at the NASA Ames Research Center on June 25, 1990. The objective of the workshop was to foster early discussion on this important topic. We did not achieve a consensus, nor did we expect to. Collected here is some of the information that was exchanged at the workshop. Given here is an outline of the workshop, a list of the participants, notes taken on the white-board during open discussions, position papers/notes from some participants, and copies of slides used in the presentations.

  11. A Web Resource for Standardized Benchmark Datasets, Metrics, and Rosetta Protocols for Macromolecular Modeling and Design

    PubMed Central

    Pache, Roland A.; Ollikainen, Noah; Kundert, Kale; O'Meara, Matthew J.; Smith, Colin A.; Kortemme, Tanja

    2015-01-01

    The development and validation of computational macromolecular modeling and design methods depend on suitable benchmark datasets and informative metrics for comparing protocols. In addition, if a method is intended to be adopted broadly in diverse biological applications, there needs to be information on appropriate parameters for each protocol, as well as metrics describing the expected accuracy compared to experimental data. In certain disciplines, there exist established benchmarks and public resources where experts in a particular methodology are encouraged to supply their most efficient implementation of each particular benchmark. We aim to provide such a resource for protocols in macromolecular modeling and design. We present a freely accessible web resource (https://kortemmelab.ucsf.edu/benchmarks) to guide the development of protocols for protein modeling and design. The site provides benchmark datasets and metrics to compare the performance of a variety of modeling protocols using different computational sampling methods and energy functions, providing a “best practice” set of parameters for each method. Each benchmark has an associated downloadable benchmark capture archive containing the input files, analysis scripts, and tutorials for running the benchmark. The captures may be run with any suitable modeling method; we supply command lines for running the benchmarks using the Rosetta software suite. We have compiled initial benchmarks for the resource spanning three key areas: prediction of energetic effects of mutations, protein design, and protein structure prediction, each with associated state-of-the-art modeling protocols. With the help of the wider macromolecular modeling community, we hope to expand the variety of benchmarks included on the website and continue to evaluate new iterations of current methods as they become available. PMID:26335248

  12. A Web Resource for Standardized Benchmark Datasets, Metrics, and Rosetta Protocols for Macromolecular Modeling and Design.

    PubMed

    Ó Conchúir, Shane; Barlow, Kyle A; Pache, Roland A; Ollikainen, Noah; Kundert, Kale; O'Meara, Matthew J; Smith, Colin A; Kortemme, Tanja

    2015-01-01

    The development and validation of computational macromolecular modeling and design methods depend on suitable benchmark datasets and informative metrics for comparing protocols. In addition, if a method is intended to be adopted broadly in diverse biological applications, there needs to be information on appropriate parameters for each protocol, as well as metrics describing the expected accuracy compared to experimental data. In certain disciplines, there exist established benchmarks and public resources where experts in a particular methodology are encouraged to supply their most efficient implementation of each particular benchmark. We aim to provide such a resource for protocols in macromolecular modeling and design. We present a freely accessible web resource (https://kortemmelab.ucsf.edu/benchmarks) to guide the development of protocols for protein modeling and design. The site provides benchmark datasets and metrics to compare the performance of a variety of modeling protocols using different computational sampling methods and energy functions, providing a "best practice" set of parameters for each method. Each benchmark has an associated downloadable benchmark capture archive containing the input files, analysis scripts, and tutorials for running the benchmark. The captures may be run with any suitable modeling method; we supply command lines for running the benchmarks using the Rosetta software suite. We have compiled initial benchmarks for the resource spanning three key areas: prediction of energetic effects of mutations, protein design, and protein structure prediction, each with associated state-of-the-art modeling protocols. With the help of the wider macromolecular modeling community, we hope to expand the variety of benchmarks included on the website and continue to evaluate new iterations of current methods as they become available. PMID:26335248

  13. Metrics and Benchmarks for Energy Efficiency in Laboratories

    SciTech Connect

    Mathew, Paul

    2007-10-26

    A wide spectrum of laboratory owners, ranging from universities to federal agencies, have explicit goals for energy efficiency in their facilities. For example, the Energy Policy Act of 2005 (EPACT 2005) requires all new federal buildings to exceed ASHRAE 90.1-2004 [1] by at least 30 percent. The University of California Regents Policy requires all new construction to exceed California Title 24 [2] by at least 20 percent. A new laboratory is much more likely to meet energy efficiency goals if quantitative metrics and targets are explicitly specified in programming documents and tracked during the course of the delivery process. If efficiency targets are not explicitly and properly defined, any additional capital costs or design time associated with attaining higher efficiencies can be difficult to justify. The purpose of this guide is to provide guidance on how to specify and compute energy efficiency metrics and benchmarks for laboratories, at the whole building as well as the system level. The information in this guide can be used to incorporate quantitative metrics and targets into the programming of new laboratory facilities. Many of these metrics can also be applied to evaluate existing facilities. For information on strategies and technologies to achieve energy efficiency, the reader is referred to Labs21 resources, including technology best practice guides, case studies, and the design guide (available at www.labs21century.gov/toolkit).
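
    A one-line example of turning such a policy requirement into the kind of quantitative programming target the guide calls for; the modeled baseline EUI is a made-up number.

```python
baseline_eui_btu_ft2_yr = 320_000   # hypothetical modeled ASHRAE 90.1-2004 baseline
required_improvement    = 0.30      # EPACT 2005: exceed the baseline by at least 30%

target_eui = baseline_eui_btu_ft2_yr * (1 - required_improvement)
print(f"programming target: {target_eui:,.0f} BTU/ft2-yr or lower")
```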

  14. Synthetic neuronal datasets for benchmarking directed functional connectivity metrics

    PubMed Central

    Andrade, Alexandre

    2015-01-01

    Background. Datasets consisting of synthetic neural data generated with quantifiable and controlled parameters are a valuable asset in the process of testing and validating directed functional connectivity metrics. Considering the recent debate in the neuroimaging community concerning the use of these metrics for fMRI data, synthetic datasets that emulate the BOLD signal dynamics have played a central role by supporting claims that argue in favor or against certain choices. Generative models often used in studies that simulate neuronal activity, with the aim of gaining insight into specific brain regions and functions, have different requirements from the generative models for benchmarking datasets. Even though the latter must be realistic, there is a tradeoff between realism and computational demand that needs to be considered; simulations that efficiently mimic the real behavior of single neurons or neuronal populations are preferred over more cumbersome and only marginally more precise ones. Methods. This work explores how simple generative models are able to produce neuronal datasets, for benchmarking purposes, that reflect the simulated effective connectivity and how these can be used to obtain synthetic recordings of EEG and fMRI BOLD signals. The generative models covered here are AR processes, neural mass models consisting of linear and nonlinear stochastic differential equations, and populations with thousands of spiking units. Forward models for EEG consist of the simple three-shell head model, while the fMRI BOLD signal is modeled with the Balloon-Windkessel model or by convolution with a hemodynamic response function. Results. The simulated datasets are tested for causality with the original spectral formulation for Granger causality. Modeled effective connectivity can be detected in the generated data for varying connection strengths and interaction delays. Discussion. All generative models produce synthetic neuronal data with detectable causal
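
    A minimal sketch of the simplest class of generative model mentioned above: a bivariate AR process in which channel x drives channel y with a one-sample delay, followed by a Granger-style regression check. The coefficients are arbitrary illustrative choices, not values from the paper.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 5000
x = np.zeros(n)
y = np.zeros(n)
for t in range(1, n):
    x[t] = 0.6 * x[t - 1] + rng.normal()
    y[t] = 0.5 * y[t - 1] + 0.4 * x[t - 1] + rng.normal()   # directed x -> y coupling

# Granger-style check: does the other channel's past improve prediction?
coef_y, *_ = np.linalg.lstsq(np.column_stack([y[:-1], x[:-1]]), y[1:], rcond=None)
coef_x, *_ = np.linalg.lstsq(np.column_stack([x[:-1], y[:-1]]), x[1:], rcond=None)
print("y[t] on y[t-1], x[t-1]:", coef_y.round(2))   # x coefficient near 0.4: x -> y detected
print("x[t] on x[t-1], y[t-1]:", coef_x.round(2))   # y coefficient near 0:   no y -> x
```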

  15. A screening life cycle metric to benchmark the environmental sustainability of waste management systems.

    PubMed

    Kaufman, Scott M; Krishnan, Nikhil; Themelis, Nickolas J

    2010-08-01

    The disposal of municipal solid waste (MSW) can lead to significant environmental burdens. The implementation of effective waste management practices, however, requires the ability to benchmark alternative systems from an environmental sustainability perspective. Existing metrics--such as recycling and generation rates, or the emissions of individual pollutants--often are not goal-oriented, are not readily comparable, and may not provide insight into the most effective options for improvement. Life cycle assessment (LCA) is an effective approach to quantify and compare systems, but full LCA comparisons typically involve significant expenditure of resources and time. In this work we develop a metric called the Resource Conservation Efficiency (RCE) that is based on a screening-LCA approach, and that can be used to rapidly and effectively benchmark (on a screening level) the ecological sustainability of waste management practices across multiple locations. We first demonstrate that this measure is an effective proxy by comparing RCE results with existing LCA inventory and impact assessment methods. We then demonstrate the use of the RCE metric by benchmarking the sustainability of waste management practices in two U.S. cities: San Francisco and Honolulu. The results show that while San Francisco does an excellent job recovering recyclable materials, adding a waste to energy (WTE) facility to their infrastructure would most beneficially impact the environmental performance of their waste management system. Honolulu would achieve the greatest gains by increasing the capture of easily recycled materials not currently being recovered. Overall results also highlight how the RCE metric may be used to provide insight into effective actions cities can take to boost the environmental performance of their waste management practices. PMID:20666561

  16. Evaluation of mobile phone camera benchmarking using objective camera speed and image quality metrics

    NASA Astrophysics Data System (ADS)

    Peltoketo, Veli-Tapani

    2014-11-01

    When a mobile phone camera is tested and benchmarked, the significance of image quality metrics is widely acknowledged. There are also existing methods to evaluate camera speed. However, the speed or rapidity metrics of the mobile phone's camera system have not been used together with the quality metrics, even though camera speed has become an increasingly important camera performance feature. There are several tasks in this work. First, the most important image quality and speed-related metrics of a mobile phone's camera system are collected from the standards and papers, and novel speed metrics are identified. Second, combinations of the quality and speed metrics are validated using mobile phones on the market. The measurements are made against the application programming interfaces of the different operating systems. Finally, the results are evaluated and conclusions are made. The paper defines a solution for combining different image quality and speed metrics into a single benchmarking score. A proposal for the combined benchmarking metric is evaluated using measurements of 25 mobile phone cameras on the market. The paper is a continuation of a previous benchmarking work, expanded with visual noise measurement and updates for the latest mobile phone versions.
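
    The paper's exact scoring formula is not reproduced here; the sketch below only illustrates the general idea of normalizing heterogeneous quality and speed metrics against reference values and combining them with weights. All metric names, reference values, and weights are hypothetical.

```python
def combined_score(metrics, weights, best, worst):
    """Map each metric onto [0, 1] against best/worst references, then take a
    weighted average. Works for both higher-is-better and lower-is-better
    metrics because best/worst set the orientation."""
    total = 0.0
    for name, w in weights.items():
        normalized = (metrics[name] - worst[name]) / (best[name] - worst[name])
        total += w * max(0.0, min(1.0, normalized))
    return total / sum(weights.values())

phone   = {"mtf50_cy_px": 0.31, "visual_noise": 2.1, "shot_to_shot_s": 0.9}
best    = {"mtf50_cy_px": 0.45, "visual_noise": 1.0, "shot_to_shot_s": 0.3}
worst   = {"mtf50_cy_px": 0.10, "visual_noise": 5.0, "shot_to_shot_s": 3.0}
weights = {"mtf50_cy_px": 0.4,  "visual_noise": 0.3, "shot_to_shot_s": 0.3}

print(f"combined benchmarking score: {combined_score(phone, weights, best, worst):.2f}")
```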

  17. Improved product energy intensity benchmarking metrics for thermally concentrated food products.

    PubMed

    Walker, Michael E; Arnold, Craig S; Lettieri, David J; Hutchins, Margot J; Masanet, Eric

    2014-10-21

    Product energy intensity (PEI) metrics allow industry and policymakers to quantify manufacturing energy requirements on a product-output basis. However, complexities can arise for benchmarking of thermally concentrated products, particularly in the food processing industry, due to differences in outlet composition, feed material composition, and processing technology. This study analyzes tomato paste as a typical, high-volume concentrated product using a thermodynamics-based model. Results show that PEI for tomato pastes and purees varies from 1200 to 9700 kJ/kg over the range of 8%-40% outlet solids concentration for a 3-effect evaporator, and 980-7000 kJ/kg for a 5-effect evaporator. Further, the PEI for producing paste at 31% outlet solids concentration in a 3-effect evaporator varies from 13,000 kJ/kg at 3% feed solids concentration to 5900 kJ/kg at 6%; for a 5-effect evaporator, the variation is from 9200 kJ/kg at 3%, to 4300 kJ/kg at 6%. Methods to compare the PEI of different product concentrations on a standard basis are evaluated. This paper also presents methods to develop PEI benchmark values for multiple plants. These results focus on the case of a tomato paste processing facility, but can be extended to other products and industries that utilize thermal concentration. PMID:25215537
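
    The concentration dependence comes straight from a solids balance; the toy model below shows the mechanism with an idealized multiple-effect steam economy (the latent heat value and the 1/n-effects assumption are simplifications, not the paper's model).

```python
def evaporator_pei_kj_per_kg(feed_solids, outlet_solids, n_effects,
                             latent_heat_kj_per_kg=2300.0):
    """Approximate process energy per kg of concentrated product."""
    # Solids balance: feed_mass * feed_solids = 1 kg product * outlet_solids
    feed_mass = outlet_solids / feed_solids      # kg feed per kg product
    water_removed = feed_mass - 1.0              # kg water evaporated per kg product
    # Idealized multiple-effect evaporation: steam demand scales as 1/n_effects.
    return water_removed * latent_heat_kj_per_kg / n_effects

for n in (3, 5):
    print(f"{n}-effect, 5% -> 31% solids: "
          f"{evaporator_pei_kj_per_kg(0.05, 0.31, n):,.0f} kJ/kg product")
```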

  18. Developing Financial Benchmarks for Critical Access Hospitals

    PubMed Central

    Pink, George H.; Holmes, George M.; Slifkin, Rebecca T.; Thompson, Roger E.

    2009-01-01

    This study developed and applied benchmarks for five indicators included in the CAH Financial Indicators Report, an annual, hospital-specific report distributed to all critical access hospitals (CAHs). An online survey of Chief Executive Officers and Chief Financial Officers was used to establish benchmarks. Indicator values for 2004, 2005, and 2006 were calculated for 421 CAHs and hospital performance was compared to the benchmarks. Although many hospitals performed better than benchmark on one indicator in 1 year, very few performed better than benchmark on all five indicators in all 3 years. The probability of performing better than benchmark differed among peer groups. PMID:19544935

  19. Metrics for antibody therapeutics development.

    PubMed

    Reichert, Janice M

    2010-01-01

    A wide variety of full-size monoclonal antibodies (mAbs) and therapeutics derived from alternative antibody formats can be produced through genetic and biological engineering techniques. These molecules are now filling the preclinical and clinical pipelines of every major pharmaceutical company and many biotechnology firms. Metrics for the development of antibody therapeutics, including averages for the number of candidates entering clinical study and development phase lengths for mAbs approved in the United States, were derived from analysis of a dataset of over 600 therapeutic mAbs that entered clinical study sponsored, at least in part, by commercial firms. The results presented provide an overview of the field and context for the evaluation of on-going and prospective mAb development programs. The expansion of therapeutic antibody use through supplemental marketing approvals and the increase in the study of therapeutics derived from alternative antibody formats are discussed. PMID:20930555

  20. How Does Your Data Center Measure Up? Energy Efficiency Metrics and Benchmarks for Data Center Infrastructure Systems

    SciTech Connect

    Mathew, Paul; Greenberg, Steve; Ganguly, Srirupa; Sartor, Dale; Tschudi, William

    2009-04-01

    Data centers are among the most energy intensive types of facilities, and they are growing dramatically in terms of size and intensity [EPA 2007]. As a result, in the last few years there has been increasing interest from stakeholders - ranging from data center managers to policy makers - to improve the energy efficiency of data centers, and there are several industry and government organizations that have developed tools, guidelines, and training programs. There are many opportunities to reduce energy use in data centers and benchmarking studies reveal a wide range of efficiency practices. Data center operators may not be aware of how efficient their facility may be relative to their peers, even for the same levels of service. Benchmarking is an effective way to compare one facility to another, and also to track the performance of a given facility over time. Toward that end, this article presents the key metrics that facility managers can use to assess, track, and manage the efficiency of the infrastructure systems in data centers, and thereby identify potential efficiency actions. Most of the benchmarking data presented in this article are drawn from the data center benchmarking database at Lawrence Berkeley National Laboratory (LBNL). The database was developed from studies commissioned by the California Energy Commission, Pacific Gas and Electric Co., the U.S. Department of Energy and the New York State Energy Research and Development Authority.

  1. ImQual: a web-service dedicated to image quality evaluation and metrics benchmark

    NASA Astrophysics Data System (ADS)

    Nauge, Michael; Larabi, Mohamed-Chaker; Fernandez-Maloigne, Christine

    2011-01-01

    Quality assessment is becoming an important issue in the framework of image and video processing. Images are generally intended to be viewed by human observers, and thus the consideration of visual perception is an intrinsic aspect of the effective assessment of image quality. This observation has been made for different application domains such as printing, compression, transmission, and so on. Recently, hundreds of research papers have proposed objective quality metrics dedicated to several image and video applications. With this abundance of quality tools, it is more important than ever to have a set of rules/methods for assessing the efficiency of a given metric. In this direction, technical groups such as VQEG (Video Quality Experts Group) or JPEG AIC (Advanced Image Coding) have focused their interest on the definition of test plans to measure the impact of a metric. Following this wave in the image and video community, we propose in this paper a web-service or web-application dedicated to the benchmarking of quality metrics for image compression, open to all possible extensions. This application is intended to be the reference tool for the JPEG committee in order to ease the evaluation of new compression technologies. It is also seen as a resource for the wider community, saving researchers time when evaluating their watermarking, compression, or enhancement algorithms. As an illustration of the web-application, we present a benchmark of many well-known metrics on several image databases to provide a brief overview of its possible uses.

  2. Benchmarking Learning and Teaching: Developing a Method

    ERIC Educational Resources Information Center

    Henderson-Smart, Cheryl; Winning, Tracey; Gerzina, Tania; King, Shalinie; Hyde, Sarah

    2006-01-01

    Purpose: To develop a method for benchmarking teaching and learning in response to an institutional need to validate a new program in Dentistry at the University of Sydney, Australia. Design/methodology/approach: After a collaborative partner, University of Adelaide, was identified, the areas of teaching and learning to be benchmarked, PBL…

  3. Achieving palliative care research efficiency through defining and benchmarking performance metrics

    PubMed Central

    Lodato, Jordan E.; Aziz, Noreen; Bennett, Rachael E.; Abernethy, Amy P.; Kutner, Jean S.

    2014-01-01

    Purpose of Review Research efficiency is gaining increasing attention in the research enterprise, including palliative care research. The importance of generating meaningful findings and translating these scientific advances to improved patient care creates urgency in the field to address well-documented system inefficiencies. The Palliative Care Research Cooperative Group (PCRC) provides useful examples for ensuring research efficiency in palliative care. Recent Findings Literature on maximizing research efficiency focuses on the importance of clearly delineated process maps, working instructions, and standard operating procedures (SOPs) in creating synchronicity in expectations across research sites. Examples from the PCRC support these objectives and suggest that early creation and employment of performance metrics aligned with these processes are essential to generate clear expectations and identify benchmarks. These benchmarks are critical in effective monitoring and ultimately the generation of high quality findings that are translatable to clinical populations. Prioritization of measurable goals and tasks to ensure that activities align with programmatic aims is critical. Summary Examples from the PCRC affirm and expand the existing literature on research efficiency, providing a palliative care focus. Operating procedures, performance metrics, prioritization, and monitoring for success should all be informed by and inform the process map to achieve maximum research efficiency. PMID:23080309

  4. Developing integrated benchmarks for DOE performance measurement

    SciTech Connect

    Barancik, J.I.; Kramer, C.F.; Thode, Jr. H.C.

    1992-09-30

    The objectives of this task were to describe and evaluate selected existing sources of information on occupational safety and health, with emphasis on hazard and exposure assessment, abatement, training, reporting, and control, identifying exposure and outcome factors in preparation for developing DOE performance benchmarks. Existing resources and methodologies were assessed for their potential use as practical performance benchmarks. Strengths and limitations of current data resources were identified. Guidelines were outlined for developing new or improved performance factors, which then could become the basis for selecting performance benchmarks. Databases for non-DOE comparison populations were identified so that DOE performance could be assessed relative to non-DOE occupational and industrial groups. Systems approaches were described which can be used to link hazards and exposure, event occurrence, and adverse outcome factors, as needed to generate valid, reliable, and predictive performance benchmarks. Databases were identified which contain information relevant to one or more performance assessment categories. A list of 72 potential performance benchmarks was prepared to illustrate the kinds of information that can be produced through a benchmark development program. Current information resources which may be used to develop potential performance benchmarks are limited. There is a need to develop an occupational safety and health information and data system in DOE which is capable of incorporating demonstrated and documented performance benchmarks prior to, or concurrent with, the development of hardware and software. A key to the success of this systems approach is rigorous development and demonstration of performance benchmark equivalents to users of such data before system hardware and software commitments are institutionalized.

  5. Performance Benchmarks for Scholarly Metrics Associated with Fisheries and Wildlife Faculty.

    PubMed

    Swihart, Robert K; Sundaram, Mekala; Höök, Tomas O; DeWoody, J Andrew; Kellner, Kenneth F

    2016-01-01

    Research productivity and impact are often considered in professional evaluations of academics, and performance metrics based on publications and citations increasingly are used in such evaluations. To promote evidence-based and informed use of these metrics, we collected publication and citation data for 437 tenure-track faculty members at 33 research-extensive universities in the United States belonging to the National Association of University Fisheries and Wildlife Programs. For each faculty member, we computed 8 commonly used performance metrics based on numbers of publications and citations, and recorded covariates including academic age (time since Ph.D.), sex, percentage of appointment devoted to research, and the sub-disciplinary research focus. Standardized deviance residuals from regression models were used to compare faculty after accounting for variation in performance due to these covariates. We also aggregated residuals to enable comparison across universities. Finally, we tested for temporal trends in citation practices to assess whether the "law of constant ratios", used to enable comparison of performance metrics between disciplines that differ in citation and publication practices, applied to fisheries and wildlife sub-disciplines when mapped to Web of Science Journal Citation Report categories. Our regression models reduced deviance by ¼ to ½. Standardized residuals for each faculty member, when combined across metrics as a simple average or weighted via factor analysis, produced similar results in terms of performance based on percentile rankings. Significant variation was observed in scholarly performance across universities, after accounting for the influence of covariates. In contrast to findings for other disciplines, normalized citation ratios for fisheries and wildlife sub-disciplines increased across years. Increases were comparable for all sub-disciplines except ecology. We discuss the advantages and limitations of our methods
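
    A conceptual sketch of the covariate-adjustment step described above, fitting a simple log-linear model of publication counts against academic age and converting the fit into standardized Poisson deviance residuals. The simulated data and the single covariate are placeholders, not the study's data or model.

```python
import numpy as np

rng = np.random.default_rng(1)
n = 200
academic_age = rng.uniform(2, 35, n)                       # years since Ph.D.
pubs = rng.poisson(np.exp(0.8 + 0.06 * academic_age))      # simulated publication counts

# Crude log-linear fit (least squares on log counts, as a stand-in for a GLM).
X = np.column_stack([np.ones(n), academic_age])
beta, *_ = np.linalg.lstsq(X, np.log(pubs + 0.5), rcond=None)
mu = np.exp(X @ beta)                                      # fitted expected counts

# Poisson deviance residuals, then standardized for cross-faculty comparison.
with np.errstate(divide="ignore", invalid="ignore"):
    term = np.where(pubs > 0, pubs * np.log(pubs / mu), 0.0)
deviance_resid = np.sign(pubs - mu) * np.sqrt(2 * (term - (pubs - mu)))
standardized = (deviance_resid - deviance_resid.mean()) / deviance_resid.std()
print(standardized[:5].round(2))
```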

  6. Performance Benchmarks for Scholarly Metrics Associated with Fisheries and Wildlife Faculty

    PubMed Central

    Swihart, Robert K.; Sundaram, Mekala; Höök, Tomas O.; DeWoody, J. Andrew; Kellner, Kenneth F.

    2016-01-01

    Research productivity and impact are often considered in professional evaluations of academics, and performance metrics based on publications and citations increasingly are used in such evaluations. To promote evidence-based and informed use of these metrics, we collected publication and citation data for 437 tenure-track faculty members at 33 research-extensive universities in the United States belonging to the National Association of University Fisheries and Wildlife Programs. For each faculty member, we computed 8 commonly used performance metrics based on numbers of publications and citations, and recorded covariates including academic age (time since Ph.D.), sex, percentage of appointment devoted to research, and the sub-disciplinary research focus. Standardized deviance residuals from regression models were used to compare faculty after accounting for variation in performance due to these covariates. We also aggregated residuals to enable comparison across universities. Finally, we tested for temporal trends in citation practices to assess whether the “law of constant ratios”, used to enable comparison of performance metrics between disciplines that differ in citation and publication practices, applied to fisheries and wildlife sub-disciplines when mapped to Web of Science Journal Citation Report categories. Our regression models reduced deviance by ¼ to ½. Standardized residuals for each faculty member, when combined across metrics as a simple average or weighted via factor analysis, produced similar results in terms of performance based on percentile rankings. Significant variation was observed in scholarly performance across universities, after accounting for the influence of covariates. In contrast to findings for other disciplines, normalized citation ratios for fisheries and wildlife sub-disciplines increased across years. Increases were comparable for all sub-disciplines except ecology. We discuss the advantages and limitations of our methods

  7. Enhanced Accident Tolerant LWR Fuels: Metrics Development

    SciTech Connect

    Shannon Bragg-Sitton; Lori Braase; Rose Montgomery; Chris Stanek; Robert Montgomery; Lance Snead; Larry Ott; Mike Billone

    2013-09-01

    The Department of Energy (DOE) Fuel Cycle Research and Development (FCRD) Advanced Fuels Campaign (AFC) is conducting research and development on enhanced Accident Tolerant Fuels (ATF) for light water reactors (LWRs). This mission emphasizes the development of novel fuel and cladding concepts to replace the current zirconium alloy-uranium dioxide (UO2) fuel system. The overall mission of the ATF research is to develop advanced fuels/cladding with improved performance, reliability and safety characteristics during normal operations and accident conditions, while minimizing waste generation. The initial effort will focus on implementation in operating reactors or reactors with design certifications. To initiate the development of quantitative metrics for ATF, a LWR Enhanced Accident Tolerant Fuels Metrics Development Workshop was held in October 2012 in Germantown, MD. This paper summarizes the outcome of that workshop and the current status of metrics development for LWR ATF.

  8. Proposing Metrics for Benchmarking Novel EEG Technologies Towards Real-World Measurements

    PubMed Central

    Oliveira, Anderson S.; Schlink, Bryan R.; Hairston, W. David; König, Peter; Ferris, Daniel P.

    2016-01-01

    Recent advances in electroencephalographic (EEG) acquisition allow for recordings using wet and dry sensors during whole-body motion. The large variety of commercially available EEG systems contrasts with the lack of established methods for objectively describing their performance during whole-body motion. Therefore, the aim of this study was to introduce methods for benchmarking the suitability of new EEG technologies for that context. Subjects performed an auditory oddball task using three different EEG systems (Biosemi Wet—BSM, Cognionics Wet—Cwet, Cognionics Dry—Cdry). Nine subjects performed the oddball task while seated and walking on a treadmill. We calculated EEG epoch rejection rate, pre-stimulus noise (PSN), signal-to-noise ratio (SNR) and EEG amplitude variance across the P300 event window (CVERP) from a subset of 12 channels common to all systems. We also calculated test-retest reliability and the subject’s level of comfort while using each system. Our results showed that, using the traditional 75 μV rejection threshold, BSM and Cwet epoch rejection rates are ~25% and ~47% in the seated and walking conditions, respectively. However, this threshold rejects ~63% of epochs for Cdry in the seated condition and excludes 100% of epochs for the majority of subjects during walking. BSM showed predominantly no statistical differences between seated and walking condition for all metrics, whereas Cwet showed increases in PSN and CVERP, as well as reduced SNR in the walking condition. Data quality from Cdry in seated conditions was predominantly inferior in comparison to the wet systems. Test-retest reliability was mostly moderate/good for these variables, especially in seated conditions. In addition, subjects felt less discomfort and were motivated for longer recording periods while using wet EEG systems in comparison to the dry system. The proposed method was successful in identifying differences across systems that are mostly caused by motion
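
    The rejection-rate metric itself is simple; below is a sketch, on simulated rather than recorded EEG, of applying the 75 μV amplitude threshold mentioned above (channel count, epoch length, and noise levels are invented).

```python
import numpy as np

rng = np.random.default_rng(2)
n_epochs, n_channels, n_samples = 300, 12, 512
epochs_uv = rng.normal(0.0, 15.0, (n_epochs, n_channels, n_samples))   # "clean" background

# Inject large, motion-like artifacts into a subset of epochs.
artifact_idx = rng.choice(n_epochs, size=80, replace=False)
epochs_uv[artifact_idx] += rng.normal(0.0, 60.0, (80, n_channels, n_samples))

threshold_uv = 75.0
rejected = np.any(np.abs(epochs_uv) > threshold_uv, axis=(1, 2))
print(f"epoch rejection rate: {rejected.mean():.0%}")
```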

  9. Proposing Metrics for Benchmarking Novel EEG Technologies Towards Real-World Measurements.

    PubMed

    Oliveira, Anderson S; Schlink, Bryan R; Hairston, W David; König, Peter; Ferris, Daniel P

    2016-01-01

    Recent advances in electroencephalographic (EEG) acquisition allow for recordings using wet and dry sensors during whole-body motion. The large variety of commercially available EEG systems contrasts with the lack of established methods for objectively describing their performance during whole-body motion. Therefore, the aim of this study was to introduce methods for benchmarking the suitability of new EEG technologies for that context. Subjects performed an auditory oddball task using three different EEG systems (Biosemi Wet-BSM, Cognionics Wet-Cwet, Cognionics Dry-Cdry). Nine subjects performed the oddball task while seated and walking on a treadmill. We calculated EEG epoch rejection rate, pre-stimulus noise (PSN), signal-to-noise ratio (SNR) and EEG amplitude variance across the P300 event window (CVERP) from a subset of 12 channels common to all systems. We also calculated test-retest reliability and the subject's level of comfort while using each system. Our results showed that, using the traditional 75 μV rejection threshold, BSM and Cwet epoch rejection rates are ~25% and ~47% in the seated and walking conditions, respectively. However, this threshold rejects ~63% of epochs for Cdry in the seated condition and excludes 100% of epochs for the majority of subjects during walking. BSM showed predominantly no statistical differences between seated and walking condition for all metrics, whereas Cwet showed increases in PSN and CVERP, as well as reduced SNR in the walking condition. Data quality from Cdry in seated conditions was predominantly inferior in comparison to the wet systems. Test-retest reliability was mostly moderate/good for these variables, especially in seated conditions. In addition, subjects felt less discomfort and were motivated for longer recording periods while using wet EEG systems in comparison to the dry system. The proposed method was successful in identifying differences across systems that are mostly caused by motion-related artifacts and

  10. Benchmarking the performance of fixed-image receptor digital radiography systems. Part 2: system performance metric.

    PubMed

    Lee, Kam L; Bernardo, Michael; Ireland, Timothy A

    2016-06-01

    This is part two of a two-part study in benchmarking system performance of fixed digital radiographic systems. The study compares the system performance of seven fixed digital radiography systems based on quantitative metrics like modulation transfer function (sMTF), normalised noise power spectrum (sNNPS), detective quantum efficiency (sDQE) and entrance surface air kerma (ESAK). It was found that the most efficient image receptors (greatest sDQE) were not necessarily operating at the lowest ESAK. In part one of this study, sMTF is shown to depend on system configuration while sNNPS is shown to be relatively consistent across systems. Systems are ranked on their signal-to-noise ratio efficiency (sDQE) and their ESAK. Systems using the same equipment configuration do not necessarily have the same system performance. This implies radiographic practice at the site will have an impact on the overall system performance. In general, systems are more dose efficient at low dose settings. PMID:27222199
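
    For readers unfamiliar with how these quantities relate, the sketch below applies the standard detector relationship DQE(f) = MTF(f)^2 / (q · NNPS(f)), with q the photon fluence; the paper's s-prefixed metrics are system-level analogues of these detector quantities. All numeric values are invented.

```python
import numpy as np

freq = np.array([0.5, 1.0, 2.0, 3.0])                 # spatial frequency, cycles/mm
mtf  = np.array([0.85, 0.65, 0.35, 0.18])             # modulation transfer function
nnps = np.array([4.0e-6, 3.2e-6, 2.5e-6, 2.2e-6])     # normalised noise power spectrum, mm^2
q    = 2.5e5                                          # photon fluence, photons/mm^2

dqe = mtf**2 / (q * nnps)
for f, d in zip(freq, dqe):
    print(f"DQE({f:.1f} cy/mm) = {d:.2f}")
```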

  11. Understanding Acceptance of Software Metrics--A Developer Perspective

    ERIC Educational Resources Information Center

    Umarji, Medha

    2009-01-01

    Software metrics are measures of software products and processes. Metrics are widely used by software organizations to help manage projects, improve product quality and increase efficiency of the software development process. However, metrics programs tend to have a high failure rate in organizations, and developer pushback is one of the sources…

  12. Metrics. [measurement for effective software development and management]

    NASA Technical Reports Server (NTRS)

    Mcgarry, Frank

    1991-01-01

    A development status evaluation is presented for practical software performance measurement, or 'metrics', in which major innovations have recently occurred. Metrics address such aspects of software performance as whether a software project is on schedule, how many errors can be expected from it, whether the methodology being used is effective and the relative quality of the software employed. Metrics may be characterized as explicit, analytical, and subjective. Attention is given to the bases for standards and the conduct of metrics research.

  13. Structural Life and Reliability Metrics: Benchmarking and Verification of Probabilistic Life Prediction Codes

    NASA Technical Reports Server (NTRS)

    Litt, Jonathan S.; Soditus, Sherry; Hendricks, Robert C.; Zaretsky, Erwin V.

    2002-01-01

    Over the past two decades there has been considerable effort by NASA Glenn and others to develop probabilistic codes to predict with reasonable engineering certainty the life and reliability of critical components in rotating machinery and, more specifically, in the rotating sections of airbreathing and rocket engines. These codes have, to a very limited extent, been verified with relatively small bench rig type specimens under uniaxial loading. Because of the small and very narrow database the acceptance of these codes within the aerospace community has been limited. An alternate approach to generating statistically significant data under complex loading and environments simulating aircraft and rocket engine conditions is to obtain, catalog and statistically analyze actual field data. End users of the engines, such as commercial airlines and the military, record and store operational and maintenance information. This presentation describes a cooperative program between the NASA GRC, United Airlines, USAF Wright Laboratory, U.S. Army Research Laboratory and Australian Aeronautical & Maritime Research Laboratory to obtain and analyze these airline data for selected components such as blades, disks and combustors. These airline data will be used to benchmark and compare existing life prediction codes.

  14. Structural Life and Reliability Metrics: Benchmarking and Verification of Probabilistic Life Prediction Codes

    NASA Astrophysics Data System (ADS)

    Litt, Jonathan S.; Soditus, Sherry; Hendricks, Robert C.; Zaretsky, Erwin V.

    2002-10-01

    Over the past two decades there has been considerable effort by NASA Glenn and others to develop probabilistic codes to predict with reasonable engineering certainty the life and reliability of critical components in rotating machinery and, more specifically, in the rotating sections of airbreathing and rocket engines. These codes have, to a very limited extent, been verified with relatively small bench rig type specimens under uniaxial loading. Because of the small and very narrow database the acceptance of these codes within the aerospace community has been limited. An alternate approach to generating statistically significant data under complex loading and environments simulating aircraft and rocket engine conditions is to obtain, catalog and statistically analyze actual field data. End users of the engines, such as commercial airlines and the military, record and store operational and maintenance information. This presentation describes a cooperative program between the NASA GRC, United Airlines, USAF Wright Laboratory, U.S. Army Research Laboratory and Australian Aeronautical & Maritime Research Laboratory to obtain and analyze these airline data for selected components such as blades, disks and combustors. These airline data will be used to benchmark and compare existing life prediction codes.

  15. Can Human Capital Metrics Effectively Benchmark Higher Education with For-Profit Companies?

    ERIC Educational Resources Information Center

    Hagedorn, Kathy; Forlaw, Blair

    2007-01-01

    Last fall, Saint Louis University participated in St. Louis, Missouri's, first Human Capital Performance Study alongside several of the region's largest for-profit employers. The university also participated this year in the benchmarking of employee engagement factors conducted by the St. Louis Business Journal in its effort to quantify and select…

  16. A Question of Accountability: Looking beyond Federal Mandates for Metrics That Accurately Benchmark Community College Success

    ERIC Educational Resources Information Center

    Joch, Alan

    2014-01-01

    The need for increased accountability in higher education, and specifically the nation's community colleges, is something most educators can agree on. The challenge has been, and continues to be, finding a system of metrics that meets the unique needs of two-year institutions versus their four-year counterparts. Last summer, President Obama unveiled…

  17. Developing Benchmarks to Measure Teacher Candidates' Performance

    ERIC Educational Resources Information Center

    Frazier, Laura Corbin; Brown-Hobbs, Stacy; Palmer, Barbara Martin

    2013-01-01

    This paper traces the development of teacher candidate benchmarks at one liberal arts institution. Begun as a classroom assessment activity over ten years ago, the benchmarks, through collaboration with professional development school partners, now serve as a primary measure of teacher candidates' performance in the final phases of the…

  18. Development of a California commercial building benchmarking database

    SciTech Connect

    Kinney, Satkartar; Piette, Mary Ann

    2002-05-17

    Building energy benchmarking is a useful starting point for commercial building owners and operators to target energy savings opportunities. There are a number of tools and methods for benchmarking energy use. Benchmarking based on regional data can provide more relevant information for California buildings than national tools such as Energy Star. This paper discusses issues related to benchmarking commercial building energy use and the development of Cal-Arch, a building energy benchmarking database for California. Currently Cal-Arch uses existing survey data from California's Commercial End Use Survey (CEUS), a largely underutilized wealth of information collected by California's major utilities. DOE's Commercial Building Energy Consumption Survey (CBECS) is used by a similar tool, Arch, and by a number of other benchmarking tools. Future versions of Arch/Cal-Arch will utilize additional data sources including modeled data and individual buildings to expand the database.

  19. Developing a Security Metrics Scorecard for Healthcare Organizations.

    PubMed

    Elrefaey, Heba; Borycki, Elizabeth; Kushniruk, Andrea

    2015-01-01

    In healthcare, information security is a key aspect of protecting a patient's privacy and ensuring systems availability to support patient care. Security managers need to measure the performance of security systems and this can be achieved by using evidence-based metrics. In this paper, we describe the development of an evidence-based security metrics scorecard specific to healthcare organizations. Study participants were asked to comment on the usability and usefulness of a prototype of a security metrics scorecard that was developed based on current research in the area of general security metrics. Study findings revealed that scorecards need to be customized for the healthcare setting in order for the security information to be useful and usable in healthcare organizations. The study findings resulted in the development of a security metrics scorecard that matches the healthcare security experts' information requirements. PMID:26718256

  20. Developing Metrics in Systems Integration (ISS Program COTS Integration Model)

    NASA Technical Reports Server (NTRS)

    Lueders, Kathryn

    2007-01-01

    This viewgraph presentation reviews some of the complications in developing metrics for systems integration. Specifically it reviews a case study of how two programs within NASA try to develop and measure performance while meeting the encompassing organizational goals.

  1. Hospital Energy Benchmarking Guidance - Version 1.0

    SciTech Connect

    Singer, Brett C.

    2009-09-08

    This document describes an energy benchmarking framework for hospitals. The document is organized as follows. The introduction provides a brief primer on benchmarking and its application to hospitals. The next two sections discuss special considerations including the identification of normalizing factors. The presentation of metrics is preceded by a description of the overall framework and the rationale for the grouping of metrics. Following the presentation of metrics, a high-level protocol is provided. The next section presents draft benchmarks for some metrics; benchmarks are not available for many metrics owing to a lack of data. This document ends with a list of research needs for further development.

  2. Advanced Life Support Research and Technology Development Metric

    NASA Technical Reports Server (NTRS)

    Hanford, A. J.

    2004-01-01

    The Metric is one of several measures employed by NASA to assess the Agency's progress as mandated by the United States Congress and the Office of Management and Budget. Because any measure must have a reference point, whether explicitly defined or implied, the Metric is a comparison between a selected ALS Project life support system and an equivalently detailed life support system using technology from the Environmental Control and Life Support System (ECLSS) for the International Space Station (ISS). This document provides the official calculation of the Advanced Life Support (ALS) Research and Technology Development Metric (the Metric) for Fiscal Year 2004. The values are primarily based on Systems Integration, Modeling, and Analysis (SIMA) Element approved software tools or reviewed and approved reference documents. For Fiscal Year 2004, the Advanced Life Support Research and Technology Development Metric value is 2.03 for an Orbiting Research Facility and 1.62 for an Independent Exploration Mission.

  3. Development of Technology Transfer Economic Growth Metrics

    NASA Technical Reports Server (NTRS)

    Mastrangelo, Christina M.

    1998-01-01

    The primary objective of this project is to determine the feasibility of producing technology transfer metrics that answer the question: Do NASA/MSFC technical assistance activities impact economic growth? The data for this project resides in a 7800-record database maintained by Tec-Masters, Incorporated. The technology assistance data results from survey responses from companies and individuals who have interacted with NASA via a Technology Transfer Agreement, or TTA. The goal of this project was to determine if the existing data could provide indications of increased wealth. This work demonstrates that there is evidence that companies that used NASA technology transfer have a higher job growth rate than the rest of the economy. It also shows that the jobs being supported are jobs in higher wage SIC codes, and this indicates improvements in personal wealth. Finally, this work suggests that with correct data, the wealth issue may be addressed.

  4. Developing scheduling benchmark tests for the Space Network

    NASA Technical Reports Server (NTRS)

    Moe, Karen L.; Happell, Nadine; Brady, Sean

    1993-01-01

    A set of benchmark tests was developed to analyze and measure Space Network scheduling characteristics and to assess the potential benefits of a proposed flexible scheduling concept. This paper discusses the role of the benchmark tests in evaluating alternative flexible scheduling approaches and defines a set of performance measurements. The paper describes the rationale for the benchmark tests as well as the benchmark components, which include models of the Tracking and Data Relay Satellite (TDRS), mission spacecraft, their orbital data, and flexible requests for communication services. Parameters that vary in the tests address the degree of request flexibility, the request resource load, and the number of events to schedule. Test results are evaluated based on time to process and schedule quality. Preliminary results and lessons learned are addressed.

  5. Developing Metrics for Managing Soybean Aphids

    Technology Transfer Automated Retrieval System (TEKTRAN)

    Stage-specific economic injury levels form the basis of integrated pest management for soybean aphid (Aphis glycines Matsumura) in soybean (Glycine max L.). Experimental objectives were to develop a procedure for calculating economic injury levels of the soybean aphid specific to the R2 (full bloom...

  6. Metrics in Urban Health: Current Developments and Future Prospects.

    PubMed

    Prasad, Amit; Gray, Chelsea Bettina; Ross, Alex; Kano, Megumi

    2016-01-01

    The research community has shown increasing interest in developing and using metrics to determine the relationships between urban living and health. In particular, we have seen a recent exponential increase in efforts aiming to investigate and apply metrics for urban health, especially the health impacts of the social and built environments as well as air pollution. A greater recognition of the need to investigate the impacts and trends of health inequities is also evident through more recent literature. Data availability and accuracy have improved through new affordable technologies for mapping, geographic information systems (GIS), and remote sensing. However, less research has been conducted in low- and middle-income countries where quality data are not always available, and capacity for analyzing available data may be limited. For this increased interest in research and development of metrics to be meaningful, the best available evidence must be accessible to decision makers to improve health impacts through urban policies. PMID:26789382

  7. Measures and metrics for software development

    NASA Technical Reports Server (NTRS)

    1984-01-01

    The evaluations of and recommendations for the use of software development measures based on the practical and analytical experience of the Software Engineering Laboratory are discussed. The basic concepts of measurement and system of classification for measures are described. The principal classes of measures defined are explicit, analytic, and subjective. Some of the major software measurement schemes appearing in the literature are derived. The applications of specific measures in a production environment are explained. These applications include prediction and planning, review and assessment, and evaluation and selection.

  8. Development of a Benchmark Example for Delamination Fatigue Growth Prediction

    NASA Technical Reports Server (NTRS)

    Krueger, Ronald

    2010-01-01

    The development of a benchmark example for cyclic delamination growth prediction is presented and demonstrated for a commercial code. The example is based on a finite element model of a Double Cantilever Beam (DCB) specimen, which is independent of the analysis software used and allows the assessment of the delamination growth prediction capabilities in commercial finite element codes. First, the benchmark result was created for the specimen. Second, starting from an initially straight front, the delamination was allowed to grow under cyclic loading in a finite element model of a commercial code. The number of cycles to delamination onset and the number of cycles during stable delamination growth for each growth increment were obtained from the analysis. In general, good agreement between the results obtained from the growth analysis and the benchmark results could be achieved by selecting the appropriate input parameters. Overall, the results are encouraging but further assessment for mixed-mode delamination is required.

  10. Developing a Benchmark Tool for Sustainable Consumption: An Iterative Process

    ERIC Educational Resources Information Center

    Heiskanen, E.; Timonen, P.; Nissinen, A.; Gronroos, J.; Honkanen, A.; Katajajuuri, J. -M.; Kettunen, J.; Kurppa, S.; Makinen, T.; Seppala, J.; Silvenius, F.; Virtanen, Y.; Voutilainen, P.

    2007-01-01

    This article presents the development process of a consumer-oriented, illustrative benchmarking tool enabling consumers to use the results of environmental life cycle assessment (LCA) to make informed decisions. LCA provides a wealth of information on the environmental impacts of products, but its results are very difficult to present concisely…

  11. The Applicability of Proposed Object-Oriented Metrics to Developer Feedback in Time to Impact Development

    NASA Technical Reports Server (NTRS)

    Neal, Ralph D.

    1996-01-01

    This paper looks closely at each of the software metrics generated by the McCabe Object-Oriented Tool(TM) and its ability to convey timely information to developers. The metrics are examined for meaningfulness in terms of the scale assignable to the metric by the rules of measurement theory and the software dimension being measured. Recommendations are made as to the proper use of each metric and its ability to influence development at an early stage. The metrics of the McCabe Object-Oriented Tool(TM) set were selected because of the tool's use in a couple of NASA IV&V projects.

  12. Development of Technology Readiness Level (TRL) Metrics and Risk Measures

    SciTech Connect

    Engel, David W.; Dalton, Angela C.; Anderson, K. K.; Sivaramakrishnan, Chandrika; Lansing, Carina

    2012-10-01

    This is an internal project milestone report to document the CCSI Element 7 team's progress on developing Technology Readiness Level (TRL) metrics and risk measures. In this report, we provide a brief overview of the current technology readiness assessment research, document the development of technology readiness levels (TRLs) specific to carbon capture technologies, describe the risk measures and uncertainty quantification approaches used in our research, and conclude by discussing the next steps that the CCSI Task 7 team aims to accomplish.

  13. Development of Management Metrics for Research and Technology

    NASA Technical Reports Server (NTRS)

    Sheskin, Theodore J.

    2003-01-01

    Professor Ted Sheskin from CSU will be tasked to research and investigate metrics that can be used to determine the technical progress for advanced development and research tasks. These metrics will be implemented in a software environment that hosts engineering design, analysis and management tools to be used to support power system and component research work at GRC. Professor Sheskin is an Industrial Engineer and has been involved in issues related to management of engineering tasks and will use his knowledge from this area to allow extrapolation into the research and technology management area. Over the course of the summer, Professor Sheskin will develop a bibliography of management papers covering current management methods that may be applicable to research management. At the completion of the summer work we expect to have him recommend a metric system to be reviewed prior to implementation in the software environment. This task has been discussed with Professor Sheskin and some review material has already been given to him.

  14. Pragmatic quality metrics for evolutionary software development models

    NASA Technical Reports Server (NTRS)

    Royce, Walker

    1990-01-01

    Due to the large number of product, project, and people parameters which impact large custom software development efforts, measurement of software product quality is a complex undertaking. Furthermore, the absolute perspective from which quality is measured (customer satisfaction) is intangible. While we probably can't say what the absolute quality of a software product is, we can determine the relative quality, the adequacy of this quality with respect to pragmatic considerations, and identify good and bad trends during development. While no two software engineers will ever agree on an optimum definition of software quality, they will agree that the most important perspective of software quality is its ease of change. We can call this flexibility, adaptability, or some other vague term, but the critical characteristic of software is that it is soft. The easier the product is to modify, the easier it is to achieve any other software quality perspective. This paper presents objective quality metrics derived from consistent lifecycle perspectives of rework which, when used in concert with an evolutionary development approach, can provide useful insight to produce better quality per unit cost/schedule or to achieve adequate quality more efficiently. The usefulness of these metrics is evaluated by applying them to a large, real world, Ada project.

  15. Learning Through Benchmarking: Developing a Relational, Prospective Approach to Benchmarking ICT in Learning and Teaching

    ERIC Educational Resources Information Center

    Ellis, Robert A.; Moore, Roger R.

    2006-01-01

    This study discusses benchmarking the use of information and communication technologies (ICT) in teaching and learning between two universities with different missions: one an Australian campus-based metropolitan university and the other a British distance-education provider. It argues that the differences notwithstanding, it is possible to…

  16. Verification and validation benchmarks.

    SciTech Connect

    Oberkampf, William Louis; Trucano, Timothy Guy

    2007-02-01

    Verification and validation (V&V) are the primary means to assess the accuracy and reliability of computational simulations. V&V methods and procedures have fundamentally improved the credibility of simulations in several high-consequence fields, such as nuclear reactor safety, underground nuclear waste storage, and nuclear weapon safety. Although the terminology is not uniform across engineering disciplines, code verification deals with assessing the reliability of the software coding, and solution verification deals with assessing the numerical accuracy of the solution to a computational model. Validation addresses the physics modeling accuracy of a computational simulation by comparing the computational results with experimental data. Code verification benchmarks and validation benchmarks have been constructed for a number of years in every field of computational simulation. However, no comprehensive guidelines have been proposed for the construction and use of V&V benchmarks. For example, the field of nuclear reactor safety has not focused on code verification benchmarks, but it has placed great emphasis on developing validation benchmarks. Many of these validation benchmarks are closely related to the operations of actual reactors at near-safety-critical conditions, as opposed to being more fundamental-physics benchmarks. This paper presents recommendations for the effective design and use of code verification benchmarks based on manufactured solutions, classical analytical solutions, and highly accurate numerical solutions. In addition, this paper presents recommendations for the design and use of validation benchmarks, highlighting the careful design of building-block experiments, the estimation of experimental measurement uncertainty for both inputs and outputs to the code, validation metrics, and the role of model calibration in validation. It is argued that the understanding of predictive capability of a computational model is built on the level of

  17. Career performance trajectories of Olympic swimmers: benchmarks for talent development.

    PubMed

    Allen, Sian V; Vandenbogaerde, Tom J; Hopkins, William G

    2014-01-01

    The age-related progression of elite athletes to their career-best performances can provide benchmarks for talent development. The purpose of this study was to model career performance trajectories of Olympic swimmers to develop these benchmarks. We searched the Web for annual best times of swimmers who were top 16 in pool events at the 2008 or 2012 Olympics, from each swimmer's earliest available competitive performance through to 2012. There were 6959 times in the 13 events for each sex, for 683 swimmers, with 10 ± 3 performances per swimmer (mean ± s). Progression to peak performance was tracked with individual quadratic trajectories derived using a mixed linear model that included adjustments for better performance in Olympic years and for the use of full-body polyurethane swimsuits in 2009. Analysis of residuals revealed appropriate fit of quadratic trends to the data. The trajectories provided estimates of age of peak performance and the duration of the age window of trivial improvement and decline around the peak. Men achieved peak performance later than women (24.2 ± 2.1 vs. 22.5 ± 2.4 years), while peak performance occurred at later ages for the shorter distances for both sexes (∼1.5-2.0 years between sprint and distance-event groups). Men and women had a similar duration in the peak-performance window (2.6 ± 1.5 years) and similar progressions to peak performance over four years (2.4 ± 1.2%) and eight years (9.5 ± 4.8%). These data provide performance targets for swimmers aiming to achieve elite-level performance. PMID:24597644
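
    As a rough illustration of the trajectory modelling described above, the sketch below fits a single swimmer's annual best times (hypothetical values) with an ordinary quadratic and reads off the vertex as the estimated age of peak performance; the study itself used a mixed linear model across all swimmers, with Olympic-year and swimsuit adjustments, which is not reproduced here.

```python
import numpy as np

# Simplified sketch: fit one swimmer's annual best times (hypothetical data) with a
# quadratic and estimate the age of peak performance from the vertex. The published
# analysis used a mixed linear model over all swimmers, with additional adjustments.
age = np.array([16, 17, 18, 19, 20, 21, 22, 23, 24, 25], dtype=float)
time = np.array([55.8, 54.9, 54.1, 53.6, 53.2, 53.0, 52.9, 52.9, 53.0, 53.2])  # s, 100 m

a, b, c = np.polyfit(age, time, deg=2)   # time ~ a*age**2 + b*age + c, with a > 0
peak_age = -b / (2 * a)                  # vertex of the parabola
peak_time = np.polyval([a, b, c], peak_age)
print(f"estimated peak at {peak_age:.1f} y, best time ~ {peak_time:.2f} s")
```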

  18. Developing a Metrics-Based Online Strategy for Libraries

    ERIC Educational Resources Information Center

    Pagano, Joe

    2009-01-01

    Purpose: The purpose of this paper is to provide an introduction to the various web metrics tools that are available, and to indicate how these might be used in libraries. Design/methodology/approach: The paper describes ways in which web metrics can be used to inform strategic decision making in libraries. Findings: A framework of possible web…

  19. Hospital readiness for health information exchange: development of metrics associated with successful collaboration for quality improvement

    PubMed Central

    Korst, Lisa M.; Aydin, Carolyn E.; Signer, Jordana M. K.; Fink, Arlene

    2011-01-01

    Objective: The development of readiness metrics for organizational participation in health information exchange is critical for monitoring progress toward, and achievement of, successful inter-organizational collaboration. In preparation for the development of a tool to measure readiness for data-sharing, we tested whether organizational capacities known to be related to readiness were associated with successful participation in an American data-sharing collaborative for quality improvement. Design: Cross-sectional design, using an on-line survey of hospitals in a large, mature data-sharing collaborative organized for benchmarking and improvement in nursing care quality. Measurements: Factor analysis was used to identify salient constructs, and identified factors were analyzed with respect to “successful” participation. “Success” was defined as the incorporation of comparative performance data into the hospital dashboard. Results: The most important factor in predicting success included survey items measuring the strength of organizational leadership in fostering a culture of quality improvement (QI Leadership): 1) presence of a supportive hospital executive; 2) the extent to which a hospital values data; 3) the presence of leaders’ vision for how the collaborative advances the hospital’s strategic goals; 4) hospital use of the collaborative data to track quality outcomes; and 5) staff recognition of a strong mandate for collaborative participation (α = 0.84, correlation with Success 0.68 [P < 0.0001]). Conclusion: The data emphasize the importance of hospital QI Leadership in collaboratives that aim to share data for QI or safety purposes. Such metrics should prove useful in the planning and development of this complex form of inter-organizational collaboration. PMID:21330191
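
    The QI Leadership factor reported above is a five-item scale with α = 0.84; the sketch below shows how Cronbach's alpha for such a scale can be computed, using simulated Likert-style responses rather than the study's survey data.

```python
import numpy as np

# Sketch: Cronbach's alpha for a five-item scale such as the QI Leadership factor.
# The response matrix is simulated (rows = hospitals, columns = survey items).
def cronbach_alpha(items):
    items = np.asarray(items, dtype=float)
    k = items.shape[1]
    item_vars = items.var(axis=0, ddof=1)          # variance of each item
    total_var = items.sum(axis=1).var(ddof=1)      # variance of the summed scale
    return (k / (k - 1)) * (1 - item_vars.sum() / total_var)

rng = np.random.default_rng(1)
latent = rng.normal(3.5, 1.0, size=(40, 1))                       # shared leadership signal
responses = np.clip(latent + rng.normal(0, 0.6, (40, 5)), 1, 5)   # five Likert-style items
print(f"alpha = {cronbach_alpha(responses):.2f}")
```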

  20. Benchmarks and Quality Assurance for Online Course Development in Higher Education

    ERIC Educational Resources Information Center

    Wang, Hong

    2008-01-01

    As online education has entered the main stream of the U.S. higher education, quality assurance in online course development has become a critical topic in distance education. This short article summarizes the major benchmarks related to online course development, listing and comparing the benchmarks of the National Education Association (NEA),…

  1. 40 CFR 141.540 - Who has to develop a disinfection benchmark?

    Code of Federal Regulations, 2010 CFR

    2010-07-01

    Title 40 (Protection of Environment), Vol. 22, 2010-07-01: Who has to develop a disinfection benchmark? Section 141.540, ENVIRONMENTAL PROTECTION AGENCY (CONTINUED)... Disinfection - Systems Serving Fewer Than 10,000 People, Disinfection Benchmark. § 141.540 Who has to develop...

  2. Development of a Benchmark Hydroclimate Data Library for N. America

    NASA Astrophysics Data System (ADS)

    Lall, U.; Cook, E.

    2001-12-01

    This poster presents the recommendations of an international workshop held May 24-25, 2001, at the Lamont-Doherty Earth Observatory, Palisades, New York. The purpose of the workshop was to: (1) Identify the needs for a continental and eventually global benchmark hydroclimatic dataset; (2) Evaluate how they are currently being met in the 3 countries of N. America; and (3) Identify the main scientific and institutional challenges in improving access, and associated implementation strategies to improve the data elements and access. An initial focus on N. American streamflow was suggested. The estimation of streamflow (or its specific statistics) at ungaged, poorly gaged locations or locations with a substantial modification of the hydrologic regime was identified as a priority. The potential for the use of extended (to 1856) climate records and of tree rings and other proxies (that may go back multiple centuries) for the reconstruction of a comprehensive data set of concurrent hydrologic and climate fields was considered. Specific recommendations for the implementation of a research program to support the development and enhance availability of the products in conjunction with the major federal and state agencies in the three countries of continental N. America were made. The implications of these recommendations for the Hydrologic Information Systems initiative of the Consortium of Universities for the Advancement of Hydrologic Science are discussed.

  3. Metrics Evolution in an Energy Research & Development Program

    SciTech Connect

    Brent Dixon

    2011-08-01

    All technology programs progress through three phases: Discovery, Definition, and Deployment. The form and application of program metrics needs to evolve with each phase. During the discovery phase, the program determines what is achievable. A set of tools is needed to define program goals, to analyze credible technical options, and to ensure that the options are compatible and meet the program objectives. A metrics system that scores the potential performance of technical options is part of this system of tools, supporting screening of concepts and aiding in the overall definition of objectives. During the definition phase, the program defines what specifically is wanted. What is achievable is translated into specific systems and specific technical options are selected and optimized. A metrics system can help with the identification of options for optimization and the selection of the option for deployment. During the deployment phase, the program shows that the selected system works. Demonstration projects are established and classical systems engineering is employed. During this phase, the metrics communicate system performance. This paper discusses an approach to metrics evolution within the Department of Energy's Nuclear Fuel Cycle R&D Program, which is working to improve the sustainability of nuclear energy.

  4. Development and application of freshwater sediment-toxicity benchmarks for currently used pesticides.

    PubMed

    Nowell, Lisa H; Norman, Julia E; Ingersoll, Christopher G; Moran, Patrick W

    2016-04-15

    Sediment-toxicity benchmarks are needed to interpret the biological significance of currently used pesticides detected in whole sediments. Two types of freshwater sediment benchmarks for pesticides were developed using spiked-sediment bioassay (SSB) data from the literature. These benchmarks can be used to interpret sediment-toxicity data or to assess the potential toxicity of pesticides in whole sediment. The Likely Effect Benchmark (LEB) defines a pesticide concentration in whole sediment above which there is a high probability of adverse effects on benthic invertebrates, and the Threshold Effect Benchmark (TEB) defines a concentration below which adverse effects are unlikely. For compounds without available SSBs, benchmarks were estimated using equilibrium partitioning (EqP). When a sediment sample contains a pesticide mixture, benchmark quotients can be summed for all detected pesticides to produce an indicator of potential toxicity for that mixture. Benchmarks were developed for 48 pesticide compounds using SSB data and 81 compounds using the EqP approach. In an example application, data for pesticides measured in sediment from 197 streams across the United States were evaluated using these benchmarks, and compared to measured toxicity from whole-sediment toxicity tests conducted with the amphipod Hyalella azteca (28-d exposures) and the midge Chironomus dilutus (10-d exposures). Amphipod survival, weight, and biomass were significantly and inversely related to summed benchmark quotients, whereas midge survival, weight, and biomass showed no relationship to benchmarks. Samples with LEB exceedances were rare (n=3), but all were toxic to amphipods (i.e., significantly different from control). Significant toxicity to amphipods was observed for 72% of samples exceeding one or more TEBs, compared to 18% of samples below all TEBs. Factors affecting toxicity below TEBs may include the presence of contaminants other than pesticides, physical/chemical characteristics
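
    To make the benchmark-quotient idea concrete, the sketch below sums concentration-to-benchmark ratios for the pesticides detected in one sample; the TEB/LEB values and concentrations shown are placeholders, not the published benchmark tables.

```python
# Sketch: summed benchmark quotients for a pesticide mixture in one sediment sample.
# The benchmark values and detected concentrations below are placeholders.
TEB = {"bifenthrin": 0.5, "chlorpyrifos": 2.0, "fipronil": 0.1}   # hypothetical units
LEB = {"bifenthrin": 5.0, "chlorpyrifos": 20.0, "fipronil": 1.0}  # hypothetical units
sample = {"bifenthrin": 0.8, "chlorpyrifos": 1.1}                 # detected concentrations

def summed_quotients(concentrations, benchmarks):
    """Sum of concentration/benchmark ratios over all detected pesticides."""
    return sum(c / benchmarks[p] for p, c in concentrations.items() if p in benchmarks)

print(f"sum TEB quotient = {summed_quotients(sample, TEB):.2f}")  # > 1 flags potential toxicity
print(f"sum LEB quotient = {summed_quotients(sample, LEB):.2f}")  # > 1 suggests likely effects
```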

  5. Development and application of freshwater sediment-toxicity benchmarks for currently used pesticides

    USGS Publications Warehouse

    Nowell, Lisa H.; Norman, Julia E.; Ingersoll, Christopher G.; Moran, Patrick W.

    2016-01-01

    Sediment-toxicity benchmarks are needed to interpret the biological significance of currently used pesticides detected in whole sediments. Two types of freshwater sediment benchmarks for pesticides were developed using spiked-sediment bioassay (SSB) data from the literature. These benchmarks can be used to interpret sediment-toxicity data or to assess the potential toxicity of pesticides in whole sediment. The Likely Effect Benchmark (LEB) defines a pesticide concentration in whole sediment above which there is a high probability of adverse effects on benthic invertebrates, and the Threshold Effect Benchmark (TEB) defines a concentration below which adverse effects are unlikely. For compounds without available SSBs, benchmarks were estimated using equilibrium partitioning (EqP). When a sediment sample contains a pesticide mixture, benchmark quotients can be summed for all detected pesticides to produce an indicator of potential toxicity for that mixture. Benchmarks were developed for 48 pesticide compounds using SSB data and 81 compounds using the EqP approach. In an example application, data for pesticides measured in sediment from 197 streams across the United States were evaluated using these benchmarks, and compared to measured toxicity from whole-sediment toxicity tests conducted with the amphipod Hyalella azteca (28-d exposures) and the midge Chironomus dilutus (10-d exposures). Amphipod survival, weight, and biomass were significantly and inversely related to summed benchmark quotients, whereas midge survival, weight, and biomass showed no relationship to benchmarks. Samples with LEB exceedances were rare (n = 3), but all were toxic to amphipods (i.e., significantly different from control). Significant toxicity to amphipods was observed for 72% of samples exceeding one or more TEBs, compared to 18% of samples below all TEBs. Factors affecting toxicity below TEBs may include the presence of contaminants other than pesticides, physical

  6. Developing Image Processing Meta-Algorithms with Data Mining of Multiple Metrics

    PubMed Central

    Cunha, Alexandre; Toga, A. W.; Parker, D. Stott

    2014-01-01

    People often use multiple metrics in image processing, but here we take a novel approach of mining the values of batteries of metrics on image processing results. We present a case for extending image processing methods to incorporate automated mining of multiple image metric values. Here by a metric we mean any image similarity or distance measure, and in this paper we consider intensity-based and statistical image measures and focus on registration as an image processing problem. We show how it is possible to develop meta-algorithms that evaluate different image processing results with a number of different metrics and mine the results in an automated fashion so as to select the best results. We show that the mining of multiple metrics offers a variety of potential benefits for many image processing problems, including improved robustness and validation. PMID:24653748
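
    A minimal sketch of the meta-algorithm idea follows: score each candidate result with a battery of metrics and keep the candidate with the best aggregate rank. The two metrics used here (mean squared error and normalized cross-correlation) are generic stand-ins, not necessarily those mined in the paper.

```python
import numpy as np

# Sketch of a metric-mining meta-algorithm: evaluate each candidate result with
# several image metrics and select the one with the best mean rank.
def mse(a, b):
    return float(np.mean((a - b) ** 2))

def ncc(a, b):
    a, b = a - a.mean(), b - b.mean()
    return float((a * b).sum() / (np.linalg.norm(a) * np.linalg.norm(b)))

def select_best(reference, candidates):
    # Lower MSE is better; higher NCC is better, so negate it before ranking.
    scores = [(mse(reference, c), -ncc(reference, c)) for c in candidates]
    ranks = np.argsort(np.argsort(scores, axis=0), axis=0)  # per-metric ranks
    return int(np.argmin(ranks.sum(axis=1)))                # best aggregate rank

rng = np.random.default_rng(2)
ref = rng.random((64, 64))
cands = [ref + rng.normal(0, s, ref.shape) for s in (0.05, 0.2, 0.5)]
print("selected candidate:", select_best(ref, cands))
```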

  7. Benchmarking University Community Engagement: Developing a National Approach in Australia

    ERIC Educational Resources Information Center

    Garlick, Steve; Langworthy, Anne

    2008-01-01

    This article provides the background and describes the processes involved in establishing a national approach to benchmarking the way universities engage with their local and regional communities in Australia. Local and regional community engagement is a rapidly expanding activity in Australian public universities and is increasingly being seen as…

  8. A rationale for developing benchmarks for the treatment of muscle-invasive bladder cancer.

    PubMed

    Lee, Cheryl T

    2007-01-01

    Benchmarks are established standards of operation developed by a given group or industry generally designed to improve outcomes. The health care industry is increasingly required to develop such standards and document adherence to meet demands of regulatory bodies. Although established practice patterns exist for the treatment of invasive bladder cancer, there is significant treatment variation. This article provides a rationale for the development of benchmarks in the treatment of invasive bladder cancer. Such benchmarks may permit advances in treatment application and potentially improve patient outcomes. PMID:17208141

  9. Development of a perceptually calibrated objective metric of noise

    NASA Astrophysics Data System (ADS)

    Keelan, Brian W.; Jin, Elaine W.; Prokushkin, Sergey

    2011-01-01

    A system simulation model was used to create scene-dependent noise masks that reflect current performance of mobile phone cameras. Stimuli with different overall magnitudes of noise and with varying mixtures of red, green, blue, and luminance noises were included in the study. Eleven treatments in each of ten pictorial scenes were evaluated by twenty observers using the softcopy ruler method. In addition to determining the quality loss function in just noticeable differences (JNDs) for the average observer and scene, transformations for different combinations of observer sensitivity and scene susceptibility were derived. The psychophysical results were used to optimize an objective metric of isotropic noise based on system noise power spectra (NPS), which were integrated over a visual frequency weighting function to yield perceptually relevant variances and covariances in CIE L*a*b* space. Because the frequency weighting function is expressed in terms of cycles per degree at the retina, it accounts for display pixel size and viewing distance effects, so application-specific predictions can be made. Excellent results were obtained using only L* and a* variances and L*a* covariance, with relative weights of 100, 5, and 12, respectively. The positive a* weight suggests that the luminance (photopic) weighting is slightly narrow on the long wavelength side for predicting perceived noisiness. The L*a* covariance term, which is normally negative, reflects masking between L* and a* noise, as confirmed in informal evaluations. Test targets in linear sRGB and rendered L*a*b* spaces for each treatment are available at http://www.aptina.com/ImArch/ to enable other researchers to test metrics of their own design and calibrate them to JNDs of quality loss without performing additional observer experiments. Such JND-calibrated noise metrics are particularly valuable for comparing the impact of noise and other attributes, and for computing overall image quality.
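
    The sketch below evaluates the final form of the metric described above, a weighted combination of L* variance, a* variance, and L*a* covariance with relative weights 100, 5, and 12. It assumes the noise has already been converted to CIE L*a*b* and filtered with the visual frequency weighting; the NPS integration step is omitted and the noise samples are synthetic.

```python
import numpy as np

# Sketch: objective noise metric from visually weighted L* and a* noise residuals.
def noise_metric(L_noise, a_noise, w_LL=100.0, w_aa=5.0, w_La=12.0):
    """Weighted combination of L* variance, a* variance, and L*a* covariance."""
    var_L = np.var(L_noise)
    var_a = np.var(a_noise)
    cov_La = np.cov(L_noise, a_noise)[0, 1]  # typically negative (inter-channel masking)
    return w_LL * var_L + w_aa * var_a + w_La * cov_La

rng = np.random.default_rng(0)
L = rng.normal(0, 1.0, 10000)                   # synthetic L* noise
a = 0.3 * rng.normal(0, 1.0, 10000) - 0.2 * L   # synthetic a* noise, anticorrelated with L*
print(f"metric = {noise_metric(L, a):.1f}")
```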

  10. Performance metric development for a group state estimator in airborne UHF GMTI applications

    NASA Astrophysics Data System (ADS)

    Elwell, Ryan A.

    2013-05-01

    This paper describes the development and implementation of evaluation metrics for group state estimator (GSE, i.e. group tracking) algorithms. Key differences between group tracker metrics and individual tracker metrics are the method used for track-to-truth association and the characterization of group raid size. Another significant contribution of this work is the incorporation of measured radar performance in assessing tracker performance. The result of this work is a set of measures of performance derived from canonical individual target tracker metrics, extended to characterize the additional information provided by a group tracker. The paper discusses additional considerations in group tracker evaluation, including the definition of a group and group-to-group confusion. Metrics are computed on real field data to provide examples of real-world analysis, demonstrating an approach which provides characterization of group tracker performance, independent of the sensor's performance.

  11. Development of a Quantitative Decision Metric for Selecting the Most Suitable Discretization Method for SN Transport Problems

    NASA Astrophysics Data System (ADS)

    Schunert, Sebastian

    In this work we develop a quantitative decision metric for spatial discretization methods of the SN equations. The quantitative decision metric utilizes performance data from selected test problems for computing a fitness score that is used for the selection of the most suitable discretization method for a particular SN transport application. The fitness score is aggregated as a weighted geometric mean of single performance indicators representing various performance aspects relevant to the user. Thus, the fitness function can be adjusted to the particular needs of the code practitioner by adding/removing single performance indicators or changing their importance via the supplied weights. Within this work a special, broad class of methods is considered, referred to as nodal methods. This class naturally comprises the DGFEM methods of all function space families. Within this work it is also shown that the Higher Order Diamond Difference (HODD) method is a nodal method. Building on earlier findings that the Arbitrarily High Order Method of the Nodal type (AHOTN) is also a nodal method, a generalized finite-element framework is created to yield as special cases various methods that were developed independently using profoundly different formalisms. A selection of test problems, each related to a certain performance aspect, is considered: a Method of Manufactured Solutions (MMS) test suite for assessing accuracy and execution time, Lathrop's test problem for assessing resilience against occurrence of negative fluxes, and a simple, homogeneous cube test problem to verify if a method possesses the thick diffusive limit. The contending methods are implemented as efficiently as possible under a common SN transport code framework to level the playing field for a fair comparison of their computational load. Numerical results are presented for all three test problems and a qualitative rating of each method's performance is provided for each aspect: accuracy
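
    The aggregation step described above reduces to a weighted geometric mean of single performance indicators; a minimal sketch follows, with indicator names, values, and weights that are purely illustrative.

```python
import math

# Sketch: a fitness score as the weighted geometric mean of single performance
# indicators. Indicator names, values, and weights are illustrative placeholders.
def fitness_score(indicators, weights):
    """Weighted geometric mean: exp( sum(w_i * ln x_i) / sum(w_i) )."""
    total_w = sum(weights.values())
    log_sum = sum(weights[k] * math.log(indicators[k]) for k in weights)
    return math.exp(log_sum / total_w)

indicators = {"accuracy": 0.92, "speed": 0.75, "positivity": 0.88, "thick_diffusion_limit": 1.0}
weights    = {"accuracy": 3.0,  "speed": 1.0,  "positivity": 1.0,  "thick_diffusion_limit": 2.0}
print(f"fitness = {fitness_score(indicators, weights):.3f}")
```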

  12. Development of Benchmark Examples for Quasi-Static Delamination Propagation and Fatigue Growth Predictions

    NASA Technical Reports Server (NTRS)

    Krueger, Ronald

    2012-01-01

    The development of benchmark examples for quasi-static delamination propagation and cyclic delamination onset and growth prediction is presented and demonstrated for Abaqus/Standard. The example is based on a finite element model of a Double-Cantilever Beam specimen. The example is independent of the analysis software used and allows the assessment of the automated delamination propagation, onset and growth prediction capabilities in commercial finite element codes based on the virtual crack closure technique (VCCT). First, a quasi-static benchmark example was created for the specimen. Second, based on the static results, benchmark examples for cyclic delamination growth were created. Third, the load-displacement relationship from a propagation analysis and the benchmark results were compared, and good agreement could be achieved by selecting the appropriate input parameters. Fourth, starting from an initially straight front, the delamination was allowed to grow under cyclic loading. The number of cycles to delamination onset and the number of cycles during delamination growth for each growth increment were obtained from the automated analysis and compared to the benchmark examples. Again, good agreement between the results obtained from the growth analysis and the benchmark results could be achieved by selecting the appropriate input parameters. The benchmarking procedure proved valuable by highlighting the issues associated with choosing the input parameters of the particular implementation. Selecting the appropriate input parameters, however, was not straightforward and often required an iterative procedure. Overall the results are encouraging, but further assessment for mixed-mode delamination is required.

  13. Development and Application of Benchmark Examples for Mode II Static Delamination Propagation and Fatigue Growth Predictions

    NASA Technical Reports Server (NTRS)

    Krueger, Ronald

    2011-01-01

    The development of benchmark examples for static delamination propagation and cyclic delamination onset and growth prediction is presented and demonstrated for a commercial code. The example is based on a finite element model of an End-Notched Flexure (ENF) specimen. The example is independent of the analysis software used and allows the assessment of the automated delamination propagation, onset and growth prediction capabilities in commercial finite element codes based on the virtual crack closure technique (VCCT). First, static benchmark examples were created for the specimen. Second, based on the static results, benchmark examples for cyclic delamination growth were created. Third, the load-displacement relationship from a propagation analysis and the benchmark results were compared, and good agreement could be achieved by selecting the appropriate input parameters. Fourth, starting from an initially straight front, the delamination was allowed to grow under cyclic loading. The number of cycles to delamination onset and the number of cycles during delamination growth for each growth increment were obtained from the automated analysis and compared to the benchmark examples. Again, good agreement between the results obtained from the growth analysis and the benchmark results could be achieved by selecting the appropriate input parameters. The benchmarking procedure proved valuable by highlighting the issues associated with choosing the input parameters of the particular implementation. Selecting the appropriate input parameters, however, was not straightforward and often required an iterative procedure. Overall the results are encouraging, but further assessment for mixed-mode delamination is required.

  14. Development of Benchmark Examples for Static Delamination Propagation and Fatigue Growth Predictions

    NASA Technical Reports Server (NTRS)

    Krueger, Ronald

    2011-01-01

    The development of benchmark examples for static delamination propagation and cyclic delamination onset and growth prediction is presented and demonstrated for a commercial code. The example is based on a finite element model of an End-Notched Flexure (ENF) specimen. The example is independent of the analysis software used and allows the assessment of the automated delamination propagation, onset and growth prediction capabilities in commercial finite element codes based on the virtual crack closure technique (VCCT). First, static benchmark examples were created for the specimen. Second, based on the static results, benchmark examples for cyclic delamination growth were created. Third, the load-displacement relationship from a propagation analysis and the benchmark results were compared, and good agreement could be achieved by selecting the appropriate input parameters. Fourth, starting from an initially straight front, the delamination was allowed to grow under cyclic loading. The number of cycles to delamination onset and the number of cycles during stable delamination growth for each growth increment were obtained from the automated analysis and compared to the benchmark examples. Again, good agreement between the results obtained from the growth analysis and the benchmark results could be achieved by selecting the appropriate input parameters. The benchmarking procedure proved valuable by highlighting the issues associated with the input parameters of the particular implementation. Selecting the appropriate input parameters, however, was not straightforward and often required an iterative procedure. Overall, the results are encouraging but further assessment for mixed-mode delamination is required.

  15. Advanced Life Support Research and Technology Development Metric: Fiscal Year 2003

    NASA Technical Reports Server (NTRS)

    Hanford, A. J.

    2004-01-01

    This document provides the official calculation of the Advanced Life Support (ALS) Research and Technology Development Metric (the Metric) for Fiscal Year 2003. As such, the values herein are primarily based on Systems Integration, Modeling, and Analysis (SIMA) Element approved software tools or reviewed and approved reference documents. The Metric is one of several measures employed by the National Aeronautics and Space Administration (NASA) to assess the Agency's progress as mandated by the United States Congress and the Office of Management and Budget. Because any measure must have a reference point, whether explicitly defined or implied, the Metric is a comparison between a selected ALS Project life support system and an equivalently detailed life support system using technology from the Environmental Control and Life Support System (ECLSS) for the International Space Station (ISS). More specifically, the Metric is the ratio defined by the equivalent system mass (ESM) of a life support system for a specific mission using the ISS ECLSS technologies divided by the ESM for an equivalent life support system using the best ALS technologies. As defined, the Metric should increase in value as the ALS technologies become lighter and less power intensive, and require less volume. For Fiscal Year 2003, the Advanced Life Support Research and Technology Development Metric value is 1.47 for an Orbiting Research Facility and 1.36 for an Independent Exploration Mission.
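
    Since the Metric is simply a ratio of equivalent system masses, a minimal sketch is given below. The equivalency factors and subsystem values are illustrative placeholders, not the SIMA-approved inputs behind the official 1.47 and 1.36 figures.

```python
# Sketch: the Metric as a ratio of equivalent system masses (ESM). Equivalency
# factors and subsystem values are hypothetical placeholders.
EQUIV = {"mass": 1.0, "volume": 66.7, "power": 237.0, "cooling": 60.0}  # kg per unit, assumed

def esm(mass_kg, volume_m3, power_kw, cooling_kw):
    return (mass_kg * EQUIV["mass"] + volume_m3 * EQUIV["volume"]
            + power_kw * EQUIV["power"] + cooling_kw * EQUIV["cooling"])

esm_iss_eclss = esm(mass_kg=8000, volume_m3=25, power_kw=12, cooling_kw=12)  # reference system
esm_als       = esm(mass_kg=5500, volume_m3=20, power_kw=9,  cooling_kw=9)   # ALS technologies

metric = esm_iss_eclss / esm_als   # > 1 means the ALS system is leaner overall
print(f"ALS R&TD Metric = {metric:.2f}")
```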

  16. Developing Common Metrics for the Clinical and Translational Science Awards (CTSAs): Lessons Learned.

    PubMed

    Rubio, Doris M; Blank, Arthur E; Dozier, Ann; Hites, Lisle; Gilliam, Victoria A; Hunt, Joe; Rainwater, Julie; Trochim, William M

    2015-10-01

    The National Institutes of Health (NIH) Roadmap for Medical Research initiative, funded by the NIH Common Fund and offered through the Clinical and Translational Science Award (CTSA) program, developed more than 60 unique models for achieving the NIH goal of accelerating discoveries toward better public health. The variety of these models enabled participating academic centers to experiment with different approaches to fit their research environment. A central challenge related to the diversity of approaches is the ability to determine the success and contribution of each model. This paper describes the effort by the Evaluation Key Function Committee to develop and test a methodology for identifying a set of common metrics to assess the efficiency of clinical research processes and for pilot testing these processes for collecting and analyzing metrics. The project involved more than one-fourth of all CTSAs and resulted in useful information regarding the challenges in developing common metrics, the complexity and costs of acquiring data for the metrics, and limitations on the utility of the metrics in assessing clinical research performance. The results of this process led to the identification of lessons learned and recommendations for development and use of common metrics to evaluate the CTSA effort. PMID:26073891

  17. Thermal Performance Benchmarking (Presentation)

    SciTech Connect

    Moreno, G.

    2014-11-01

    This project will benchmark the thermal characteristics of automotive power electronics and electric motor thermal management systems. Recent vehicle systems will be benchmarked to establish baseline metrics, evaluate advantages and disadvantages of different thermal management systems, and identify areas of improvement to advance the state-of-the-art.

  18. Development and Implementation of a Design Metric for Systems Containing Long-Term Fluid Loops

    NASA Technical Reports Server (NTRS)

    Steele, John W.

    2016-01-01

    John Steele, a chemist and technical fellow from United Technologies Corporation, provided a water quality module to assist engineers and scientists with a metric tool to evaluate risks associated with the design of space systems with fluid loops. This design metric is a methodical, quantitative, lessons-learned based means to evaluate the robustness of a long-term fluid loop system design. The tool was developed by a cross-section of engineering disciplines who had decades of experience and problem resolution.

  19. A framework for benchmarking land models

    SciTech Connect

    Luo, Yiqi; Randerson, J.; Abramowitz, G.; Bacour, C.; Blyth, E.; Carvalhais, N.; Ciais, Philippe; Dalmonech, D.; Fisher, J.B.; Fisher, R.; Friedlingstein, P.; Hibbard, Kathleen A.; Hoffman, F. M.; Huntzinger, Deborah; Jones, C.; Koven, C.; Lawrence, David M.; Li, D.J.; Mahecha, M.; Niu, S.L.; Norby, Richard J.; Piao, S.L.; Qi, X.; Peylin, P.; Prentice, I.C.; Riley, William; Reichstein, M.; Schwalm, C.; Wang, Y.; Xia, J. Y.; Zaehle, S.; Zhou, X. H.

    2012-10-09

    Land models, which have been developed by the modeling community in the past few decades to predict future states of ecosystems and climate, have to be critically evaluated for their performance skills of simulating ecosystem responses and feedback to climate change. Benchmarking is an emerging procedure to measure performance of models against a set of defined standards. This paper proposes a benchmarking framework for evaluation of land model performances and, meanwhile, highlights major challenges at this infant stage of benchmark analysis. The framework includes (1) targeted aspects of model performance to be evaluated, (2) a set of benchmarks as defined references to test model performance, (3) metrics to measure and compare performance skills among models so as to identify model strengths and deficiencies, and (4) model improvement. Land models are required to simulate exchange of water, energy, carbon and sometimes other trace gases between the atmosphere and land surface, and should be evaluated for their simulations of biophysical processes, biogeochemical cycles, and vegetation dynamics in response to climate change across broad temporal and spatial scales. Thus, one major challenge is to select and define a limited number of benchmarks to effectively evaluate land model performance. The second challenge is to develop metrics of measuring mismatches between models and benchmarks. The metrics may include (1) a priori thresholds of acceptable model performance and (2) a scoring system to combine data–model mismatches for various processes at different temporal and spatial scales. The benchmark analyses should identify clues of weak model performance to guide future development, thus enabling improved predictions of future states of ecosystems and climate. The near-future research effort should be on development of a set of widely acceptable benchmarks that can be used to objectively, effectively, and reliably evaluate fundamental properties of land models
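
    As a toy version of the scoring system sketched in the framework, the example below turns data-model mismatches for a few variables into normalized scores and combines them with weights into one overall benchmark score; the variables, reference values, and weights are hypothetical.

```python
import numpy as np

# Sketch: normalized data-model mismatch scores for several variables, combined
# with weights into a single skill score. All values are hypothetical placeholders.
def variable_score(model, obs):
    """1 minus RMSE normalized by the spread of the observations, clipped to [0, 1]."""
    rmse = np.sqrt(np.mean((np.asarray(model) - np.asarray(obs)) ** 2))
    return float(np.clip(1.0 - rmse / np.std(obs), 0.0, 1.0))

benchmarks = {   # variable: (model output, benchmark observations)
    "gpp":    ([2.1, 2.6, 3.0, 2.4], [2.0, 2.8, 3.1, 2.2]),
    "soil_c": ([11.8, 12.5, 13.0],   [12.0, 12.4, 13.3]),
}
weights = {"gpp": 0.6, "soil_c": 0.4}

overall = sum(weights[v] * variable_score(m, o) for v, (m, o) in benchmarks.items())
print(f"overall benchmark score = {overall:.2f}")
```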

  20. Developing a Benchmarking Process in Perfusion: A Report of the Perfusion Downunder Collaboration

    PubMed Central

    Baker, Robert A.; Newland, Richard F.; Fenton, Carmel; McDonald, Michael; Willcox, Timothy W.; Merry, Alan F.

    2012-01-01

    Abstract: Improving and understanding clinical practice is an appropriate goal for the perfusion community. The Perfusion Downunder Collaboration has established a multi-center perfusion focused database aimed at achieving these goals through the development of quantitative quality indicators for clinical improvement through benchmarking. Data were collected using the Perfusion Downunder Collaboration database from procedures performed in eight Australian and New Zealand cardiac centers between March 2007 and February 2011. At the Perfusion Downunder Meeting in 2010, it was agreed by consensus, to report quality indicators (QI) for glucose level, arterial outlet temperature, and pCO2 management during cardiopulmonary bypass. The values chosen for each QI were: blood glucose ≥4 mmol/L and ≤10 mmol/L; arterial outlet temperature ≤37°C; and arterial blood gas pCO2 ≥ 35 and ≤45 mmHg. The QI data were used to derive benchmarks using the Achievable Benchmark of Care (ABC™) methodology to identify the incidence of QIs at the best performing centers. Five thousand four hundred and sixty-five procedures were evaluated to derive QI and benchmark data. The incidence of the blood glucose QI ranged from 37–96% of procedures, with a benchmark value of 90%. The arterial outlet temperature QI occurred in 16–98% of procedures with the benchmark of 94%; while the arterial pCO2 QI occurred in 21–91%, with the benchmark value of 80%. We have derived QIs and benchmark calculations for the management of several key aspects of cardiopulmonary bypass to provide a platform for improving the quality of perfusion practice. PMID:22730861
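
    The sketch below illustrates the benchmark derivation in spirit: compute each centre's quality-indicator incidence and take an ABC-style benchmark as the aggregate rate among the best-performing centres covering at least 10% of cases. It is a simplified rendering of the ABC methodology, and the counts are hypothetical.

```python
# Sketch: quality-indicator incidence per centre and a simplified ABC-style benchmark.
# Counts are hypothetical: centre -> (procedures meeting the glucose QI, total procedures).
centres = {
    "A": (430, 480), "B": (350, 500), "C": (600, 640),
    "D": (210, 400), "E": (760, 800),
}

total_cases = sum(n for _, n in centres.values())
# Rank centres by adjusted fraction (x + 1) / (n + 2) so tiny centres are not over-rewarded.
ranked = sorted(centres.items(), key=lambda kv: (kv[1][0] + 1) / (kv[1][1] + 2), reverse=True)

met = cases = 0
for name, (x, n) in ranked:              # accumulate best centres until >= 10% of all cases
    met, cases = met + x, cases + n
    if cases >= 0.10 * total_cases:
        break

print(f"ABC-style benchmark for the glucose QI: {100 * met / cases:.0f}%")
```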

  1. Metrics for Developing an Endorsed Set of Radiographic Threat Surrogates for JINII/CAARS

    SciTech Connect

    Wurtz, R; Walston, S; Dietrich, D; Martz, H

    2009-02-11

    CAARS (Cargo Advanced Automated Radiography System) is developing x-ray dual energy and x-ray backscatter methods to automatically detect materials that are greater than Z=72 (hafnium). This works well for simple geometry materials, where most of the radiographic path is through one material. However, this is usually not the case. Instead, the radiographic path includes many materials of different lengths. Single energy can be used to compute μ·l, which is related to areal density (mass per unit area), while dual energy yields more information. This report describes a set of metrics suitable and sufficient for characterizing the appearance of assemblies as detected by x-ray radiographic imaging systems, such as those being tested by Joint Integrated Non-Intrusive Inspection (JINII) or developed under CAARS. These metrics will be simulated both for threat assemblies and surrogate threat assemblies (such as are found in Roney et al. 2007) using geometrical and compositional information of the assemblies. The imaging systems are intended to distinguish assemblies containing high-Z material from those containing low-Z material, regardless of thickness, density, or compounds and mixtures. The systems in question operate on the principle of comparing images obtained by using two different x-ray end-point energies--so-called 'dual energy' imaging systems. At the direction of the DHS JINII sponsor, this report does not cover metrics that implement scattering, in the form of either forward-scattered radiation or high-Z detection systems operating on the principle of backscatter detection. Such methods and effects will be covered in a later report. The metrics described here are to be used to compare assemblies and not x-ray radiography systems. We intend to use these metrics to determine whether two assemblies do or do not look the same. We are tasked to develop a set of assemblies whose appearance using this class of detection systems is indistinguishable from the
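
    For context on the single- versus dual-energy distinction drawn above, the sketch below recovers μ·l from a measured transmission via the Beer-Lambert law and forms a simple low/high-energy ratio; it is a generic illustration with hypothetical intensities, not the CAARS/JINII metric set.

```python
import math

# Sketch: the single-energy quantity mu*l from transmission (Beer-Lambert law) and a
# crude dual-energy ratio. Generic illustration only; intensities are hypothetical.
def mu_l(I, I0):
    """Attenuation path integral mu*l = -ln(I / I0) for a measured transmission."""
    return -math.log(I / I0)

# Transmission measured at two x-ray end-point energies for one pixel:
low, low0 = 0.012, 1.0     # low-energy detected / open-beam intensity
high, high0 = 0.080, 1.0   # high-energy detected / open-beam intensity

mu_l_low, mu_l_high = mu_l(low, low0), mu_l(high, high0)
ratio = mu_l_low / mu_l_high   # the low/high ratio varies with effective Z, which dual energy exploits
print(f"mu*l (low) = {mu_l_low:.2f}, mu*l (high) = {mu_l_high:.2f}, ratio = {ratio:.2f}")
```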

  2. Development of a benchmarking model for lithium battery electrodes

    NASA Astrophysics Data System (ADS)

    Bergholz, Timm; Korte, Carsten; Stolten, Detlef

    2016-07-01

    This paper presents a benchmarking model to enable systematic selection of anode and cathode materials for lithium batteries in stationary applications, hybrid and battery electric vehicles. The model incorporates parameters for energy density, power density, safety, lifetime, costs and raw materials. Combinations of carbon anodes, Li4Ti5O12 or TiO2 with LiFePO4 cathodes comprise interesting combinations for application in hybrid power trains. Higher cost and raw material prioritization of stationary applications hinders the breakthrough of Li4Ti5O12, while a combination of TiO2 and LiFePO4 is suggested. The favored combinations resemble state-of-the-art materials, whereas novel cell chemistries must be optimized for cells in battery electric vehicles. In contrast to actual research efforts, sulfur as a cathode material is excluded due to its low volumetric energy density and its known lifetime and safety issues. Lithium as anode materials is discarded due to safety issues linked to electrode melting and dendrite formation. A high capacity composite Li2MnO3·LiNi0.5Co0.5O2 and high voltage spinel LiNi0.5Mn1.5O4 cathode with silicon as anode material promise high energy densities with sufficient lifetime and safety properties if electrochemical and thermal stabilization of the electrolyte/electrode interfaces and bulk materials is achieved. The model allows a systematic top-down orientation of research on lithium batteries.

  3. International E-Benchmarking: Flexible Peer Development of Authentic Learning Principles in Higher Education

    ERIC Educational Resources Information Center

    Leppisaari, Irja; Vainio, Leena; Herrington, Jan; Im, Yeonwook

    2011-01-01

    More and more, social technologies and virtual work methods are facilitating new ways of crossing boundaries in professional development and international collaborations. This paper examines the peer development of higher education teachers through the experiences of the IVBM project (International Virtual Benchmarking, 2009-2010). The…

  4. Metric transition

    NASA Technical Reports Server (NTRS)

    1992-01-01

    This report describes NASA's metric transition in terms of seven major program elements. Six are technical areas involving research, technology development, and operations; they are managed by specific Program Offices at NASA Headquarters. The final program element, Institutional Management, covers both NASA-wide functional management under control of NASA Headquarters and metric capability development at the individual NASA Field Installations. This area addresses issues common to all NASA program elements, including: Federal, state, and local coordination; standards; private industry initiatives; public-awareness initiatives; and employee training. The concluding section identifies current barriers and impediments to metric transition; NASA has no specific recommendations for consideration by the Congress.

  5. Using Web Metric Software to Drive: Mobile Website Development

    ERIC Educational Resources Information Center

    Tidal, Junior

    2011-01-01

    Many libraries have developed mobile versions of their websites. In order to understand their users, web developers have conducted both usability tests and focus groups, yet analytical software and web server logs can also be used to better understand users. Using data collected from these tools, the Ursula C. Schwerin Library has made informed…

  6. Using Participatory Action Research to Study the Implementation of Career Development Benchmarks at a New Zealand University

    ERIC Educational Resources Information Center

    Furbish, Dale S.; Bailey, Robyn; Trought, David

    2016-01-01

    Benchmarks for career development services at tertiary institutions have been developed by Careers New Zealand. The benchmarks are intended to provide standards derived from international best practices to guide career development services. A new career development service was initiated at a large New Zealand university just after the benchmarks…

  7. Benchmarking Organizational Career Development in the United States.

    ERIC Educational Resources Information Center

    Simonsen, Peggy

    Career development has evolved from the mid-1970s, when it was rarely linked with the word "organizational," to Walter Storey's work in organizational career development at General Electric in 1978. Its evolution has continued with career development workshops in organizations in the early 1980s to implementation of Corning's organizational career…

  8. Development of an Objective Space Suit Mobility Performance Metric Using Metabolic Cost and Functional Tasks

    NASA Technical Reports Server (NTRS)

    McFarland, Shane M.; Norcross, Jason

    2016-01-01

    Existing methods for evaluating EVA suit performance and mobility have historically concentrated on isolated joint range of motion and torque. However, these techniques do little to evaluate how well a suited crewmember can actually perform during an EVA. An alternative method of characterizing suited mobility through measurement of metabolic cost to the wearer has been evaluated at Johnson Space Center over the past several years. The most recent study involved six test subjects completing multiple trials of various functional tasks in each of three different space suits; the results indicated it was often possible to distinguish between different suit designs on the basis of metabolic cost alone. However, other variables may affect real-world suited performance, namely task completion time, the gravity field in which the task is completed, and so on. While previous results have analyzed completion time, metabolic cost, and metabolic cost normalized to system mass individually, it is desirable to develop a single metric that combines these (and potentially other) performance measures. This paper outlines the background that makes such a single-score metric feasible and describes initial efforts to develop it. Forward work includes determining the variable coefficients and verifying the metric through repeated testing.
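
    The aggregation idea can be sketched as follows: z-score each per-trial measure (mass-normalized metabolic cost, raw metabolic cost, completion time) so the quantities are unitless, then take a weighted sum. The weights and the choice of component measures are assumptions for illustration only; determining the actual coefficients is the forward work described above.

```python
import numpy as np

def composite_mobility_score(metabolic_cost, completion_time, system_mass,
                             weights=(0.5, 0.3, 0.2)):
    """Combine per-trial measures into a single score (lower is better).

    Each component (mass-normalized metabolic cost, raw metabolic cost, and
    completion time) is z-scored across the trials being compared so that
    quantities with different units can be weighted and summed. Requires at
    least two trials with non-identical values.
    """
    cost = np.asarray(metabolic_cost, dtype=float)
    time = np.asarray(completion_time, dtype=float)
    mass = np.asarray(system_mass, dtype=float)

    components = [cost / mass, cost, time]
    z_scores = [(c - c.mean()) / c.std() for c in components]
    w = np.asarray(weights, dtype=float)
    return w[0] * z_scores[0] + w[1] * z_scores[1] + w[2] * z_scores[2]

# Three suits, one trial each (placeholder numbers).
print(composite_mobility_score(metabolic_cost=[350.0, 410.0, 300.0],
                               completion_time=[95.0, 120.0, 88.0],
                               system_mass=[120.0, 135.0, 110.0]))
```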

  9. Benchmarking infrastructure for mutation text mining

    PubMed Central

    2014-01-01

    Background Experimental research on the automatic extraction of information about mutations from texts is greatly hindered by the lack of consensus evaluation infrastructure for the testing and benchmarking of mutation text mining systems. Results We propose a community-oriented annotation and benchmarking infrastructure to support development, testing, benchmarking, and comparison of mutation text mining systems. The design is based on semantic standards, where RDF is used to represent annotations, an OWL ontology provides an extensible schema for the data, and SPARQL is used to compute various performance metrics, so that in many cases no programming is needed to analyze results from a text mining system. While large benchmark corpora for biological entity and relation extraction are focused mostly on genes, proteins, diseases, and species, our benchmarking infrastructure fills the gap for mutation information. The core infrastructure comprises (1) an ontology for modelling annotations, (2) SPARQL queries for computing performance metrics, and (3) a sizeable collection of manually curated documents that can support mutation grounding and mutation impact extraction experiments. Conclusion We have developed the principal infrastructure for the benchmarking of mutation text mining tasks. The use of RDF and OWL as the representation for corpora ensures extensibility. The infrastructure is suitable for out-of-the-box use in several important scenarios and is ready, in its current state, for initial community adoption. PMID:24568600
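
    Whatever query language is used to retrieve the gold and predicted annotations, the core performance metrics reduce to set comparisons. A minimal sketch, assuming annotations are represented as hashable tuples (the tuple fields shown are hypothetical):

```python
def precision_recall_f1(gold, predicted):
    """Compute precision, recall, and F1 for sets of extracted annotations.

    `gold` and `predicted` are sets of hashable annotation tuples, e.g.
    (document_id, mutation_mention).
    """
    tp = len(gold & predicted)
    precision = tp / len(predicted) if predicted else 0.0
    recall = tp / len(gold) if gold else 0.0
    f1 = (2 * precision * recall / (precision + recall)
          if precision + recall else 0.0)
    return precision, recall, f1

gold = {("doc1", "V600E"), ("doc1", "T790M"), ("doc2", "G12D")}
pred = {("doc1", "V600E"), ("doc2", "G12D"), ("doc2", "Q61K")}
print(precision_recall_f1(gold, pred))  # precision = recall = F1 ≈ 0.67
```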

  10. Development of Adherence Metrics for Caloric Restriction Interventions

    PubMed Central

    Pieper, Carl F.; Redman, Leanne M.; Bapkar, Manju; Roberts, Susan B.; Racette, Susan B.; Rochon, James; Martin, Corby K.; Kraus, William E.; Das, Sai; Williamson, Donald; Ravussin, Eric

    2011-01-01

    Background Objective measures are needed to quantify dietary adherence during caloric restriction (CR) while participants are free-living. One method to monitor adherence is to compare observed weight loss to the expected weight loss during a prescribed level of CR. Normograms (graphs) of expected weight loss can be created from mathematical modeling of weight change at a given level of CR, conditional on the individual's set of baseline characteristics. These normograms can then be used by counselors to help the participant adhere to their caloric target. Purpose (1) To develop models of weight loss over a year of caloric restriction given demographics (age and sex) and well-defined measurements of Body Mass Index, total daily energy expenditure (TDEE), and %CR. (2) To utilize these models to develop normograms given the level of caloric restriction and measures of these variables. Methods Seventy-seven individuals completing a 6-12 month CR intervention (CALERIE) had body weight and body composition measured frequently. Energy intake (and %CR) was estimated from TDEE (by doubly labeled water) and body composition (by DXA) at baseline and months 1, 3, 6 and 12. Body weight was modeled to determine the predictors and distribution of the expected trajectory of percent weight change over 12 months of caloric restriction. Results As expected, CR was related to change in body weight. Controlling for time-varying measures, initial simple models of the functional form indicated that the trajectory of percent weight change was predicted by a non-linear function of initial age, TDEE, %CR, and sex. Using these estimates, normograms for the weight change expected during a 25% CR were developed. Our model estimates that the mean weight loss (% change from baseline weight) for an individual adherent to a 25% CR regimen is -10.9±6.3% for women and -13.9±6.4% for men after 12 months. Limitations There are several limitations. Sample sizes are small (n=77), and, by design
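
    A minimal sketch of the adherence check described above: compare an individual's observed percent weight change against the normogram expectation for the prescribed %CR. The z-score form and the use of the reported 12-month mean and SD for women on a 25% CR prescription are illustrative simplifications; the actual normograms are individualized, model-based trajectories.

```python
def adherence_z_score(observed_pct_change, expected_mean, expected_sd):
    """Compare observed % weight change with the normogram expectation.

    A strongly positive z-score (weight loss much smaller than expected)
    flags possible non-adherence to the prescribed level of CR.
    """
    return (observed_pct_change - expected_mean) / expected_sd

# Using the reported 12-month mean and SD for women on 25% CR as placeholders:
print(adherence_z_score(-4.0, expected_mean=-10.9, expected_sd=6.3))  # ≈ 1.1
```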

  11. Developing Evidence for Action on the Postgraduate Experience: An Effective Local Instrument to Move beyond Benchmarking

    ERIC Educational Resources Information Center

    Sampson, K. A.; Johnston, L.; Comer, K.; Brogt, E.

    2016-01-01

    Summative and benchmarking surveys to measure the postgraduate student research experience are well reported in the literature. While useful, we argue that local instruments that provide formative resources with an academic development focus are also required. If higher education institutions are to move beyond the identification of issues and…

  12. Developing Student Character through Disciplinary Curricula: An Analysis of UK QAA Subject Benchmark Statements

    ERIC Educational Resources Information Center

    Quinlan, Kathleen M.

    2016-01-01

    What aspects of student character are expected to be developed through disciplinary curricula? This paper examines the UK written curriculum through an analysis of the Quality Assurance Agency's subject benchmark statements for the most popular subjects studied in the UK. It explores the language, principles and intended outcomes that suggest…

  13. Developing of Indicators of an E-Learning Benchmarking Model for Higher Education Institutions

    ERIC Educational Resources Information Center

    Sae-Khow, Jirasak

    2014-01-01

    This study was the development of e-learning indicators used as an e-learning benchmarking model for higher education institutes. Specifically, it aimed to: 1) synthesize the e-learning indicators; 2) examine content validity by specialists; and 3) explore appropriateness of the e-learning indicators. Review of related literature included…

  14. Benchmarking and Threshold Standards in Higher Education. Staff and Educational Development Series.

    ERIC Educational Resources Information Center

    Smith, Helen, Ed.; Armstrong, Michael, Ed.; Brown, Sally, Ed.

    This book explores the issues involved in developing standards in higher education, examining the practical issues involved in benchmarking and offering a critical analysis of the problems associated with this developmental tool. The book focuses primarily on experience in the United Kingdom (UK), but looks also at international activity in this…

  15. Development of oil product toxicity benchmarks using SSDs

    EPA Science Inventory

    Determining the sensitivity of a diversity of species to spilled oil and chemically dispersed oil continues to be a significant challenge in spill response and impact assessment. We developed species sensitivity distributions (SSDs) of acute toxicity values using standardized te...
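
    A minimal sketch of the SSD approach, assuming hypothetical acute toxicity values: fit a log-normal distribution to the species-level values and read off the HC5 (the concentration expected to protect 95% of species), a commonly used benchmark concentration.

```python
import numpy as np
from scipy import stats

# Hypothetical acute toxicity values (e.g., LC50s in mg/L) for several species.
lc50 = np.array([0.8, 1.5, 2.2, 3.9, 5.1, 7.4, 12.0, 18.5])

# Fit a log-normal species sensitivity distribution.
log_values = np.log10(lc50)
mu, sigma = log_values.mean(), log_values.std(ddof=1)

# HC5: concentration hazardous to 5% of species.
hc5 = 10 ** stats.norm.ppf(0.05, loc=mu, scale=sigma)
print(f"HC5 = {hc5:.2f} mg/L")
```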

  16. Defining Exercise Performance Metrics for Flight Hardware Development

    NASA Technical Reports Server (NTRS)

    Beyene, Nahon M.

    2004-01-01

    The space industry has prevailed over numerous design challenges in the spirit of exploration. Manned space flight entails creating products for use by humans and the Johnson Space Center has pioneered this effort as NASA's center for manned space flight. NASA Astronauts use a suite of flight exercise hardware to maintain strength for extravehicular activities and to minimize losses in muscle mass and bone mineral density. With a cycle ergometer, treadmill, and the Resistive Exercise Device available on the International Space Station (ISS), the Space Medicine community aspires to reproduce physical loading schemes that match exercise performance in Earth's gravity. The resistive exercise device presents the greatest challenge with the duty of accommodating 20 different exercises and many variations on the core set of exercises. This paper presents a methodology for capturing engineering parameters that can quantify proper resistive exercise performance techniques. For each specified exercise, the method provides engineering parameters on hand spacing, foot spacing, and positions of the point of load application at the starting point, midpoint, and end point of the exercise. As humans vary in height and fitness levels, the methodology presents values as ranges. In addition, this method shows engineers the proper load application regions on the human body. The methodology applies to resistive exercise in general and is in use for the current development of a Resistive Exercise Device. Exercise hardware systems must remain available for use and conducive to proper exercise performance as a contributor to mission success. The astronauts depend on exercise hardware to support extended stays aboard the ISS. Future plans towards exploration of Mars and beyond acknowledge the necessity of exercise. Continuous improvement in technology and our understanding of human health maintenance in space will allow us to support the exploration of Mars and the future of space

  17. Benchmark Dose Software Development and Maintenance Ten Berge Cxt Models

    EPA Science Inventory

    This report is intended to provide an overview of beta version 1.0 of the implementation of a concentration-time (CxT) model originally programmed and provided by Wil ten Berge (referred to hereafter as the ten Berge model). The recoding and development described here represent ...

  18. USING BROAD-SCALE METRICS TO DEVELOP INDICATORS OF WATERSHED VULNERABILITY IN THE OZARK MOUNTAINS (USA)

    EPA Science Inventory

    Multiple broad-scale landscape metrics were tested as potential indicators of total phosphorus (TP) concentration, total ammonia (TA) concentration, and Escherichia coli (E. coli) bacteria count, among 244 sub-watersheds in the Ozark Mountains (USA). Indicator models were develop...

  19. Performance Metrics Development Analysis for Information and Communications Technology Outsourcing: A Case Study

    ERIC Educational Resources Information Center

    Travis, James L., III

    2014-01-01

    This study investigated how and to what extent the development and use of the OV-5a operational architecture decomposition tree (OADT) from the Department of Defense (DoD) Architecture Framework (DoDAF) affects requirements analysis with respect to complete performance metrics for performance-based services acquisition of ICT under rigid…

  20. Development of PE Metrics Elementary Assessments for National Physical Education Standard 1

    ERIC Educational Resources Information Center

    Dyson, Ben; Placek, Judith H.; Graber, Kim C.; Fisette, Jennifer L.; Rink, Judy; Zhu, Weimo; Avery, Marybell; Franck, Marian; Fox, Connie; Raynes, De; Park, Youngsik

    2011-01-01

    This article describes how assessments in PE Metrics were developed following six steps: (a) determining test blueprint, (b) writing assessment tasks and scoring rubrics, (c) establishing content validity, (d) piloting assessments, (e) conducting item analysis, and (f) modifying the assessments based on analysis and expert opinion. A task force,…

  1. IBI METRIC DEVELOPMENT FOR STREAMS AND RIVERS IN WESTERN FORESTED MOUNTAINS AND ARID LANDS

    EPA Science Inventory

    In the western USA, development of metrics and indices of vertebrate assemblage condition in streams and rivers is challenged by low species richness, by strong natural gradients, by human impact gradients that co-vary with natural gradients, and by a shortage of minimally-distur...

  2. Developing and Benchmarking Native Linux Applications on Android

    NASA Astrophysics Data System (ADS)

    Batyuk, Leonid; Schmidt, Aubrey-Derrick; Schmidt, Hans-Gunther; Camtepe, Ahmet; Albayrak, Sahin

    Smartphones are becoming increasingly popular as more and more smartphone platforms emerge. Special attention has been gained by the open-source platform Android, which was presented by the Open Handset Alliance (OHA), whose members include Google, Motorola, and HTC. Android uses a Linux kernel and a stripped-down userland with a custom Java VM set on top. The resulting system joins the advantages of both environments, while third parties are currently intended to develop only Java applications.

  3. Benchmarking for the Learning and Skills Sector.

    ERIC Educational Resources Information Center

    Owen, Jane

    This document is designed to introduce practitioners in the United Kingdom's learning and skills sector to the principles and practice of benchmarking. The first section defines benchmarking and differentiates metric, diagnostic, and process benchmarking. The remainder of the booklet details the following steps of the benchmarking process: (1) get…

  4. Benchmarks of fairness for health care reform: a policy tool for developing countries.

    PubMed Central

    Daniels, N.; Bryant, J.; Castano, R. A.; Dantes, O. G.; Khan, K. S.; Pannarunothai, S.

    2000-01-01

    Teams of collaborators from Colombia, Mexico, Pakistan, and Thailand have adapted a policy tool originally developed for evaluating health insurance reforms in the United States into "benchmarks of fairness" for assessing health system reform in developing countries. We describe briefly the history of the benchmark approach, the tool itself, and the uses to which it may be put. Fairness is a broad term that includes exposure to risk factors and access to all forms of care and to financing. It also includes efficiency of management and resource allocation, accountability, and patient and provider autonomy. The benchmarks standardize the criteria for fairness. Reforms are then evaluated by scoring according to the degree to which they improve the situation, i.e. on a scale of -5 to 5, with zero representing the status quo. The object is to promote discussion about fairness across the disciplinary divisions that keep policy analysts and the public from understanding how trade-offs between different effects of reforms can affect the overall fairness of the reform. The benchmarks can be used at both national and provincial or district levels, and we describe plans for such uses in the collaborating sites. A striking feature of the adaptation process is that there was wide agreement on this ethical framework among the collaborating sites despite their large historical, political and cultural differences. PMID:10916911

  5. Development of a Computer-based Benchmarking and Analytical Tool. Benchmarking and Energy & Water Savings Tool in Dairy Plants (BEST-Dairy)

    SciTech Connect

    Xu, Tengfang; Flapper, Joris; Ke, Jing; Kramer, Klaas; Sathaye, Jayant

    2012-02-01

    The overall goal of the project is to develop a computer-based benchmarking and energy and water savings tool (BEST-Dairy) for use in the California dairy industry – including four dairy processes – cheese, fluid milk, butter, and milk powder.

  6. Development of a HEX-Z Partially Homogenized Benchmark Model for the FFTF Isothermal Physics Measurements

    SciTech Connect

    John D. Bess

    2012-05-01

    A series of isothermal physics measurements were performed as part of an acceptance testing program for the Fast Flux Test Facility (FFTF). A HEX-Z partially-homogenized benchmark model of the FFTF fully-loaded core configuration was developed for evaluation of these measurements. Evaluated measurements include the critical eigenvalue of the fully-loaded core, two neutron spectra, 32 reactivity effects measurements, an isothermal temperature coefficient, and low-energy gamma and electron spectra. Dominant uncertainties in the critical configuration include the placement of radial shielding around the core, reactor core assembly pitch, composition of the stainless steel components, plutonium content in the fuel pellets, and boron content in the absorber pellets. Calculations of criticality, reactivity effects measurements, and the isothermal temperature coefficient using MCNP5 and ENDF/B-VII.0 cross sections with the benchmark model are in good agreement with the benchmark experiment measurements. There is only partial agreement between calculated and measured spectra; homogenization of many of the core components may have affected the computational assessment of these measurements. This benchmark evaluation has been added to the IRPhEP Handbook.

  7. Development of Benchmark Examples for Delamination Onset and Fatigue Growth Prediction

    NASA Technical Reports Server (NTRS)

    Krueger, Ronald

    2011-01-01

    An approach for assessing the delamination propagation and growth capabilities in commercial finite element codes was developed and demonstrated for the Virtual Crack Closure Technique (VCCT) implementations in ABAQUS. The Double Cantilever Beam (DCB) specimen was chosen as an example. First, benchmark results to assess delamination propagation capabilities under static loading were created using models simulating specimens with different delamination lengths. For each delamination length modeled, the load and displacement at the load point were monitored. The mixed-mode strain energy release rate components were calculated along the delamination front across the width of the specimen. A failure index was calculated by correlating the results with the mixed-mode failure criterion of the graphite/epoxy material. The calculated critical loads and critical displacements for delamination onset for each delamination length modeled were used as a benchmark. The load/displacement relationship computed during automatic propagation should closely match the benchmark case. Second, starting from an initially straight front, the delamination was allowed to propagate based on the algorithms implemented in the commercial finite element software. The load-displacement relationship obtained from the propagation analysis results and the benchmark results were compared. Good agreements could be achieved by selecting the appropriate input parameters, which were determined in an iterative procedure.
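
    The failure-index calculation can be sketched with a common mixed-mode criterion such as Benzeggagh-Kenane; the specific criterion and material constants used for the graphite/epoxy benchmark may differ, so the values below are placeholders.

```python
def bk_failure_index(g_i, g_ii, g_ic, g_iic, eta):
    """Failure index from the Benzeggagh-Kenane mixed-mode criterion.

    A ratio G_T / G_c >= 1 indicates predicted delamination growth. The
    toughness values and exponent used here are placeholders, not the
    benchmark material data.
    """
    g_t = g_i + g_ii
    if g_t == 0.0:
        return 0.0
    g_c = g_ic + (g_iic - g_ic) * (g_ii / g_t) ** eta
    return g_t / g_c

# Hypothetical strain energy release rates and toughnesses (kJ/m^2).
print(bk_failure_index(g_i=0.12, g_ii=0.30, g_ic=0.21, g_iic=0.77, eta=2.1))
```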

  8. Coral growth on three reefs: development of recovery benchmarks using a space for time approach

    NASA Astrophysics Data System (ADS)

    Done, T. J.; Devantier, L. M.; Turak, E.; Fisk, D. A.; Wakeford, M.; van Woesik, R.

    2010-12-01

    This 14-year study (1989-2003) develops recovery benchmarks based on a period of very strong coral recovery in Acropora-dominated assemblages on the Great Barrier Reef (GBR) following major setbacks from the predatory sea-star Acanthaster planci in the early 1980s. A space for time approach was used in developing the benchmarks, made possible by the choice of three study reefs (Green Island, Feather Reef and Rib Reef), spread along 3 degrees of latitude (300 km) of the GBR. The sea-star outbreaks progressed north to south, causing death of corals that reached maximum levels in the years 1980 (Green), 1982 (Feather) and 1984 (Rib). The reefs were initially surveyed in 1989, 1990, 1993 and 1994, which represent recovery years 5-14 in the space for time protocol. Benchmark trajectories for coral abundance, colony sizes, coral cover and diversity were plotted against nominal recovery time (years 5-14) and defined as non-linear functions. A single survey of the same three reefs was conducted in 2003, when the reefs were nominally 1, 3 and 5 years into a second recovery period, following further Acanthaster impacts and coincident coral bleaching events around the turn of the century. The 2003 coral cover was marginally above the benchmark trajectory, but colony density (colonies.m-2) was an order of magnitude lower than the benchmark, and size structure was biased toward larger colonies that survived the turn of the century disturbances. The under-representation of small size classes in 2003 suggests that mass recruitment of corals had been suppressed, reflecting low regional coral abundance and depression of coral fecundity by recent bleaching events. The marginally higher cover and large colonies of 2003 were thus indicative of a depleted and aging assemblage not yet rejuvenated by a strong cohort of recruits.

  9. Measuring in Metric.

    ERIC Educational Resources Information Center

    Sorenson, Juanita S.

    Eight modules for an in-service course on metric education for elementary teachers are included in this document. The modules are on an introduction to the metric system, length and basic prefixes, volume, mass, temperature, relationships within the metric system, and metric and English system relationships. The eighth one is on developing a…

  10. The relationship between settlement population size and sustainable development measured by two sustainability metrics

    SciTech Connect

    O'Regan, Bernadette; Morrissey, John; Foley, Walter; Moles, Richard

    2009-04-15

    This paper reports on a study of the relative sustainability of 79 Irish villages, towns and a small city (collectively called 'settlements') classified by population size. Quantitative data on more than 300 economic, social and environmental attributes of each settlement were assembled into a database. Two aggregated metrics were selected to model the relative sustainability of settlements: Ecological Footprint (EF) and Sustainable Development Index (SDI). Subsequently these were aggregated to create a single Combined Sustainable Development Index. Creation of this database meant that metric calculations did not rely on proxies, and were therefore considered to be robust. Methods employed provided values for indicators at various stages of the aggregation process. This allowed both the first reported empirical analysis of the relationship between settlement sustainability and population size, and the elucidation of information provided at different stages of aggregation. At the highest level of aggregation, settlement sustainability increased with population size, but important differences amongst individual settlements were masked by aggregation. EF and SDI metrics ranked settlements in differing orders of relative sustainability. Aggregation of indicators to provide Ecological Footprint values was found to be especially problematic, and this metric was inadequately sensitive to distinguish amongst the relative sustainability achieved by all settlements. Many authors have argued that, for policy makers to be able to inform planning decisions using sustainability indicators, it is necessary that they adopt a toolkit of aggregated indicators. Here it is argued that to interpret correctly each aggregated metric value, policy makers also require a hierarchy of disaggregated component indicator values, each explained fully. Possible implications for urban planning are briefly reviewed.

  11. The State of Energy and Performance Benchmarking for Enterprise Servers

    NASA Astrophysics Data System (ADS)

    Fanara, Andrew; Haines, Evan; Howard, Arthur

    To address the server industry’s marketing focus on performance, benchmarking organizations have played a pivotal role in developing techniques to determine the maximum achievable performance level of a system. Generally missing has been an assessment of energy use to achieve that performance. The connection between performance and energy consumption is becoming necessary information for designers and operators as they grapple with power constraints in the data center. While industry and policy makers continue to strategize about a universal metric to holistically measure IT equipment efficiency, existing server benchmarks for various workloads could provide an interim proxy to assess the relative energy efficiency of general servers. This paper discusses ideal characteristics a future energy-performance benchmark might contain, suggests ways in which current benchmarks might be adapted to provide a transitional step to this end, and notes the need for multiple workloads to provide a holistic proxy for a universal metric.

  12. HPGMG 1.0: A Benchmark for Ranking High Performance Computing Systems

    SciTech Connect

    Adams, Mark; Brown, Jed; Shalf, John; Straalen, Brian Van; Strohmaier, Erich; Williams, Sam

    2014-05-05

    This document provides an overview of the benchmark, HPGMG, for ranking large-scale general-purpose computers for use on the Top500 list [8]. We provide a rationale for the need for a replacement for the current metric, HPL, some background on the Top500 list, and the challenges of developing such a metric; we discuss our design philosophy and methodology and give an overview of the benchmark specification. The primary documentation, with maintained details on the specification, can be found at hpgmg.org; the Wiki and the benchmark code itself can be found in the repository https://bitbucket.org/hpgmg/hpgmg.

  13. Design and development of a community carbon cycle benchmarking system for CMIP5 models

    NASA Astrophysics Data System (ADS)

    Mu, M.; Hoffman, F. M.; Lawrence, D. M.; Riley, W. J.; Keppel-Aleks, G.; Randerson, J. T.

    2013-12-01

    Benchmarking has been widely used to assess the ability of atmosphere, ocean, sea ice, and land surface models to capture the spatial and temporal variability of observations during the historical period. For the carbon cycle and terrestrial ecosystems, the design and development of an open-source community platform has been an important goal as part of the International Land Model Benchmarking (ILAMB) project. Here we designed and developed a software system that enables the user to specify the models, benchmarks, and scoring systems so that results can be tailored to specific model intercomparison projects. We used this system to evaluate the performance of CMIP5 Earth system models (ESMs). Our scoring system used information from four different aspects of climate, including the climatological mean spatial pattern of gridded surface variables, seasonal cycle dynamics, the amplitude of interannual variability, and long-term decadal trends. We used this system to evaluate burned area, global biomass stocks, net ecosystem exchange, gross primary production, and ecosystem respiration from CMIP5 historical simulations. Initial results indicated that the multi-model mean often performed better than many of the individual models for most of the observational constraints.
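
    A minimal sketch of the kind of multi-aspect scoring described above: each aspect (mean state, seasonal cycle, interannual amplitude, trend) is scored from a relative error, and the per-aspect scores are combined with weights. The error-to-score mapping and the equal weights are assumptions for illustration, not the scoring definitions used in the study.

```python
import numpy as np

def aspect_score(model, obs):
    """Map a mean relative error onto a 0-1 score (1 = perfect agreement)."""
    model, obs = np.asarray(model, float), np.asarray(obs, float)
    rel_err = np.abs(model - obs) / (np.abs(obs) + 1e-12)
    return float(np.exp(-rel_err).mean())

def overall_score(model_fields, obs_fields, weights=None):
    """Weighted mean of per-aspect scores.

    `model_fields` and `obs_fields` are dicts of arrays keyed by aspect
    (climatological mean state, seasonal cycle, interannual amplitude,
    decadal trend). The equal default weights are placeholders.
    """
    aspects = ["mean_state", "seasonal_cycle", "interannual", "trend"]
    weights = weights or {a: 0.25 for a in aspects}
    scores = {a: aspect_score(model_fields[a], obs_fields[a]) for a in aspects}
    total = sum(weights[a] * scores[a] for a in aspects)
    return total, scores
```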

  14. Development and Applications of Benchmark Examples for Static Delamination Propagation Predictions

    NASA Technical Reports Server (NTRS)

    Krueger, Ronald

    2013-01-01

    The development and application of benchmark examples for the assessment of quasi-static delamination propagation capabilities was demonstrated for ANSYS® and Abaqus/Standard®. The examples selected were based on finite element models of Double Cantilever Beam (DCB) and Mixed-Mode Bending (MMB) specimens. First, quasi-static benchmark results were created based on an approach developed previously. Second, the delamination was allowed to propagate under quasi-static loading from its initial location using the automated procedure implemented in ANSYS® and Abaqus/Standard®. Input control parameters were varied to study the effect on the computed delamination propagation. Overall, the benchmarking procedure proved valuable by highlighting the issues associated with choosing the appropriate input parameters for the VCCT implementations in ANSYS® and Abaqus/Standard®. However, further assessment for mixed-mode delamination fatigue onset and growth is required. Additionally, studies should include the assessment of the propagation capabilities in more complex specimens and at a structural level.

  15. The NAS parallel benchmarks

    NASA Technical Reports Server (NTRS)

    Bailey, David (Editor); Barton, John (Editor); Lasinski, Thomas (Editor); Simon, Horst (Editor)

    1993-01-01

    A new set of benchmarks was developed for the performance evaluation of highly parallel supercomputers. These benchmarks consist of a set of kernels, the 'Parallel Kernels,' and a simulated application benchmark. Together they mimic the computation and data movement characteristics of large scale computational fluid dynamics (CFD) applications. The principal distinguishing feature of these benchmarks is their 'pencil and paper' specification - all details of these benchmarks are specified only algorithmically. In this way many of the difficulties associated with conventional benchmarking approaches on highly parallel systems are avoided.

  16. International land Model Benchmarking (ILAMB) Package v002.00

    DOE Data Explorer

    Collier, Nathaniel [Oak Ridge National Laboratory; Hoffman, Forrest M. [Oak Ridge National Laboratory; Mu, Mingquan [University of California, Irvine; Randerson, James T. [University of California, Irvine; Riley, William J. [Lawrence Berkeley National Laboratory

    2016-05-09

    As a contribution to the International Land Model Benchmarking (ILAMB) Project, we are providing new analysis approaches, benchmarking tools, and science leadership. The goal of ILAMB is to assess and improve the performance of land models through international cooperation and to inform the design of new measurement campaigns and field studies to reduce uncertainties associated with key biogeochemical processes and feedbacks. ILAMB is expected to be a primary analysis tool for CMIP6 and future model-data intercomparison experiments. This team has developed initial prototype benchmarking systems for ILAMB, which will be improved and extended to include ocean model metrics and diagnostics.

  17. International land Model Benchmarking (ILAMB) Package v001.00

    DOE Data Explorer

    Mu, Mingquan [University of California, Irvine; Randerson, James T. [University of California, Irvine; Riley, William J. [Lawrence Berkeley National Laboratory; Hoffman, Forrest M. [Oak Ridge National Laboratory

    2016-05-02

    As a contribution to the International Land Model Benchmarking (ILAMB) Project, we are providing new analysis approaches, benchmarking tools, and science leadership. The goal of ILAMB is to assess and improve the performance of land models through international cooperation and to inform the design of new measurement campaigns and field studies to reduce uncertainties associated with key biogeochemical processes and feedbacks. ILAMB is expected to be a primary analysis tool for CMIP6 and future model-data intercomparison experiments. This team has developed initial prototype benchmarking systems for ILAMB, which will be improved and extended to include ocean model metrics and diagnostics.

  18. Development and Analysis of Psychomotor Skills Metrics for Procedural Skills Decay.

    PubMed

    Parthiban, Chembian; Ray, Rebecca; Rutherford, Drew; Zinn, Mike; Pugh, Carla

    2016-01-01

    In this paper we develop and analyze the metrics associated with a force production task involving a stationary target, using an advanced VR environment and a Force Dimension Omega 6 haptic device. We study the effects of force magnitude and direction on several metrics, namely path length, movement smoothness, velocity and acceleration patterns, reaction time, and overall error in reaching the target. Data were collected from 47 participants, all of whom were residents. Results show a positive correlation between the maximum force applied and both deflection error and velocity, while higher-magnitude forces reduced path length and increased smoothness, demonstrating their stabilizing characteristics. This approach paves the way to assessing and modeling procedural skills decay. PMID:27046593
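
    A minimal sketch of how such kinematic metrics can be computed from a sampled trajectory; the definitions of path length, mean speed, and jerk-based smoothness below are common choices and not necessarily those used in the study.

```python
import numpy as np

def trajectory_metrics(positions, dt):
    """Path length, mean speed, and a jerk-based smoothness value for a
    sampled 3-D trajectory (N x 3 array, N >= 4). Illustrative definitions.
    """
    positions = np.asarray(positions, dtype=float)
    steps = np.diff(positions, axis=0)
    path_length = np.linalg.norm(steps, axis=1).sum()

    velocity = steps / dt
    speed = np.linalg.norm(velocity, axis=1)
    jerk = np.diff(velocity, n=2, axis=0) / dt**2  # second difference of velocity

    # Negative log of a dimensionless squared-jerk measure: higher = smoother.
    duration = dt * (len(positions) - 1)
    dimless_jerk = (duration**5 / path_length**2) * \
                   np.sum(np.linalg.norm(jerk, axis=1)**2) * dt
    smoothness = -np.log(dimless_jerk)
    return path_length, speed.mean(), smoothness
```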

  19. A newly developed dispersal metric indicates the succession of benthic invertebrates in restored rivers.

    PubMed

    Li, Fengqing; Sundermann, Andrea; Stoll, Stefan; Haase, Peter

    2016-11-01

    Dispersal capacity plays a fundamental role in the riverine benthic invertebrate colonization of new habitats that emerges following flash floods or restoration. However, an appropriate measure of dispersal capacity for benthic invertebrates is still lacking. The dispersal of benthic invertebrates occurs mainly during the aquatic (larval) and aerial (adult) life stages, and the dispersal of each stage can be further subdivided into active and passive modes. Based on these four possible dispersal modes, we first developed a metric (which is very similar to the well-known and widely used saprobic index) to estimate the dispersal capacity for 802 benthic invertebrate taxa by incorporating a weight for each mode. Second, we tested this metric using benthic invertebrate community data from a) 23 large restored river sites with substantial improvements of river bottom habitats dating back 1 to 10 years, b) 23 unrestored sites very close to the restored sites, and c) 298 adjacent surrounding sites (mean±standard deviation: 13.0±9.5 per site) within a distance of up to 5 km for each restored site in the low mountain and lowland areas of Germany. We hypothesize that our metric will reflect the temporal succession process of benthic invertebrate communities colonizing the restored sites, whereas no temporal changes are expected in the unrestored and surrounding sites. By applying our metric to these three river treatment categories, we found that the average dispersal capacity of benthic invertebrate communities in the restored sites significantly decreased in the early years following restoration, whereas there were no changes in either the unrestored or the surrounding sites. After all taxa had been divided into quartiles representing weak to strong dispersers, this pattern became even more obvious; strong dispersers colonized the restored sites during the first year after restoration and then significantly decreased over time, whereas weak dispersers continued to increase
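
    A minimal sketch of a dispersal-capacity index of this form: a weighted mean of the four mode-specific scores per taxon, then an abundance-weighted community mean analogous to a saprobic index. The mode names, weights, and scores are placeholders, not the published values.

```python
def taxon_dispersal_capacity(mode_scores, mode_weights):
    """Weighted mean of the four dispersal-mode scores for one taxon
    (aquatic active, aquatic passive, aerial active, aerial passive)."""
    total_weight = sum(mode_weights.values())
    return sum(mode_weights[m] * mode_scores[m] for m in mode_weights) / total_weight

def community_dispersal_capacity(abundances, taxon_scores):
    """Abundance-weighted community mean, analogous to a saprobic index."""
    total = sum(abundances.values())
    return sum(abundances[t] * taxon_scores[t] for t in abundances) / total

# Placeholder example for two taxa at one site.
weights = {"aq_active": 1.0, "aq_passive": 2.0, "aer_active": 3.0, "aer_passive": 1.0}
scores = {
    "taxon_A": taxon_dispersal_capacity(
        {"aq_active": 2, "aq_passive": 4, "aer_active": 5, "aer_passive": 1}, weights),
    "taxon_B": taxon_dispersal_capacity(
        {"aq_active": 1, "aq_passive": 2, "aer_active": 2, "aer_passive": 1}, weights),
}
print(community_dispersal_capacity({"taxon_A": 120, "taxon_B": 30}, scores))
```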

  20. Development of a reference dose for BDE-47, 99, and 209 using benchmark dose methods.

    PubMed

    Li, Lu Xi; Chen, Li; Cao, Dan; Chen, Bing Heng; Zhao, Yan; Meng, Xiang Zhou; Xie, Chang Ming; Zhang, Yun Hui

    2014-09-01

    Eleven recently completed toxicological studies were critically reviewed to identify toxicologically significant endpoints and dose-response information. Dose-response data were compiled and entered into the USEPA's benchmark dose software (BMDS) for calculation of a benchmark dose (BMD) and a benchmark dose lower confidence limit (BMDL). After assessing 91 endpoints across the nine studies, a total of 23 endpoints were identified for BMD modeling, and BMDL estimates corresponding to various dose-response models were compiled for these separate endpoints. Thyroid, neurobehavioral, and reproductive endpoints for BDE-47, -99, and -209 were quantitatively evaluated. Based on the methods and features of each study, a different uncertainty factor (UF) value was selected, and reference doses (RfDs) were subsequently proposed. Consistent with USEPA practice, the lowest BMDLs of 2.10, 81.77, and 1698 µg/kg were used to develop RfDs for BDE-47, -99, and -209, respectively. RfDs for BDE-99 and BDE-209 were comparable to EPA results; however, the RfD for BDE-47 was much lower than EPA's, which may be because reproductive/developmental endpoints prove more sensitive than neurobehavioral ones for BDE-47 and because the principal study used very-low-dose exposure. PMID:25256863
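
    The final step, dividing a BMDL by a composite uncertainty factor to obtain an RfD, is simple arithmetic. A sketch using the lowest BMDLs reported above with a placeholder UF (the study's actual UFs differ by congener and are not reproduced here):

```python
def reference_dose(bmdl, uncertainty_factor):
    """RfD derived by dividing the BMDL by a composite uncertainty factor."""
    return bmdl / uncertainty_factor

# Lowest BMDLs reported above (ug/kg); the uncertainty factor of 300 is a
# placeholder, not the value chosen in the study.
bmdl_ug_per_kg = {"BDE-47": 2.10, "BDE-99": 81.77, "BDE-209": 1698.0}
for congener, bmdl in bmdl_ug_per_kg.items():
    print(congener, reference_dose(bmdl, uncertainty_factor=300))
```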

  1. Deriving phenological metrics from NDVI through an open source tool developed in QGIS

    NASA Astrophysics Data System (ADS)

    Duarte, Lia; Teodoro, A. C.; Gonçalves, Hernãni

    2014-10-01

    Vegetation indices have been commonly used over the past 30 years for studying vegetation characteristics using images collected by remote sensing satellites. One of the most commonly used is the Normalized Difference Vegetation Index (NDVI). The various stages that green vegetation undergoes during a complete growing season can be summarized through time-series analysis of NDVI data. The analysis of such time series allows key phenological variables or metrics of a particular season to be extracted. These characteristics may not necessarily correspond directly to conventional, ground-based phenological events, but they do provide indications of ecosystem dynamics. A complete list of the phenological metrics that can be extracted from smoothed, time-series NDVI data is available in the USGS online resources (http://phenology.cr.usgs.gov/methods_deriving.php). This work aims to develop an open-source application to automatically extract these phenological metrics from a set of satellite input data. The main advantage of QGIS for this specific application lies in the ease and speed of developing new plug-ins in Python, based on the research group's experience in other related work. QGIS has its own application programming interface (API) with functionality for developing new features. The toolbar developed for this application was implemented in the plug-in NDVIToolbar.py. The user provides the raster files as input and obtains a plot and a report with the metrics. The report includes the following eight metrics: SOST (Start Of Season - Time), corresponding to the day of the year identified as having a consistent upward trend in the NDVI time series; SOSN (Start Of Season - NDVI), corresponding to the NDVI value associated with SOST; EOST (End of Season - Time), which corresponds to the day of year identified at the end of a consistent downward trend in the NDVI time series; EOSN (End of Season - NDVI), corresponding to the NDVI value
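
    A minimal sketch of extracting start- and end-of-season metrics from a smoothed NDVI series using a simple amplitude-threshold rule; the USGS method referenced above is more elaborate, so this is illustrative only.

```python
import numpy as np

def start_end_of_season(ndvi, threshold_fraction=0.2):
    """Estimate SOST/SOSN and EOST/EOSN from a smoothed annual NDVI series.

    A simple threshold rule is used: the day when NDVI first rises above,
    and last falls below, a fraction of the seasonal amplitude.
    """
    ndvi = np.asarray(ndvi, dtype=float)
    amplitude = ndvi.max() - ndvi.min()
    level = ndvi.min() + threshold_fraction * amplitude
    above = np.flatnonzero(ndvi >= level)
    sost, eost = int(above[0]), int(above[-1])
    return {"SOST": sost, "SOSN": float(ndvi[sost]),
            "EOST": eost, "EOSN": float(ndvi[eost])}

# Example with a synthetic 365-day NDVI curve.
days = np.arange(365)
ndvi = 0.2 + 0.5 * np.exp(-((days - 200) / 60.0) ** 2)
print(start_end_of_season(ndvi))
```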

  2. Pollutant Emissions and Energy Efficiency under Controlled Conditions for Household Biomass Cookstoves and Implications for Metrics Useful in Setting International Test Standards

    EPA Science Inventory

    Realistic metrics and methods for testing household biomass cookstoves are required to develop standards needed by international policy makers, donors, and investors. Application of consistent test practices allows emissions and energy efficiency performance to be benchmarked and...

  3. NASA metric transition plan

    NASA Astrophysics Data System (ADS)

    NASA science publications have used the metric system of measurement since 1970. Although NASA has maintained a metric use policy since 1979, practical constraints have restricted actual use of metric units. In 1988, an amendment to the Metric Conversion Act of 1975 required the Federal Government to adopt the metric system except where impractical. In response to Public Law 100-418 and Executive Order 12770, NASA revised its metric use policy and developed this Metric Transition Plan. NASA's goal is to use the metric system for program development and functional support activities to the greatest practical extent by the end of 1995. The introduction of the metric system into new flight programs will determine the pace of the metric transition. Transition of institutional capabilities and support functions will be phased to enable use of the metric system in flight program development and operations. Externally oriented elements of this plan will introduce and actively support use of the metric system in education, public information, and small business programs. The plan also establishes a procedure for evaluating and approving waivers and exceptions to the required use of the metric system for new programs. Coordination with other Federal agencies and departments (through the Interagency Council on Metric Policy) and industry (directly and through professional societies and interest groups) will identify sources of external support and minimize duplication of effort.

  4. NASA metric transition plan

    NASA Technical Reports Server (NTRS)

    1992-01-01

    NASA science publications have used the metric system of measurement since 1970. Although NASA has maintained a metric use policy since 1979, practical constraints have restricted actual use of metric units. In 1988, an amendment to the Metric Conversion Act of 1975 required the Federal Government to adopt the metric system except where impractical. In response to Public Law 100-418 and Executive Order 12770, NASA revised its metric use policy and developed this Metric Transition Plan. NASA's goal is to use the metric system for program development and functional support activities to the greatest practical extent by the end of 1995. The introduction of the metric system into new flight programs will determine the pace of the metric transition. Transition of institutional capabilities and support functions will be phased to enable use of the metric system in flight program development and operations. Externally oriented elements of this plan will introduce and actively support use of the metric system in education, public information, and small business programs. The plan also establishes a procedure for evaluating and approving waivers and exceptions to the required use of the metric system for new programs. Coordination with other Federal agencies and departments (through the Interagency Council on Metric Policy) and industry (directly and through professional societies and interest groups) will identify sources of external support and minimize duplication of effort.

  5. Benchmarking for Higher Education.

    ERIC Educational Resources Information Center

    Jackson, Norman, Ed.; Lund, Helen, Ed.

    The chapters in this collection explore the concept of benchmarking as it is being used and developed in higher education (HE). Case studies and reviews show how universities in the United Kingdom are using benchmarking to aid in self-regulation and self-improvement. The chapters are: (1) "Introduction to Benchmarking" (Norman Jackson and Helen…

  6. International small dam safety assurance policy benchmarks to avoid dam failure flood disasters in developing countries

    NASA Astrophysics Data System (ADS)

    Pisaniello, John D.; Dam, Tuyet Thi; Tingey-Holyoak, Joanne L.

    2015-12-01

    In developing countries, small dam failure disasters are common, yet research on their dam safety management is lacking. This paper reviews available small dam safety assurance policy benchmarks from the international literature, synthesises them for applicability in developing countries, and provides an example application through a case study of Vietnam. Generic models from 'minimum' to 'best' practice (Pisaniello, 1997) are synthesised with the World Bank's 'essential' and 'desirable' elements (Bradlow et al., 2002), leading to novel policy analysis and design criteria for developing countries. The case study involved 22 on-site dam surveys, which found micro-level physical and management inadequacies indicating that macro-level dam safety management policy performs far below the minimum benchmark in Vietnam. Moving assurance policy towards 'best practice' is necessary to improve the safety of Vietnam's considerable number of hazardous dams to acceptable community standards, but first achieving 'minimum practice' per the developed guidance is essential. The policy analysis/design process provides an exemplar for other developing countries to follow to avoid dam failure flood disasters.

  7. Millennium development health metrics: where do Africa’s children and women of childbearing age live?

    PubMed Central

    2013-01-01

    The Millennium Development Goals (MDGs) have prompted an expansion in approaches to deriving health metrics to measure progress toward their achievement. Accurate measurements should take into account the high degrees of spatial heterogeneity in health risks across countries, and this has prompted the development of sophisticated cartographic techniques for mapping and modeling risks. Conversion of these risks to relevant population-based metrics requires equally detailed information on the spatial distribution and attributes of the denominator populations. However, spatial information on age and sex composition over large areas is lacking, prompting many influential studies that have rigorously accounted for health risk heterogeneities to overlook the substantial demographic variations that exist subnationally and merely apply national-level adjustments. Here we outline the development of high resolution age- and sex-structured spatial population datasets for Africa in 2000-2015 built from over a million measurements from more than 20,000 subnational units, increasing input data detail from previous studies by over 400-fold. We analyze the large spatial variations seen within countries and across the continent for key MDG indicator groups, focusing on children under 5 and women of childbearing age, and find that substantial differences in health and development indicators can result through using only national level statistics, compared to accounting for subnational variation. Progress toward meeting the MDGs will be measured through national-level indicators that mask substantial inequalities and heterogeneities across nations. Cartographic approaches are providing opportunities for quantitative assessments of these inequalities and the targeting of interventions, but demographic spatial datasets to support such efforts remain reliant on coarse and outdated input data for accurately locating risk groups. We have shown here that sufficient data exist to map the

  8. Millennium development health metrics: where do Africa's children and women of childbearing age live?

    PubMed

    Tatem, Andrew J; Garcia, Andres J; Snow, Robert W; Noor, Abdisalan M; Gaughan, Andrea E; Gilbert, Marius; Linard, Catherine

    2013-01-01

    The Millennium Development Goals (MDGs) have prompted an expansion in approaches to deriving health metrics to measure progress toward their achievement. Accurate measurements should take into account the high degrees of spatial heterogeneity in health risks across countries, and this has prompted the development of sophisticated cartographic techniques for mapping and modeling risks. Conversion of these risks to relevant population-based metrics requires equally detailed information on the spatial distribution and attributes of the denominator populations. However, spatial information on age and sex composition over large areas is lacking, prompting many influential studies that have rigorously accounted for health risk heterogeneities to overlook the substantial demographic variations that exist subnationally and merely apply national-level adjustments. Here we outline the development of high resolution age- and sex-structured spatial population datasets for Africa in 2000-2015 built from over a million measurements from more than 20,000 subnational units, increasing input data detail from previous studies by over 400-fold. We analyze the large spatial variations seen within countries and across the continent for key MDG indicator groups, focusing on children under 5 and women of childbearing age, and find that substantial differences in health and development indicators can result through using only national level statistics, compared to accounting for subnational variation. Progress toward meeting the MDGs will be measured through national-level indicators that mask substantial inequalities and heterogeneities across nations. Cartographic approaches are providing opportunities for quantitative assessments of these inequalities and the targeting of interventions, but demographic spatial datasets to support such efforts remain reliant on coarse and outdated input data for accurately locating risk groups. We have shown here that sufficient data exist to map the

  9. Restaurant Energy Use Benchmarking Guideline

    SciTech Connect

    Hedrick, R.; Smith, V.; Field, K.

    2011-07-01

    A significant operational challenge for food service operators is defining energy use benchmark metrics to compare against the performance of individual stores. Without metrics, multiunit operators and managers have difficulty identifying which stores in their portfolios require extra attention to bring their energy performance in line with expectations. This report presents a method whereby multiunit operators may use their own utility data to create suitable metrics for evaluating their operations.
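
    A minimal sketch of the benchmarking idea: normalize each store's utility data to an energy use intensity and flag stores that fall well above the portfolio average. The per-square-foot normalization and the z-score threshold are assumptions; operators might instead normalize per transaction or per operating hour.

```python
import statistics

def energy_use_intensity(annual_kwh, floor_area_sqft):
    """Site electricity EUI in kWh per square foot per year."""
    return annual_kwh / floor_area_sqft

def flag_outliers(store_eui, z_threshold=1.25):
    """Flag stores whose EUI is well above the portfolio mean."""
    mean = statistics.mean(store_eui.values())
    sd = statistics.stdev(store_eui.values())
    return [s for s, eui in store_eui.items() if (eui - mean) / sd > z_threshold]

# Placeholder annual kWh and floor areas for four stores.
stores = {"Store A": energy_use_intensity(420_000, 2400),
          "Store B": energy_use_intensity(510_000, 2500),
          "Store C": energy_use_intensity(800_000, 2300),
          "Store D": energy_use_intensity(430_000, 2450)}
print(flag_outliers(stores))  # ['Store C']
```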

  10. Metric Madness

    ERIC Educational Resources Information Center

    Kroon, Cindy D.

    2007-01-01

    Created for a Metric Day activity, Metric Madness is a board game for two to four players. Students review and practice metric vocabulary, measurement, and calculations by playing the game. Playing time is approximately twenty to thirty minutes.

  11. Benchmarking short sequence mapping tools

    PubMed Central

    2013-01-01

    Background The development of next-generation sequencing instruments has led to the generation of millions of short sequences in a single run. The process of aligning these reads to a reference genome is time consuming and demands the development of fast and accurate alignment tools. However, the current proposed tools make different compromises between the accuracy and the speed of mapping. Moreover, many important aspects are overlooked while comparing the performance of a newly developed tool to the state of the art. Therefore, there is a need for an objective evaluation method that covers all the aspects. In this work, we introduce a benchmarking suite to extensively analyze sequencing tools with respect to various aspects and provide an objective comparison. Results We applied our benchmarking tests on 9 well known mapping tools, namely, Bowtie, Bowtie2, BWA, SOAP2, MAQ, RMAP, GSNAP, Novoalign, and mrsFAST (mrFAST) using synthetic data and real RNA-Seq data. MAQ and RMAP are based on building hash tables for the reads, whereas the remaining tools are based on indexing the reference genome. The benchmarking tests reveal the strengths and weaknesses of each tool. The results show that no single tool outperforms all others in all metrics. However, Bowtie maintained the best throughput for most of the tests while BWA performed better for longer read lengths. The benchmarking tests are not restricted to the mentioned tools and can be further applied to others. Conclusion The mapping process is still a hard problem that is affected by many factors. In this work, we provided a benchmarking suite that reveals and evaluates the different factors affecting the mapping process. Still, there is no tool that outperforms all of the others in all the tests. Therefore, the end user should clearly specify his needs in order to choose the tool that provides the best results. PMID:23758764

  12. Benchmarking and Its Relevance to the Library and Information Sector. Interim Findings of "Best Practice Benchmarking in the Library and Information Sector," a British Library Research and Development Department Project.

    ERIC Educational Resources Information Center

    Kinnell, Margaret; Garrod, Penny

    This British Library Research and Development Department study assesses current activities and attitudes toward quality management in library and information services (LIS) in the academic sector as well as the commercial/industrial sector. Definitions and types of benchmarking are described, and the relevance of benchmarking to LIS is evaluated.…

  13. Development of water quality criteria and screening benchmarks for 2,4,6 trinitrotoluene

    SciTech Connect

    Talmage, S.S.; Opresko, D.M.

    1995-12-31

    Munitions compounds and their degradation products are present at many Army Ammunition Plant Superfund sites. Neither Water Quality Criteria (WQC) for aquatic organisms nor safe soil levels for terrestrial plants and animals have been developed for munitions compounds including trinitrotoluene (TNT). Data are available for the calculation of an acute WQC for TNT according to US EPA guidelines but are insufficient to calculate a chronic criterion. However, available data can be used to determine a Secondary Chronic Value (SCV) and to determine lowest chronic values for fish and daphnids (used by EPA in the absence of criteria). Based on data from eight genera of aquatic organisms, an acute WQC of 0.566 mg/L was calculated. Using available data, a SCV of 0.137 mg/L was calculated. Lowest chronic values for fish and for daphnids are 0.04 mg/L and 1.03 mg/L, respectively. The lowest concentration that affected the growth of aquatic plants was 1.0 mg/L. For terrestrial animals, data from studies of laboratory animals can be extrapolated to derive screening benchmarks in the same way in which human toxicity values are derived from laboratory animal data. For terrestrial animals, a no-observed-adverse-effect-level (NOAEL) for reproductive effects of 1.60 mg/kg/day was determined from a subchronic laboratory feeding study with rats. By scaling the test NOAEL on the basis of differences in body size, screening benchmarks were calculated for oral intake for selected mammalian wildlife species. Screening benchmarks were also derived for protection of benthic organisms in sediment, for soil invertebrates, and for terrestrial plants.
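
    The body-size scaling step can be sketched with a commonly cited cross-species allometric adjustment; the exponent and the example body weights are assumptions for illustration, not values taken from the report.

```python
def wildlife_noael(test_noael_mg_kg_day, test_bw_kg, wildlife_bw_kg, exponent=0.25):
    """Scale a test-species NOAEL to a wildlife species by body weight.

    Uses a commonly cited cross-species scaling of the form
    NOAEL_w = NOAEL_t * (BW_t / BW_w) ** exponent; exponent and body
    weights here are assumptions.
    """
    return test_noael_mg_kg_day * (test_bw_kg / wildlife_bw_kg) ** exponent

# Rat reproductive NOAEL for TNT reported above, scaled to a hypothetical
# 0.5-kg wildlife receptor assuming a 0.35-kg test rat.
print(wildlife_noael(1.60, test_bw_kg=0.35, wildlife_bw_kg=0.5))  # ≈ 1.46 mg/kg/day
```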

  14. Development of Methodologies, Metrics, and Tools for Investigating Human-Robot Interaction in Space Robotics

    NASA Technical Reports Server (NTRS)

    Ezer, Neta; Zumbado, Jennifer Rochlis; Sandor, Aniko; Boyer, Jennifer

    2011-01-01

    Human-robot systems are expected to have a central role in future space exploration missions that extend beyond low-earth orbit [1]. As part of a directed research project funded by NASA's Human Research Program (HRP), researchers at the Johnson Space Center have started to use a variety of techniques, including literature reviews, case studies, knowledge capture, field studies, and experiments to understand critical human-robot interaction (HRI) variables for current and future systems. Activities accomplished to date include observations of the International Space Station's Special Purpose Dexterous Manipulator (SPDM), Robonaut, and Space Exploration Vehicle (SEV), as well as interviews with robotics trainers, robot operators, and developers of gesture interfaces. A survey of methods and metrics used in HRI was completed to identify those most applicable to space robotics. These methods and metrics included techniques and tools associated with task performance, the quantification of human-robot interactions and communication, usability, human workload, and situation awareness. The need for more research in areas such as natural interfaces, compensations for loss of signal and poor video quality, psycho-physiological feedback, and common HRI testbeds was identified. The initial findings from these activities and planned future research are discussed.

  15. [Development of lead benchmarks for soil based on human blood lead level in China].

    PubMed

    Zhang, Hong-zhen; Luo, Yong-ming; Zhang, Hai-bo; Song, Jing; Xia, Jia-qi; Zhao, Qi-guo

    2009-10-15

    Lead benchmarks for soil are mainly established based on the blood lead concentration of children, because lead plays a dramatically negative role in children's cognitive development and intellectual performance, and soil lead is a main lead exposure source for children. Based on an extensive collection of available domestic data, lead levels in air and drinking water are 0.12-1.0 microg x m(-3) and 2-10 microg x L(-1), respectively; ingestion of lead from food by children 0-6 years old is 10-25 microg x d(-1); and the geometric mean blood lead concentration of women of child-bearing age is 4.79 microg x dL(-1), with a GSD of 1.48. Lead benchmarks for soil were calculated with the Integrated Exposure Uptake Biokinetic Model (IEUBK) and the Adult Lead Model (ALM). The results showed that the lead criteria values for residential land and commercial/industrial land were 282 mg x kg(-1) and 627 mg x kg(-1), respectively, which are slightly lower than the corresponding values in the U.S.A. and U.K. Parameter sensitivity analysis indicated that the lead exposure scenario of children in China differs significantly from that of children in developed countries and that children's lead exposure levels in China are markedly higher. Urgent work is required to study the relationship between lead exposure scenarios and children's blood lead levels and to establish risk assessment guidelines for lead-contaminated soil based on human blood lead levels. PMID:19968127
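    The Adult Lead Model referenced above is, at its core, an inversion of a steady-state blood lead relationship: it solves for the soil concentration at which an upper-percentile fetal blood lead reaches a target. A minimal sketch of that inversion is given below; the parameter names and every numerical input other than the quoted geometric mean (4.79 microg/dL) and GSD (1.48) are assumptions for illustration, not the paper's inputs, so the printed value is not the paper's 627 mg/kg result.

    ```python
    # Hedged sketch of an ALM-style inversion: solve a steady-state blood lead model
    # for the soil concentration at which the 95th-percentile fetal blood lead equals
    # a target. All parameter values below other than the baseline blood lead and GSD
    # quoted in the abstract are assumed for illustration.

    def alm_soil_benchmark(pbb_fetal_target, pbb_adult_baseline, gsd, r_fetal_maternal,
                           bksf, ir_soil_g_d, abs_fraction, ef_d_yr, at_d_yr=365.0):
        """Return an illustrative soil lead benchmark in ug/g (mg/kg)."""
        # Central-tendency adult blood lead consistent with the fetal target.
        central_adult_pbb = pbb_fetal_target / (r_fetal_maternal * gsd ** 1.645)
        # Blood lead increment (ug/dL) per unit soil concentration (ug/g).
        uptake_per_ug_g = bksf * ir_soil_g_d * abs_fraction * ef_d_yr / at_d_yr
        return (central_adult_pbb - pbb_adult_baseline) / uptake_per_ug_g

    benchmark = alm_soil_benchmark(
        pbb_fetal_target=10.0,    # ug/dL fetal blood lead target (assumed)
        pbb_adult_baseline=4.79,  # ug/dL geometric-mean baseline quoted above
        gsd=1.48,                 # geometric standard deviation quoted above
        r_fetal_maternal=0.9,     # fetal/maternal blood lead ratio (assumed)
        bksf=0.4,                 # ug/dL increase per ug/day lead intake (assumed)
        ir_soil_g_d=0.050,        # soil and dust ingestion rate, g/day (assumed)
        abs_fraction=0.12,        # gastrointestinal absorption fraction (assumed)
        ef_d_yr=219,              # exposure frequency, days/year (assumed)
    )
    print(f"Illustrative ALM-style soil lead benchmark: {benchmark:.0f} mg/kg")
    ```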

  16. Recognition and Assessment of Eosinophilic Esophagitis: The Development of New Clinical Outcome Metrics

    PubMed Central

    Nguyen, Nathalie; Menard-Katcher, Calies

    2015-01-01

    Eosinophilic esophagitis (EoE) is a chronic, food-allergic disease manifest by symptoms of esophageal dysfunction and dense esophageal eosinophilia in which other causes have been excluded. Treatments include dietary restriction of the offending allergens, topical corticosteroids, and dilation of strictures. EoE has become increasingly prevalent over the past decade and has been increasingly recognized as a major health concern. Advancements in research and clinical needs have led to the development of novel pediatric- and adult-specific clinical outcome metrics (COMs). These COMs provide ways to measure clinically relevant features in EoE and set the stage for measuring outcomes in future therapeutic trials. In this article, we review novel symptom measurement assessments, the use of radiographic imaging to serve as a metric for therapeutic interventions, recently developed standardized methods for endoscopic assessment, novel techniques to evaluate esophageal mucosal inflammation, and methods for functional assessment of the esophagus. These advancements, in conjunction with current consensus recommendations, will improve the clinical assessment of patients with EoE. PMID:27330494

  17. Development and evaluation of aperture-based complexity metrics using film and EPID measurements of static MLC openings

    SciTech Connect

    Götstedt, Julia; Karlsson Hauer, Anna; Bäck, Anna

    2015-07-15

    Purpose: Complexity metrics have been suggested as a complement to measurement-based quality assurance for intensity modulated radiation therapy (IMRT) and volumetric modulated arc therapy (VMAT). However, these metrics have not yet been sufficiently validated. This study develops and evaluates new aperture-based complexity metrics in the context of static multileaf collimator (MLC) openings and compares them to previously published metrics. Methods: This study develops the converted aperture metric and the edge area metric. The converted aperture metric is based on small and irregular parts within the MLC opening that are quantified as measured distances between MLC leaves. The edge area metric is based on the relative size of the region around the edges defined by the MLC. Another metric suggested in this study is the circumference/area ratio. Earlier defined aperture-based complexity metrics—the modulation complexity score, the edge metric, the ratio monitor units (MU)/Gy, the aperture area, and the aperture irregularity—are compared to the newly proposed metrics. A set of small and irregular static MLC openings are created which simulate individual IMRT/VMAT control points of various complexities. These are measured with both an amorphous silicon electronic portal imaging device and EBT3 film. The differences between calculated and measured dose distributions are evaluated using a pixel-by-pixel comparison with two global dose difference criteria of 3% and 5%. The extent of the dose differences, expressed in terms of pass rate, is used as a measure of the complexity of the MLC openings and used for the evaluation of the metrics compared in this study. The different complexity scores are calculated for each created static MLC opening. The correlation between the calculated complexity scores and the extent of the dose differences (pass rate) are analyzed in scatter plots and using Pearson’s r-values. Results: The complexity scores calculated by the edge
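    The abstract does not reproduce the formal definitions of the new metrics, but the simplest one mentioned, the circumference/area ratio, can be illustrated for a single static MLC opening. The sketch below builds an aperture from per-leaf-pair gaps that share a common bank position, a simplifying assumption; the leaf width and gap values are invented for demonstration.

    ```python
    # Illustrative sketch of an aperture-based complexity score: the circumference/area
    # ratio of one MLC opening built from per-leaf-pair gaps. Assumes all openings share
    # a common bank position (a simplification); leaf width and gaps are toy values.
    import numpy as np

    def circumference_area_ratio(gaps_mm, leaf_width_mm=5.0):
        """Perimeter-to-area ratio (1/mm) of an aperture defined by leaf-pair gaps."""
        gaps = np.asarray(gaps_mm, dtype=float)
        open_pairs = gaps > 0
        area = np.sum(gaps[open_pairs]) * leaf_width_mm
        # Perimeter: two bank-side edges of one leaf width per open pair, plus the
        # steps between adjacent pairs (zero padding adds the top and bottom edges).
        padded = np.concatenate(([0.0], gaps, [0.0]))
        steps = np.sum(np.abs(np.diff(padded)))
        perimeter = 2 * leaf_width_mm * np.count_nonzero(open_pairs) + steps
        return perimeter / area if area > 0 else np.inf

    regular = [20, 20, 20, 20, 20]   # wide, smooth opening  -> ratio ~0.18 1/mm
    irregular = [2, 15, 3, 18, 2]    # small, jagged opening -> ratio ~0.55 1/mm
    print(circumference_area_ratio(regular), circumference_area_ratio(irregular))
    ```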

  18. Development of a Computer Program for Analyzing Preliminary Aircraft Configurations in Relationship to Emerging Agility Metrics

    NASA Technical Reports Server (NTRS)

    Bauer, Brent

    1993-01-01

    This paper discusses the development of a FORTRAN computer code to perform agility analysis on aircraft configurations. This code is to be part of the NASA-Ames ACSYNT (AirCraft SYNThesis) design code. This paper begins with a discussion of contemporary agility research in the aircraft industry and a survey of a few agility metrics. The methodology, techniques and models developed for the code are then presented. Finally, example trade studies using the agility module along with ACSYNT are illustrated. These trade studies were conducted using a Northrop F-20 Tigershark aircraft model. The studies show that the agility module is effective in analyzing the influence of common parameters such as thrust-to-weight ratio and wing loading on agility criteria. The module can compare the agility potential between different configurations. In addition, one study illustrates the module's ability to optimize a configuration's agility performance.

  19. Software development predictors, error analysis, reliability models and software metric analysis

    NASA Technical Reports Server (NTRS)

    Basili, Victor

    1983-01-01

    The use of dynamic characteristics as predictors for software development was studied. It was found that there are some significant factors that could be useful as predictors. From a study on software errors and complexity, it was shown that meaningful results can be obtained which allow insight into software traits and the environment in which it is developed. Reliability models were studied. The research included the field of program testing because the validity of some reliability models depends on the answers to some unanswered questions about testing. In studying software metrics, data collected from seven software engineering laboratory (FORTRAN) projects were examined and three effort reporting accuracy checks were applied to demonstrate the need to validate a data base. Results are discussed.

  20. Translating diagnostic assays from the laboratory to the clinic: analytical and clinical metrics for device development and evaluation.

    PubMed

    Borysiak, Mark D; Thompson, Matthew J; Posner, Jonathan D

    2016-04-21

    As lab-on-a-chip health diagnostic technologies mature, there is a push to translate them from the laboratory to the clinic. For these diagnostics to achieve maximum impact on patient care, scientists and engineers developing the tests should understand the analytical and clinical statistical metrics that determine the efficacy of the test. Appreciating and using these metrics will benefit test developers by providing consistent measures to evaluate analytical and clinical test performance, as well as guide the design of tests that will most benefit clinicians and patients. This paper is broken into four sections that discuss metrics related to general stages of development including: (1) laboratory assay development (analytical sensitivity, limit of detection, analytical selectivity, and trueness/precision), (2) pre-clinical development (diagnostic sensitivity, diagnostic specificity, clinical cutoffs, and receiver-operator curves), (3) clinical use (prevalence, predictive values, and likelihood ratios), and (4) case studies from existing clinical data for tests relevant to the lab-on-a-chip community (HIV, group A strep, and chlamydia). Each section contains definitions of recommended statistical measures, as well as examples demonstrating the importance of these metrics at various stages of the development process. Increasing the use of these metrics in lab-on-a-chip research will improve the rigor of diagnostic performance reporting and provide a better understanding of how to design tests that will ultimately meet clinical needs. PMID:27043204
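    The pre-clinical and clinical-use metrics listed above follow directly from a 2x2 confusion matrix and the disease prevalence. The sketch below shows the standard definitions; the counts and the 5% prevalence are invented for illustration.

    ```python
    # Standard analytical/clinical test metrics computed from a hypothetical 2x2
    # confusion matrix; the counts and prevalence below are invented for illustration.

    def diagnostic_metrics(tp, fp, fn, tn, prevalence):
        sens = tp / (tp + fn)                  # diagnostic sensitivity
        spec = tn / (tn + fp)                  # diagnostic specificity
        lr_pos = sens / (1 - spec)             # positive likelihood ratio
        lr_neg = (1 - sens) / spec             # negative likelihood ratio
        # Predictive values depend on disease prevalence in the tested population.
        ppv = sens * prevalence / (sens * prevalence + (1 - spec) * (1 - prevalence))
        npv = spec * (1 - prevalence) / ((1 - sens) * prevalence + spec * (1 - prevalence))
        return dict(sensitivity=sens, specificity=spec, LR_pos=lr_pos,
                    LR_neg=lr_neg, PPV=ppv, NPV=npv)

    # Hypothetical evaluation of a rapid test: 90 true positives, 10 false negatives,
    # 15 false positives, 185 true negatives, deployed where prevalence is 5%.
    print(diagnostic_metrics(tp=90, fp=15, fn=10, tn=185, prevalence=0.05))
    ```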

  1. Benchmark Development in Support of Generation-IV Reactor Validation (IRPhEP 2010 Handbook)

    SciTech Connect

    John D. Bess; J. Blair Briggs

    2010-06-01

    The March 2010 edition of the International Reactor Physics Experiment Evaluation Project (IRPhEP) Handbook includes additional benchmark data that can be implemented in the validation of data and methods for Generation IV (GEN-IV) reactor designs. Evaluations supporting sodium-cooled fast reactor (SFR) efforts include the initial isothermal tests of the Fast Flux Test Facility (FFTF) at the Hanford Site, the Zero Power Physics Reactor (ZPPR) 10B and 10C experiments at the Idaho National Laboratory (INL), and the burn-up reactivity coefficient of Japan’s JOYO reactor. An assessment of Russia’s BFS-61 assemblies at the Institute of Physics and Power Engineering (IPPE) provides additional information for lead-cooled fast reactor (LFR) systems. Benchmarks in support of the very high temperature reactor (VHTR) project include evaluations of the HTR-PROTEUS experiments performed at the Paul Scherrer Institut (PSI) in Switzerland and the start-up core physics tests of Japan’s High Temperature Engineering Test Reactor. The critical configuration of the Power Burst Facility (PBF) at the INL which used ternary ceramic fuel, U(18)O2-CaO-ZrO2, is of interest for fuel cycle research and development (FCR&D) and has some similarities to “inert-matrix” fuels that are of interest in GEN-IV advanced reactor design. Two additional evaluations were revised to include additional evaluated experimental data, in support of light water reactor (LWR) and heavy water reactor (HWR) research; these include reactor physics experiments at Brazil’s IPEN/MB-01 Research Reactor Facility and the French High Flux Reactor (RHF), respectively. The IRPhEP Handbook now includes data from 45 experimental series (representing 24 reactor facilities) and represents contributions from 15 countries. These experimental measurements represent large investments of infrastructure, experience, and cost that have been evaluated and preserved as benchmarks for the validation of methods and collection of

  2. Toward the Development of Cognitive Task Difficulty Metrics to Support Intelligence Analysis Research

    SciTech Connect

    Greitzer, Frank L.

    2005-08-08

    Intelligence analysis is a cognitively complex task that is the subject of considerable research aimed at developing methods and tools to aid the analysis process. To support such research, it is necessary to characterize the difficulty or complexity of intelligence analysis tasks in order to facilitate assessments of the impact or effectiveness of tools that are being considered for deployment. A number of informal accounts of "What makes intelligence analysis hard" are available, but there has been no attempt to establish a more rigorous characterization with well-defined difficulty factors or dimensions. This paper takes an initial step in this direction by describing a set of proposed difficulty metrics based on cognitive principles.

  3. Benchmarking Using Basic DBMS Operations

    NASA Astrophysics Data System (ADS)

    Crolotte, Alain; Ghazal, Ahmad

    The TPC-H benchmark proved to be successful in the decision support area. Many commercial database vendors and their related hardware vendors used these benchmarks to show the superiority and competitive edge of their products. However, over time, the TPC-H became less representative of industry trends as vendors kept tuning their databases to this benchmark-specific workload. In this paper, we present XMarq, a simple benchmark framework that can be used to compare various software/hardware combinations. Our benchmark model is currently composed of 25 queries that measure the performance of basic operations such as scans, aggregations, joins and index access. This benchmark model is based on the TPC-H data model due to its maturity and well-understood data generation capability. We also propose metrics to evaluate single-system performance and compare two systems. Finally we illustrate the effectiveness of this model by showing experimental results comparing two systems under different conditions.
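    The XMarq metrics themselves are not given in the abstract, so the sketch below is only a generic illustration of the two tasks it names: summarizing one system's query timings into a single score and comparing two systems. The choice of total elapsed time and geometric-mean speedup is an assumption, not the paper's definition.

    ```python
    # Hedged sketch: one way to summarize per-query timings into a single-system score
    # and a two-system comparison. The aggregation choices (total elapsed time,
    # geometric-mean speedup) are illustrative assumptions, not XMarq's definitions.
    import math

    def single_system_score(times_s):
        """Total elapsed time (s) over the query workload: lower is better."""
        return sum(times_s)

    def compare_systems(times_a, times_b):
        """Per-query speedup of system B over system A, summarized by the geometric mean."""
        speedups = [a / b for a, b in zip(times_a, times_b)]
        return math.exp(sum(math.log(s) for s in speedups) / len(speedups))

    system_a = [12.0, 3.4, 8.8, 1.2]   # seconds per query (toy numbers)
    system_b = [10.5, 2.9, 9.6, 0.8]
    print(single_system_score(system_a), single_system_score(system_b),
          compare_systems(system_a, system_b))
    ```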

  4. Stakeholder insights on the planning and development of an independent benchmark standard for responsible food marketing.

    PubMed

    Cairns, Georgina; Macdonald, Laura

    2016-06-01

    A mixed methods qualitative survey investigated stakeholder responses to the proposal to develop an independently defined, audited and certifiable set of benchmark standards for responsible food marketing. Its purpose was to inform the policy planning and development process. A majority of respondents were supportive of the proposal. A majority also viewed the engagement and collaboration of a broad base of stakeholders in its planning and development as potentially beneficial. Positive responses were associated with views that policy controls can and should be extended to include all forms of marketing, that obesity and non-communicable disease prevention and control is a shared responsibility and an urgent policy priority, and with prior experience of independent standardisation as a policy lever for good practice. Strong policy leadership, demonstrable utilisation of the evidence base in its development and deployment, and a conceptually clear communications plan were identified as priority targets for future policy planning. Future research priorities include generating more evidence on the feasibility of developing an effective community of practice and theory of change, the strengths and limitations of these, and developing an evidence-based step-wise communications strategy. PMID:27085486

  5. Development and Implementation of a Metric Inservice Program for Teachers at Samuel Morse Elementary School.

    ERIC Educational Resources Information Center

    Butler, Thelma R.

    A model for organizing an introductory in-service workshop for elementary school teachers in the basic fundamentals and contents of the metric system is presented. Data collected from various questionnaires and tests suggest that the program improved the teacher's performance in presenting the metric system and that this improvement had a positive…

  6. Subsystem Details for the Fiscal Year 2004 Advanced Life Support Research and Technology Development Metric

    NASA Technical Reports Server (NTRS)

    Hanford, Anthony J.

    2004-01-01

    This document provides values at the assembly level for the subsystems described in the Fiscal Year 2004 Advanced Life Support Research and Technology Development Metric (Hanford, 2004). Hanford (2004) summarizes the subordinate computational values for the Advanced Life Support Research and Technology Development (ALS R&TD) Metric at the subsystem level, while this manuscript provides a summary at the assembly level. Hanford (2004) lists mass, volume, power, cooling, and crewtime for each mission examined by the ALS R&TD Metric according to the nominal organization for the Advanced Life Support (ALS) elements. The values in the tables below, Table 2.1 through Table 2.8, list the assemblies, using the organization and names within the Advanced Life Support Sizing Analysis Tool (ALSSAT) for each ALS element. These tables specifically detail mass, volume, power, cooling, and crewtime. Additionally, mass and volume are designated in terms of values associated with initial hardware and resupplied hardware just as they are within ALSSAT. The overall subsystem values are listed on the line following each subsystem entry. These values are consistent with those reported in Hanford (2004) for each listed mission. Any deviations between these values and those in Hanford (2004) arise from differences in when individual numerical values are rounded within each report, and therefore the resulting minor differences should not concern even a careful reader. Hanford (2004) uses the units kW(sub e) and kW(sub th) for power and cooling, respectively, while the nomenclature below uses W(sub e) and W(sub th), which is consistent with the native units within ALSSAT. The assemblies, as specified within ALSSAT, are listed in bold below their respective subsystems. When recognizable assembly components are not listed within ALSSAT, a summary of the assembly is provided on the same line as the entry for the assembly. Assemblies with one or more recognizable components are further

  7. Evaluation of Two Crew Module Boilerplate Tests Using Newly Developed Calibration Metrics

    NASA Technical Reports Server (NTRS)

    Horta, Lucas G.; Reaves, Mercedes C.

    2012-01-01

    The paper discusses an application of multi-dimensional calibration metrics to evaluate pressure data from water drop tests of the Max Launch Abort System (MLAS) crew module boilerplate. Specifically, three metrics are discussed: 1) a metric to assess the probability of enveloping the measured data with the model, 2) a multi-dimensional orthogonality metric to assess model adequacy between test and analysis, and 3) a prediction error metric to conduct sensor placement to minimize pressure prediction errors. Data from similar (nearly repeated) capsule drop tests show significant variability in the measured pressure responses. When compared to expected variability using model predictions, it is demonstrated that the measured variability cannot be explained by the model under the current uncertainty assumptions.

  8. Benchmarking HRD.

    ERIC Educational Resources Information Center

    Ford, Donald J.

    1993-01-01

    Discusses benchmarking, the continuous process of measuring one's products, services, and practices against those recognized as leaders in that field to identify areas for improvement. Examines ways in which benchmarking can benefit human resources functions. (JOW)

  9. Benchmarking progress in tackling the challenges of intellectual property, and access to medicines in developing countries.

    PubMed Central

    Musungu, Sisule F.

    2006-01-01

    The impact of intellectual property protection in the pharmaceutical sector on developing countries has been a central issue in the fierce debate during the past 10 years in a number of international fora, particularly the World Trade Organization (WTO) and WHO. The debate centres on whether the intellectual property system is: (1) providing sufficient incentives for research and development into medicines for diseases that disproportionately affect developing countries; and (2) restricting access to existing medicines for these countries. The Doha Declaration was adopted at WTO in 2001 and the Commission on Intellectual Property, Innovation and Public Health was established at WHO in 2004, but their respective contributions to tackling intellectual property-related challenges are disputed. Objective parameters are needed to measure whether a particular series of actions, events, decisions or processes contribute to progress in this area. This article proposes six possible benchmarks for intellectual property-related challenges with regard to the development of medicines and ensuring access to medicines in developing countries. PMID:16710545

  10. A Locally Weighted Fixation Density-Based Metric for Assessing the Quality of Visual Saliency Predictions.

    PubMed

    Gide, Milind S; Karam, Lina J

    2016-08-01

    With the increased focus on visual attention (VA) in the last decade, a large number of computational visual saliency methods have been developed over the past few years. These models are traditionally evaluated by using performance evaluation metrics that quantify the match between predicted saliency and fixation data obtained from eye-tracking experiments on human observers. Though a considerable number of such metrics have been proposed in the literature, there are notable problems in them. In this paper, we discuss shortcomings in the existing metrics through illustrative examples and propose a new metric that uses local weights based on fixation density, which overcomes these flaws. To compare the performance of our proposed metric at assessing the quality of saliency prediction with other existing metrics, we construct a ground-truth subjective database in which saliency maps obtained from 17 different VA models are evaluated by 16 human observers on a five-point categorical scale in terms of their visual resemblance with corresponding ground-truth fixation density maps obtained from eye-tracking data. The metrics are evaluated by correlating metric scores with the human subjective ratings. The correlation results show that the proposed evaluation metric outperforms all other popular existing metrics. In addition, the constructed database and corresponding subjective ratings provide an insight into which of the existing metrics and future metrics are better at estimating the quality of saliency prediction and can be used as a benchmark. PMID:27295671
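    The locally weighted metric itself is not specified in the abstract, but the kind of existing score it is compared against can be sketched. Below is the conventional linear correlation coefficient between a predicted saliency map and a ground-truth fixation density map, with synthetic maps standing in for real data.

    ```python
    # Sketch of a conventional baseline metric for saliency evaluation: the linear
    # correlation coefficient (CC) between a predicted saliency map and a ground-truth
    # fixation density map. This is NOT the locally weighted metric proposed in the
    # paper; it is the kind of existing score such a metric is benchmarked against.
    import numpy as np

    def saliency_cc(saliency_map, fixation_density_map):
        s = (saliency_map - saliency_map.mean()) / (saliency_map.std() + 1e-12)
        f = (fixation_density_map - fixation_density_map.mean()) / (fixation_density_map.std() + 1e-12)
        return float(np.mean(s * f))

    rng = np.random.default_rng(0)
    gt = rng.random((48, 64))                        # stand-in fixation density map
    pred = 0.7 * gt + 0.3 * rng.random((48, 64))     # prediction correlated with ground truth
    print(f"CC = {saliency_cc(pred, gt):.3f}")
    ```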

  11. Developing meaningful metrics of clinical productivity for military treatment facility anesthesiology departments and operative services.

    PubMed

    Mongan, Paul D; Van der Schuur, L T Brian; Damiano, Louis A; Via, Darin K

    2003-11-01

    Comparing clinical productivity is important for strategic planning and the evaluation of resource allocation in any large organization. This process of benchmarking performance allows for the comparison of groups with similar characteristics. However, this process is often difficult when comparing the operative service productivity of large and small military treatment facilities because of the significant heterogeneity in mission focus and case complexity. In this article, we describe the application of a new method of benchmarking operative service productivity based on normalizing data for operating room sites, cases, and total American Society of Anesthesiologists units produced per hour. We demonstrate how these benchmarks allow for valid comparisons of operative service productivity among these military treatment facilities and how the data could be used in expanding or contracting operating locations. In addition, these benchmarks are compared with those derived from the use of this system in the civilian sector. PMID:14680041

  12. Development and Application of Benchmark Examples for Mixed-Mode I/II Quasi-Static Delamination Propagation Predictions

    NASA Technical Reports Server (NTRS)

    Krueger, Ronald

    2012-01-01

    The development of benchmark examples for quasi-static delamination propagation prediction is presented and demonstrated for a commercial code. The examples are based on finite element models of the Mixed-Mode Bending (MMB) specimen. The examples are independent of the analysis software used and allow the assessment of the automated delamination propagation prediction capability in commercial finite element codes based on the virtual crack closure technique (VCCT). First, quasi-static benchmark examples were created for the specimen. Second, starting from an initially straight front, the delamination was allowed to propagate under quasi-static loading. Third, the load-displacement relationship from a propagation analysis and the benchmark results were compared, and good agreement could be achieved by selecting the appropriate input parameters. Good agreement between the results obtained from the automated propagation analysis and the benchmark results could be achieved by selecting input parameters that had previously been determined during analyses of mode I Double Cantilever Beam and mode II End Notched Flexure specimens. The benchmarking procedure proved valuable by highlighting the issues associated with choosing the input parameters of the particular implementation. Overall the results are encouraging, but further assessment for mixed-mode delamination fatigue onset and growth is required.

  13. Development and Application of Benchmark Examples for Mixed-Mode I/II Quasi-Static Delamination Propagation Predictions

    NASA Technical Reports Server (NTRS)

    Krueger, Ronald

    2012-01-01

    The development of benchmark examples for quasi-static delamination propagation prediction is presented. The example is based on a finite element model of the Mixed-Mode Bending (MMB) specimen for 50% mode II. The benchmarking is demonstrated for Abaqus/Standard, however, the example is independent of the analysis software used and allows the assessment of the automated delamination propagation prediction capability in commercial finite element codes based on the virtual crack closure technique (VCCT). First, a quasi-static benchmark example was created for the specimen. Second, starting from an initially straight front, the delamination was allowed to propagate under quasi-static loading. Third, the load-displacement as well as delamination length versus applied load/displacement relationships from a propagation analysis and the benchmark results were compared, and good agreement could be achieved by selecting the appropriate input parameters. The benchmarking procedure proved valuable by highlighting the issues associated with choosing the input parameters of the particular implementation. Overall, the results are encouraging, but further assessment for mixed-mode delamination fatigue onset and growth is required.

  14. Public Interest Energy Research (PIER) Program Development of a Computer-based Benchmarking and Analytical Tool. Benchmarking and Energy & Water Savings Tool in Dairy Plants (BEST-Dairy)

    SciTech Connect

    Xu, Tengfang; Flapper, Joris; Ke, Jing; Kramer, Klaas; Sathaye, Jayant

    2012-02-01

    The overall goal of the project is to develop a computer-based benchmarking and energy and water savings tool (BEST-Dairy) for use in the California dairy industry - including four dairy processes - cheese, fluid milk, butter, and milk powder. The BEST-Dairy tool developed in this project provides three options for the user to benchmark each of the dairy products included in the tool, with each option differentiated by the specific detail level of process or plant, i.e., 1) plant level; 2) process-group level; and 3) process-step level. For each detail level, the tool accounts for differences in production and other variables affecting energy use in dairy processes. The dairy products include cheese, fluid milk, butter, milk powder, etc. The BEST-Dairy tool can be applied to a wide range of dairy facilities to provide energy and water savings estimates, which are based upon comparisons with the best available reference cases that were established through reviewing information from international and national samples. We have performed and completed alpha- and beta-testing (field testing) of the BEST-Dairy tool, through which feedback from voluntary users in the U.S. dairy industry was gathered to validate and improve the tool's functionality. BEST-Dairy v1.2 was formally published in May 2011, and has been made available for free download from the internet (i.e., http://best-dairy.lbl.gov). A user's manual has been developed and published as the companion documentation for use with the BEST-Dairy tool. In addition, we also carried out technology transfer activities by engaging the dairy industry in the process of tool development and testing, including field testing, technical presentations, and technical assistance throughout the project. To date, users from more than ten countries in addition to those in the U.S. have downloaded the BEST-Dairy tool from the LBNL website. It is expected that the use of the BEST-Dairy tool will advance understanding of energy and water

  15. Analysis of urban development by means of multi-temporal fragmentation metrics from LULC data

    NASA Astrophysics Data System (ADS)

    Sapena, M.; Ruiz, L. A.

    2015-04-01

    The monitoring and modelling of the evolution of urban areas is increasingly attracting the attention of land managers and administrations. New data, tools and methods are being developed and made available for a better understanding of these dynamic areas. We study and analyse the concept of landscape fragmentation by means of GIS and remote sensing techniques, with a particular focus on urban areas. Using LULC data obtained from the European Urban Atlas dataset developed by the local component of the Copernicus Land Monitoring Services (scale 1:10,000), the urban fragmentation of the province of Rome is studied for 2006 and 2012. A selection of indices able to measure the land cover fragmentation level in the landscape is obtained with a tool called IndiFrag, which takes LULC data in vector format as input. In order to monitor urban morphological changes and growth patterns, a new module with additional multi-temporal metrics has been developed for this purpose. These urban fragmentation and multi-temporal indices have been applied to the municipalities and districts of Rome, analysed and interpreted to characterise the quantity, spatial distribution and structure of urban change. This methodology is applicable to different regions, affording a dynamic quantification of urban spatial patterns and urban sprawl. The results show that urban form monitoring with multi-temporal data using these techniques highlights urbanization trends and has great potential to quantify and model the geographic development of metropolitan areas and to analyse its relationship with socioeconomic factors over time.
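    IndiFrag's own index definitions are not reproduced in the abstract, so the sketch below only illustrates the general idea of multi-temporal fragmentation metrics: two simple measures (urban patch count and edge density) computed on rasterized LULC grids for two dates. The toy grids and the 10 m cell size are assumptions for demonstration.

    ```python
    # Illustrative sketch (not IndiFrag itself): urban patch count and edge density
    # computed for two dates from rasterized LULC masks. The toy 1/0 grids and the
    # 10 m cell size stand in for real data.
    import numpy as np
    from scipy import ndimage

    def urban_fragmentation(urban_mask, cell_size_m=10.0):
        """Return (number of urban patches, urban edge density in m per hectare)."""
        _, n_patches = ndimage.label(urban_mask)          # 4-connected patches
        # Count class boundaries between 4-neighbours (urban vs non-urban), then scale.
        horiz = np.sum(urban_mask[:, 1:] != urban_mask[:, :-1])
        vert = np.sum(urban_mask[1:, :] != urban_mask[:-1, :])
        edge_length_m = (horiz + vert) * cell_size_m
        area_ha = urban_mask.size * cell_size_m ** 2 / 10_000.0
        return n_patches, edge_length_m / area_ha

    lulc_2006 = np.zeros((100, 100), dtype=int); lulc_2006[40:60, 40:60] = 1
    lulc_2012 = lulc_2006.copy(); lulc_2012[10:18, 70:78] = 1; lulc_2012[75:80, 15:22] = 1

    for year, grid in (("2006", lulc_2006), ("2012", lulc_2012)):
        patches, edge_density = urban_fragmentation(grid)
        print(year, patches, round(edge_density, 1))
    ```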

  16. Degree-Day Benchmarks for Sparganothis sulfureana (Lepidoptera: Tortricidae) Development in Cranberries.

    PubMed

    Deutsch, Annie E; Rodriguez-Saona, Cesar R; Kyryczenko-Roth, Vera; Sojka, Jayne; Zalapa, Juan E; Steffan, Shawn A

    2014-12-01

    Sparganothis sulfureana Clemens is a severe pest of cranberries in the Midwest and northeast United States. Timing for insecticide applications has relied primarily on calendar dates and pheromone trap-catch; however, abiotic conditions can vary greatly, rendering such methods unreliable as indicators of optimal treatment timing. Phenology models based on degree-day (DD) accrual represent a proven, superior approach to assessing the development of insect populations, particularly for larvae. Previous studies of S. sulfureana development showed that the lower and upper temperature thresholds for larval development were 10.0 and 29.9°C (49.9 and 85.8°F), respectively. We used these thresholds to generate DD accumulations specific to S. sulfureana, and then linked these DD accumulations to discrete biological events observed during S. sulfureana development in Wisconsin and New Jersey cranberries. Here, we provide the DDs associated with flight initiation, peak flight, flight termination, adult life span, preovipositional period, ovipositional period, and egg hatch. These DD accumulations represent key developmental benchmarks, allowing for the creation of a phenology model that facilitates wiser management of S. sulfureana in the cranberry system. PMID:26470078
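    Given the quoted thresholds of 10.0 and 29.9°C, daily degree-days can be accumulated from minimum/maximum temperatures and compared against the reported benchmarks. The sketch below uses a simple threshold-clipped averaging method; the actual study may compute DDs differently (e.g., with a sine method), and the temperature series is invented.

    ```python
    # Minimal degree-day accumulator using the S. sulfureana thresholds quoted above
    # (lower 10.0 degC, upper 29.9 degC). The threshold-clipped averaging method and
    # the toy temperature series are assumptions for illustration.

    LOWER_C, UPPER_C = 10.0, 29.9

    def daily_degree_days(t_min, t_max, lower=LOWER_C, upper=UPPER_C):
        # Daily mean of threshold-clipped min/max, minus the lower threshold.
        t_mean = (max(t_min, lower) + min(t_max, upper)) / 2.0
        return max(0.0, t_mean - lower)

    def accumulate(daily_min_max):
        total, series = 0.0, []
        for t_min, t_max in daily_min_max:
            total += daily_degree_days(t_min, t_max)
            series.append(round(total, 1))
        return series

    # Toy week of spring temperatures (degC), accumulating toward a flight benchmark.
    week = [(8, 18), (10, 22), (12, 26), (9, 20), (14, 31), (13, 28), (11, 24)]
    print(accumulate(week))
    ```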

  17. Physical Model Development and Benchmarking for MHD Flows in Blanket Design

    SciTech Connect

    Ramakanth Munipalli; P.-Y.Huang; C.Chandler; C.Rowell; M.-J.Ni; N.Morley; S.Smolentsev; M.Abdou

    2008-06-05

    An advanced simulation environment to model incompressible MHD flows relevant to blanket conditions in fusion reactors has been developed at HyPerComp in research collaboration with TEXCEL. The goals of this phase-II project are two-fold: The first is the incorporation of crucial physical phenomena such as induced magnetic field modeling, and extending the capabilities beyond fluid flow prediction to model heat transfer with natural convection and mass transfer including tritium transport and permeation. The second is the design of a sequence of benchmark tests to establish code competence for several classes of physical phenomena in isolation as well as in select combinations (termed here “canonical”). No previous attempts to develop such a comprehensive MHD modeling capability exist in the literature, and this study represents essentially uncharted territory. During the course of this Phase-II project, a significant breakthrough was achieved in modeling liquid metal flows at high Hartmann numbers. We developed a unique mathematical technique to accurately compute the fluid flow in complex geometries at extremely high Hartmann numbers (10,000 and greater), thus extending the state of the art of liquid metal MHD modeling relevant to fusion reactors at the present time. These developments have been published in noted international journals. A sequence of theoretical and experimental results was used to verify and validate the results obtained. The code was applied to a complete DCLL module simulation study with promising results.

  18. Color Metric.

    ERIC Educational Resources Information Center

    Illinois State Office of Education, Springfield.

    This booklet was designed to convey metric information in pictorial form. The use of pictures in the coloring book enables the more mature person to grasp the metric message instantly, whereas the younger person, while coloring the picture, will be exposed to the metric information long enough to make the proper associations. Sheets of the booklet…

  19. SAT Benchmarks: Development of a College Readiness Benchmark and Its Relationship to Secondary and Postsecondary School Performance. Research Report 2011-5

    ERIC Educational Resources Information Center

    Wyatt, Jeffrey; Kobrin, Jennifer; Wiley, Andrew; Camara, Wayne J.; Proestler, Nina

    2011-01-01

    The current study was part of an ongoing effort at the College Board to establish college readiness benchmarks on the SAT[R], PSAT/NMSQT[R], and ReadiStep[TM] as well as to provide schools, districts, and states with a view of their students' college readiness. College readiness benchmarks were established based on SAT performance, using a…

  20. Process for the development of image quality metrics for underwater electro-optic sensors

    NASA Astrophysics Data System (ADS)

    Taylor, James S., Jr.; Cordes, Brett

    2003-09-01

    Electro-optic identification (EOID) sensors have been demonstrated as an important tool in the identification of bottom sea mines and are transitioning to the fleet. These sensors produce two- and three-dimensional images that will be used by operators and algorithms to make the all-important decision regarding use of neutralization systems against sonar contacts classified as mine-like. The quality of EOID images produced can vary dramatically depending on system design, operating parameters, and ocean environment, creating the need for a common scale of image quality or interpretability as a basic measure of the information content of the output images and the expected performance that they provide. Two candidate approaches have been identified for the development of an image quality metric. The first approach is the development of a modified National Imagery Interpretability Rating Scale (NIIRS) based on the EOID tasks. Coupled with this new scale would be a modified form of the General Image Quality Equation (GIQE) to provide a bridge from the system parameters to the NIIRS scale. The other approach is based on the Target Acquisition Model (TAM) that has foundations in Johnson's criteria and a set of tasks. The following paper presents these two approaches along with an explanation of the application to the EOID problem.

  1. Metrics That Matter.

    PubMed

    Prentice, Julia C; Frakt, Austin B; Pizer, Steven D

    2016-04-01

    Increasingly, performance metrics are seen as key components for accurately measuring and improving health care value. Disappointment in the ability of chosen metrics to meet these goals is exemplified in a recent Institute of Medicine report that argues for a consensus-building process to determine a simplified set of reliable metrics. Overall health care goals should be defined and then metrics to measure these goals should be considered. If appropriate data for the identified goals are not available, they should be developed. We use examples from our work in the Veterans Health Administration (VHA) on validating waiting time and mental health metrics to highlight other key issues for metric selection and implementation. First, we focus on the need for specification and predictive validation of metrics. Second, we discuss strategies to maintain the fidelity of the data used in performance metrics over time. These strategies include using appropriate incentives and data sources, using composite metrics, and ongoing monitoring. Finally, we discuss the VA's leadership in developing performance metrics through a planned upgrade in its electronic medical record system to collect more comprehensive VHA and non-VHA data, increasing the ability to comprehensively measure outcomes. PMID:26951272

  2. Benchmarking Model Variants in Development of a Hardware-in-the-Loop Simulation System

    NASA Technical Reports Server (NTRS)

    Aretskin-Hariton, Eliot D.; Zinnecker, Alicia M.; Kratz, Jonathan L.; Culley, Dennis E.; Thomas, George L.

    2016-01-01

    Distributed engine control architecture presents a significant increase in complexity over traditional implementations when viewed from the perspective of system simulation and hardware design and test. Even if the overall function of the control scheme remains the same, the hardware implementation can have a significant effect on the overall system performance due to differences in the creation and flow of data between control elements. A Hardware-in-the-Loop (HIL) simulation system is under development at NASA Glenn Research Center that enables the exploration of these hardware dependent issues. The system is based on, but not limited to, the Commercial Modular Aero-Propulsion System Simulation 40k (C-MAPSS40k). This paper describes the step-by-step conversion from the self-contained baseline model to the hardware in the loop model, and the validation of each step. As the control model hardware fidelity was improved during HIL system development, benchmarking simulations were performed to verify that engine system performance characteristics remained the same. The results demonstrate the goal of the effort; the new HIL configurations have similar functionality and performance compared to the baseline C-MAPSS40k system.

  3. Alquimia: Exposing mature biogeochemistry capabilities for easier benchmarking and development of next-generation subsurface codes

    NASA Astrophysics Data System (ADS)

    Johnson, J. N.; Molins, S.

    2015-12-01

    The complexity of subsurface models is increasing in order to address pressing scientific questions in hydrology and climate science. In particular, models that attempt to explore the coupling between microbial metabolic activity and hydrology at larger scales need an accurate representation of their underlying biogeochemical systems. These systems tend to be very complicated, and they result in large nonlinear systems that have to be coupled with flow and transport algorithms in reactive transport codes. The complexity inherent in implementing a robust treatment of biogeochemistry is a significant obstacle in the development of new codes. Alquimia is an open-source software library intended to help developers of these codes overcome this obstacle by exposing tried-and-true biogeochemical capabilities in existing software. It provides an interface through which a reactive transport code can access and evolve a chemical system, using one of several supported geochemical "engines." We will describe Alquimia's current capabilities, and how they can be used for benchmarking reactive transport codes. We will also discuss upcoming features that will facilitate the coupling of biogeochemistry to other processes in new codes.

  4. Conceptual Framework for Developing Resilience Metrics for the Electricity, Oil, and Gas Sectors in the United States

    SciTech Connect

    Watson, Jean-Paul; Guttromson, Ross; Silva-Monroy, Cesar; Jeffers, Robert; Jones, Katherine; Ellison, James; Rath, Charles; Gearhart, Jared; Jones, Dean; Corbet, Tom; Hanley, Charles; Walker, La Tonya

    2014-09-01

    This report has been written for the Department of Energy’s Energy Policy and Systems Analysis Office to inform their writing of the Quadrennial Energy Review in the area of energy resilience. The topics of measuring and increasing energy resilience are addressed, including definitions, means of measuring, and analytic methodologies that can be used to make decisions for policy, infrastructure planning, and operations. A risk-based framework is presented which provides a standard definition of a resilience metric. Additionally, a process is identified which explains how the metrics can be applied. Research and development is articulated that will further accelerate the resilience of energy infrastructures.

  5. A Strategy for Developing a Common Metric in Item Response Theory when Parameter Posterior Distributions Are Known

    ERIC Educational Resources Information Center

    Baldwin, Peter

    2011-01-01

    Growing interest in fully Bayesian item response models begs the question: To what extent can model parameter posterior draws enhance existing practices? One practice that has traditionally relied on model parameter point estimates but may be improved by using posterior draws is the development of a common metric for two independently calibrated…
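    The traditional point-estimate practice that this work builds on can be illustrated with mean-sigma linking of anchor-item difficulties from two independent calibrations; the paper's contribution is to replace the point estimates with posterior draws. The item difficulties below are toy values for demonstration.

    ```python
    # Sketch of the traditional point-estimate approach to developing a common metric:
    # mean-sigma linking of item difficulties for anchor items estimated in two
    # independent calibrations. The difficulty values are invented for illustration.
    import numpy as np

    def mean_sigma_link(b_form_x, b_form_y):
        """Return (A, B) such that theta_y = A * theta_x + B puts Form X on Form Y's metric."""
        bx, by = np.asarray(b_form_x), np.asarray(b_form_y)
        A = by.std(ddof=1) / bx.std(ddof=1)
        B = by.mean() - A * bx.mean()
        return A, B

    # Difficulties of the same anchor items estimated separately on two forms (toy values).
    b_x = [-1.2, -0.4, 0.1, 0.8, 1.5]
    b_y = [-1.0, -0.1, 0.4, 1.2, 1.9]
    A, B = mean_sigma_link(b_x, b_y)
    print(f"A = {A:.3f}, B = {B:.3f}")   # item difficulties transform as b_y = A*b_x + B
    ```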

  6. How to Advance TPC Benchmarks with Dependability Aspects

    NASA Astrophysics Data System (ADS)

    Almeida, Raquel; Poess, Meikel; Nambiar, Raghunath; Patil, Indira; Vieira, Marco

    Transactional systems are the core of the information systems of most organizations. Although there is general acknowledgement that failures in these systems often entail significant impact both on the proceeds and reputation of companies, the benchmarks developed and managed by the Transaction Processing Performance Council (TPC) still maintain their focus on reporting bare performance. Each TPC benchmark has to pass a list of dependability-related tests (to verify ACID properties), but not all benchmarks require measuring their performances. While TPC-E measures the recovery time of some system failures, TPC-H and TPC-C only require functional correctness of such recovery. Consequently, systems used in TPC benchmarks are tuned mostly for performance. In this paper we argue that nowadays systems should be tuned for a more comprehensive suite of dependability tests, and that a dependability metric should be part of TPC benchmark publications. The paper discusses WHY and HOW this can be achieved. Two approaches are introduced and discussed: augmenting each TPC benchmark in a customized way, by extending each specification individually; and pursuing a more unified approach, defining a generic specification that could be adjoined to any TPC benchmark.

  7. Cone beam computed tomography: Development of system characterization metrics and applications

    NASA Astrophysics Data System (ADS)

    Betancourt Benitez, Jose Ricardo

    Cone beam computed tomography has emerged as a promising medical imaging tool due to its short scanning time, large volume coverage, and isotropic spatial resolution in three dimensions, among other characteristics. However, due to its inherent three-dimensionality, it is important to understand and characterize its physical performance in order to improve the technique and extend its applications in medical imaging. One of the main components of a cone beam computed tomography system is its flat panel detector. Its physical characteristics were evaluated in terms of spatial resolution, linearity, image lag, noise power spectrum and detective quantum efficiency. After evaluating the physical performance of the flat panel detector, metrics were developed and used to evaluate the image quality of the system. In particular, the modulation transfer function and the noise power spectrum were characterized and evaluated for a PaxScan 4030CB FPD-based cone beam computed tomography system. Finally, novel applications using cone beam computed tomography images were suggested and evaluated for their practical application. For example, the characterization of breast density was evaluated, and further studies were suggested that could have an impact on the health system in relation to breast cancer. Another novel application was the utilization of cone beam computed tomography for orthopedic imaging. In this thesis, an initial assessment of its practical application was performed. Overall, three cone beam computed tomography systems were evaluated and utilized for different novel applications that would advance the field of medical imaging.

  8. Making Benchmark Testing Work

    ERIC Educational Resources Information Center

    Herman, Joan L.; Baker, Eva L.

    2005-01-01

    Many schools are moving to develop benchmark tests to monitor their students' progress toward state standards throughout the academic year. Benchmark tests can provide the ongoing information that schools need to guide instructional programs and to address student learning problems. The authors discuss six criteria that educators can use to…

  9. DNS benchmark solution of the fully developed turbulent channel flow with heat transfer

    NASA Astrophysics Data System (ADS)

    Jaszczur, M.

    2014-08-01

    In the present paper, direct numerical simulation (DNS) of fully developed turbulent non-isothermal channel flow has been studied for Reτ=150 and Pr=1.0. The focus is on the role of the type of thermal boundary condition on the results. Various types of thermal boundary conditions presented in the literature have been considered in this work: isoflux wall boundary conditions, symmetrical isoflux wall boundary conditions, and isothermal boundary conditions, also in combination with an adiabatic or isothermal second wall. Turbulence statistics for the fluid flow and thermal field, as well as turbulence structures, are presented and compared. Numerical analysis assuming both zero and non-zero temperature fluctuations at the wall, and zero and non-zero temperature gradient at the channel centre, shows that thermal structures may differ depending on the case and region. The results show that the type of thermal boundary condition significantly influences temperature fluctuations, while the mean temperature is not affected. Differences in temperature fluctuations generate differences in the turbulent heat fluxes. The results are prepared in the form of benchmark solution data and will be available in digital form on the website http://home.agh.edu.pl/jaszczur.

  10. Developing chemical criteria for wildlife: The benchmark dose versus NOAEL approach

    SciTech Connect

    Linder, G.

    1995-12-31

    Wildlife may be exposed to a wide variety of chemicals in their environment, and various strategies for evaluating wildlife risk for these chemicals have been developed. One, a "no-observable-adverse-effects-level" or NOAEL approach, has increasingly been applied to develop chemical criteria for wildlife. In this approach, the NOAEL represents the highest experimental concentration at which there is no statistically significant change in some toxicity endpoint relative to a control. Another, the "benchmark dose" or BMD approach, relies on the lower confidence limit for a concentration that corresponds to a small, but statistically significant, change in effect over some reference condition. Rather than corresponding to a single experimental concentration as does the NOAEL, the BMD approach considers the full concentration-response curve for derivation of the BMD. Here, using a variety of vertebrates and an assortment of chemicals (including carbofuran, paraquat, methylmercury, cadmium, zinc, and copper), the NOAEL approach will be critically evaluated relative to the BMD approach. Statistical models used in the BMD approach suggest these methods are potentially available for eliminating safety factors in risk calculations. A reluctance to recommend this, however, stems from the uncertainty associated with the shape of concentration-response curves at low concentrations. Also, with existing data the derivation of BMDs has shortcomings when sample size is small (10 or fewer animals per treatment). The success of BMD models clearly depends upon the continued collection of wildlife data in the field and laboratory, the design of toxicity studies sufficient for BMD calculations, and complete reporting of these results in the literature. Overall, the BMD approach for developing chemical criteria for wildlife should be given further consideration, since it more fully evaluates concentration-response data.
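    The core of the BMD approach described above is fitting a concentration-response (or dose-response) model and inverting it at a small benchmark response. The sketch below fits a logistic model to invented quantal data and reports the dose at 10% extra risk; the model choice and data are assumptions, and the lower confidence limit (the part of the BMD approach that replaces the NOAEL in practice) is omitted for brevity.

    ```python
    # Minimal sketch of the benchmark-dose idea: fit a dose-response curve to quantal
    # toxicity data and invert it at a small benchmark response (10% extra risk). The
    # data and logistic model are invented; the BMDL confidence limit is omitted.
    import numpy as np
    from scipy.optimize import curve_fit

    def logistic(dose, background, slope, ed50):
        return background + (1 - background) / (1 + np.exp(-slope * (dose - ed50)))

    doses = np.array([0.0, 0.5, 1.0, 2.0, 4.0, 8.0])              # mg/kg/day (toy)
    fraction_affected = np.array([0.05, 0.07, 0.10, 0.25, 0.77, 0.98])

    params, _ = curve_fit(logistic, doses, fraction_affected, p0=[0.05, 1.0, 3.0])
    background, slope, ed50 = params

    bmr = 0.10                                                    # benchmark response: 10% extra risk
    target = background + bmr * (1 - background)
    # Invert the fitted logistic for the dose giving the target response: the BMD.
    bmd = ed50 - np.log((1 - background) / (target - background) - 1) / slope
    print(f"BMD10 ~= {bmd:.2f} mg/kg/day")
    ```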

  11. Benchmarking B-Cell Epitope Prediction with Quantitative Dose-Response Data on Antipeptide Antibodies: Towards Novel Pharmaceutical Product Development

    PubMed Central

    Caoili, Salvador Eugenio C.

    2014-01-01

    B-cell epitope prediction can enable novel pharmaceutical product development. However, a mechanistically framed consensus has yet to emerge on benchmarking such prediction, thus presenting an opportunity to establish standards of practice that circumvent epistemic inconsistencies of casting the epitope prediction task as a binary-classification problem. As an alternative to conventional dichotomous qualitative benchmark data, quantitative dose-response data on antibody-mediated biological effects are more meaningful from an information-theoretic perspective in the sense that such effects may be expressed as probabilities (e.g., of functional inhibition by antibody) for which the Shannon information entropy (SIE) can be evaluated as a measure of informativeness. Accordingly, half-maximal biological effects (e.g., at median inhibitory concentrations of antibody) correspond to maximally informative data while undetectable and maximal biological effects correspond to minimally informative data. This applies to benchmarking B-cell epitope prediction for the design of peptide-based immunogens that elicit antipeptide antibodies with functionally relevant cross-reactivity. Presently, the Immune Epitope Database (IEDB) contains relatively few quantitative dose-response data on such cross-reactivity. Only a small fraction of these IEDB data is maximally informative, and many more of them are minimally informative (i.e., with zero SIE). Nevertheless, the numerous qualitative data in IEDB suggest how to overcome the paucity of informative benchmark data. PMID:24949474
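    The information-theoretic argument above can be made concrete: for an antibody-mediated effect expressed as a probability p, the Shannon information entropy is largest at the half-maximal effect (p = 0.5) and zero for undetectable or maximal effects.

    ```python
    # Shannon information entropy (SIE) of an effect probability p: maximal at the
    # half-maximal effect (p = 0.5) and zero at p = 0 or p = 1, as argued above.
    import math

    def shannon_entropy_bits(p):
        if p in (0.0, 1.0):
            return 0.0
        return -(p * math.log2(p) + (1 - p) * math.log2(1 - p))

    for p in (0.0, 0.1, 0.5, 0.9, 1.0):
        print(f"p = {p:.1f}  SIE = {shannon_entropy_bits(p):.3f} bits")
    ```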

  12. Development of a new, robust and accurate, spectroscopic metric for scatterer size estimation in optical coherence tomography (OCT) images

    NASA Astrophysics Data System (ADS)

    Kassinopoulos, Michalis; Pitris, Costas

    2016-03-01

    The modulations appearing on the backscattering spectrum originating from a scatterer are related to its diameter, as described by Mie theory for spherical particles. Many metrics for Spectroscopic Optical Coherence Tomography (SOCT) take advantage of this observation in order to enhance the contrast of Optical Coherence Tomography (OCT) images. However, none of these metrics has achieved high accuracy when calculating the scatterer size. In this work, Mie theory was used to further investigate the relationship between the degree of modulation in the spectrum and the scatterer size. From this study, a new spectroscopic metric, the bandwidth of the Correlation of the Derivative (COD), was developed which is more robust and accurate, compared to previously reported techniques, in the estimation of scatterer size. The self-normalizing nature of the derivative and the robustness of the first minimum of the correlation as a measure of its width offer significant advantages over other spectral analysis approaches, especially for scatterer sizes above 3 μm. The feasibility of this technique was demonstrated using phantom samples containing 6, 10 and 16 μm diameter microspheres as well as images of normal and cancerous human colon. The results are very promising, suggesting that the proposed metric could be implemented in OCT spectral analysis for measuring nuclear size distribution in biological tissues. A technique providing such information would be of great clinical significance since it would allow the detection of nuclear enlargement at the earliest stages of precancerous development.
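    The COD metric is described as the width, up to its first minimum, of the correlation of the derivative of the backscattering spectrum. The sketch below implements that idea numerically on a synthetic spectrum with a Mie-like ripple; the normalization, the autocorrelation details, and the synthetic spectra are assumptions for illustration rather than the paper's exact definition.

    ```python
    # Hedged sketch of a correlation-of-the-derivative (COD) style calculation: take the
    # derivative of a spectrum, autocorrelate it, and use the lag of the first local
    # minimum as a bandwidth measure. Synthetic spectra and normalization are assumed.
    import numpy as np

    def cod_bandwidth(spectrum, wavenumbers):
        """Lag (in wavenumber units) of the first local minimum of the autocorrelation
        of the spectrum's derivative; assumes a uniformly spaced wavenumber axis."""
        d = np.gradient(spectrum, wavenumbers)
        d = d - d.mean()
        ac = np.correlate(d, d, mode="full")[len(d) - 1:]   # one-sided autocorrelation
        ac /= ac[0]
        rises = np.flatnonzero(np.diff(ac) > 0)             # lags where ac starts rising again
        if rises.size == 0:
            return np.nan
        return rises[0] * (wavenumbers[1] - wavenumbers[0])

    k = np.linspace(6.0, 8.0, 512)                          # toy wavenumber axis, 1/um
    for diameter_um in (3.0, 10.0):
        spec = 1 + 0.3 * np.cos(2 * diameter_um * k)        # toy ripple: faster for larger scatterers
        print(diameter_um, round(cod_bandwidth(spec, k), 3))  # larger scatterer -> narrower COD bandwidth
    ```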

  13. Developing and Using Benchmarks for Eddy Current Simulation Codes Validation to Address Industrial Issues

    NASA Astrophysics Data System (ADS)

    Mayos, M.; Buvat, F.; Costan, V.; Moreau, O.; Gilles-Pascaud, C.; Reboud, C.; Foucher, F.

    2011-06-01

    To achieve performance demonstration, which is a legal requirement for the qualification of NDE processes applied in French nuclear power plants, the use of modeling tools is a valuable support, provided that the employed models have been previously validated. To achieve this, in particular for eddy current modeling, a validation methodology based on the use of specific benchmarks close to the actual industrial issue has to be defined. Nonetheless, considering the high variability in code origin and complexity, feedback from actual cases has shown that it is critical to define simpler, generic and public benchmarks in order to perform a preliminary selection. A specific Working Group has been launched in the frame of COFREND, the French Association for NDE, resulting in the definition of several benchmark problems. This effort is now ready to be shared with similar international approaches.

  14. Benchmarking the CRBLASTER Computational Framework on the 350-MHz 49-core Maestro Development Board

    NASA Astrophysics Data System (ADS)

    Mighell, K. J.

    2012-09-01

    I describe the performance of the CRBLASTER computational framework on a 350-MHz 49-core Maestro Development Board (MBD). The 49-core Interim Test Chip (ITC) was developed by the U.S. Government and is based on the intellectual property of the 64-core TILE64 processor of the Tilera Corporation. The Maestro processor is intended for use in the high-radiation environments found in space; the ITC was fabricated using IBM 90-nm CMOS 9SF technology and Radiation-Hardening-by-Design (RHBD) rules. CRBLASTER is a parallel-processing cosmic-ray rejection application based on a simple computational framework that uses the high-performance computing industry standard Message Passing Interface (MPI) library. CRBLASTER was designed to be used by research scientists to easily port image-analysis programs based on embarrassingly parallel algorithms to a parallel-processing environment such as a multi-node Beowulf cluster or multi-core processors using MPI. I describe my experience of porting CRBLASTER to the 64-core TILE64 processor, the Maestro simulator, and finally the 49-core Maestro processor itself. Performance comparisons using the ITC are presented between emulating all floating-point operations in software and doing all floating-point operations with hardware assist from an IEEE-754 compliant Aurora FPU (floating point unit) that is attached to each of the 49 cores. Benchmarking of the CRBLASTER computational framework using the memory-intensive L.A.COSMIC cosmic ray rejection algorithm and a computationally intensive Poisson noise generator reveals subtleties of the Maestro hardware design. Lastly, I describe the importance of using real scientific applications during the testing phase of next-generation computer hardware; complex real-world scientific applications can stress hardware in novel ways that may not necessarily be revealed while executing simple applications or unit tests.

  15. Benchmarking Diagnostic Algorithms on an Electrical Power System Testbed

    NASA Technical Reports Server (NTRS)

    Kurtoglu, Tolga; Narasimhan, Sriram; Poll, Scott; Garcia, David; Wright, Stephanie

    2009-01-01

    Diagnostic algorithms (DAs) are key to enabling automated health management. These algorithms are designed to detect and isolate anomalies of either a component or the whole system based on observations received from sensors. In recent years a wide range of algorithms, both model-based and data-driven, have been developed to increase autonomy and improve system reliability and affordability. However, the lack of support to perform systematic benchmarking of these algorithms continues to create barriers for effective development and deployment of diagnostic technologies. In this paper, we present our efforts to benchmark a set of DAs on a common platform using a framework that was developed to evaluate and compare various performance metrics for diagnostic technologies. The diagnosed system is an electrical power system, namely the Advanced Diagnostics and Prognostics Testbed (ADAPT) developed and located at the NASA Ames Research Center. The paper presents the fundamentals of the benchmarking framework, the ADAPT system, description of faults and data sets, the metrics used for evaluation, and an in-depth analysis of benchmarking results obtained from testing ten diagnostic algorithms on the ADAPT electrical power system testbed.
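
    As an illustration of the kind of metrics such a framework aggregates, the sketch below (Python) computes a detection rate, a false-alarm rate and a mean detection delay from a set of fault-injection runs. The field names and this reduced metric set are assumptions for illustration; the ADAPT benchmarking framework defines its own, richer metric suite.

      def diagnosis_metrics(runs):
          """Aggregate simple diagnostic-performance metrics over fault-injection runs.

          Each run is a dict with keys 'fault_injected', 'fault_detected' and
          'detection_delay_s' (illustrative schema, not the ADAPT data format).
          """
          faulty = [r for r in runs if r["fault_injected"]]
          nominal = [r for r in runs if not r["fault_injected"]]
          detected = [r for r in faulty if r["fault_detected"]]
          return {
              "detection_rate": len(detected) / len(faulty) if faulty else None,
              "false_alarm_rate": (sum(1 for r in nominal if r["fault_detected"]) / len(nominal)
                                   if nominal else None),
              "mean_detection_delay_s": (sum(r["detection_delay_s"] for r in detected) / len(detected)
                                         if detected else None),
          }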

  16. Primary Metrics.

    ERIC Educational Resources Information Center

    Otto, Karen; And Others

    These 55 activity cards were created to help teachers implement a unit on metric measurement. They were designed for students aged 5 to 10, but could be used with older students. Cards are color-coded in terms of activities on basic metric terms, prefixes, length, and other measures. Both individual and small-group games and ideas are included.…

  17. Mastering Metrics

    ERIC Educational Resources Information Center

    Parrot, Annette M.

    2005-01-01

    By the time students reach a middle school science course, they are expected to make measurements using the metric system. However, most are not practiced in its use, as their experience in metrics is often limited to one unit they were taught in elementary school. This lack of knowledge is not wholly the fault of formal education. Although the…

  18. Metric Education Evaluation Package.

    ERIC Educational Resources Information Center

    Kansky, Bob; And Others

    This document was developed out of a need for a complete, carefully designed set of evaluation instruments and procedures that might be applied in metric inservice programs across the nation. Components of this package were prepared in such a way as to permit local adaptation to the evaluation of a broad spectrum of metric education activities.…

  19. Numerical studies and metric development for validation of magnetohydrodynamic models on the HIT-SI experiment

    SciTech Connect

    Hansen, C.; Victor, B.; Morgan, K.; Hossack, A.; Sutherland, D.; Jarboe, T.; Nelson, B. A.; Marklin, G.

    2015-05-15

    We present application of three scalar metrics derived from the Biorthogonal Decomposition (BD) technique to evaluate the level of agreement between macroscopic plasma dynamics in different data sets. BD decomposes large data sets, as produced by distributed diagnostic arrays, into principal mode structures without assumptions on spatial or temporal structure. These metrics have been applied to validation of the Hall-MHD model using experimental data from the Helicity Injected Torus with Steady Inductive helicity injection experiment. Each metric provides a measure of correlation between mode structures extracted from experimental data and simulations for an array of 192 surface-mounted magnetic probes. Numerical validation studies have been performed using the NIMROD code, where the injectors are modeled as boundary conditions on the flux conserver, and the PSI-TET code, where the entire plasma volume is treated. Initial results from a comprehensive validation study of high performance operation with different injector frequencies are presented, illustrating application of the BD method. Using a simplified (constant, uniform density and temperature) Hall-MHD model, simulation results agree with experimental observation for two of the three defined metrics when the injectors are driven with a frequency of 14.5 kHz.
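
    Since the Biorthogonal Decomposition of a probe array is essentially a singular value decomposition of the space-time data matrix, a correlation-style comparison between experimental and simulated mode structures can be sketched as below (Python; the one-to-one pairing of modes and the use of absolute inner products are assumptions, not the paper's exact metric definitions).

      import numpy as np

      def bd_mode_correlation(exp_data, sim_data, n_modes=4):
          """Correlate dominant spatial modes of two (n_probes, n_times) data sets."""
          U_exp, _, _ = np.linalg.svd(exp_data, full_matrices=False)   # spatial (topo) modes
          U_sim, _, _ = np.linalg.svd(sim_data, full_matrices=False)
          # absolute inner product of matched experiment/simulation modes (1 = identical shape)
          return [abs(float(U_exp[:, k] @ U_sim[:, k])) for k in range(n_modes)]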

  20. Aircraft Engine Gas Path Diagnostic Methods: Public Benchmarking Results

    NASA Technical Reports Server (NTRS)

    Simon, Donald L.; Borguet, Sebastien; Leonard, Olivier; Zhang, Xiaodong (Frank)

    2013-01-01

    Recent technology reviews have identified the need for objective assessments of aircraft engine health management (EHM) technologies. To help address this issue, a gas path diagnostic benchmark problem has been created and made publicly available. This software tool, referred to as the Propulsion Diagnostic Method Evaluation Strategy (ProDiMES), has been constructed based on feedback provided by the aircraft EHM community. It provides a standard benchmark problem enabling users to develop, evaluate and compare diagnostic methods. This paper will present an overview of ProDiMES along with a description of four gas path diagnostic methods developed and applied to the problem. These methods, which include analytical and empirical diagnostic techniques, will be described and associated blind-test-case metric results will be presented and compared. Lessons learned along with recommendations for improving the public benchmarking processes will also be presented and discussed.

  1. Development of new VOC exposure metrics and their relationship to ''Sick Building Syndrome'' symptoms

    SciTech Connect

    Ten Brinke, JoAnn

    1995-08-01

    Volatile organic compounds (VOCs) are suspected to contribute significantly to ''Sick Building Syndrome'' (SBS), a complex of subchronic symptoms that occurs during occupancy of the building in question and, in general, decreases away from it. A new approach takes into account individual VOC potencies, as well as the highly correlated nature of the complex VOC mixtures found indoors. The new VOC metrics are statistically significant predictors of symptom outcomes from the California Healthy Buildings Study data. Multivariate logistic regression analyses were used to test the hypothesis that a summary measure of the VOC mixture, other risk factors, and covariates for each worker will lead to better prediction of symptom outcome. VOC metrics based on animal irritancy measures and principal component analysis had the most influence in the prediction of eye, dermal, and nasal symptoms. After adjustment, a water-based paints and solvents source was found to be associated with dermal and eye irritation. The more typical VOC exposure metrics used in prior analyses were not useful in symptom prediction in the adjusted model (total VOC (TVOC), or the sum of individually identified VOCs (ΣVOC_i)). Also not useful were three other VOC metrics that took into account potency but did not adjust for the highly correlated nature of the data set or the presence of VOCs that were not measured. High TVOC values (2-7 mg m⁻³) due to the presence of liquid-process photocopiers observed in several study spaces significantly influenced symptoms. Analyses without the high TVOC values reduced, but did not eliminate, the ability of the VOC exposure metric based on irritancy and principal component analysis to explain symptom outcome.
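
    A simplified reconstruction of such a metric is sketched below in Python: concentrations are weighted by an irritancy potency (here 1/RD50, an assumed choice) and the correlated, weighted mixture is then reduced to a few principal-component scores per sample. This illustrates the general approach only, not the author's actual formulation.

      import numpy as np

      def irritancy_pca_scores(conc, rd50, n_components=2):
          """Potency-weighted, PCA-reduced exposure scores for a VOC mixture.

          conc: (n_samples, n_vocs) concentrations; rd50: per-VOC irritancy potency
          (lower RD50 = more irritating). Illustrative only.
          """
          weighted = np.asarray(conc, dtype=float) / np.asarray(rd50, dtype=float)
          X = (weighted - weighted.mean(axis=0)) / weighted.std(axis=0)   # standardize
          eigval, eigvec = np.linalg.eigh(np.cov(X, rowvar=False))        # ascending eigenvalues
          leading = eigvec[:, ::-1][:, :n_components]                     # leading components
          return X @ leading                                              # per-sample scores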

  2. Nuclear Energy Readiness Indicator Index (NERI): A benchmarking tool for assessing nuclear capacity in developing countries

    SciTech Connect

    Saum-Manning,L.

    2008-07-13

    Declining natural resources, rising oil prices, looming climate change and the introduction of nuclear energy partnerships, such as GNEP, have reinvigorated global interest in nuclear energy. The convergence of such issues has prompted countries to move ahead quickly to deal with the challenges that lie ahead. However, developing countries, in particular, often lack the domestic infrastructure and public support needed to implement a nuclear energy program in a safe, secure, and nonproliferation-conscious environment. How might countries become ready for nuclear energy? What is needed is a framework for assessing a country's readiness for nuclear energy. This paper suggests that a Nuclear Energy Readiness Indicator (NERI) Index might serve as a meaningful basis for assessing a country's status in terms of progress toward nuclear energy utilization under appropriate conditions. The NERI Index is a benchmarking tool that measures a country's level of 'readiness' for nonproliferation-conscious nuclear energy development. NERI first identifies 8 key indicators that have been recognized by the International Atomic Energy Agency as key nonproliferation and security milestones to achieve prior to establishing a nuclear energy program. It then measures a country's progress in each of these areas on a 1-5 point scale. In doing so NERI illuminates gaps or underdeveloped areas in a country's nuclear infrastructure with a view to enable stakeholders to prioritize the allocation of resources toward programs and policies supporting international nonproliferation goals through responsible nuclear energy development. On a preliminary basis, the indicators selected include: (1) demonstrated need; (2) expressed political support; (3) participation in nonproliferation and nuclear security treaties, international terrorism conventions, and export and border control arrangements; (4) national nuclear-related legal and regulatory mechanisms; (5) nuclear infrastructure; (6) the

  3. Developing Empirical Benchmarks of Teacher Knowledge Effect Sizes in Studies of Professional Development Effectiveness

    ERIC Educational Resources Information Center

    Phelps, Geoffrey; Jones, Nathan; Kelcey, Ben; Liu, Shuangshuang; Kisa, Zahid

    2013-01-01

    Growing interest in teaching quality and accountability has focused attention on the need for rigorous studies and evaluations of professional development (PD) programs. However, the study of PD has been hampered by a lack of suitable instruments. The authors present data from the Teacher Knowledge Assessment System (TKAS), which was designed to…

  4. Surveillance Metrics Sensitivity Study

    SciTech Connect

    Bierbaum, R; Hamada, M; Robertson, A

    2011-11-01

    In September of 2009, a Tri-Lab team was formed to develop a set of metrics relating to the NNSA nuclear weapon surveillance program. The purpose was to develop quantitative and/or qualitative metrics describing the effects of realized or non-realized surveillance activities on our confidence in reporting reliability and assessing the stockpile. As a part of this effort, a statistical sub-team investigated various techniques and developed a complementary set of statistical metrics that could serve as a foundation for characterizing aspects of meeting the surveillance program objectives. The metrics are a combination of tolerance limit calculations and power calculations, intended to answer level-of-confidence questions with respect to the ability to detect certain undesirable behaviors (catastrophic defects, margin insufficiency defects, and deviations from a model). Note that the metrics are not intended to gauge product performance but instead the adequacy of surveillance. This report gives a short description of the four metric types that were explored and the results of a sensitivity study conducted to investigate their behavior for various inputs. The results of the sensitivity study can be used to set the risk parameters that specify the level of stockpile problem that the surveillance program should be addressing.
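
    The power-calculation flavor of these metrics can be illustrated with a one-line binomial model (Python): the probability that a surveillance sample of a given size catches at least one defective unit at an assumed true defect rate. The sampling model and the example numbers are mine, not the Tri-Lab team's formulas.

      def detection_power(n_samples, defect_rate):
          """Probability of observing at least one defect in n_samples draws
          when the true defect rate is defect_rate (independent binomial model)."""
          return 1.0 - (1.0 - defect_rate) ** n_samples

      # e.g. sampling 11 units per year against a hypothetical 25% defect rate
      print(detection_power(11, 0.25))   # ~0.96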

  5. Surveillance metrics sensitivity study.

    SciTech Connect

    Hamada, Michael S.; Bierbaum, Rene Lynn; Robertson, Alix A.

    2011-09-01

    In September of 2009, a Tri-Lab team was formed to develop a set of metrics relating to the NNSA nuclear weapon surveillance program. The purpose was to develop quantitative and/or qualitative metrics describing the effects of realized or non-realized surveillance activities on our confidence in reporting reliability and assessing the stockpile. As a part of this effort, a statistical sub-team investigated various techniques and developed a complementary set of statistical metrics that could serve as a foundation for characterizing aspects of meeting the surveillance program objectives. The metrics are a combination of tolerance limit calculations and power calculations, intended to answer level-of-confidence questions with respect to the ability to detect certain undesirable behaviors (catastrophic defects, margin insufficiency defects, and deviations from a model). Note that the metrics are not intended to gauge product performance but instead the adequacy of surveillance. This report gives a short description of the four metric types that were explored and the results of a sensitivity study conducted to investigate their behavior for various inputs. The results of the sensitivity study can be used to set the risk parameters that specify the level of stockpile problem that the surveillance program should be addressing.

  6. Quality Metrics in Endoscopy

    PubMed Central

    Gurudu, Suryakanth R.

    2013-01-01

    Endoscopy has evolved in the past 4 decades to become an important tool in the diagnosis and management of many digestive diseases. Greater focus on endoscopic quality has highlighted the need to ensure competency among endoscopists. A joint task force of the American College of Gastroenterology and the American Society for Gastrointestinal Endoscopy has proposed several quality metrics to establish competence and help define areas of continuous quality improvement. These metrics represent quality in endoscopy pertinent to pre-, intra-, and postprocedural periods. Quality in endoscopy is a dynamic and multidimensional process that requires continuous monitoring of several indicators and benchmarking with local and national standards. Institutions and practices should have a process in place for credentialing endoscopists and for the assessment of competence regarding individual endoscopic procedures. PMID:24711767

  7. Development and Benchmarking of a Hybrid PIC Code For Dense Plasmas and Fast Ignition

    SciTech Connect

    Witherspoon, F. Douglas; Welch, Dale R.; Thompson, John R.; MacFarlane, Joeseph J.; Phillips, Michael W.; Bruner, Nicki; Mostrom, Chris; Thoma, Carsten; Clark, R. E.; Bogatu, Nick; Kim, Jin-Soo; Galkin, Sergei; Golovkin, Igor E.; Woodruff, P. R.; Wu, Linchun; Messer, Sarah J.

    2014-05-20

    Computational Sciences, Inc. and Advanced Energy Systems Inc. joined efforts to develop new physics and numerical models for LSP in several key areas to enhance the ability of LSP to model high energy density plasmas (HEDP). This final report details those efforts. Areas addressed in this research effort include: adding radiation transport to LSP, first in 2-D and then fully 3-D; extending the EMHD model to 3-D; implementing more advanced radiation and electrode plasma boundary conditions; and installing more efficient implicit numerical algorithms to speed up complex 2-D and 3-D computations. The new capabilities allow modeling of the dominant processes in high energy density plasmas, and further assist the development and optimization of plasma jet accelerators, with particular attention to MHD instabilities and plasma/wall interaction (based on physical models for ion drag friction and ablation/erosion of the electrodes). In the first funding cycle we implemented a solver for the radiation diffusion equation. To solve this equation in 2-D, we used finite differencing and applied the parallelized sparse-matrix solvers in the PETSc library (Argonne National Laboratory) to the resulting system of equations. A database of the necessary coefficients for materials of interest was assembled using the PROPACEOS and ATBASE codes from Prism. The model was benchmarked against Prism's 1-D radiation hydrodynamics code HELIOS, and against experimental data obtained from HyperV's separately funded plasma jet accelerator development program. Work in the second funding cycle focused on extending the radiation diffusion model to full 3-D, continuing development of the EMHD model, optimizing the direct-implicit model to speed up calculations, adding multiply ionized atoms, and improving the way boundary conditions are handled in LSP. These new LSP capabilities were then used, along with analytic calculations and Mach2 runs, to investigate plasma jet merging, plasma detachment and transport, restrike

  8. Metrication in a global environment

    NASA Technical Reports Server (NTRS)

    Aberg, J.

    1994-01-01

    A brief history about the development of the metric system of measurement is given. The need for the U.S. to implement the 'SI' metric system in the international markets, especially in the aerospace and general trade, is discussed. Development of metric implementation and experiences locally, nationally, and internationally are included.

  9. Progress in developing the ASPECT Mantle Convection Code - New Features, Benchmark Comparisons and Applications

    NASA Astrophysics Data System (ADS)

    Dannberg, Juliane; Bangerth, Wolfgang; Sobolev, Stephan

    2014-05-01

    Since there is no direct access to the deep Earth, numerical simulations are an indispensable tool for exploring processes in the Earth's mantle. Results of these models can be compared to surface observations and, combined with constraints from seismology and geochemistry, have provided insight into a broad range of geoscientific problems. In this contribution we present results obtained from a next-generation finite-element code called ASPECT (Advanced Solver for Problems in Earth's ConvecTion), which is especially suited for modeling thermo-chemical convection due to its use of many modern numerical techniques: fully adaptive meshes, accurate discretizations, a nonlinear artificial diffusion method to stabilize the advection equation, an efficient solution strategy based on a block triangular preconditioner utilizing an algebraic multigrid, parallelization of all of the steps above and, finally, its modular and easily extensible implementation. In particular, the latter features make it a very versatile tool, applicable also to lithosphere models. The equations are implemented in the form of the Anelastic Liquid Approximation with temperature, pressure, composition and strain rate dependent material properties, including associated non-linear solvers. We will compare computations with ASPECT to common benchmarks in the geodynamics community, such as the Rayleigh-Taylor instability (van Keken et al., 1997), and demonstrate recently implemented features such as a melting model with temperature, pressure and composition dependent melt fraction and latent heat. Moreover, we elaborate on a number of features currently under development by the community, such as free surfaces, porous flow and elasticity. In addition, we show examples of how ASPECT is applied to develop sophisticated simulations of typical geodynamic problems. These include 3D models of thermo-chemical plumes incorporating phase transitions (including melting) with the accompanying density changes, Clapeyron

  10. BENCHMARKING SUSTAINABILITY ENGINEERING EDUCATION

    EPA Science Inventory

    The goals of this project are to develop and apply a methodology for benchmarking curricula in sustainability engineering and to identify individuals active in sustainability engineering education.

  11. The Development of a Benchmarking Methodology to Assist in Managing the Enhancement of University Research Quality

    ERIC Educational Resources Information Center

    Nicholls, Miles G.

    2007-01-01

    The paper proposes a metric, the research quality index (RQI), for assessing and tracking university research quality. The RQI is a composite index that encompasses the three main areas of research activity: publications, research grants and higher degree by research activity. The public availability of such an index will also facilitate…

  12. Limitations of Community College Benchmarking and Benchmarks

    ERIC Educational Resources Information Center

    Bers, Trudy H.

    2006-01-01

    This chapter distinguishes between benchmarks and benchmarking, describes a number of data and cultural limitations to benchmarking projects, and suggests that external demands for accountability are the dominant reason for growing interest in benchmarking among community colleges.

  13. Edible Metrics.

    ERIC Educational Resources Information Center

    Mecca, Christyna E.

    1998-01-01

    Presents an exercise that introduces students to scientific measurements using only metric units. At the conclusion of the exercise, students eat the experiment. Requires dried refried beans, crackers or chips, and dried instant powder for lemonade. (DDR)

  14. Think Metric

    USGS Publications Warehouse

    U.S. Geological Survey

    1978-01-01

    The International System of Units, as the metric system is officially called, provides for a single "language" to describe weights and measures over the world. We in the United States together with the people of Brunei, Burma, and Yemen are the only ones who have not put this convenient system into effect. In the passage of the Metric Conversion Act of 1975, Congress determined that we also will adopt it, but the transition will be voluntary.

  15. Benchmarking expert system tools

    NASA Technical Reports Server (NTRS)

    Riley, Gary

    1988-01-01

    As part of its evaluation of new technologies, the Artificial Intelligence Section of the Mission Planning and Analysis Div. at NASA-Johnson has made timing tests of several expert system building tools. Among the production systems tested were Automated Reasoning Tool, several versions of OPS5, and CLIPS (C Language Integrated Production System), an expert system builder developed by the AI section. Also included in the test were a Zetalisp version of the benchmark along with four versions of the benchmark written in Knowledge Engineering Environment, an object oriented, frame based expert system tool. The benchmarks used for testing are studied.

  16. NASA Software Engineering Benchmarking Study

    NASA Technical Reports Server (NTRS)

    Rarick, Heather L.; Godfrey, Sara H.; Kelly, John C.; Crumbley, Robert T.; Wifl, Joel M.

    2013-01-01

    To identify best practices for the improvement of software engineering on projects, NASA's Offices of Chief Engineer (OCE) and Safety and Mission Assurance (OSMA) formed a team led by Heather Rarick and Sally Godfrey to conduct this benchmarking study. The primary goals of the study are to identify best practices that: Improve the management and technical development of software intensive systems; Have a track record of successful deployment by aerospace industries, universities [including research and development (R&D) laboratories], and defense services, as well as NASA's own component Centers; and Identify candidate solutions for NASA's software issues. Beginning in the late fall of 2010, focus topics were chosen and interview questions were developed, based on the NASA top software challenges. Between February 2011 and November 2011, the Benchmark Team interviewed a total of 18 organizations, consisting of five NASA Centers, five industry organizations, four defense services organizations, and four university or university R and D laboratory organizations. A software assurance representative also participated in each of the interviews to focus on assurance and software safety best practices. Interviewees provided a wealth of information on each topic area that included: software policy, software acquisition, software assurance, testing, training, maintaining rigor in small projects, metrics, and use of the Capability Maturity Model Integration (CMMI) framework, as well as a number of special topics that came up in the discussions. NASA's software engineering practices compared favorably with the external organizations in most benchmark areas, but in every topic, there were ways in which NASA could improve its practices. Compared to defense services organizations and some of the industry organizations, one of NASA's notable weaknesses involved communication with contractors regarding its policies and requirements for acquired software. One of NASA's strengths

  17. Toward Developing a New Occupational Exposure Metric Approach for Characterization of Diesel Aerosols

    PubMed Central

    Cauda, Emanuele G.; Ku, Bon Ki; Miller, Arthur L.; Barone, Teresa L.

    2015-01-01

    The extensive use of diesel-powered equipment in mines makes the exposure to diesel aerosols a serious occupational issue. The exposure metric currently used in U.S. underground noncoal mines is based on the measurement of total carbon (TC) and elemental carbon (EC) mass concentration in the air. Recent toxicological evidence suggests that the measurement of mass concentration is not sufficient to correlate ultrafine aerosol exposure with health effects. This urges the evaluation of alternative measurements. In this study, the current exposure metric and two additional metrics, the surface area and the total number concentration, were evaluated by conducting simultaneous measurements of diesel ultrafine aerosols in a laboratory setting. The results showed that the surface area and total number concentration of the particles per unit of mass varied substantially with the engine operating condition. The specific surface area (SSA) and specific number concentration (SNC) normalized with TC varied two and five times, respectively. This implies that miners, whose exposure is measured only as TC, might be exposed to an unknown variable number concentration of diesel particles and commensurate particle surface area. Taken separately, mass, surface area, and number concentration did not completely characterize the aerosols. A comprehensive assessment of diesel aerosol exposure should include all of these elements, but the use of laboratory instruments in underground mines is generally impracticable. The article proposes a new approach to solve this problem. Using SSA and SNC calculated from field-type measurements, the evaluation of additional physical properties can be obtained by using the proposed approach. PMID:26361400
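
    The two proposed quantities are simple normalizations of field-measurable concentrations by the total carbon mass concentration; a minimal sketch (Python, with assumed names and unit conventions) is:

      def specific_metrics(surface_area_conc, number_conc, tc_mass_conc):
          """Specific surface area (SSA) and specific number concentration (SNC):
          aerosol surface area and particle number per unit of total carbon (TC)
          mass, with all inputs expressed per unit volume of sampled air.
          Names and the exact normalization are illustrative assumptions."""
          ssa = surface_area_conc / tc_mass_conc
          snc = number_conc / tc_mass_conc
          return ssa, snc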

  18. Development and application of an agricultural intensity index to invertebrate and algal metrics from streams at two scales

    USGS Publications Warehouse

    Waite, Ian R.

    2013-01-01

    Research was conducted at 28-30 sites within eight study areas across the United States along a gradient of nutrient enrichment/agricultural land use between 2003 and 2007. Objectives were to test the application of an agricultural intensity index (AG-Index) and to compare various invertebrate and algal metrics to determine indicators of nutrient enrichment nationally and within three regions. The agricultural index was based on total nitrogen and phosphorus input to the watershed, percent watershed agriculture, and percent riparian agriculture. Among data sources, agriculture within the riparian zone showed significant differences between values generated from remote sensing and those from higher-resolution orthophotography; median values dropped significantly when estimated by orthophotography. Percent agriculture in the watershed consistently had lower correlations to invertebrate and algal metrics than the developed AG-Index across all regions, and it also yielded fewer significant pairwise comparisons than the AG-Index did. The highest regional correlations to the AG-Index were −0.75 for Ephemeroptera, Plecoptera, and Trichoptera richness (EPTR) and −0.70 for algae Observed/Expected (O/E); nationally, the highest were −0.43 for EPTR vs. total nitrogen and −0.62 for algae O/E vs. the AG-Index. Results suggest that analysis of metrics at the national scale can often detect large differences in disturbance, but more detail and specificity are obtained by analyzing data at regional scales.
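
    A minimal version of such an index, and its comparison against a biological metric, might look like the Python sketch below; equal weighting of rank-scaled components and the use of Spearman correlation are assumptions for illustration, not the published AG-Index definition.

      import numpy as np
      from scipy.stats import spearmanr

      def ag_index(tn_input, tp_input, pct_watershed_ag, pct_riparian_ag):
          """Combine the four components named above into one 0-1 intensity index."""
          comps = np.column_stack([tn_input, tp_input, pct_watershed_ag, pct_riparian_ag])
          ranks = comps.argsort(axis=0).argsort(axis=0) / (len(comps) - 1)  # rank-scale each column
          return ranks.mean(axis=1)                                         # equal weights (assumed)

      # correlate the index with an invertebrate metric such as EPT richness:
      # rho, p = spearmanr(ag_index(tn, tp, ws_ag, rip_ag), eptr)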

  19. EVA Health and Human Performance Benchmarking Study

    NASA Technical Reports Server (NTRS)

    Abercromby, A. F.; Norcross, J.; Jarvis, S. L.

    2016-01-01

    Multiple HRP Risks and Gaps require detailed characterization of human health and performance during exploration extravehicular activity (EVA) tasks; however, a rigorous and comprehensive methodology for characterizing and comparing the health and human performance implications of current and future EVA spacesuit designs does not exist. This study will identify and implement functional tasks and metrics, both objective and subjective, that are relevant to health and human performance, such as metabolic expenditure, suit fit, discomfort, suited postural stability, cognitive performance, and potentially biochemical responses for humans working inside different EVA suits doing functional tasks under the appropriate simulated reduced gravity environments. This study will provide health and human performance benchmark data for humans working in current EVA suits (EMU, Mark III, and Z2) as well as shirtsleeves using a standard set of tasks and metrics with quantified reliability. Results and methodologies developed during this test will provide benchmark data against which future EVA suits, and different suit configurations (e.g., varied pressure, mass, CG), may be reliably compared in subsequent tests. Results will also inform fitness for duty standards as well as design requirements and operations concepts for future EVA suits and other exploration systems.

  20. An evidence-based approach to benchmarking the fairness of health-sector reform in developing countries.

    PubMed

    Daniels, Norman; Flores, Walter; Pannarunothai, Supasit; Ndumbe, Peter N; Bryant, John H; Ngulube, T J; Wang, Yuankun

    2005-07-01

    The Benchmarks of Fairness instrument is an evidence-based policy tool developed in generic form in 2000 for evaluating the effects of health-system reforms on equity, efficiency and accountability. By integrating measures of these effects on the central goal of fairness, the approach fills a gap that has hampered reform efforts for more than two decades. Over the past three years, projects in developing countries on three continents have adapted the generic version of these benchmarks for use at both national and subnational levels. Interdisciplinary teams of managers, providers, academics and advocates agree on the relevant criteria for assessing components of fairness and, depending on which aspects of reform they wish to evaluate, select appropriate indicators that rely on accessible information; they also agree on scoring rules for evaluating the diverse changes in the indicators. In contrast to a comprehensive index that aggregates all measured changes into a single evaluation or rank, the pattern of changes revealed by the benchmarks is used to inform policy deliberation about which aspects of the reforms have been successfully implemented, and it also allows for improvements to be made in the reforms. This approach permits useful evidence about reform to be gathered in settings where existing information is underused and where there is a weak information infrastructure. Brief descriptions of early results from Cameroon, Ecuador, Guatemala, Thailand and Zambia demonstrate that the method can produce results that are useful for policy and reveal the variety of purposes to which the approach can be put. Collaboration across sites can yield a catalogue of indicators that will facilitate further work. PMID:16175828

  1. An evidence-based approach to benchmarking the fairness of health-sector reform in developing countries.

    PubMed Central

    Daniels, Norman; Flores, Walter; Pannarunothai, Supasit; Ndumbe, Peter N.; Bryant, John H.; Ngulube, T. J.; Wang, Yuankun

    2005-01-01

    The Benchmarks of Fairness instrument is an evidence-based policy tool developed in generic form in 2000 for evaluating the effects of health-system reforms on equity, efficiency and accountability. By integrating measures of these effects on the central goal of fairness, the approach fills a gap that has hampered reform efforts for more than two decades. Over the past three years, projects in developing countries on three continents have adapted the generic version of these benchmarks for use at both national and subnational levels. Interdisciplinary teams of managers, providers, academics and advocates agree on the relevant criteria for assessing components of fairness and, depending on which aspects of reform they wish to evaluate, select appropriate indicators that rely on accessible information; they also agree on scoring rules for evaluating the diverse changes in the indicators. In contrast to a comprehensive index that aggregates all measured changes into a single evaluation or rank, the pattern of changes revealed by the benchmarks is used to inform policy deliberation about which aspects of the reforms have been successfully implemented, and it also allows for improvements to be made in the reforms. This approach permits useful evidence about reform to be gathered in settings where existing information is underused and where there is a weak information infrastructure. Brief descriptions of early results from Cameroon, Ecuador, Guatemala, Thailand and Zambia demonstrate that the method can produce results that are useful for policy and reveal the variety of purposes to which the approach can be put. Collaboration across sites can yield a catalogue of indicators that will facilitate further work. PMID:16175828

  2. Development of Metric for Measuring the Impact of RD&D Funding on GTO's Geothermal Exploration Goals (Presentation)

    SciTech Connect

    Jenne, S.; Young, K. R.; Thorsteinsson, H.

    2013-04-01

    The Department of Energy's Geothermal Technologies Office (GTO) provides RD&D funding for geothermal exploration technologies with the goal of lowering the risks and costs of geothermal development and exploration. In 2012, NREL was tasked with developing a metric to measure the impacts of this RD&D funding on the cost and time required for exploration activities. The development of this metric included collecting cost and time data for exploration techniques, creating a baseline suite of exploration techniques to which future exploration and cost and time improvements could be compared, and developing an online tool for graphically showing potential project impacts (all available at http://en.openei.org/wiki/Gateway:Geothermal). The conference paper describes the methodology used to define the baseline exploration suite of techniques (baseline), as well as the approach that was used to create the cost and time data set that populates the baseline. The resulting product, an online tool for measuring impact, and the aggregated cost and time data are available on the Open EI website for public access (http://en.openei.org).

  3. Manned Mars Mission on-orbit operations metric development. [astronaut and robot performance in spacecraft orbital assembly

    NASA Technical Reports Server (NTRS)

    Gorin, Barney F.

    1990-01-01

    This report describes the effort made to develop a scoring system, or metric, for comparing astronaut Extra Vehicular Activity with various robotic options for the on-orbit assembly of a very large spacecraft, such as would be needed for a Manned Mars Mission. All trade studies comparing competing approaches to a specific task involve the use of some consistent and unbiased method for assigning a score, or rating factor, to each concept under consideration. The relative scores generated by the selected rating system provide the tool for deciding which of the approaches is the most desirable.

  4. Metric System.

    ERIC Educational Resources Information Center

    Del Mod System, Dover, DE.

    This autoinstructional unit deals with the identification of units of measure in the metric system and the construction of relevant conversion tables. Students in middle school or in grade ten, taking a General Science course, can handle this learning activity. It is recommended that high, middle or low level achievers can use the program.…

  5. An introduction to benchmarking in healthcare.

    PubMed

    Benson, H R

    1994-01-01

    Benchmarking--the process of establishing a standard of excellence and comparing a business function or activity, a product, or an enterprise as a whole with that standard--will be used increasingly by healthcare institutions to reduce expenses and simultaneously improve product and service quality. As a component of total quality management, benchmarking is a continuous process by which an organization can measure and compare its own processes with those of organizations that are leaders in a particular area. Benchmarking should be viewed as a part of quality management programs, not as a replacement. There are four kinds of benchmarking: internal, competitive, functional and generic. With internal benchmarking, functions within an organization are compared with each other. Competitive benchmarking partners do business in the same market and provide a direct comparison of products or services. Functional and generic benchmarking are performed with organizations which may have a specific similar function, such as payroll or purchasing, but which otherwise are in a different business. Benchmarking must be a team process because the outcome will involve changing current practices, with effects felt throughout the organization. The team should include members who have subject knowledge; communications and computer proficiency; skills as facilitators and outside contacts; and sponsorship of senior management. Benchmarking requires quantitative measurement of the subject. The process or activity that you are attempting to benchmark will determine the types of measurements used. Benchmarking metrics usually can be classified in one of four categories: productivity, quality, time and cost-related. PMID:10139084

  6. Development of a strontium chronic effects benchmark for aquatic life in freshwater.

    PubMed

    McPherson, Cathy A; Lawrence, Gary S; Elphick, James R; Chapman, Peter M

    2014-11-01

    There are no national water-quality guidelines for strontium for the protection of freshwater aquatic life in North America or elsewhere. Available data on the acute and chronic toxicity of strontium to freshwater aquatic life were compiled and reviewed. Acute toxicity was reported to occur at concentrations ranging from 75 mg/L to 15 000 mg/L. The majority of chronic effects occurred at concentrations above 11 mg/L; however, calculation of a representative benchmark was confounded by results from 4 studies indicating that chronic effects occurred at lower concentrations than all other studies, in 2 cases below background concentrations reported for US and European streams. Two of these studies, including 1 reporting effects below background concentrations, were repeated and found not to be reproducible; chronic effects occurred at considerably higher strontium concentrations than in the original studies. Studies with narrow-mouthed toad and goldfish were not repeated; both studies reported chronic effects below background concentrations, and both studies had been conducted by the authors of 1 of the 2 studies that were repeated and shown to be nonreproducible. Studies by these authors (3 of the 4 confounding studies), conducted over 30 yr ago, lacked detail in reporting of methods and results. It is thus likely that repeating the toad and goldfish studies would also have resulted in a higher strontium effects concentration. A strontium chronic effects benchmark of 10.7 mg/L that incorporates the results of additional testing summarized in the present study is proposed for freshwater environments. PMID:25051924

  7. The development and application of composite complexity models and a relative complexity metric in a software maintenance environment

    NASA Technical Reports Server (NTRS)

    Hops, J. M.; Sherif, J. S.

    1994-01-01

    A great deal of effort is now being devoted to the study, analysis, prediction, and minimization of software maintenance expected cost, long before software is delivered to users or customers. It has been estimated that, on the average, the effort spent on software maintenance is as costly as the effort spent on all other software costs. Software design methods should be the starting point to aid in alleviating the problems of software maintenance complexity and high costs. Two aspects of maintenance deserve attention: (1) protocols for locating and rectifying defects, and for ensuring that no new defects are introduced in the development phase of the software process; and (2) protocols for modification, enhancement, and upgrading. This article focuses primarily on the second aspect, the development of protocols to help increase the quality and reduce the costs associated with modifications, enhancements, and upgrades of existing software. This study developed parsimonious models and a relative complexity metric for complexity measurement of software that were used to rank the modules in the system relative to one another. Some success was achieved in using the models and the relative metric to identify maintenance-prone modules.
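
    One way to realize a relative complexity metric of this kind is sketched below (Python): several per-module complexity measures are standardized and combined into a single composite score used to rank modules. The equal weighting and z-scoring are assumed simplifications, not the composite models actually developed in the article.

      import numpy as np

      def rank_by_relative_complexity(metric_matrix):
          """Rank modules from most to least maintenance-prone.

          metric_matrix: (n_modules, n_metrics) array, e.g. columns for size,
          cyclomatic complexity and fan-out (illustrative choice of metrics).
          """
          X = np.asarray(metric_matrix, dtype=float)
          z = (X - X.mean(axis=0)) / X.std(axis=0)   # standardize each metric
          composite = z.mean(axis=1)                 # equal-weight composite score
          return np.argsort(-composite)              # module indices, highest complexity first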

  8. The Development and Application of Composite Complexity Models and a Relative Complexity Metric in a Software Maintenance Environment

    NASA Astrophysics Data System (ADS)

    Hops, J. M.; Sherif, J. S.

    1994-01-01

    A great deal of effort is now being devoted to the study, analysis, prediction, and minimization of software maintenance expected cost, long before software is delivered to users or customers. It has been estimated that, on the average, the effort spent on software maintenance is as costly as the effort spent on all other software costs. Software design methods should be the starting point to aid in alleviating the problems of software maintenance complexity and high costs. Two aspects of maintenance deserve attention: (1) protocols for locating and rectifying defects, and for ensuring that no new defects are introduced in the development phase of the software process, and (2) protocols for modification, enhancement, and upgrading. This article focuses primarily on the second aspect, the development of protocols to help increase the quality and reduce the costs associated with modifications, enhancements, and upgrades of existing software. This study developed parsimonious models and a relative complexity metric for complexity measurement of software that were used to rank the modules in the system relative to one another. Some success was achieved in using the models and the relative metric to identify maintenance-prone modules.

  9. Optimization of Deep Drilling Performance - Development and Benchmark Testing of Advanced Diamond Product Drill Bits & HP/HT Fluids to Significantly Improve Rates of Penetration

    SciTech Connect

    Alan Black; Arnis Judzis

    2005-09-30

    This document details the progress to date on the OPTIMIZATION OF DEEP DRILLING PERFORMANCE--DEVELOPMENT AND BENCHMARK TESTING OF ADVANCED DIAMOND PRODUCT DRILL BITS AND HP/HT FLUIDS TO SIGNIFICANTLY IMPROVE RATES OF PENETRATION contract for the year starting October 2004 through September 2005. The industry cost shared program aims to benchmark drilling rates of penetration in selected simulated deep formations and to significantly improve ROP through a team development of aggressive diamond product drill bit--fluid system technologies. Overall the objectives are as follows: Phase 1--Benchmark ''best in class'' diamond and other product drilling bits and fluids and develop concepts for a next level of deep drilling performance; Phase 2--Develop advanced smart bit-fluid prototypes and test at large scale; and Phase 3--Field trial smart bit--fluid concepts, modify as necessary and commercialize products. As of report date, TerraTek has concluded all Phase 1 testing and is planning Phase 2 development.

  10. Benchmarking--Measuring and Comparing for Continuous Improvement.

    ERIC Educational Resources Information Center

    Henczel, Sue

    2002-01-01

    Discussion of benchmarking focuses on the use of internal and external benchmarking by special librarians. Highlights include defining types of benchmarking; historical development; benefits, including efficiency, improved performance, increased competitiveness, and better decision making; problems, including inappropriate adaptation; developing a…

  11. Development and comparison of weighting metrics for probabilistic climate change projections of Mediterranean precipitation

    NASA Astrophysics Data System (ADS)

    Kaspar-Ott, Irena; Hertig, Elke; Pollinger, Felix; Ring, Christoph; Paeth, Heiko; Jacobeit, Jucundus

    2016-04-01

    Climate protection and adaptive measures require reliable estimates of future climate change. Coupled global circulation models are still the most appropriate tool. However, the climate projections of individual models differ considerably, particularly at the regional scale and with respect to certain climate variables such as precipitation. Significant uncertainties also arise on the part of climate impact research. The model differences result from unknown initial conditions, different resolutions and driving mechanisms, different model parameterizations and emission scenarios. It is very challenging to determine which model simulates future climate conditions most accurately. By combining results from all important model runs into probability density functions, the probabilities of exceeding certain thresholds of climate change can be determined. The aim of this study is to derive such probabilistic estimates of future precipitation changes in the Mediterranean region for the multi-model ensemble from CMIP3 and CMIP5. The Mediterranean region represents a so-called hot spot of climate change. The analyses are carried out for the meteorological seasons in eight Mediterranean sub-regions, based on the results of principal component analyses. The methodologically innovative aspect refers mainly to the comparison of different metrics to derive model weights, such as Bayesian statistics, regression models, spatial-temporal filtering, the fingerprinting method and quality criteria for the simulated large-scale circulation. The latter describe the ability of the models to simulate the North Atlantic Oscillation, the East Atlantic pattern, the East Atlantic/West Russia pattern and the Scandinavia pattern, as they are the most important large-scale atmospheric drivers for Mediterranean precipitation. The comparison of observed atmospheric patterns with the modeled patterns leads to specific model weights. They are checked for their temporal consistency in the 20th
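
    One of the simpler weighting metrics in this family can be sketched as follows (Python): each model receives a weight derived from its correlation with the observed circulation patterns, and the weights then enter a weighted quantile (or density estimate) of the projected precipitation changes. The exponential weight form and the reduction to a single correlation per model are assumptions for illustration; the study compares several alternative weighting metrics.

      import numpy as np

      def circulation_weights(pattern_correlations):
          """Normalized model weights from mean correlations with the observed
          NAO / EA / EA-WR / SCA patterns (exponential form is an assumption)."""
          w = np.exp(np.asarray(pattern_correlations, dtype=float))
          return w / w.sum()

      def weighted_quantile(changes, weights, q=0.5):
          """Weighted quantile of projected precipitation changes across models."""
          changes = np.asarray(changes, dtype=float)
          order = np.argsort(changes)
          cum_w = np.cumsum(np.asarray(weights, dtype=float)[order])
          return changes[order][np.searchsorted(cum_w, q * cum_w[-1])]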

  12. Development of a benchmark factor to detect wrinkles in bending parts

    NASA Astrophysics Data System (ADS)

    Engel, Bernd; Zehner, Bernd-Uwe; Mathes, Christian; Kuhnhen, Christopher

    2013-12-01

    The rotary draw bending process is particularly suited to bending parts with small bending radii. Because the forming zone is supported during the bending process, semi-finished products with small wall thicknesses can be bent. One typical quality characteristic is the emergence of corrugations and wrinkles at the inside arc. At present, the standard for evaluating wrinkles is insufficient: the wrinkle distribution along the longitudinal axis of the tube is reduced to an average value [1], and the individual wrinkles themselves are not evaluated. This lack of an adequate basis of assessment causes coordination problems between customers and suppliers, because the geometric deviations at the inside arc cannot be evaluated quantitatively. The benchmark factor for the inside arc presented in this article is an approach to holistically evaluating the geometric deviations at the inside arc. Geometric deviations are classified according to the area of the geometric characteristics and the respective flank angles.

  13. Metrics for Occupations. Information Series No. 118.

    ERIC Educational Resources Information Center

    Peterson, John C.

    The metric system is discussed in this information analysis paper with regard to its history, a rationale for the United States' adoption of the metric system, a brief overview of the basic units of the metric system, examples of how the metric system will be used in different occupations, and recommendations for research and development. The…

  14. Development of a total dissolved solids (TDS) chronic effects benchmark for a northern Canadian lake.

    PubMed

    Chapman, Peter M; McPherson, Cathy A

    2016-04-01

    Laboratory chronic toxicity tests with plankton, benthos, and fish early life stages were conducted with total dissolved solids (TDS) at an ionic composition specific to Snap Lake (Northwest Territories, Canada), which receives treated effluent from the Snap Lake Diamond Mine. Snap Lake TDS composition has remained consistent from 2007 to 2014 and is expected to remain unchanged through the life of the mine: Cl (45%-47%), Ca (20%-21%), Na (10%-11%), sulfate (9%); carbonate (5%-7%), nitrate (4%), Mg (2%-3%), and minor contributions from K and fluoride. The TDS concentrations that resulted in negligible effects (i.e., 10% or 20% effect concentrations) to taxa representative of resident biota ranged from greater than 1100 to greater than 2200 mg/L, with the exception of a 21% effect concentration of 990 mg/L for 1 of 2 early life stage fish dry fertilization tests (wet fertilization results were >1480 mg/L). A conservative, site-specific, chronic effects benchmark for Snap Lake TDS of 1000 mg/L was derived, below the lowest negligible effect concentration for the most sensitive resident taxon tested, the cladoceran, Daphnia magna (>1100 mg/L). Cladocerans typically only constitute a few percent of the zooplankton community and biomass in Snap Lake; other plankton effect concentrations ranged from greater than 1330 to greater than 1510 mg/L. Chironomids, representative of the lake benthos, were not affected by greater than 1380 mg/L TDS. Early life stage tests with 3 fish species resulted in 10% to 20% effect concentrations ranging from greater than 1410 to greater than 2200 mg/L. The testing undertaken is generally applicable to northern freshwaters, and the concept can readily be adapted to other freshwaters either for TDS where ionic composition does not change or for major ionic components, where TDS composition does change. PMID:26174095

  15. PNNL Information Technology Benchmarking

    SciTech Connect

    DD Hostetler

    1999-09-08

    Benchmarking is a methodology for searching out industry best practices that lead to superior performance. It is exchanging information, not just with any organization, but with organizations known to be the best within PNNL, in industry, or in dissimilar industries with equivalent functions. It is used as a continuous improvement tool for business and technical processes, products, and services. Information technology--comprising all computer and electronic communication products and services--underpins the development and/or delivery of many PNNL products and services. This document describes the Pacific Northwest National Laboratory's (PNNL's) approach to information technology (IT) benchmarking. The purpose is to engage other organizations in the collaborative process of benchmarking in order to improve the value of IT services provided to customers. The document's intended audience consists of other US Department of Energy (DOE) national laboratories and their IT staff. Although the individual participants must define the scope of collaborative benchmarking, an outline of IT service areas for possible benchmarking is described.

  16. Development of a chronic noncancer oral reference dose and drinking water screening level for sulfolane using benchmark dose modeling.

    PubMed

    Thompson, Chad M; Gaylor, David W; Tachovsky, J Andrew; Perry, Camarie; Carakostas, Michael C; Haws, Laurie C

    2013-12-01

    Sulfolane is a widely used industrial solvent that is often used for gas treatment (sour gas sweetening; hydrogen sulfide removal from shale and coal processes, etc.), and in the manufacture of polymers and electronics, and may be found in pharmaceuticals as a residual solvent used in the manufacturing processes. Sulfolane is considered a high production volume chemical with worldwide production around 18 000-36 000 tons per year. Given that sulfolane has been detected as a contaminant in groundwater, an important potential route of exposure is tap water ingestion. Because there are currently no federal drinking water standards for sulfolane in the USA, we developed a noncancer oral reference dose (RfD) based on benchmark dose modeling, as well as a tap water screening value that is protective of ingestion. Review of the available literature suggests that sulfolane is not likely to be mutagenic, clastogenic or carcinogenic, or pose reproductive or developmental health risks except perhaps at very high exposure concentrations. RfD values derived using benchmark dose modeling were 0.01-0.04 mg kg⁻¹ per day, although modeling of developmental endpoints resulted in higher values, approximately 0.4 mg kg⁻¹ per day. The lowest, most conservative, RfD of 0.01 mg kg⁻¹ per day was based on reduced white blood cell counts in female rats. This RfD was used to develop a tap water screening level that is protective of ingestion, viz. 365 µg l⁻¹. It is anticipated that these values, along with the hazard identification and dose-response modeling described herein, should be informative for risk assessors and regulators interested in setting health-protective drinking water guideline values for sulfolane. PMID:22936336
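
    The final step, converting an oral RfD into a drinking water screening level, follows a standard exposure equation; the Python sketch below uses generic default exposure parameters, which are not necessarily the ones behind the 365 µg l⁻¹ value reported above.

      def tap_water_screening_level_ug_per_l(rfd_mg_per_kg_day,
                                             body_weight_kg=70.0,
                                             water_intake_l_per_day=2.0,
                                             relative_source_contribution=1.0):
          """Screening level (µg/L) = RfD * body weight * RSC / daily water intake."""
          mg_per_l = (rfd_mg_per_kg_day * body_weight_kg *
                      relative_source_contribution) / water_intake_l_per_day
          return mg_per_l * 1000.0

      # with the RfD of 0.01 mg/kg per day and these default parameters: 350 µg/L
      print(tap_water_screening_level_ug_per_l(0.01))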

  17. Catchment controls on water temperature and the development of simple metrics to inform riparian zone management

    NASA Astrophysics Data System (ADS)

    Johnson, Matthew; Wilby, Robert

    2015-04-01

    of thermal refuge could be important in the context of future climate change, potentially maintaining populations of animals excluded from other parts of the river during hot summer months. International management strategies to mitigate rising temperatures tend to focus on the protection, enhancement or creation of riparian shade. Simple metrics derived from catchment landscape models, the heat capacity of water, and modelled solar radiation receipt, suggest that approximately 1 km of deep riparian shading is necessary to offset a 1° C rise in temperature in the monitored catchments. A similar value is likely to be obtained for similar sized rivers at similar latitudes. Trees would take 20 years to attain sufficient height to shade the necessary solar angles. However, 1 km of deep riparian shade will have substantial impacts on the hydrological and geomorphological functioning of the river, beyond simply altering the thermal regime. Consequently, successful management of rising water temperature in rivers will require catchment scale consideration, as part of an integrated management plan.
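
    The roughly 1 km-per-1 °C figure follows from a back-of-envelope energy balance between the solar load intercepted by the shade and the heat capacity of the flowing water; a Python sketch with illustrative (not catchment-specific) inputs is given below.

      RHO_WATER = 1000.0   # kg m^-3
      CP_WATER = 4184.0    # J kg^-1 K^-1

      def shaded_length_m(delta_t_c, blocked_radiation_w_m2, channel_width_m, discharge_m3_s):
          """Length of fully shaded channel needed to offset a temperature rise of delta_t_c."""
          heat_removed_per_m = blocked_radiation_w_m2 * channel_width_m   # W per metre of reach
          thermal_inertia = RHO_WATER * CP_WATER * discharge_m3_s         # J s^-1 K^-1
          return delta_t_c * thermal_inertia / heat_removed_per_m

      # e.g. a 5 m wide stream carrying 0.5 m^3/s with 400 W/m^2 of radiation blocked
      print(shaded_length_m(1.0, 400.0, 5.0, 0.5))   # ~1050 m, i.e. about 1 km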

  18. Ordinal Distance Metric Learning for Image Ranking.

    PubMed

    Li, Changsheng; Liu, Qingshan; Liu, Jing; Lu, Hanqing

    2015-07-01

    Recently, distance metric learning (DML) has attracted much attention in image retrieval, but most previous methods only work for image classification and clustering tasks. In this brief, we focus on designing ordinal DML algorithms for image ranking tasks, by which the rank levels among the images can be well measured. We first present a linear ordinal Mahalanobis DML model that tries to preserve both the local geometry information and the ordinal relationship of the data. Then, we develop a nonlinear DML method by kernelizing the above model, to account for real-world image data with nonlinear structures. To further improve the ranking performance, we finally derive a multiple kernel DML approach, inspired by the idea of multiple-kernel learning, that performs different kernel operators on different kinds of image features. Extensive experiments on four benchmarks demonstrate the power of the proposed algorithms against some related state-of-the-art methods. PMID:25163071
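
    The building block shared by the linear, kernelized and multiple-kernel variants is a Mahalanobis-type distance under a learned positive semi-definite matrix; a minimal sketch is shown below (learning the matrix from ordinal rank constraints, which is the paper's contribution, is not reproduced here).

      import numpy as np

      def mahalanobis_distance(x, y, M):
          """Distance between feature vectors x and y under a learned PSD matrix M."""
          d = np.asarray(x, dtype=float) - np.asarray(y, dtype=float)
          return float(np.sqrt(d @ M @ d))

      # with M = I this reduces to the ordinary Euclidean distance
      print(mahalanobis_distance([1.0, 2.0], [2.0, 0.0], np.eye(2)))   # sqrt(5)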

  19. Engineering performance metrics

    NASA Astrophysics Data System (ADS)

    Delozier, R.; Snyder, N.

    1993-03-01

    Implementation of a Total Quality Management (TQM) approach to engineering work required the development of a system of metrics which would serve as a meaningful management tool for evaluating effectiveness in accomplishing project objectives and in achieving improved customer satisfaction. A team effort was chartered with the goal of developing a system of engineering performance metrics which would measure customer satisfaction, quality, cost effectiveness, and timeliness. The approach to developing this system involved normal systems design phases, including conceptual design, detailed design, implementation, and integration. The lessons learned from this effort will be explored in this paper. These lessons learned may provide a starting point for other large engineering organizations seeking to institute a performance measurement system for accomplishing project objectives and achieving improved customer satisfaction. To facilitate this effort, a team was chartered to assist in the development of the metrics system. This team, consisting of customers and Engineering staff members, was utilized to ensure that the needs and views of the customers were considered in the development of performance measurements. The development of a system of metrics is no different from the development of any type of system. It includes the steps of defining performance measurement requirements, measurement process conceptual design, performance measurement and reporting system detailed design, and system implementation and integration.

  20. Engineering performance metrics

    SciTech Connect

    DeLozier, R. ); Snyder, N. )

    1993-03-31

    Implementation of a Total Quality Management (TQM) approach to engineering work required the development of a system of metrics which would serve as a meaningful management tool for evaluating effectiveness in accomplishing project objectives and in achieving improved customer satisfaction. A team effort was chartered with the goal of developing a system of engineering performance metrics which would measure customer satisfaction, quality, cost effectiveness, and timeliness. The approach to developing this system involved normal systems design phases, including conceptual design, detailed design, implementation, and integration. The lessons learned from this effort will be explored in this paper. These lessons learned may provide a starting point for other large engineering organizations seeking to institute a performance measurement system for accomplishing project objectives and achieving improved customer satisfaction. To facilitate this effort, a team was chartered to assist in the development of the metrics system. This team, consisting of customers and Engineering staff members, was utilized to ensure that the needs and views of the customers were considered in the development of performance measurements. The development of a system of metrics is no different from the development of any type of system. It includes the steps of defining performance measurement requirements, measurement process conceptual design, performance measurement and reporting system detailed design, and system implementation and integration.

  1. An analytical model of the HINT performance metric

    SciTech Connect

    Snell, Q.O.; Gustafson, J.L.

    1996-10-01

    The HINT benchmark was developed to provide a broad-spectrum metric for computers and to measure performance over the full range of memory sizes and time scales. We have extended our understanding of why HINT performance curves look the way they do and can now predict the curves using an analytical model based on simple hardware specifications as input parameters. Conversely, by fitting the experimental curves with the analytical model, hardware specifications such as memory performance can be inferred to provide insight into the nature of a given computer system.

  2. Changes in Benchmarked Training.

    ERIC Educational Resources Information Center

    Bassi, Laurie J.; Cheney, Scott

    1996-01-01

    Comparisons of the training practices of large companies confirm that the delivery and financing of training is changing rapidly. Companies in the American Society for Training and Development Benchmarking Forum are delivering less training with permanent staff and more with strategic use of technology, contract staff, and external providers,…

  3. Metricize Yourself

    NASA Astrophysics Data System (ADS)

    Falbo, Maria K.

    2006-12-01

    In lab and homework, students should check whether or not their quantitative answers to physics questions make sense in the context of the problem. Unfortunately, it is still the case in the US that many students don’t have a “feel” for °C, kg, cm, liters, or newtons. This problem contributes to the inability of students to check answers. It is also the case that just “going over” the tables in the text can be boring and dry. In this talk I’ll demonstrate some classroom activities that can be used throughout the year to give students a metric context in which quantitative answers can be interpreted.

  4. Virtual machine performance benchmarking.

    PubMed

    Langer, Steve G; French, Todd

    2011-10-01

    The attractions of virtual computing are many: reduced costs, reduced resources and simplified maintenance. Any one of these would be compelling for a medical imaging professional attempting to support a complex practice on limited resources in an era of ever tightened reimbursement. In particular, the ability to run multiple operating systems optimized for different tasks (computational image processing on Linux versus office tasks on Microsoft operating systems) on a single physical machine is compelling. However, there are also potential drawbacks. High performance requirements need to be carefully considered if they are to be executed in an environment where the running software has to execute through multiple layers of device drivers before reaching the real disk or network interface. Our lab has attempted to gain insight into the impact of virtualization on performance by benchmarking the following metrics on both physical and virtual platforms: local memory and disk bandwidth, network bandwidth, and integer and floating point performance. The virtual performance metrics are compared to baseline performance on "bare metal." The results are complex, and indeed somewhat surprising. PMID:21207096

  5. Are We Doing Ok? Developing a Generic Process to Benchmark Career Services in Educational Institutions

    ERIC Educational Resources Information Center

    McCowan, Col; McKenzie, Malcolm

    2011-01-01

    In 2007 the Career Industry Council of Australia developed the Guiding Principles for Career Development Services and Career Information Products as one part of its strategy to produce a national quality framework for career development activities in Australia. An Australian university career service undertook an assessment process against these…

  6. Benchmarking Professional Development Practices across Youth-Serving Organizations: Implications for Extension

    ERIC Educational Resources Information Center

    Garst, Barry A.; Baughman, Sarah; Franz, Nancy

    2014-01-01

    Examining traditional and contemporary professional development practices of youth-serving organizations can inform practices across Extension, particularly in light of the barriers that have been noted for effectively developing the professional competencies of Extension educators. With professional development systems changing quickly,…

  7. Geothermal Resource Reporting Metric (GRRM) Developed for the U.S. Department of Energy's Geothermal Technologies Office

    SciTech Connect

    Young, Katherine R.; Wall, Anna M.; Dobson, Patrick F.

    2015-09-02

    This paper reviews a methodology being developed for reporting geothermal resources and project progress. The goal is to provide the U.S. Department of Energy's (DOE) Geothermal Technologies Office (GTO) with a consistent and comprehensible means of evaluating the impacts of its funding programs. This framework will allow the GTO to assess the effectiveness of research, development, and deployment (RD&D) funding, prioritize funding requests, and demonstrate the value of RD&D programs to the U.S. Congress and the public. Standards and reporting codes used in other countries and energy sectors provide guidance to develop the relevant geothermal methodology, but industry feedback and our analysis suggest that the existing models have drawbacks that should be addressed. In order to formulate a comprehensive metric for use by the GTO, we analyzed existing resource assessments and reporting methodologies for the geothermal, mining, and oil and gas industries, and sought input from industry, investors, academia, national labs, and other government agencies. Using this background research as a guide, we describe a methodology for evaluating and reporting on GTO funding according to resource grade (geological, technical and socio-economic) and project progress. This methodology would allow GTO to target funding, measure impact by monitoring the progression of projects, or assess geological potential of targeted areas for development.

  8. Percentile-Based Journal Impact Factors: A Neglected Collection Development Metric

    ERIC Educational Resources Information Center

    Wagner, A. Ben

    2009-01-01

    Various normalization techniques to transform journal impact factors (JIFs) into a standard scale or range of values have been reported a number of times in the literature, but have seldom been part of collection development librarians' tool kits. In this paper, JIFs as reported in the Journal Citation Reports (JCR) database are converted to…

  9. Developing Composite Metrics of Teaching Practice for Mediator Analysis of Program Impact

    ERIC Educational Resources Information Center

    Lazarev, Val; Newman, Denis

    2014-01-01

    Efficacy studies of educational programs often involve mediator analyses aimed at testing empirically appropriate theories of action. In particular, in the studies of professional development programs, the intervention targets primarily teachers' pedagogical skills and content knowledge, while the ultimate outcome is the student achievement…

  10. Developing an Aggregate Metric of Teaching Practice for Use in Mediator Analysis

    ERIC Educational Resources Information Center

    Lazarev, Valeriy; Newman, Denis; Grossman, Pam

    2013-01-01

    Efficacy studies of educational programs often involve mediator analyses aimed at testing empirically appropriate theories of action. In particular, in the studies of professional teacher development programs, the intervention targets presumably teacher performance while the ultimate outcome is the student achievement measured by a standardized…

  11. Development and Calibration of an Item Bank for PE Metrics Assessments: Standard 1

    ERIC Educational Resources Information Center

    Zhu, Weimo; Fox, Connie; Park, Youngsik; Fisette, Jennifer L.; Dyson, Ben; Graber, Kim C.; Avery, Marybell; Franck, Marian; Placek, Judith H.; Rink, Judy; Raynes, De

    2011-01-01

    The purpose of this study was to develop and calibrate an assessment system, or bank, using the latest measurement theories and methods to promote valid and reliable student assessment in physical education. Using an anchor-test equating design, a total of 30 items or assessments were administered to 5,021 (2,568 boys and 2,453 girls) students in…

  12. International Benchmarking: State and National Education Performance Standards

    ERIC Educational Resources Information Center

    Phillips, Gary W.

    2014-01-01

    This report uses international benchmarking as a common metric to examine and compare what students are expected to learn in some states with what students are expected to learn in other states. The performance standards in each state were compared with the international benchmarks used in two international assessments, and it was assumed that…

  13. Benchmarking local healthcare-associated infections: available benchmarks and interpretation challenges.

    PubMed

    El-Saed, Aiman; Balkhy, Hanan H; Weber, David J

    2013-10-01

    Growing numbers of healthcare facilities are routinely collecting standardized data on healthcare-associated infection (HAI), which can be used not only to track internal performance but also to compare local data to national and international benchmarks. Benchmarking overall (crude) HAI surveillance metrics without accounting for or adjusting for potential confounders can result in misleading conclusions. Methods commonly used to provide risk-adjusted metrics include multivariate logistic regression analysis, stratification, indirect standardization, and restriction. The characteristics of recognized benchmarks worldwide, including their advantages and limitations, are described. The choice of the right benchmark for data from the Gulf Cooperation Council (GCC) states is challenging. The chosen benchmark should have similar data collection and presentation methods. Additionally, differences in surveillance environments, including regulations, should be taken into consideration when selecting such a benchmark. The GCC Center for Infection Control has taken some steps to unify HAI surveillance systems in the region. GCC hospitals still need to overcome legislative and logistic difficulties in sharing data to create their own benchmark. The availability of a regional GCC benchmark may better enable health care workers and researchers to obtain more accurate and realistic comparisons. PMID:23999329
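    One of the risk-adjustment methods named above, indirect standardization, is often summarized as a standardized infection ratio (observed infections divided by the number expected from a benchmark's stratum-specific rates). The sketch below uses invented device-day counts and benchmark rates purely for illustration.

        # Minimal sketch of indirect standardization for HAI benchmarking.
        # Device-days and benchmark rates below are invented for illustration.
        local_device_days = {"medical_icu": 1200, "surgical_icu": 800}
        observed_infections = 6

        # Benchmark (e.g., national) infection rates per 1000 device-days, by stratum.
        benchmark_rate_per_1000 = {"medical_icu": 2.0, "surgical_icu": 3.0}

        expected = sum(days * benchmark_rate_per_1000[unit] / 1000.0
                       for unit, days in local_device_days.items())
        sir = observed_infections / expected
        print(f"expected = {expected:.1f}, SIR = {sir:.2f}")  # SIR > 1 means worse than benchmark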

  14. Beyond Human Capital Development: Balanced Safeguards Workforce Metrics and the Next Generation Safeguards Workforce

    SciTech Connect

    Burbank, Roberta L.; Frazar, Sarah L.; Gitau, Ernest TN; Shergur, Jason M.; Scholz, Melissa A.; Undem, Halvor A.

    2014-03-28

    Since its establishment in 2008, the Next Generation Safeguards Initiative (NGSI) has achieved a number of objectives under its five pillars: concepts and approaches, policy development and outreach, international nuclear safeguards engagement, technology development, and human capital development (HCD). As a result of these efforts, safeguards has become much more visible as a critical U.S. national security interest across the U.S. Department of Energy (DOE) complex. However, limited budgets have since created challenges in a number of areas. Arguably, one of the more serious challenges involves NGSI’s ability to integrate entry-level staff into safeguards projects. Laissez-faire management of this issue across the complex can lead to wasteful project implementation and endanger NGSI’s long-term sustainability. The authors provide a quantitative analysis of this problem, focusing on the demographics of the current safeguards workforce and compounding pressures to operate cost-effectively, transfer knowledge to the next generation of safeguards professionals, and sustain NGSI safeguards investments.

  15. Measuring Impact of U.S. DOE Geothermal Technologies Office Funding: Considerations for Development of a Geothermal Resource Reporting Metric

    SciTech Connect

    Young, Katherine R.; Wall, Anna M.; Dobson, Patrick F.; Bennett, Mitchell; Segneri, Brittany

    2015-04-25

    This paper reviews existing methodologies and reporting codes used to describe extracted energy resources such as coal and oil and describes a comparable proposed methodology to describe geothermal resources. The goal is to provide the U.S. Department of Energy's (DOE) Geothermal Technologies Office (GTO) with a consistent and comprehensible means of assessing the impacts of its funding programs. This framework will allow GTO to assess the effectiveness of research, development, and deployment (RD&D) funding, prioritize funding requests, and demonstrate the value of RD&D programs to the U.S. Congress. Standards and reporting codes used in other countries and energy sectors provide guidance to inform development of a geothermal methodology, but industry feedback and our analysis suggest that the existing models have drawbacks that should be addressed. In order to formulate a comprehensive metric for use by GTO, we analyzed existing resource assessments and reporting methodologies for the geothermal, mining, and oil and gas industries, and we sought input from industry, investors, academia, national labs, and other government agencies. Using this background research as a guide, we describe a methodology for assessing and reporting on GTO funding according to resource knowledge and resource grade (or quality). This methodology would allow GTO to target funding or measure impact by the progression of projects or the geological potential for development.

  16. Software Quality Assurance Metrics

    NASA Technical Reports Server (NTRS)

    McRae, Kalindra A.

    2004-01-01

    Software Quality Assurance (SQA) is a planned and systematic set of activities that ensures that software life cycle processes and products conform to requirements, standards, and procedures. In software development, software quality means meeting requirements and a degree of excellence and refinement of a project or product. Software Quality is a set of attributes of a software product by which its quality is described and evaluated. The set of attributes includes functionality, reliability, usability, efficiency, maintainability, and portability. Software Metrics help us understand the technical process that is used to develop a product. The process is measured to improve it and the product is measured to increase quality throughout the life cycle of software. Software Metrics are measurements of the quality of software. Software is measured to indicate the quality of the product, to assess the productivity of the people who produce the product, to assess the benefits derived from new software engineering methods and tools, to form a baseline for estimation, and to help justify requests for new tools or additional training. Any part of the software development can be measured. If Software Metrics are implemented in software development, they can save time and money and allow the organization to identify the causes of defects that have the greatest effect on software development. In the summer of 2004, I worked with Cynthia Calhoun and Frank Robinson in the Software Assurance/Risk Management department. My task was to research, collect, compile, and analyze SQA Metrics that had been used in other projects but were not currently being used by the SA team, and report them to the Software Assurance team to see if any metrics could be implemented in their software assurance life cycle process.

  17. Testing, Requirements, and Metrics

    NASA Technical Reports Server (NTRS)

    Rosenberg, Linda; Hyatt, Larry; Hammer, Theodore F.; Huffman, Lenore; Wilson, William

    1998-01-01

    The criticality of correct, complete, testable requirements is a fundamental tenet of software engineering. Also critical is complete requirements-based testing of the final product. Modern tools for managing requirements allow new metrics to be used in support of both of these critical processes. Using these tools, potential problems with the quality of the requirements and the test plan can be identified early in the life cycle. Some of these quality factors include: ambiguous or incomplete requirements, poorly designed requirements databases, excessive or insufficient test cases, and incomplete linkage of tests to requirements. This paper discusses how metrics can be used to evaluate the quality of the requirements and tests to avoid problems later. Requirements management and requirements-based testing have always been critical in the implementation of high quality software systems. Recently, automated tools have become available to support requirements management. At NASA's Goddard Space Flight Center (GSFC), automated requirements management tools are being used on several large projects. The use of these tools opens the door to innovative uses of metrics in characterizing test plan quality and assessing overall testing risks. In support of these projects, the Software Assurance Technology Center (SATC) is working to develop and apply a metrics program that utilizes the information now available through the application of requirements management tools. Metrics based on this information provide real-time insight into the testing of requirements, and these metrics assist the Project Quality Office in its testing oversight role. This paper discusses three facets of the SATC's efforts to evaluate the quality of the requirements and test plan early in the life cycle, thus preventing costly errors and time delays later.

  18. Metrication: A Guide for Consumers.

    ERIC Educational Resources Information Center

    Consumer and Corporate Affairs Dept., Ottawa (Ontario).

    The widespread use of the metric system by most of the major industrial powers of the world has prompted the Canadian government to investigate and consider use of the system. This booklet was developed to aid the consuming public in Canada in gaining some knowledge of metrication and how its application would affect their present economy.…

  19. Development of an occult metric for common motor vehicle crash injuries - biomed 2013.

    PubMed

    Schoell, Samantha L; Weaver, Ashley A; Stitzel, Joel D

    2013-01-01

    Detection of occult injuries, which are not easily recognized and are life-threatening, in motor vehicle crashes (MVCs) is crucial in order to reduce fatalities. An Occult Injury Database (OID) was previously developed by the Center for Transportation Injury Research (CenTIR) using the National Automotive Sampling System Crashworthiness Data System (NASS-CDS) 1997-2001 which identified occult and non-occult head, thorax, and abdomen injuries. The objective of the current work was to develop an occult injury model based on underlying injury characteristics to derive an Occult Score for common MVC-induced injuries. A multiple logistic regression model was developed utilizing six injury parameters to generate a probability formula which assigned an Occult Score for each injury. The model was applied to a list of 240 injuries comprising the top 95 percent of injuries occurring in NASS-CDS 2000-2011. The parameters in the model included a continuous Cause MRR/year variable indicating the annual proportion of occupants sustaining a given injury whose cause of death was attributed to that injury. The categorical variables in the model were AIS 2-3 vs. 4-6, laceration, hemorrhage/hematoma, contusion, and intracranial. Results indicated that injuries with a low Cause MRR/year and AIS severity of 4-6 had an increased likelihood of being occult. In addition, the presence of a laceration, hemorrhage/hematoma, contusion, or intracranial injury also increased the likelihood of an injury being occult. The Occult Score ranges from zero to one with a threshold of 0.5 as the discriminator of an occult injury. Of the considered injuries, it was determined that 54% of head, 26% of thorax, and 23% of abdominal injuries were occult injuries. No occult injuries were identified in the face, spine, upper extremity, or lower extremity body regions. The Occult Score generated can be useful in advanced automatic crash notification research and for the detection of serious occult injuries in
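    A minimal sketch of the scoring step described above: a logistic model over the six injury parameters yields an Occult Score between zero and one, with 0.5 as the occult/non-occult threshold. The coefficients below are hypothetical placeholders, not the fitted values from the NASS-CDS analysis.

        import math

        def occult_score(cause_mrr_per_year, ais_4_to_6, laceration,
                         hemorrhage_hematoma, contusion, intracranial, coef):
            """Logistic model of the form P = 1 / (1 + exp(-(b0 + b.x)));
            the six predictors mirror those listed in the abstract."""
            x = [cause_mrr_per_year, ais_4_to_6, laceration,
                 hemorrhage_hematoma, contusion, intracranial]
            z = coef[0] + sum(b * xi for b, xi in zip(coef[1:], x))
            return 1.0 / (1.0 + math.exp(-z))

        # Hypothetical coefficients for illustration only (not the published fit).
        # The negative weight on Cause MRR/year reflects the abstract's finding that a
        # low Cause MRR/year increases the likelihood of an injury being occult.
        coef = [-1.5, -20.0, 1.2, 0.8, 0.9, 0.7, 1.1]
        score = occult_score(0.01, 1, 0, 1, 0, 1, coef)
        print(score, "-> occult" if score >= 0.5 else "-> non-occult")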

  20. Population health metrics: crucial inputs to the development of evidence for health policy

    PubMed Central

    Mathers, Colin D; Murray, Christopher JL; Ezzati, Majid; Gakidou, Emmanuela; Salomon, Joshua A; Stein, Claudia

    2003-01-01

    Valid, reliable and comparable measures of the health states of individuals and of the health status of populations are critical components of the evidence base for health policy. We need to develop population health measurement strategies that coherently address the relationships between epidemiological measures (such as risk exposures, incidence, and mortality rates) and multi-domain measures of population health status, while ensuring validity and cross-population comparability. Studies reporting on descriptive epidemiology of major diseases, injuries and risk factors, and on the measurement of health at the population level – either for monitoring trends in health levels or inequalities or for measuring broad outcomes of health systems and social interventions – are not well-represented in traditional epidemiology journals, which tend to concentrate on causal studies and on quasi-experimental design. In particular, key methodological issues relating to the clear conceptualisation of, and the validity and comparability of measures of population health are currently not addressed coherently by any discipline, and cross-disciplinary debate is fragmented and often conducted in mutually incomprehensible language or paradigms. Population health measurement potentially bridges a range of currently disjoint fields of inquiry relating to health: biology, demography, epidemiology, health economics, and broader social science disciplines relevant to assessment of health determinants, health state valuations and health inequalities. This new journal will focus on the importance of a population based approach to measurement as a way to characterize the complexity of people's health, the diseases and risks that affect it, its distribution, and its valuation, and will attempt to provide a forum for innovative work and debate that bridge the many fields of inquiry relevant to population health in order to contribute to the development of valid and comparable methods for

  1. Quality metrics for product defectiveness at KCD

    SciTech Connect

    Grice, J.V.

    1993-07-01

    Metrics are discussed for measuring and tracking product defectiveness at AlliedSignal Inc., Kansas City Division (KCD). Three new metrics, the metric (percent defective) that preceded the new metrics, and several alternatives are described. The new metrics, Percent Parts Accepted, Percent Parts Accepted Trouble Free, and Defects Per Million Observations, (denoted by PPA, PATF, and DPMO, respectively) were implemented for KCD-manufactured product and purchased material in November 1992. These metrics replace the percent defective metric that had been used for several years. The PPA and PATF metrics primarily measure quality performance while DPMO measures the effects of continuous improvement activities. The new metrics measure product quality in terms of product defectiveness observed only during the inspection process. The metrics were originally developed for purchased product and were adapted to manufactured product to provide a consistent set of metrics plant-wide. The new metrics provide a meaningful tool to measure the quantity of product defectiveness in terms of the customer's requirements and expectations for quality. Many valid metrics are available and all will have deficiencies. These three metrics are among the least sensitive to problems and are easily understood. They will serve as good management tools for KCD in the foreseeable future until new flexible data systems and reporting procedures can be implemented that can provide more detailed and accurate metric computations.
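    A minimal sketch of the three metrics named above. The exact KCD definitions are not spelled out in the abstract, so PPA and PATF are computed here as simple acceptance percentages and DPMO in the conventional way (defects per inspection observation, scaled to one million); all counts are invented.

        # Minimal sketch of the three acceptance/defect metrics named in the abstract.
        parts_inspected = 5000
        parts_accepted = 4925
        parts_accepted_trouble_free = 4880   # accepted with no defects noted at all
        defects_observed = 160
        observations = 250000                # total inspection observations (opportunities)

        ppa = 100.0 * parts_accepted / parts_inspected
        patf = 100.0 * parts_accepted_trouble_free / parts_inspected
        dpmo = 1e6 * defects_observed / observations

        print(f"PPA = {ppa:.1f}%, PATF = {patf:.1f}%, DPMO = {dpmo:.0f}")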

  2. Benchmark campaign and case study episode in central Europe for development and assessment of advanced GNSS tropospheric models and products

    NASA Astrophysics Data System (ADS)

    Douša, Jan; Dick, Galina; Kačmařík, Michal; Brožková, Radmila; Zus, Florian; Brenot, Hugues; Stoycheva, Anastasia; Möller, Gregor; Kaplon, Jan

    2016-07-01

    Initial objectives and design of the Benchmark campaign organized within the European COST Action ES1206 (2013-2017) are described in the paper. This campaign has aimed to support the development and validation of advanced Global Navigation Satellite System (GNSS) tropospheric products, in particular high-resolution and ultra-fast zenith total delays (ZTDs) and tropospheric gradients derived from a dense permanent network. A complex data set was collected for the 8-week period when several extreme heavy-precipitation episodes occurred in central Europe and caused severe river floods in this area. An initial processing of data sets from GNSS products and numerical weather models (NWMs) provided independently estimated reference parameters - zenith tropospheric delays and tropospheric horizontal gradients. Their provision gave an overview of the products' similarities and complementarities, and thus of the potential for exploiting their synergy more optimally in the future. Reference GNSS and NWM results were intercompared and visually analysed using animated maps. ZTDs from two reference GNSS solutions compared to the global ERA-Interim reanalysis showed an accuracy at the 10 mm level in terms of the root mean square (rms) with a negligible overall bias; comparisons to Global Forecast System (GFS) forecasts showed accuracy at the 12 mm level with an overall bias of -5 mm; and, finally, comparisons to the mesoscale ALADIN-CZ forecast showed accuracy at the 8 mm level with a negligible total bias. The comparison of horizontal tropospheric gradients from GNSS and NWM data demonstrated a very good agreement among independent solutions, with negligible biases and an accuracy of about 0.5 mm. Visual comparisons of maps of zenith wet delays and tropospheric horizontal gradients showed very promising results for future exploitation of advanced GNSS tropospheric products in meteorological applications, such as severe weather event monitoring and weather nowcasting.
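    The intercomparisons above are summarized by an overall bias and a root mean square of the GNSS-minus-NWM zenith delay differences. A minimal sketch of those two statistics, with invented ZTD values in millimetres, is given below.

        import numpy as np

        # Minimal sketch: bias and rms of zenith total delay differences between a
        # GNSS solution and an NWM-derived reference (values in mm, invented here).
        ztd_gnss = np.array([2310.0, 2295.5, 2302.1, 2288.9])
        ztd_nwm  = np.array([2305.2, 2298.0, 2297.6, 2293.1])

        diff = ztd_gnss - ztd_nwm
        bias = diff.mean()
        rms = np.sqrt((diff ** 2).mean())
        print(f"bias = {bias:.1f} mm, rms = {rms:.1f} mm")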

  3. NAS Parallel Benchmarks Results

    NASA Technical Reports Server (NTRS)

    Subhash, Saini; Bailey, David H.; Lasinski, T. A. (Technical Monitor)

    1995-01-01

    The NAS Parallel Benchmarks (NPB) were developed in 1991 at NASA Ames Research Center to study the performance of parallel supercomputers. The eight benchmark problems are specified in a pencil-and-paper fashion, i.e., the complete details of the problem to be solved are given in a technical document, and except for a few restrictions, benchmarkers are free to select the language constructs and implementation techniques best suited for a particular system. In this paper, we present new NPB performance results for the following systems: (a) Parallel-Vector Processors: Cray C90, Cray T90 and Fujitsu VPP500; (b) Highly Parallel Processors: Cray T3D, IBM SP2 and IBM SP-TN2 (Thin Nodes 2); (c) Symmetric Multiprocessing Processors: Convex Exemplar SPP1000, Cray J90, DEC AlphaServer 8400 5/300, and SGI Power Challenge XL. We also present sustained performance per dollar for the Class B LU, SP and BT benchmarks. Finally, we mention NAS's future plans for the NPB.

  4. Bioprospecting of Evaporative Lakes for Development of a Novel Paleo-aridity Metric

    NASA Astrophysics Data System (ADS)

    Finkelstein, D. B.; Snoeyenbos-West, O.; Pratt, L. M.

    2011-12-01

    %). Notably, even in deeper and wetter parts of the mat, these groups are abundant members of the microbial community (62%) suggesting their role as keystone taxa in this harsh habitat. Using our culture-independent phylogenetic data as a guide, we are now developing culturing methods to target and isolate these desiccation-tolerant microbes and their associated metabolites for extraction and further biogeochemical study. These data will have applicability as potential paleo-aridity indicators in the rock record.

  5. Using TRACI for Sustainability Metrics

    EPA Science Inventory

    TRACI, the Tool for the Reduction and Assessment of Chemical and other environmental Impacts, has been developed for sustainability metrics, life cycle impact assessment, and product and process design impact assessment for developing increasingly sustainable products, processes,...

  6. Brain development in rodents and humans: Identifying benchmarks of maturation and vulnerability to injury across species

    PubMed Central

    Semple, Bridgette D.; Blomgren, Klas; Gimlin, Kayleen; Ferriero, Donna M.; Noble-Haeusslein, Linda J.

    2013-01-01

    Hypoxic-ischemic and traumatic brain injuries are leading causes of long-term mortality and disability in infants and children. Although several preclinical models using rodents of different ages have been developed, species differences in the timing of key brain maturation events can render comparisons of vulnerability and regenerative capacities difficult to interpret. Traditional models of developmental brain injury have utilized rodents at postnatal day 7–10 as being roughly equivalent to a term human infant, based historically on the measurement of post-mortem brain weights during the 1970s. Here we will examine fundamental brain development processes that occur in both rodents and humans, to delineate a comparable time course of postnatal brain development across species. We consider the timing of neurogenesis, synaptogenesis, gliogenesis, oligodendrocyte maturation and age-dependent behaviors that coincide with developmentally regulated molecular and biochemical changes. In general, while the time scale is considerably different, the sequence of key events in brain maturation is largely consistent between humans and rodents. Further, there are distinct parallels in regional vulnerability as well as functional consequences in response to brain injuries. With a focus on developmental hypoxic-ischemic encephalopathy and traumatic brain injury, this review offers guidelines for researchers when considering the most appropriate rodent age for the developmental stage or process of interest to approximate human brain development. PMID:23583307

  7. The Development of the Children's Services Statistical Neighbour Benchmarking Model. Final Report

    ERIC Educational Resources Information Center

    Benton, Tom; Chamberlain, Tamsin; Wilson, Rebekah; Teeman, David

    2007-01-01

    In April 2006, the Department for Education and Skills (DfES) commissioned the National Foundation for Educational Research (NFER) to conduct an independent external review in order to develop a single "statistical neighbour" model. This single model aimed to combine the key elements of the different models currently available and be relevant to…

  8. THE NEW ENGLAND AIR QUALITY FORECASTING PILOT PROGRAM: DEVELOPMENT OF AN EVALUATION PROTOCOL AND PERFORMANCE BENCHMARK

    EPA Science Inventory

    The National Oceanic and Atmospheric Administration recently sponsored the New England Forecasting Pilot Program to serve as a "test bed" for chemical forecasting by providing all of the elements of a National Air Quality Forecasting System, including the development and implemen...

  9. Development of aquatic toxicity benchmarks for oil products using species sensitivity distributions

    EPA Science Inventory

    Determining the sensitivity of a diversity of species to spilled oil and chemically dispersed oil continues to be a significant challenge in spill response and impact assessment. We used standardized tests from the literature to develop species sensitivity distributions (SSDs) of...

  10. AR, HEA and AAS in Rural Development Projects--Benchmarking towards the Best Processes.

    ERIC Educational Resources Information Center

    Westermarck, Harri

    In most countries, agricultural research (AR), institutions of higher education in agriculture (HEA), and agricultural advisory services (AAS) function as separate agencies. So far, in most countries, AR, HEA, and AAS have not had a common vision for rural development. In Finland, the domination of agricultural production has led to a lack…

  11. 40 CFR 141.709 - Developing the disinfection profile and benchmark.

    Code of Federal Regulations, 2014 CFR

    2014-07-01

    ... determine the total log inactivation for Giardia lamblia and viruses. If systems monitor more frequently... must monitor weekly during the period of operation. Systems must determine log inactivation for Giardia... must develop a virus profile using the same monitoring data on which the Giardia lamblia profile...

  12. 40 CFR 141.709 - Developing the disinfection profile and benchmark.

    Code of Federal Regulations, 2012 CFR

    2012-07-01

    ... determine the total log inactivation for Giardia lamblia and viruses. If systems monitor more frequently... must monitor weekly during the period of operation. Systems must determine log inactivation for Giardia... must develop a virus profile using the same monitoring data on which the Giardia lamblia profile...

  13. 40 CFR 141.709 - Developing the disinfection profile and benchmark.

    Code of Federal Regulations, 2013 CFR

    2013-07-01

    ... determine the total log inactivation for Giardia lamblia and viruses. If systems monitor more frequently... must monitor weekly during the period of operation. Systems must determine log inactivation for Giardia... must develop a virus profile using the same monitoring data on which the Giardia lamblia profile...

  14. Fuel Cell Development for NASA's Human Exploration Program: Benchmarking with "The Hydrogen Economy"

    NASA Technical Reports Server (NTRS)

    Scott, John H.

    2007-01-01

    The theoretically high efficiency and low temperature operation of hydrogen-oxygen fuel cells has motivated them to be the subject of much study since their invention in the 19th Century, but their relatively high life cycle costs kept them as a "solution in search of a problem" for many years. The first problem for which fuel cells presented a truly cost effective solution was that of providing a power source for NASA's human spaceflight vehicles in the 1960s. NASA thus invested, and continues to invest, in the development of fuel cell power plants for this application. This development program continues to place its highest priorities on requirements for minimum system mass and maximum durability and reliability. These priorities drive fuel cell power plant design decisions at all levels, even that of catalyst support. However, since the mid-1990's, prospective environmental regulations have driven increased governmental and industrial interest in "green power" and the "Hydrogen Economy." This has in turn stimulated greatly increased investment in fuel cell development for a variety of commercial applications. This investment is bringing about notable advances in fuel cell technology, but, as these development efforts place their highest priority on requirements for minimum life cycle cost and field safety, these advances are yielding design solutions quite different at almost every level from those needed for spacecraft applications. This environment thus presents both opportunities and challenges for NASA's Human Exploration Program.

  15. Make It Metric.

    ERIC Educational Resources Information Center

    Camilli, Thomas

    Measurement is perhaps the most frequently used form of mathematics. This book presents activities for learning about the metric system designed for upper intermediate and junior high levels. Discussions include: why metrics, history of metrics, changing to a metric world, teaching tips, and formulas. Activities presented are: metrics all around…

  16. Action-Oriented Benchmarking: Concepts and Tools

    SciTech Connect

    California Energy Commission; Mathew, Paul; Mills, Evan; Mathew, Paul; Piette, Mary Ann; Bourassa, Norman; Brook, Martha

    2008-02-13

    Most energy benchmarking tools provide static feedback on how one building compares to a larger set of loosely similar buildings, without providing information at the end-use level or on what can be done to reduce consumption, cost, or emissions. In this article--Part 1 of a two-part series--we describe an 'action-oriented benchmarking' approach, which extends whole-building energy benchmarking to include analysis of system and component energy use metrics and features. Action-oriented benchmarking thereby allows users to generate more meaningful metrics and to identify, screen and prioritize potential efficiency improvements. This opportunity assessment process can then be used to inform and optimize a full-scale audit or commissioning process. We introduce a new web-based action-oriented benchmarking system and associated software tool, EnergyIQ. The benchmarking methods, visualizations, and user interface design are informed by an end-user needs assessment survey and best-practice guidelines from ASHRAE.

  17. Benchmarking, Research, Development, and Support for ORNL Automated Image and Signature Retrieval (AIR/ASR) Technologies

    SciTech Connect

    Tobin, K.W.

    2004-06-01

    This report describes the results of a Cooperative Research and Development Agreement (CRADA) with Applied Materials, Inc. (AMAT) of Santa Clara, California. This project encompassed the continued development and integration of the ORNL Automated Image Retrieval (AIR) technology, and an extension of the technology denoted Automated Signature Retrieval (ASR), and other related technologies with the Defect Source Identification (DSI) software system that was under development by AMAT at the time this work was performed. In the semiconductor manufacturing environment, defect imagery is used to diagnose problems in the manufacturing line, train yield management engineers, and examine historical data for trends. Image management in semiconductor data systems is a growing cause of concern in the industry as fabricators are now collecting up to 20,000 images each week. In response to this concern, researchers at the Oak Ridge National Laboratory (ORNL) developed a semiconductor-specific content-based image retrieval method and system, also known as AIR. The system uses an image-based query-by-example method to locate and retrieve similar imagery from a database of digital imagery using visual image characteristics. The query method is based on a unique architecture that takes advantage of the statistical, morphological, and structural characteristics of image data, generated by inspection equipment in industrial applications. The system improves the manufacturing process by allowing rapid access to historical records of similar events so that errant process equipment can be isolated and corrective actions can be quickly taken to improve yield. The combined ORNL and AMAT technology is referred to hereafter as DSI-AIR and DSI-ASR.

  18. Benchmarking in Student Affairs.

    ERIC Educational Resources Information Center

    Mosier, Robert E.; Schwarzmueller, Gary J.

    2002-01-01

    Discusses the use of benchmarking in student affairs, focusing on issues related to student housing. Provides examples of how benchmarking has influenced administrative practice at many institutions. (EV)

  19. Benchmarking: applications to transfusion medicine.

    PubMed

    Apelseth, Torunn Oveland; Molnar, Laura; Arnold, Emmy; Heddle, Nancy M

    2012-10-01

    Benchmarking is a structured, continuous, collaborative process in which comparisons for selected indicators are used to identify factors that, when implemented, will improve transfusion practices. This study aimed to identify transfusion medicine studies reporting on benchmarking, summarize the benchmarking approaches used, and identify important considerations to move the concept of benchmarking forward in the field of transfusion medicine. A systematic review of published literature was performed to identify transfusion medicine-related studies that compared at least 2 separate institutions or regions with the intention of benchmarking focusing on 4 areas: blood utilization, safety, operational aspects, and blood donation. Forty-five studies were included: blood utilization (n = 35), safety (n = 5), operational aspects of transfusion medicine (n = 5), and blood donation (n = 0). Based on predefined criteria, 7 publications were classified as benchmarking, 2 as trending, and 36 as single-event studies. Three models of benchmarking are described: (1) a regional benchmarking program that collects and links relevant data from existing electronic sources, (2) a sentinel site model where data from a limited number of sites are collected, and (3) an institutional-initiated model where a site identifies indicators of interest and approaches other institutions. Benchmarking approaches are needed in the field of transfusion medicine. Major challenges include defining best practices and developing cost-effective methods of data collection. For those interested in initiating a benchmarking program, the sentinel site model may be most effective and sustainable as a starting point, although the regional model would be the ideal goal. PMID:22237134

  20. Benchmarking neuromorphic systems with Nengo

    PubMed Central

    Bekolay, Trevor; Stewart, Terrence C.; Eliasmith, Chris

    2015-01-01

    Nengo is a software package for designing and simulating large-scale neural models. Nengo is architected such that the same Nengo model can be simulated on any of several Nengo backends with few to no modifications. Backends translate a model to specific platforms, which include GPUs and neuromorphic hardware. Nengo also contains a large test suite that can be run with any backend and focuses primarily on functional performance. We propose that Nengo's large test suite can be used to benchmark neuromorphic hardware's functional performance and simulation speed in an efficient, unbiased, and future-proof manner. We implement four benchmark models and show that Nengo can collect metrics across five different backends that identify situations in which some backends perform more accurately or quickly. PMID:26539076

  1. Development and Analysis of Global, High-Resolution Diagnostic Metrics for Vegetation Monitoring, Yield Estimation and Famine Mitigation

    NASA Astrophysics Data System (ADS)

    Anderson, B. T.; Zhang, P.; Myneni, R.

    2008-12-01

    Drought, through its impact on food scarcity and crop prices, can have significant economic, social, and environmental impacts - presently, up to 36 countries and 73 million people are facing food crises around the globe. Because of these adverse effects, there has been a drive to develop drought and vegetation-monitoring metrics that can quantify and predict human vulnerability/susceptibility to drought at high-resolution spatial scales over the entire globe. Here we introduce a new vegetation-monitoring index utilizing data derived from satellite-based instruments (the Moderate Resolution Imaging Spectroradiometer - MODIS) designed to identify the vulnerability of vegetation in a particular region to climate variability during the growing season. In addition, the index can quantify the percentage of annual grid-point vegetation production either gained or lost due to climatic variability in a given month. When integrated over the growing season, this index is shown to be better correlated with end-of-season crop yields than traditional remotely-sensed or meteorological indices. In addition, in-season estimates of the index, which are available in near real-time, provide yield forecasts comparable to concurrent in situ objective yield surveys, which are only available in limited regions of the world. Overall, the cost effectiveness and repetitive, near-global view of Earth's surface provided by this satellite-based vegetation monitoring index can potentially improve our ability to mitigate human vulnerability/susceptibility to drought and its impacts upon vegetation and agriculture.

  2. OpenMP-accelerated SWAT simulation using Intel C and FORTRAN compilers: Development and benchmark

    NASA Astrophysics Data System (ADS)

    Ki, Seo Jin; Sugimura, Tak; Kim, Albert S.

    2015-02-01

    We developed a practical method to accelerate execution of the Soil and Water Assessment Tool (SWAT) using open (free) computational resources. The SWAT source code (rev 622) was recompiled using a non-commercial Intel FORTRAN compiler on the Ubuntu 12.04 LTS Linux platform, and newly named iOMP-SWAT in this study. The GNU utilities make, gprof, and diff were used to develop the iOMP-SWAT package, profile memory usage, and check identicalness of parallel and serial simulations. Among 302 SWAT subroutines, the slowest routines were identified using GNU gprof, and later modified using the Open Multi-Processing (OpenMP) library in an 8-core shared memory system. In addition, a C wrapping function was used to rapidly set large arrays to zero by cross-compiling it with the original SWAT FORTRAN package. A universal speedup ratio of 2.3 was achieved using input data sets of a large number of hydrological response units. As we specifically focus on acceleration of a single SWAT run, the use of iOMP-SWAT for parameter calibrations will significantly improve the performance of SWAT optimization.

  3. Experimental Transport Benchmarks for Physical Dosimetry to Support Development of Fast-Neutron Therapy with Neutron Capture Augmentation

    SciTech Connect

    D. W. Nigg; J. K. Hartwell; J. R. Venhuizen; C. A. Wemple; R. Risler; G. E. Laramore; W. Sauerwein; G. Hudepohl; A. Lennox

    2006-06-01

    The Idaho National Laboratory (INL), the University of Washington (UW) Neutron Therapy Center, the University of Essen (Germany) Neutron Therapy Clinic, and the Northern Illinois University (NIU) Institute for Neutron Therapy at Fermilab have been collaborating in the development of fast-neutron therapy (FNT) with concurrent neutron capture (NCT) augmentation [1,2]. As part of this effort, we have conducted measurements to produce suitable benchmark data as an aid in validation of advanced three-dimensional treatment planning methodologies required for successful administration of FNT/NCT. Free-beam spectral measurements as well as phantom measurements with Lucite (trademark) cylinders using thermal, resonance, and threshold activation foil techniques have now been completed at all three clinical accelerator facilities. The same protocol was used for all measurements to facilitate intercomparison of data. The results will be useful for further detailed characterization of the neutron beams of interest as well as for validation of various charged particle and neutron transport codes and methodologies for FNT/NCT computational dosimetry, such as MCNP [3], LAHET [4], and MINERVA [5].

  4. Defining Spoken Language Benchmarks and Selecting Measures of Expressive Language Development for Young Children With Autism Spectrum Disorders

    PubMed Central

    Tager-Flusberg, Helen; Rogers, Sally; Cooper, Judith; Landa, Rebecca; Lord, Catherine; Paul, Rhea; Rice, Mabel; Stoel-Gammon, Carol; Wetherby, Amy; Yoder, Paul

    2010-01-01

    Purpose The aims of this article are twofold: (a) to offer a set of recommended measures that can be used for evaluating the efficacy of interventions that target spoken language acquisition as part of treatment research studies or for use in applied settings and (b) to propose and define a common terminology for describing levels of spoken language ability in the expressive modality and to set benchmarks for determining a child’s language level in order to establish a framework for comparing outcomes across intervention studies. Method The National Institute on Deafness and Other Communication Disorders assembled a group of researchers with interests and experience in the study of language development and disorders in young children with autism spectrum disorders. The group worked for 18 months through a series of conference calls and correspondence, culminating in a meeting held in December 2007 to achieve consensus on these aims. Results The authors recommend moving away from using the term functional speech, replacing it with a developmental framework. Rather, they recommend multiple sources of information to define language phases, including natural language samples, parent report, and standardized measures. They also provide guidelines and objective criteria for defining children’s spoken language expression in three major phases that correspond to developmental levels between 12 and 48 months of age. PMID:19380608

  5. Use of Neutron Benchmark Fields for the Validation of Dosimetry Cross Sections

    NASA Astrophysics Data System (ADS)

    Griffin, Patrick

    2016-02-01

    The evolution of validation metrics for dosimetry cross sections in neutron benchmark fields is explored. The strength of some of the metrics in providing validation evidence is examined by applying them to the 252Cf spontaneous fission standard neutron benchmark field, the 235U thermal neutron fission reference benchmark field, the ACRR pool-type reactor central cavity reference benchmark fields, and the SPR-III fast burst reactor central cavity. The IRDFF dosimetry cross section library is used in the validation study and observations are made on the amount of coverage provided to the library contents by validation data available in these benchmark fields.

  6. NASA Software Engineering Benchmarking Effort

    NASA Technical Reports Server (NTRS)

    Godfrey, Sally; Rarick, Heather

    2012-01-01

    Benchmarking was very interesting and provided a wealth of information: (1) we did see potential solutions to some of our "top 10" issues, and (2) we have an assessment of where NASA stands in relation to other aerospace/defense groups. We formed new contacts and potential collaborations: (1) several organizations sent us examples of their templates and processes, and (2) many of the organizations were interested in future collaboration, such as sharing of training, metrics, Capability Maturity Model Integration (CMMI) appraisers, instructors, etc. We received feedback from some of our contractors/partners: (1) they expressed a desire to participate in our training and to provide feedback on procedures, and (2) they welcomed the opportunity to provide feedback on working with NASA.

  7. Benchmarking Gas Path Diagnostic Methods: A Public Approach

    NASA Technical Reports Server (NTRS)

    Simon, Donald L.; Bird, Jeff; Davison, Craig; Volponi, Al; Iverson, R. Eugene

    2008-01-01

    Recent technology reviews have identified the need for objective assessments of engine health management (EHM) technology. The need is two-fold: technology developers require relevant data and problems to design and validate new algorithms and techniques while engine system integrators and operators need practical tools to direct development and then evaluate the effectiveness of proposed solutions. This paper presents a publicly available gas path diagnostic benchmark problem that has been developed by the Propulsion and Power Systems Panel of The Technical Cooperation Program (TTCP) to help address these needs. The problem is coded in MATLAB (The MathWorks, Inc.) and coupled with a non-linear turbofan engine simulation to produce "snap-shot" measurements, with relevant noise levels, as if collected from a fleet of engines over their lifetime of use. Each engine within the fleet will experience unique operating and deterioration profiles, and may encounter randomly occurring relevant gas path faults including sensor, actuator and component faults. The challenge to the EHM community is to develop gas path diagnostic algorithms to reliably perform fault detection and isolation. An example solution to the benchmark problem is provided along with associated evaluation metrics. A plan is presented to disseminate this benchmark problem to the engine health management technical community and invite technology solutions.
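    Solutions to such a benchmark are typically scored on fault detection and isolation. The sketch below computes representative confusion-matrix-style metrics (detection rate, false alarm rate, isolation rate) over invented fleet snap-shots; these are illustrative conventions, not the benchmark's official evaluation metrics.

        # Representative (not official) evaluation metrics for a gas path diagnostic:
        # true/predicted labels over a set of engine "snap-shots", where label 0
        # means no fault and labels 1..N identify specific gas path faults.
        truth     = [0, 0, 3, 1, 0, 2, 2, 0, 1, 0]
        predicted = [0, 1, 3, 1, 0, 0, 2, 0, 2, 0]

        faulty = [(t, p) for t, p in zip(truth, predicted) if t != 0]
        healthy = [(t, p) for t, p in zip(truth, predicted) if t == 0]

        detection_rate = sum(p != 0 for _, p in faulty) / len(faulty)
        false_alarm_rate = sum(p != 0 for _, p in healthy) / len(healthy)
        isolation_rate = (sum(p == t for t, p in faulty if p != 0)
                          / max(1, sum(p != 0 for _, p in faulty)))

        print(f"detection = {detection_rate:.2f}, false alarms = {false_alarm_rate:.2f}, "
              f"isolation = {isolation_rate:.2f}")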

  8. A Heterogeneous Medium Analytical Benchmark

    SciTech Connect

    Ganapol, B.D.

    1999-09-27

    A benchmark, called benchmark BLUE, has been developed for one-group neutral particle (neutron or photon) transport in a one-dimensional sub-critical heterogeneous plane parallel medium with surface illumination. General anisotropic scattering is accommodated through the Green's Function Method (GFM). Numerical Fourier transform inversion is used to generate the required Green's functions, which are kernels of coupled integral equations that give the exiting angular fluxes. The interior scalar flux is then obtained through quadrature. A compound iterative procedure for quadrature order and slab surface source convergence provides highly accurate, benchmark-quality results (4 to 5 places of accuracy).

  9. California commercial building energy benchmarking

    SciTech Connect

    Kinney, Satkartar; Piette, Mary Ann

    2003-07-01

    Building energy benchmarking is the comparison of whole-building energy use relative to a set of similar buildings. It provides a useful starting point for individual energy audits and for targeting buildings for energy-saving measures in multiple-site audits. Benchmarking is of interest and practical use to a number of groups. Energy service companies and performance contractors communicate energy savings potential with ''typical'' and ''best-practice'' benchmarks while control companies and utilities can provide direct tracking of energy use and combine data from multiple buildings. Benchmarking is also useful in the design stage of a new building or retrofit to determine if a design is relatively efficient. Energy managers and building owners have an ongoing interest in comparing energy performance to others. Large corporations, schools, and government agencies with numerous facilities also use benchmarking methods to compare their buildings to each other. The primary goal of Task 2.1.1 Web-based Benchmarking was the development of a web-based benchmarking tool, dubbed Cal-Arch, for benchmarking energy use in California commercial buildings. While there were several other benchmarking tools available to California consumers prior to the development of Cal-Arch, there were none that were based solely on California data. Most available benchmarking information, including the Energy Star performance rating, were developed using DOE's Commercial Building Energy Consumption Survey (CBECS), which does not provide state-level data. Each database and tool has advantages as well as limitations, such as the number of buildings and the coverage by type, climate regions and end uses. There is considerable commercial interest in benchmarking because it provides an inexpensive method of screening buildings for tune-ups and retrofits. However, private companies who collect and manage consumption data are concerned that the identities of building owners might be revealed and
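    Whole-building benchmarking of this kind usually reduces to computing an energy use intensity (EUI) and ranking it against a peer group. The sketch below shows that calculation with invented consumption, floor area, and peer values; it is not the Cal-Arch implementation.

        # Minimal sketch of whole-building benchmarking: compute energy use intensity
        # (EUI) and rank it against a peer group (all numbers invented).
        annual_kwh = 1_450_000.0
        floor_area_m2 = 10_000.0
        eui = annual_kwh / floor_area_m2            # kWh per m^2 per year

        peer_euis = [95.0, 110.0, 120.0, 135.0, 150.0, 170.0, 210.0]
        percentile = 100.0 * sum(e <= eui for e in peer_euis) / len(peer_euis)
        print(f"EUI = {eui:.0f} kWh/m2/yr, at the {percentile:.0f}th percentile of peers")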

  10. NASA metrication activities

    NASA Technical Reports Server (NTRS)

    Vlannes, P. N.

    1978-01-01

    NASA's organization and policy for metrication, history from 1964, NASA participation in Federal agency activities, interaction with nongovernmental metrication organizations, and the proposed metrication assessment study are reviewed.

  11. UTILIZING RESULTS FROM INSAR TO DEVELOP SEISMIC LOCATION BENCHMARKS AND IMPLICATIONS FOR SEISMIC SOURCE STUDIES

    SciTech Connect

    M. BEGNAUD; ET AL

    2000-09-01

    Obtaining accurate seismic event locations is one of the most important goals for monitoring detonations of underground nuclear tests. This is a particular challenge at small magnitudes where the number of recording stations may be less than 20. Although many different procedures are being developed to improve seismic location, most procedures suffer from inadequate testing against accurate information about a seismic event. Events with well-defined attributes, such as latitude, longitude, depth and origin time, are commonly referred to as ground truth (GT). Ground truth comes in many forms and with many different levels of accuracy. Interferometric Synthetic Aperture Radar (InSAR) can provide independent and accurate information (ground truth) regarding ground surface deformation and/or rupture. Relating surface deformation to seismic events is trivial when events are large and create a significant surface rupture, such as for the M{sub w} = 7.5 event that occurred in the remote northern region of the Tibetan plateau in 1997. The event, which was a vertical strike-slip event, appeared anomalous in nature due to the lack of large aftershocks and had an associated surface rupture of over 180 km that was identified and modeled using InSAR. The east-west orientation of the fault rupture provides excellent ground truth for latitude, but is of limited use for longitude. However, a secondary rupture occurred 50 km south of the main shock rupture trace that can provide ground truth with accuracy within 5 km. The smaller, 5-km-long secondary rupture presents a challenge for relating the deformation to a seismic event. The rupture is believed to have a thrust mechanism; the dip of the fault allows for some separation between the secondary rupture trace and its associated event epicenter, although not as much as is currently observed from catalog locations. Few events within the time period of the InSAR analysis are candidates for the secondary rupture. Of these, we have

  12. Algebraic Multigrid Benchmark

    SciTech Connect

    2013-05-06

    AMG2013 is a parallel algebraic multigrid solver for linear systems arising from problems on unstructured grids. It has been derived directly from the Boomer AMG solver in the hypre library, a large linear solvers library that is being developed in the Center for Applied Scientific Computing (CASC) at LLNL. The driver provided in the benchmark can build various test problems. The default problem is a Laplace type problem on an unstructured domain with various jumps and an anisotropy in one part.

  13. Validating Cellular Automata Lava Flow Emplacement Algorithms with Standard Benchmarks

    NASA Astrophysics Data System (ADS)

    Richardson, J. A.; Connor, L.; Charbonnier, S. J.; Connor, C.; Gallant, E.

    2015-12-01

    A major existing need in assessing lava flow simulators is a common set of validation benchmark tests. We propose three levels of benchmarks which test model output against increasingly complex standards. First, simulated lava flows should be morphologically identical, given changes in parameter space that should be inconsequential, such as slope direction. Second, lava flows simulated in simple parameter spaces can be tested against analytical solutions or empirical relationships seen in Bingham fluids. For instance, a lava flow simulated on a flat surface should produce a circular outline. Third, lava flows simulated over real world topography can be compared to recent real world lava flows, such as those at Tolbachik, Russia, and Fogo, Cape Verde. Success or failure of emplacement algorithms in these validation benchmarks can be determined using a Bayesian approach, which directly tests the ability of an emplacement algorithm to correctly forecast lava inundation. Here we focus on two posterior metrics, P(A|B) and P(¬A|¬B), which describe the positive and negative predictive value of flow algorithms. This is an improvement on less direct statistics such as model sensitivity and the Jaccard fitness coefficient. We have performed these validation benchmarks on a new, modular lava flow emplacement simulator that we have developed. This simulator, which we call MOLASSES, follows a Cellular Automata (CA) method. The code is developed in several interchangeable modules, which enables quick modification of the distribution algorithm from cell locations to their neighbors. By assessing several different distribution schemes with the benchmark tests, we have improved the performance of MOLASSES to correctly match early stages of the 2012-2013 Tolbachik flow, Kamchatka, Russia, to 80%. We also can evaluate model performance given uncertain input parameters using a Monte Carlo setup. This illuminates sensitivity to model uncertainty.
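
    The two posterior metrics named above reduce to simple cell-count ratios once the observed and simulated flows are rasterized onto a common grid. A minimal sketch (illustrative data; not the MOLASSES evaluation code):

    ```python
    # Minimal sketch of the two posterior metrics named in the abstract, computed from
    # binary inundation grids: B = simulated flow, A = observed flow (cell values 0/1).
    import numpy as np

    def predictive_values(observed, simulated):
        """Return (P(A|B), P(not A|not B)) for two boolean arrays of equal shape."""
        A = observed.astype(bool)
        B = simulated.astype(bool)
        p_a_given_b = (A & B).sum() / B.sum()          # positive predictive value
        p_na_given_nb = (~A & ~B).sum() / (~B).sum()   # negative predictive value
        return float(p_a_given_b), float(p_na_given_nb)

    # Toy example on a 3x3 grid.
    obs = np.array([[1, 1, 0], [1, 0, 0], [0, 0, 0]])
    sim = np.array([[1, 1, 0], [0, 0, 0], [0, 0, 0]])
    print(predictive_values(obs, sim))  # (1.0, ~0.857)
    ```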

  14. Developing Linkages between Fish Metrics and Fluvial Variation to Explore Responses of Stream Fish Communities to Climate Change across the Conterminous United States

    NASA Astrophysics Data System (ADS)

    Tsang, Y.; Infante, D.; Wang, L.; Krueger, D. M.; Wieferich, D.

    2011-12-01

    As climate factors operate over the scale of the stream catchment, they influence physical characteristics of streams draining those catchments, and ultimately, their biological assemblages. Characterizing fish species responses to stream flow condition can support a mechanistic approach for assessing their potential ecological response to climate change. However, translatable relations among climate factors, flow conditions, and fish responses have not yet been derived. A recently-compiled fish database developed in support of the National Fish Habitat Action Plan (NFHAP) along with stream gauges across the conterminous United States provides baseline information to fill this knowledge gap. This study intends to offer a conceptual method to develop linkages from climate to ecosystem response. We began by assembling historical daily stream flow data available through the National Water Information System (NWIS) and attributed them to individual stream arcs represented by the National Hydrography Dataset Plus (NHDplus), which allowed us to link fluvial gauges with fish data at nearby stream locations. Using the hydrological index tool (HIT) developed by U.S. Geological Survey, Fort Collins Science Center, long-term flow records were summarized into a large set of metrics characterizing stream flow regimes. Using an indicator analysis approach that linked species to their matched flow characters, a subset of flow metrics determined to be important to fish were identified. This analysis was conducted separately within nine ecologically-defined regions of the conterminous United States and resulted in a list of regionally-specific fish species most responsive to stream flow regimes as well as the identification of stream flow metrics important to specific fish species. These identified habitat metrics were associated with climate metrics to describe climate drivers that influence stream flow conditions. Using a set of fish records compiled from throughout the

  15. OPTIMIZATION OF DEEP DRILLING PERFORMANCE--DEVELOPMENT AND BENCHMARK TESTING OF ADVANCED DIAMOND PRODUCT DRILL BITS & HP/HT FLUIDS TO SIGNIFICANTLY IMPROVE RATES OF PENETRATION

    SciTech Connect

    Alan Black; Arnis Judzis

    2004-10-01

    The industry cost shared program aims to benchmark drilling rates of penetration in selected simulated deep formations and to significantly improve ROP through a team development of aggressive diamond product drill bit--fluid system technologies. Overall the objectives are as follows: Phase 1--Benchmark ''best in class'' diamond and other product drilling bits and fluids and develop concepts for a next level of deep drilling performance; Phase 2--Develop advanced smart bit-fluid prototypes and test at large scale; and Phase 3--Field trial smart bit-fluid concepts, modify as necessary and commercialize products. As of report date, TerraTek has concluded all major preparations for the high pressure drilling campaign. Baker Hughes encountered difficulties in providing additional pumping capacity before TerraTek's scheduled relocation to another facility, thus the program was delayed further to accommodate the full testing program.

  16. Benchmarking and Performance Measurement.

    ERIC Educational Resources Information Center

    Town, J. Stephen

    This paper defines benchmarking and its relationship to quality management, describes a project which applied the technique in a library context, and explores the relationship between performance measurement and benchmarking. Numerous benchmarking methods contain similar elements: deciding what to benchmark; identifying partners; gathering…

  17. Benchmark Airport Charges

    NASA Technical Reports Server (NTRS)

    de Wit, A.; Cohn, N.

    1999-01-01

    The Netherlands Directorate General of Civil Aviation (DGCA) commissioned Hague Consulting Group (HCG) to complete a benchmark study of airport charges at twenty eight airports in Europe and around the world, based on 1996 charges. This study followed previous DGCA research on the topic but included more airports in much more detail. The main purpose of this new benchmark study was to provide insight into the levels and types of airport charges worldwide and into recent changes in airport charge policy and structure. This paper describes the 1996 analysis. It is intended that this work be repeated every year in order to follow developing trends and provide the most up-to-date information possible.

  18. Benchmark Airport Charges

    NASA Technical Reports Server (NTRS)

    deWit, A.; Cohn, N.

    1999-01-01

    The Netherlands Directorate General of Civil Aviation (DGCA) commissioned Hague Consulting Group (HCG) to complete a benchmark study of airport charges at twenty eight airports in Europe and around the world, based on 1996 charges. This study followed previous DGCA research on the topic but included more airports in much more detail. The main purpose of this new benchmark study was to provide insight into the levels and types of airport charges worldwide and into recent changes in airport charge policy and structure. This paper describes the 1996 analysis. It is intended that this work be repeated every year in order to follow developing trends and provide the most up-to-date information possible.

  19. Metric Measurement: A Resource for Teachers.

    ERIC Educational Resources Information Center

    Texas Education Agency, Austin. Div. of Curriculum Development.

    This document is designed to help teachers deal with the changeover from the United States customary system to the metric system. This publication contains a brief introduction to the historical development of the metric system, tables of the International System of Units, and descriptions of everyday use of the metric system. Basic information…

  20. Benchmarking hypercube hardware and software

    NASA Technical Reports Server (NTRS)

    Grunwald, Dirk C.; Reed, Daniel A.

    1986-01-01

    It was long a truism in computer systems design that balanced systems achieve the best performance. Message passing parallel processors are no different. To quantify the balance of a hypercube design, an experimental methodology was developed and the associated suite of benchmarks was applied to several existing hypercubes. The benchmark suite includes tests of both processor speed in the absence of internode communication and message transmission speed as a function of communication patterns.

  1. Chemical structures of low-pressure premixed methylcyclohexane flames as benchmarks for the development of a predictive combustion chemistry model

    SciTech Connect

    Skeen, Scott A.; Yang, Bin; Jasper, Ahren W.; Pitz, William J.; Hansen, Nils

    2011-11-14

    The chemical compositions of three low-pressure premixed flames of methylcyclohexane (MCH) are investigated with the emphasis on the chemistry of MCH decomposition and the formation of aromatic species, including benzene and toluene. The flames are stabilized on a flat-flame (McKenna type) burner at equivalence ratios of φ = 1.0, 1.75, and 1.9 and at low pressures between 15 Torr (= 20 mbar) and 30 Torr (= 40 mbar). The complex chemistry of MCH consumption is illustrated in the experimental identification of several C7H12, C7H10, C6H12, and C6H10 isomers sampled from the flames as a function of distance from the burner. Three initiation steps for MCH consumption are discussed: ring-opening to heptenes and methyl-hexenes (isomerization), methyl radical loss yielding the cyclohexyl radical (dissociation), and H abstraction from MCH. Mole fraction profiles as a function of distance from the burner for the C7 species supplemented by theoretical calculations are presented, indicating that flame structures resulting in steeper temperature gradients and/or greater peak temperatures can lead to a relative increase in MCH consumption through the dissociation and isomerization channels. Trends observed among the stable C6 species as well as 1,3-pentadiene and isoprene also support this conclusion. Relatively large amounts of toluene and benzene are observed in the experiments, illustrating the importance of sequential H-abstraction steps from MCH to toluene and from cyclohexyl to benzene. Furthermore, modeled results using the detailed chemical model of Pitz et al. (Proc. Combust. Inst.2007, 31, 267–275) are also provided to illustrate the use of these data as a benchmark for the improvement or future development of a MCH mechanism.

  2. Chemical structures of low-pressure premixed methylcyclohexane flames as benchmarks for the development of a predictive combustion chemistry model

    DOE PAGESBeta

    Skeen, Scott A.; Yang, Bin; Jasper, Ahren W.; Pitz, William J.; Hansen, Nils

    2011-11-14

    The chemical compositions of three low-pressure premixed flames of methylcyclohexane (MCH) are investigated with the emphasis on the chemistry of MCH decomposition and the formation of aromatic species, including benzene and toluene. The flames are stabilized on a flat-flame (McKenna type) burner at equivalence ratios of φ = 1.0, 1.75, and 1.9 and at low pressures between 15 Torr (= 20 mbar) and 30 Torr (= 40 mbar). The complex chemistry of MCH consumption is illustrated in the experimental identification of several C7H12, C7H10, C6H12, and C6H10 isomers sampled from the flames as a function of distance from the burner. Three initiation steps for MCH consumption are discussed: ring-opening to heptenes and methyl-hexenes (isomerization), methyl radical loss yielding the cyclohexyl radical (dissociation), and H abstraction from MCH. Mole fraction profiles as a function of distance from the burner for the C7 species supplemented by theoretical calculations are presented, indicating that flame structures resulting in steeper temperature gradients and/or greater peak temperatures can lead to a relative increase in MCH consumption through the dissociation and isomerization channels. Trends observed among the stable C6 species as well as 1,3-pentadiene and isoprene also support this conclusion. Relatively large amounts of toluene and benzene are observed in the experiments, illustrating the importance of sequential H-abstraction steps from MCH to toluene and from cyclohexyl to benzene. Furthermore, modeled results using the detailed chemical model of Pitz et al. (Proc. Combust. Inst.2007, 31, 267–275) are also provided to illustrate the use of these data as a benchmark for the improvement or future development of a MCH mechanism.

  3. Benchmarking the next generation of homology inference tools

    PubMed Central

    Saripella, Ganapathi Varma; Sonnhammer, Erik L. L.; Forslund, Kristoffer

    2016-01-01

    Motivation: Over the last decades, vast numbers of sequences were deposited in public databases. Bioinformatics tools allow homology and consequently functional inference for these sequences. New profile-based homology search tools have been introduced, allowing reliable detection of remote homologs, but have not been systematically benchmarked. To provide such a comparison, which can guide bioinformatics workflows, we extend and apply our previously developed benchmark approach to evaluate the ‘next generation’ of profile-based approaches, including CS-BLAST, HHSEARCH and PHMMER, in comparison with the non-profile based search tools NCBI-BLAST, USEARCH, UBLAST and FASTA. Method: We generated challenging benchmark datasets based on protein domain architectures within either the PFAM + Clan, SCOP/Superfamily or CATH/Gene3D domain definition schemes. From each dataset, homologous and non-homologous protein pairs were aligned using each tool, and standard performance metrics calculated. We further measured congruence of domain architecture assignments in the three domain databases. Results: CS-BLAST and PHMMER had the overall highest accuracy. FASTA, UBLAST and USEARCH showed large trade-offs of accuracy for speed optimization. Conclusion: Profile methods are superior at inferring remote homologs but the difference in accuracy between methods is relatively small. PHMMER and CS-BLAST stand out with the highest accuracy, yet still at a reasonable computational cost. Additionally, we show that less than 0.1% of Swiss-Prot protein pairs considered homologous by one database are considered non-homologous by another, implying that these classifications represent equivalent underlying biological phenomena, differing mostly in coverage and granularity. Availability and Implementation: Benchmark datasets and all scripts are placed at (http://sonnhammer.org/download/Homology_benchmark). Contact: forslund@embl.de Supplementary information: Supplementary data are available at
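
    The 'standard performance metrics' referred to above come down to counting true and false positives over the labeled pairs. A minimal sketch, assuming each tool reports one score per protein pair; the scores, labels, and threshold are illustrative:

    ```python
    # Hedged sketch (not the paper's scripts): standard performance metrics for a
    # homology-search tool evaluated on labeled protein pairs.
    def precision_recall(scores, labels, threshold):
        """scores: search scores per pair (higher = more confident homolog);
        labels: True for homologous pairs, False for non-homologous pairs."""
        predicted = [s >= threshold for s in scores]
        tp = sum(p and l for p, l in zip(predicted, labels))
        fp = sum(p and not l for p, l in zip(predicted, labels))
        fn = sum((not p) and l for p, l in zip(predicted, labels))
        precision = tp / (tp + fp) if tp + fp else 0.0
        recall = tp / (tp + fn) if tp + fn else 0.0
        return precision, recall

    # Toy example: four pairs scored by a hypothetical tool.
    scores = [52.0, 31.5, 8.2, 2.1]
    labels = [True, True, False, False]
    print(precision_recall(scores, labels, threshold=10.0))  # (1.0, 1.0)
    ```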

  4. Development of a regional littoral benthic macroinvertebrate multi-metric index (MMI) for lakes from the National Lakes Assessment

    EPA Science Inventory

    During the 2007 National Lakes Assessment (NLA) benthic macroinvertebrate samples were collected from the lake littoral zone. The purpose of the sampling was to assess the feasibility of a multi-metric index (MMI) to assess the condition of the littoral benthic macroinvertebrate...

  5. DOSE-RESPONSE ASSESSMENT FOR DEVELOPMENT TOXICITY: II. COMPARISON OF GENERIC BENCHMARK DOSE ESTIMATES WITH NO OBSERVED ADVERSE EFFECT LEVELS

    EPA Science Inventory

    Developmental toxicity risk assessment currently relies on the estimation of reference doses (RfDDTS) or reference concentrations (RfCDTS) based on the use of no observed adverse effect levels (NOAELS) divided by uncertainty factors (UFs). The benchmark dose (BMD) has been proposed...

  6. Daylight metrics and energy savings

    SciTech Connect

    Mardaljevic, John; Heschong, Lisa; Lee, Eleanor

    2009-12-31

    The drive towards sustainable, low-energy buildings has increased the need for simple, yet accurate methods to evaluate whether a daylit building meets minimum standards for energy and human comfort performance. Current metrics do not account for the temporal and spatial aspects of daylight, nor for occupants' comfort or interventions. This paper reviews the historical basis of current compliance methods for achieving daylit buildings, proposes a technical basis for development of better metrics, and provides two case study examples to stimulate dialogue on how metrics can be applied in a practical, real-world context.

  7. Benchmarking image fusion algorithm performance

    NASA Astrophysics Data System (ADS)

    Howell, Christopher L.

    2012-06-01

    Registering two images produced by two separate imaging sensors having different detector sizes and fields of view requires one of the images to undergo transformation operations that may cause its overall quality to degrade with regards to visual task performance. This possible change in image quality could add to an already existing difference in measured task performance. Ideally, a fusion algorithm would take as input unaltered outputs from each respective sensor used in the process. Therefore, quantifying how well an image fusion algorithm performs should be baselined against whether the fusion algorithm retained the performance benefit achievable by each independent spectral band being fused. This study investigates an identification perception experiment using a simple and intuitive process for discriminating between image fusion algorithm performances. The results from a classification experiment using information-theory-based image metrics are presented and compared to perception test results. The results show an effective performance benchmark for image fusion algorithms can be established using human perception test data. Additionally, image metrics have been identified that either agree with or surpass the performance benchmark established.
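
    As one example of an information-theory image metric of the kind mentioned above (not necessarily the specific metrics used in the study), mutual information between a fused image and a source band can be estimated from their joint gray-level histogram:

    ```python
    # Hedged sketch of one information-theory image metric sometimes used in fusion
    # assessment: mutual information between a fused image and one source band,
    # estimated from a joint histogram (bin count and data are illustrative).
    import numpy as np

    def mutual_information(img_a, img_b, bins=32):
        joint, _, _ = np.histogram2d(img_a.ravel(), img_b.ravel(), bins=bins)
        pxy = joint / joint.sum()
        px = pxy.sum(axis=1, keepdims=True)
        py = pxy.sum(axis=0, keepdims=True)
        nz = pxy > 0  # avoid log(0)
        return float(np.sum(pxy[nz] * np.log2(pxy[nz] / (px @ py)[nz])))

    # A fused image should share more information with each source band than noise does.
    rng = np.random.default_rng(0)
    band = rng.random((64, 64))
    fused = 0.5 * band + 0.5 * rng.random((64, 64))
    print(mutual_information(fused, band), mutual_information(rng.random((64, 64)), band))
    ```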

  8. The NAS Parallel Benchmarks

    SciTech Connect

    Bailey, David H.

    2009-11-15

    The NAS Parallel Benchmarks (NPB) are a suite of parallel computer performance benchmarks. They were originally developed at the NASA Ames Research Center in 1991 to assess high-end parallel supercomputers. Although they are no longer used as widely as they once were for comparing high-end system performance, they continue to be studied and analyzed a great deal in the high-performance computing community. The acronym 'NAS' originally stood for the Numerical Aeronautical Simulation Program at NASA Ames. The name of this organization was subsequently changed to the Numerical Aerospace Simulation Program, and more recently to the NASA Advanced Supercomputing Center, although the acronym remains 'NAS.' The developers of the original NPB suite were David H. Bailey, Eric Barszcz, John Barton, David Browning, Russell Carter, Leo Dagum, Rod Fatoohi, Samuel Fineberg, Paul Frederickson, Thomas Lasinski, Rob Schreiber, Horst Simon, V. Venkatakrishnan and Sisira Weeratunga. The original NAS Parallel Benchmarks consisted of eight individual benchmark problems, each of which focused on some aspect of scientific computing. The principal focus was in computational aerophysics, although most of these benchmarks have much broader relevance, since in a much larger sense they are typical of many real-world scientific computing applications. The NPB suite grew out of the need for a more rational procedure to select new supercomputers for acquisition by NASA. The emergence of commercially available highly parallel computer systems in the late 1980s offered an attractive alternative to parallel vector supercomputers that had been the mainstay of high-end scientific computing. However, the introduction of highly parallel systems was accompanied by a regrettable level of hype, not only on the part of the commercial vendors but even, in some cases, by scientists using the systems. As a result, it was difficult to discern whether the new systems offered any fundamental performance advantage

  9. Rotational clutter metric

    NASA Astrophysics Data System (ADS)

    Salem, Salem; Halford, Carl; Moyer, Steve; Gundy, Matthew

    2009-08-01

    A new approach to linear discriminant analysis (LDA), called orthogonal rotational LDA (ORLDA) is presented. Using ORLDA and properly accounting for target size allowed development of a new clutter metric that is based on the Laplacian pyramid (LP) decomposition of clutter images. The new metric achieves correlation exceeding 98% with expert human labeling of clutter levels in a set of 244 infrared images. Our clutter metric is based on the set of weights for the LP levels that best classify images into clutter levels as manually classified by an expert human observer. LDA is applied as a preprocessing step to classification. LDA suffers from a few limitations in this application. Therefore, we propose a new approach to LDA, called ORLDA, using orthonormal geometric rotations. Each rotation brings the LP feature space closer to the LDA solution while retaining orthogonality in the feature space. To understand the effects of target size on clutter, we applied ORLDA at different target sizes. The outputs are easily related because they are functions of orthogonal rotation angles. Finally, we used Bayesian decision theory to learn class boundaries for clutter levels at different target sizes.
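
    A minimal sketch of the feature side of this approach: a simple Laplacian-pyramid decomposition with per-level RMS features combined by a weight vector. The pyramid construction and the weights below are illustrative placeholders for the ORLDA-derived weights described above:

    ```python
    # Hedged sketch of a Laplacian-pyramid clutter score: per-level RMS features combined
    # with a weight vector (illustrative weights standing in for the ORLDA-derived ones).
    import numpy as np

    def laplacian_rms_features(image, levels=3):
        feats, cur = [], image.astype(float)
        for _ in range(levels):
            # 2x2 average downsample, then nearest-neighbor upsample back to size.
            down = 0.25 * (cur[0::2, 0::2] + cur[1::2, 0::2] + cur[0::2, 1::2] + cur[1::2, 1::2])
            up = np.repeat(np.repeat(down, 2, axis=0), 2, axis=1)
            lap = cur - up                       # band-pass detail at this level
            feats.append(np.sqrt(np.mean(lap ** 2)))
            cur = down
        return np.array(feats)

    def clutter_score(image, weights):
        return float(np.dot(weights, laplacian_rms_features(image, len(weights))))

    rng = np.random.default_rng(1)
    img = rng.random((128, 128))
    print(clutter_score(img, weights=np.array([0.2, 0.3, 0.5])))  # illustrative weights
    ```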

  10. Coverage Metrics for Model Checking

    NASA Technical Reports Server (NTRS)

    Penix, John; Visser, Willem; Norvig, Peter (Technical Monitor)

    2001-01-01

    When using model checking to verify programs in practice, it is not usually possible to achieve complete coverage of the system. In this position paper we describe ongoing research within the Automated Software Engineering group at NASA Ames on the use of test coverage metrics to measure partial coverage and provide heuristic guidance for program model checking. We are specifically interested in applying and developing coverage metrics for concurrent programs that might be used to support certification of next generation avionics software.

  11. Performance Evaluation and Benchmarking of Intelligent Systems

    SciTech Connect

    Madhavan, Raj; Messina, Elena; Tunstel, Edward

    2009-09-01

    To design and develop capable, dependable, and affordable intelligent systems, their performance must be measurable. Scientific methodologies for standardization and benchmarking are crucial for quantitatively evaluating the performance of emerging robotic and intelligent systems technologies. There is currently no accepted standard for quantitatively measuring the performance of these systems against user-defined requirements; and furthermore, there is no consensus on what objective evaluation procedures need to be followed to understand the performance of these systems. The lack of reproducible and repeatable test methods has precluded researchers working towards a common goal from exchanging and communicating results, inter-comparing system performance, and leveraging previous work that could otherwise avoid duplication and expedite technology transfer. Currently, this lack of cohesion in the community hinders progress in many domains, such as manufacturing, service, healthcare, and security. By providing the research community with access to standardized tools, reference data sets, and open source libraries of solutions, researchers and consumers will be able to evaluate the cost and benefits associated with intelligent systems and associated technologies. In this vein, the edited book volume addresses performance evaluation and metrics for intelligent systems, in general, while emphasizing the need and solutions for standardized methods. To the knowledge of the editors, there is not a single book on the market that is solely dedicated to the subject of performance evaluation and benchmarking of intelligent systems. Even books that address this topic do so only marginally or are out of date. The research work presented in this volume fills this void by drawing from the experiences and insights of experts gained both through theoretical development and practical implementation of intelligent systems in a variety of diverse application domains. The book presents

  12. Research on computer systems benchmarking

    NASA Technical Reports Server (NTRS)

    Smith, Alan Jay (Principal Investigator)

    1996-01-01

    This grant addresses the topic of research on computer systems benchmarking and is more generally concerned with performance issues in computer systems. This report reviews work in those areas during the period of NASA support under this grant. The bulk of the work performed concerned benchmarking and analysis of CPUs, compilers, caches, and benchmark programs. The first part of this work concerned the issue of benchmark performance prediction. A new approach to benchmarking and machine characterization was reported, using a machine characterizer that measures the performance of a given system in terms of a Fortran abstract machine. Another report focused on analyzing compiler performance. The performance impact of optimization in the context of our methodology for CPU performance characterization was based on the abstract machine model. Benchmark programs are analyzed in another paper. A machine-independent model of program execution was developed to characterize both machine performance and program execution. By merging these machine and program characterizations, execution time can be estimated for arbitrary machine/program combinations. The work was continued into the domain of parallel and vector machines, including the issue of caches in vector processors and multiprocessors. All of the afore-mentioned accomplishments are more specifically summarized in this report, as well as those smaller in magnitude supported by this grant.

  13. Antimicrobial Stewardship in Long-Term Care: Metrics and Risk Adjustment.

    PubMed

    Mylotte, Joseph M

    2016-07-01

    An antimicrobial stewardship program (ASP) has been recommended for long-term care facilities because of the increasing problem of antibiotic resistance in this setting to improve prescribing and decrease adverse events. Recommendations have been made for the components of such a program, but there is little evidence to support any specific methodology at the present time. The recommendations make minimal reference to metrics, an essential component of any ASP, to monitor the results of interventions. This article focuses on the role of antibiotic use metrics as part of an ASP for long-term care. Studies specifically focused on development of antibiotic use metrics for long-term care are reviewed. It is stressed that these metrics should be considered as an integral part of an ASP in long-term care. In order to develop benchmarks for antibiotic use for long-term care, there must be appropriate risk adjustment for interfacility comparisons and quality improvement. Studies that have focused on resident functional status as a risk factor for infection and antibiotic use are reviewed. Recommendations for the potentially most useful and feasible metrics for long-term care are provided along with recommendations for future research. PMID:27233489

  14. Benchmark on outdoor scenes

    NASA Astrophysics Data System (ADS)

    Zhang, Hairong; Wang, Cheng; Chen, Yiping; Jia, Fukai; Li, Jonathan

    2016-03-01

    Depth super-resolution is becoming popular in computer vision, and most test data are based on indoor data sets with ground-truth measurements such as Middlebury. However, indoor data sets are mainly acquired from structured-light techniques under ideal conditions, which cannot represent real-world scenes under natural light. Unlike indoor scenes, the uncontrolled outdoor environment is much more complicated and is rich both in visual and depth texture. For that reason, we develop a more challenging and meaningful outdoor benchmark for depth super-resolution using the state-of-the-art active laser scanning system.

  15. Algebraic Multigrid Benchmark

    Energy Science and Technology Software Center (ESTSC)

    2013-05-06

    AMG2013 is a parallel algebraic multigrid solver for linear systems arising from problems on unstructured grids. It has been derived directly from the Boomer AMG solver in the hypre library, a large linear solvers library that is being developed in the Center for Applied Scientific Computing (CASC) at LLNL. The driver provided in the benchmark can build various test problems. The default problem is a Laplace type problem on an unstructured domain with various jumps and an anisotropy in one part.

  16. Optimization of Deep Drilling Performance--Development and Benchmark Testing of Advanced Diamond Product Drill Bits & HP/HT Fluids to Significantly Improve Rates of Penetration

    SciTech Connect

    Alan Black; Arnis Judzis

    2003-10-01

    This document details the progress to date on the OPTIMIZATION OF DEEP DRILLING PERFORMANCE--DEVELOPMENT AND BENCHMARK TESTING OF ADVANCED DIAMOND PRODUCT DRILL BITS AND HP/HT FLUIDS TO SIGNIFICANTLY IMPROVE RATES OF PENETRATION contract for the year starting October 2002 through September 2003. The industry cost shared program aims to benchmark drilling rates of penetration in selected simulated deep formations and to significantly improve ROP through a team development of aggressive diamond product drill bit--fluid system technologies. Overall the objectives are as follows: Phase 1--Benchmark ''best in class'' diamond and other product drilling bits and fluids and develop concepts for a next level of deep drilling performance; Phase 2--Develop advanced smart bit--fluid prototypes and test at large scale; and Phase 3--Field trial smart bit--fluid concepts, modify as necessary and commercialize products. Accomplishments to date include the following: 4Q 2002--Project started; Industry Team was assembled; Kick-off meeting was held at DOE Morgantown; 1Q 2003--Engineering meeting was held at Hughes Christensen, The Woodlands Texas to prepare preliminary plans for development and testing and review equipment needs; Operators started sending information regarding their needs for deep drilling challenges and priorities for large-scale testing experimental matrix; Aramco joined the Industry Team as DEA 148 objectives paralleled the DOE project; 2Q 2003--Engineering and planning for high pressure drilling at TerraTek commenced; 3Q 2003--Continuation of engineering and design work for high pressure drilling at TerraTek; Baker Hughes INTEQ Drilling Fluids and Hughes Christensen commence planning for Phase 1 testing--recommendations for bits and fluids.

  17. Metrics in Career Education.

    ERIC Educational Resources Information Center

    Lindbeck, John R.

    The United States is rapidly becoming a metric nation. Industry, education, business, and government are all studying the issue of metrication to learn how they can prepare for it. The book is designed to help teachers and students in career education programs learn something about metrics. Presented in an easily understood manner, the textbook's…

  18. Metrication for the Manager.

    ERIC Educational Resources Information Center

    Benedict, John T.

    The scope of this book covers metrication management. It was created to fill the middle management need for condensed, authoritative information about the metrication process and was conceived as a working tool and a prime reference source. Written from a management point of view, it touches on virtually all aspects of metrication and highlights…

  19. Research Reactor Benchmarks

    SciTech Connect

    Ravnik, Matjaz; Jeraj, Robert

    2003-09-15

    A criticality benchmark experiment performed at the Jozef Stefan Institute TRIGA Mark II research reactor is described. This experiment and its evaluation are given as examples of benchmark experiments at research reactors. For this reason the differences and possible problems compared to other benchmark experiments are particularly emphasized. General guidelines for performing criticality benchmarks in research reactors are given. The criticality benchmark experiment was performed in a normal operating reactor core using commercially available fresh 20% enriched fuel elements containing 12 wt% uranium in uranium-zirconium hydride fuel material. Experimental conditions to minimize experimental errors and to enhance computer modeling accuracy are described. Uncertainties in multiplication factor due to fuel composition and geometry data are analyzed by sensitivity analysis. The simplifications in the benchmark model compared to the actual geometry are evaluated. Sample benchmark calculations with the MCNP and KENO Monte Carlo codes are given.

  20. Benchmarks for acute stroke care delivery

    PubMed Central

    Hall, Ruth E.; Khan, Ferhana; Bayley, Mark T.; Asllani, Eriola; Lindsay, Patrice; Hill, Michael D.; O'Callaghan, Christina; Silver, Frank L.; Kapral, Moira K.

    2013-01-01

    Objective Despite widespread interest in many jurisdictions in monitoring and improving the quality of stroke care delivery, benchmarks for most stroke performance indicators have not been established. The objective of this study was to develop data-derived benchmarks for acute stroke quality indicators. Design Nine key acute stroke quality indicators were selected from the Canadian Stroke Best Practice Performance Measures Manual. Participants A population-based retrospective sample of patients discharged from 142 hospitals in Ontario, Canada, between 1 April 2008 and 31 March 2009 (N = 3191) was used to calculate hospital rates of performance and benchmarks. Intervention The Achievable Benchmark of Care (ABC™) methodology was used to create benchmarks based on the performance of the upper 15% of patients in the top-performing hospitals. Main Outcome Measures Benchmarks were calculated for rates of neuroimaging, carotid imaging, stroke unit admission, dysphagia screening and administration of stroke-related medications. Results The following benchmarks were derived: neuroimaging within 24 h, 98%; admission to a stroke unit, 77%; thrombolysis among patients arriving within 2.5 h, 59%; carotid imaging, 93%; dysphagia screening, 88%; antithrombotic therapy, 98%; anticoagulation for atrial fibrillation, 94%; antihypertensive therapy, 92% and lipid-lowering therapy, 77%. ABC™ acute stroke care benchmarks achieve or exceed the consensus-based targets required by Accreditation Canada, with the exception of dysphagia screening. Conclusions Benchmarks for nine hospital-based acute stroke care quality indicators have been established. These can be used in the development of standards for quality improvement initiatives. PMID:24141011
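
    The ABC™ calculation itself is straightforward to sketch: rank hospitals by their indicator rate and pool the best performers until they account for roughly the top fraction of patients. The sketch below is illustrative only and omits the adjusted-rate correction the published method applies to small denominators:

    ```python
    # Hedged sketch of the Achievable Benchmark of Care idea (illustrative only).
    def achievable_benchmark(hospitals, top_fraction=0.15):
        """hospitals: list of (numerator, denominator) pairs, one per hospital."""
        total_patients = sum(d for _, d in hospitals)
        # Rank hospitals by their indicator rate, best first.
        ranked = sorted(hospitals, key=lambda nd: nd[0] / nd[1], reverse=True)
        covered, num, den = 0, 0, 0
        for n, d in ranked:
            num, den, covered = num + n, den + d, covered + d
            if covered >= top_fraction * total_patients:
                break
        return num / den  # pooled rate of the top-performing hospitals

    # Toy data: (patients meeting the indicator, eligible patients) per hospital.
    print(achievable_benchmark([(95, 100), (270, 300), (150, 200), (400, 800)]))  # 0.9125
    ```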

  1. Radiation Detection Computational Benchmark Scenarios

    SciTech Connect

    Shaver, Mark W.; Casella, Andrew M.; Wittman, Richard S.; McDonald, Ben S.

    2013-09-24

    Modeling forms an important component of radiation detection development, allowing for testing of new detector designs, evaluation of existing equipment against a wide variety of potential threat sources, and assessing operation performance of radiation detection systems. This can, however, result in large and complex scenarios which are time consuming to model. A variety of approaches to radiation transport modeling exist with complementary strengths and weaknesses for different problems. This variety of approaches, and the development of promising new tools (such as ORNL’s ADVANTG) which combine benefits of multiple approaches, illustrates the need for a means of evaluating or comparing different techniques for radiation detection problems. This report presents a set of 9 benchmark problems for comparing different types of radiation transport calculations, identifying appropriate tools for classes of problems, and testing and guiding the development of new methods. The benchmarks were drawn primarily from existing or previous calculations with a preference for scenarios which include experimental data, or otherwise have results with a high level of confidence, are non-sensitive, and represent problem sets of interest to NA-22. From a technical perspective, the benchmarks were chosen to span a range of difficulty and to include gamma transport, neutron transport, or both and represent different important physical processes and a range of sensitivity to angular or energy fidelity. Following benchmark identification, existing information about geometry, measurements, and previous calculations were assembled. Monte Carlo results (MCNP decks) were reviewed or created and re-run in order to attain accurate computational times and to verify agreement with experimental data, when present. Benchmark information was then conveyed to ORNL in order to guide testing and development of hybrid calculations. The results of those ADVANTG calculations were then sent to PNNL for

  2. Selection of metrics based on the seagrass Cymodocea nodosa and development of a biotic index (CYMOX) for assessing ecological status of coastal and transitional waters

    NASA Astrophysics Data System (ADS)

    Oliva, Silvia; Mascaró, Oriol; Llagostera, Izaskun; Pérez, Marta; Romero, Javier

    2012-12-01

    Bioindicators, based on a large variety of organisms, have been increasingly used in the assessment of the status of aquatic systems. In marine coastal waters, seagrasses have shown a great potential as bioindicator organisms, probably due to both their environmental sensitivity and the large amount of knowledge available. However, and as far as we are aware, only little attention has been paid to euryhaline species suitable for biomonitoring both transitional and marine waters. With the aim to contribute to this expanding field, and provide new and useful tools for managers, we develop here a multi-bioindicator index based on the seagrass Cymodocea nodosa. We first compiled from the literature a suite of 54 candidate metrics, i.e., a measurable attribute of the organism or community concerned that adequately reflects properties of the environment, obtained from C. nodosa and its associated ecosystem, putatively responding to environmental deterioration. We then evaluated them empirically, obtaining a complete dataset on these metrics along a gradient of anthropogenic disturbance. Using this dataset, we selected the metrics to construct the index, using, successively: (i) ANOVA, to assess their capacity to discriminate among sites of different environmental conditions; (ii) PCA, to check the existence of a common pattern among the metrics reflecting the environmental gradient; and (iii) feasibility and cost-effectiveness criteria. Finally, 10 metrics (out of the 54 tested) encompassing from the physiological (δ15N, δ34S, % N, % P content of rhizomes), through the individual (shoot size) and the population (root weight ratio), to the community (epiphytes load) organisation levels, and some metallic pollution descriptors (Cd, Cu and Zn content of rhizomes) were retained and integrated into a single index (CYMOX) using the scores of the sites on the first axis of a PCA. These scores were reduced to a 0-1 (Ecological Quality Ratio) scale by referring the values to the
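
    The final index-construction step described above, scoring sites on the first PCA axis of the retained metrics and rescaling to a 0-1 Ecological Quality Ratio, can be sketched as follows; the data, metric choices, and axis orientation are illustrative and this is not the authors' code:

    ```python
    # Hedged sketch of a PCA-based index: first-axis site scores rescaled to [0, 1].
    import numpy as np

    def eqr_scores(metric_matrix):
        """metric_matrix: sites x metrics array of the retained indicators."""
        X = (metric_matrix - metric_matrix.mean(axis=0)) / metric_matrix.std(axis=0)
        # First principal component via SVD of the standardized data.
        _, _, vt = np.linalg.svd(X, full_matrices=False)
        pc1 = X @ vt[0]
        # Note: the sign of PC1 is arbitrary; in practice the axis would be oriented
        # against the disturbance gradient before rescaling.
        return (pc1 - pc1.min()) / (pc1.max() - pc1.min())

    # Toy example: 4 sites x 3 metrics (e.g., % N content, shoot size, epiphyte load).
    sites = np.array([[1.0, 10.0, 0.2],
                      [1.5, 8.0, 0.5],
                      [2.0, 6.0, 0.9],
                      [2.5, 4.0, 1.4]])
    print(eqr_scores(sites))
    ```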

  3. Metric Education and the Metrics Debate: A Perspective.

    ERIC Educational Resources Information Center

    Chappelet, Jean Loup

    A short history of the use of the metric system is given. The role of education in metrication is discussed. The metric activities of three groups of metrics advocates, the business community, private groups, and government agencies, are described. Arguments advanced by metric opponents are also included. The author compares the metric debate with…

  4. Exploring Metric Symmetry

    SciTech Connect

    Zwart, P.H.; Grosse-Kunstleve, R.W.; Adams, P.D.

    2006-07-31

    Relatively minor perturbations to a crystal structure can in some cases result in apparently large changes in symmetry. Changes in space group or even lattice can be induced by heavy metal or halide soaking (Dauter et al, 2001), flash freezing (Skrzypczak-Jankun et al, 1996), and Se-Met substitution (Poulsen et al, 2001). Relations between various space groups and lattices can provide insight in the underlying structural causes for the symmetry or lattice transformations. Furthermore, these relations can be useful in understanding twinning and how to efficiently solve two different but related crystal structures. Although (pseudo) symmetric properties of a certain combination of unit cell parameters and a space group are immediately obvious (such as a pseudo four-fold axis if a is approximately equal to b in an orthorhombic space group), other relations (e.g. Lehtio, et al, 2005) that are less obvious might be crucial to the understanding and detection of certain idiosyncrasies of experimental data. We have developed a set of tools that allows straightforward exploration of possible metric symmetry relations given unit cell parameters and a space group. The new iotbx.explore_metric_symmetry command produces an overview of the various relations between several possible point groups for a given lattice. Methods for finding relations between a pair of unit cells are also available. The tools described in this newsletter are part of the CCTBX libraries, which are included in the latest (versions July 2006 and up) PHENIX and CCI Apps distributions.

  5. Requirement Metrics for Risk Identification

    NASA Technical Reports Server (NTRS)

    Hammer, Theodore; Huffman, Lenore; Wilson, William; Rosenberg, Linda; Hyatt, Lawrence

    1996-01-01

    The Software Assurance Technology Center (SATC) is part of the Office of Mission Assurance of the Goddard Space Flight Center (GSFC). The SATC's mission is to assist National Aeronautics and Space Administration (NASA) projects to improve the quality of software which they acquire or develop. The SATC's efforts are currently focused on the development and use of metric methodologies and tools that identify and assess risks associated with software performance and scheduled delivery. This starts at the requirements phase, where the SATC, in conjunction with software projects at GSFC and other NASA centers is working to identify tools and metric methodologies to assist project managers in identifying and mitigating risks. This paper discusses requirement metrics currently being used at NASA in a collaborative effort between the SATC and the Quality Assurance Office at GSFC to utilize the information available through the application of requirements management tools.

  6. A wavelet contrast metric for the targeting task performance metric

    NASA Astrophysics Data System (ADS)

    Preece, Bradley L.; Flug, Eric A.

    2016-05-01

    Target acquisition performance depends strongly on the contrast of the target. The Targeting Task Performance (TTP) metric, within the Night Vision Integrated Performance Model (NV-IPM), uses a combination of resolution, signal-to-noise ratio (SNR), and contrast to predict and model system performance. While the dependence on resolution and SNR are well defined and understood, defining a robust and versatile contrast metric for a wide variety of acquisition tasks is more difficult. In this correspondence, a wavelet contrast metric (WCM) is developed under the assumption that the human eye processes spatial differences in a manner similar to a wavelet transform. The amount of perceivable information, or useful wavelet coefficients, is used to predict the total viewable contrast to the human eye. The WCM is intended to better match the measured performance of the human vision system for high-contrast, low-contrast, and low-observable targets. After further validation, the new contrast metric can be incorporated using a modified TTP metric into the latest Army target acquisition software suite, the NV-IPM.
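
    A minimal sketch of the idea behind the WCM: decompose a target chip with a simple Haar transform and count the detail coefficients that exceed a perceptual threshold as a proxy for perceivable contrast. The transform, threshold, and level count are illustrative stand-ins for the model's calibrated values:

    ```python
    # Hedged sketch of a wavelet-style contrast measure (not the NV-IPM implementation).
    import numpy as np

    def haar_level(img):
        # Horizontal averages/differences, then vertical on the averaged image.
        a = 0.5 * (img[:, 0::2] + img[:, 1::2]); d_h = 0.5 * (img[:, 0::2] - img[:, 1::2])
        aa = 0.5 * (a[0::2, :] + a[1::2, :]);    d_v = 0.5 * (a[0::2, :] - a[1::2, :])
        return aa, (d_h, d_v)

    def wavelet_contrast(img, levels=3, threshold=0.02):
        perceivable, cur = 0, img.astype(float)
        for _ in range(levels):
            cur, details = haar_level(cur)
            perceivable += sum(int(np.sum(np.abs(d) > threshold)) for d in details)
        return perceivable  # crude proxy for perceivable contrast content

    rng = np.random.default_rng(2)
    target = rng.random((64, 64)) * 0.1 + 0.5   # low-contrast target chip
    print(wavelet_contrast(target), wavelet_contrast(target * 5))  # more contrast, more coefficients
    ```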

  7. Benchmark Energetic Data in a Model System for Grubbs II Metathesis Catalysis and Their Use for the Development, Assessment, and Validation of Electronic Structure Methods

    SciTech Connect

    Zhao, Yan; Truhlar, Donald G.

    2009-01-31

    We present benchmark relative energetics in the catalytic cycle of a model system for Grubbs second-generation olefin metathesis catalysts. The benchmark data were determined by a composite approach based on CCSD(T) calculations, and they were used as a training set to develop a new spin-component-scaled MP2 method optimized for catalysis, which is called SCSC-MP2. The SCSC-MP2 method has improved performance for modeling Grubbs II olefin metathesis catalysts as compared to canonical MP2 or SCS-MP2. We also employed the benchmark data to test 17 WFT methods and 39 density functionals. Among the tested density functionals, M06 is the best performing functional. M06/TZQS gives an MUE of only 1.06 kcal/mol, and it is a much more affordable method than the SCSC-MP2 method or any other correlated WFT methods. The best performing meta-GGA is M06-L, and M06-L/DZQ gives an MUE of 1.77 kcal/mol. PBEh is the best performing hybrid GGA, with an MUE of 3.01 kcal/mol; however, it does not perform well for the larger, real Grubbs II catalyst. B3LYP and many other functionals containing the LYP correlation functional perform poorly, and B3LYP underestimates the stability of stationary points for the cis-pathway of the model system by a large margin. From the assessments, we recommend the M06, M06-L, and MPW1B95 functionals for modeling Grubbs II olefin metathesis catalysts. The local M06-L method is especially efficient for calculations on large systems.

  8. Development of a Model Protein Interaction Pair as a Benchmarking Tool for the Quantitative Analysis of 2-Site Protein-Protein Interactions.

    PubMed

    Yamniuk, Aaron P; Newitt, John A; Doyle, Michael L; Arisaka, Fumio; Giannetti, Anthony M; Hensley, Preston; Myszka, David G; Schwarz, Fred P; Thomson, James A; Eisenstein, Edward

    2015-12-01

    A significant challenge in the molecular interaction field is to accurately determine the stoichiometry and stepwise binding affinity constants for macromolecules having >1 binding site. The mission of the Molecular Interactions Research Group (MIRG) of the Association of Biomolecular Resource Facilities (ABRF) is to show how biophysical technologies are used to quantitatively characterize molecular interactions, and to educate the ABRF members and scientific community on the utility and limitations of core technologies [such as biosensor, microcalorimetry, or analytic ultracentrifugation (AUC)]. In the present work, the MIRG has developed a robust model protein interaction pair consisting of a bivalent variant of the Bacillus amyloliquefaciens extracellular RNase barnase and a variant of its natural monovalent intracellular inhibitor protein barstar. It is demonstrated that this system can serve as a benchmarking tool for the quantitative analysis of 2-site protein-protein interactions. The protein interaction pair enables determination of precise binding constants for the barstar protein binding to 2 distinct sites on the bivalent barnase binding partner (termed binase), where the 2 binding sites were engineered to possess affinities that differed by 2 orders of magnitude. Multiple MIRG laboratories characterized the interaction using isothermal titration calorimetry (ITC), AUC, and surface plasmon resonance (SPR) methods to evaluate the feasibility of the system as a benchmarking model. Although general agreement was seen for the binding constants measured using solution-based ITC and AUC approaches, weaker affinity was seen for surface-based method SPR, with protein immobilization likely affecting affinity. An analysis of the results from multiple MIRG laboratories suggests that the bivalent barnase-barstar system is a suitable model for benchmarking new approaches for the quantitative characterization of complex biomolecular interactions. PMID:26543437

  9. Using Benchmarking To Influence Tuition and Fee Decisions.

    ERIC Educational Resources Information Center

    Hubbell, Loren W. Loomis; Massa, Robert J.; Lapovsky, Lucie

    2002-01-01

    Discusses the use of benchmarking in managing enrollment. Using a case study, illustrates how benchmarking can help administrators develop strategies for planning and implementing admissions and pricing practices. (EV)

  10. About Using the Metric System.

    ERIC Educational Resources Information Center

    Illinois State Office of Education, Springfield.

    This booklet contains a brief introduction to the use of the metric system. Topics covered include: (1) what is the metric system; (2) how to think metric; (3) some advantages of the metric system; (4) basics of the metric system; (5) how to measure length, area, volume, mass and temperature the metric way; (6) some simple calculations using…

  11. Austin Community College Benchmarking Update.

    ERIC Educational Resources Information Center

    Austin Community Coll., TX. Office of Institutional Effectiveness.

    Austin Community College contracted with MGT of America, Inc. in spring 1999 to develop a peer and benchmark (best) practices analysis on key indicators. These indicators were updated in spring 2002 using data from eight Texas community colleges and four non-Texas institutions that represent large, comprehensive, urban community colleges, similar…

  12. Processor Emulator with Benchmark Applications

    SciTech Connect

    Lloyd, G. Scott; Pearce, Roger; Gokhale, Maya

    2015-11-13

    A processor emulator and a suite of benchmark applications have been developed to assist in characterizing the performance of data-centric workloads on current and future computer architectures. Some of the applications have been collected from other open source projects. For more details on the emulator and an example of its usage, see reference [1].

  13. PyMPI Dynamic Benchmark

    Energy Science and Technology Software Center (ESTSC)

    2007-02-16

    Pynamic is a benchmark designed to test a system's ability to handle the Dynamic Linking and Loading (DLL) requirements of Python-based scientific applications. This benchmark was developed to add a workload to our testing environment, a workload that represents a newly emerging class of DLL behaviors. Pynamic builds on pyMPI, an MPI extension to Python, C-extension dummy codes, and a glue layer that facilitates linking and loading of the generated dynamic modules into the resulting pyMPI. Pynamic is configurable, enabling modeling of the static properties of a specific code as described in section 5. It does not, however, model any significant computations of the target and hence is not subject to the same level of control as the target code. In fact, HPC computer vendors and tool developers will be encouraged to add it to their testing suite once the code release is completed. An ability to produce and run this benchmark is an effective test for validating the capability of a compiler and linker/loader as well as an OS kernel and other runtime systems of HPC computer vendors. In addition, the benchmark is designed as a test case for stressing code development tools. Though Python has recently gained popularity in the HPC community, its heavy DLL operations have hindered certain HPC code development tools, notably parallel debuggers, from performing optimally.

  14. SMaSH: a benchmarking toolkit for human genome variant calling

    PubMed Central

    Talwalkar, Ameet; Liptrap, Jesse; Newcomb, Julie; Hartl, Christopher; Terhorst, Jonathan; Curtis, Kristal; Bresler, Ma’ayan; Song, Yun S.; Jordan, Michael I.; Patterson, David

    2014-01-01

    Motivation: Computational methods are essential to extract actionable information from raw sequencing data, and to thus fulfill the promise of next-generation sequencing technology. Unfortunately, computational tools developed to call variants from human sequencing data disagree on many of their predictions, and current methods to evaluate accuracy and computational performance are ad hoc and incomplete. Agreement on benchmarking variant calling methods would stimulate development of genomic processing tools and facilitate communication among researchers. Results: We propose SMaSH, a benchmarking methodology for evaluating germline variant calling algorithms. We generate synthetic datasets, organize and interpret a wide range of existing benchmarking data for real genomes and propose a set of accuracy and computational performance metrics for evaluating variant calling methods on these benchmarking data. Moreover, we illustrate the utility of SMaSH to evaluate the performance of some leading single-nucleotide polymorphism, indel and structural variant calling algorithms. Availability and implementation: We provide free and open access online to the SMaSH tool kit, along with detailed documentation, at smash.cs.berkeley.edu Contact: ameet@cs.berkeley.edu or pattrsn@cs.berkeley.edu Supplementary information: Supplementary data are available at Bioinformatics online. PMID:24894505
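
    The accuracy side of such benchmarking reduces to comparing a call set against a truth set. A minimal sketch, assuming variants are keyed by (chromosome, position, ref, alt); the data are illustrative and this is not the SMaSH implementation:

    ```python
    # Hedged sketch of the kind of accuracy metric variant-calling benchmarks rely on:
    # precision and recall of a call set against a truth set.
    def variant_accuracy(calls, truth):
        calls, truth = set(calls), set(truth)
        tp = len(calls & truth)
        precision = tp / len(calls) if calls else 0.0
        recall = tp / len(truth) if truth else 0.0
        return precision, recall

    truth = {("chr1", 1000, "A", "G"), ("chr1", 2000, "C", "T"), ("chr2", 500, "G", "A")}
    calls = {("chr1", 1000, "A", "G"), ("chr2", 500, "G", "A"), ("chr2", 900, "T", "C")}
    print(variant_accuracy(calls, truth))  # both 2/3
    ```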

  15. Metric handbook for Federal officials: Recommendations of the Interagency Committee on Metric Policy

    NASA Astrophysics Data System (ADS)

    1989-08-01

    Recommendations for introduction of metric units in proposed legislation, regulations, data requests and other Government use of measurement units are presented. These recommendations were developed for the Interagency Committee on Metric Policy by its working arm, the Metrication Operating Committee, and its Metric Practice and Preferred Units Subcommittee. Assistance in editing of the documents, coordination and publication in the Federal Register was provided by the U.S. Department of Commerce, Office of Metric Programs, which serves as the secretariat for the ICMP and its subordinate committees. Other Federal documents are provided for convenient reference as appendices.

  16. Fusion metrics for dynamic situation analysis

    NASA Astrophysics Data System (ADS)

    Blasch, Erik P.; Pribilski, Mike; Daughtery, Bryan; Roscoe, Brian; Gunsett, Josh

    2004-08-01

    To design information fusion systems, it is important to develop metrics as part of a test and evaluation strategy. In many cases, fusion systems are designed to (1) meet a specific set of user information needs (IN), (2) continuously validate information pedigree and updates, and (3) maintain this performance under changing conditions. A fusion system's performance is evaluated in many ways. However, developing a consistent set of metrics is important for standardization. For example, many track and identification metrics have been proposed for fusion analysis. To evaluate a complete fusion system performance, level 4 sensor management and level 5 user refinement metrics need to be developed simultaneously to determine whether or not the fusion system is meeting information needs. To describe fusion performance, the fusion community needs to agree on a minimum set of metrics for user assessment and algorithm comparison. We suggest that such a minimum set should include feasible metrics of accuracy, confidence, throughput, timeliness, and cost. These metrics can be computed as confidence (probability), accuracy (error), timeliness (delay), throughput (amount) and cost (dollars). In this paper, we explore an aggregate set of metrics for fusion evaluation and demonstrate with information need metrics for dynamic situation analysis.

  17. Symbolic planning with metric time

    NASA Astrophysics Data System (ADS)

    MacMillan, T. R.

    1992-03-01

    Most AI planning systems have considered time in a qualitative way only. For example, a plan may require one action to come 'before' another. Metric time enables AI planners to represent action durations and reason over quantitative temporal constraints such as windows of opportunity. This paper presents preliminary results observed while developing a theory of multi-agent adversarial planning for battle management research. Quantitative temporal reasoning seems essential in this domain. For example, Orange may plan to block Blue's attack by seizing a river ford which Blue must cross, but only if Orange can get there during the window of opportunity while Blue is approaching the ford but has not yet arrived. In nonadversarial multi-agent planning, metric time enables planners to detect windows of opportunity for agents to help or hinder each other. In single-agent planning, metric time enables planners to reason about deadlines, temporally constrained resource availability, and asynchronous processes which the agent can initiate and monitor. Perhaps surprisingly, metric time increases the computational complexity of planning less than might be expected, because it reduces the computational complexity of modal truth criteria. To make this observation precise, we review Chapman's analysis of modal truth criteria and describe a tractable heuristic criterion, 'worst case necessarily true.' Deciding whether a proposition is worst case necessarily true in a single-agent plan with n steps requires O(n) computations if only qualitative temporal information is used. We show how it can be decided in O(log n) using metric time.

  18. Developing a new stream metric for comparing stream function using a bank-floodplain sediment budget: a case study of three Piedmont streams

    USGS Publications Warehouse

    Schenk, Edward R.; Hupp, Cliff R.; Gellis, Allen; Noe, Greg

    2013-01-01

    A bank and floodplain sediment budget was created for three Piedmont streams tributary to the Chesapeake Bay. The watersheds of each stream varied in land use from urban (Difficult Run) to urbanizing (Little Conestoga Creek) to agricultural (Linganore Creek). The purpose of the study was to determine the relation between geomorphic parameters and sediment dynamics and to develop a floodplain trapping metric for comparing streams with variable characteristics. Net site sediment budgets were best explained by gradient at Difficult Run, floodplain width at Little Conestoga Creek, and the relation of channel cross-sectional area to floodplain width at Linganore Creek. A correlation for all streams indicated that net site sediment budget was best explained by relative floodplain width (ratio of channel width to floodplain width). A new geomorphic metric, the floodplain trapping factor, was used to compare sediment budgets between streams with differing suspended sediment yields. Site sediment budgets were normalized by floodplain area and divided by the stream's sediment yield to provide a unitless measure of floodplain sediment trapping. A floodplain trapping factor represents the amount of upland sediment that a particular floodplain site can trap (e.g., a factor of 5 would indicate that a particular floodplain site traps the equivalent of 5 times that area in upland erosional source area). Using this factor we determined that Linganore Creek had the highest gross and net (floodplain deposition minus bank erosion) floodplain trapping factors (107 and 46, respectively), that Difficult Run had the lowest gross floodplain trapping factor (29), and that Little Conestoga Creek had the lowest net floodplain trapping factor (–14, indicating that study sites were net contributors to the suspended sediment load). The trapping factor is a robust metric for comparing three streams of varied watershed and geomorphic character, and it promises to be a useful tool for future stream assessments.
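
    The trapping factor described above is a simple ratio: normalize the site sediment budget by floodplain area, then divide by the stream's suspended-sediment yield. The sketch below only illustrates that arithmetic under assumed, consistent units; the variable names and example numbers are not taken from the study.

        # Hedged sketch of the floodplain trapping factor: a unitless ratio of
        # area-normalized floodplain trapping to the watershed sediment yield.
        def trapping_factor(site_budget_mass, floodplain_area, sediment_yield):
            """site_budget_mass: net sediment trapped at the site (e.g., Mg/yr)
            floodplain_area: floodplain area of the site (e.g., ha)
            sediment_yield: watershed suspended-sediment yield (e.g., Mg/ha/yr)
            A value of 5 means the site traps the equivalent of 5 times its own
            area in upland erosional source area."""
            return (site_budget_mass / floodplain_area) / sediment_yield

        # Example with made-up numbers:
        print(trapping_factor(site_budget_mass=250.0,
                              floodplain_area=2.0,
                              sediment_yield=2.5))   # -> 50.0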

  19. Fighter agility metrics, research, and test

    NASA Technical Reports Server (NTRS)

    Liefer, Randall K.; Valasek, John; Eggold, David P.

    1990-01-01

    Proposed new metrics to assess fighter aircraft agility are collected and analyzed. A framework for classification of these new agility metrics is developed and applied. A complete set of transient agility metrics is evaluated with a high fidelity, nonlinear F-18 simulation provided by the NASA Dryden Flight Research Center. Test techniques and data reduction methods are proposed. A method of providing cuing information to the pilot during flight test is discussed. The sensitivity of longitudinal and lateral agility metrics to deviations from the pilot cues is studied in detail. The metrics are shown to be largely insensitive to reasonable deviations from the nominal test pilot commands. Instrumentation required to quantify agility via flight test is also considered. With one exception, each of the proposed new metrics may be measured with instrumentation currently available. Simulation documentation and user instructions are provided in an appendix.

  20. Using Publication Metrics to Highlight Academic Productivity and Research Impact

    PubMed Central

    Carpenter, Christopher R.; Cone, David C.; Sarli, Cathy C.

    2016-01-01

    This article provides a broad overview of widely available measures of academic productivity and impact using publication data and highlights uses of these metrics for various purposes. Metrics based on publication data include measures such as number of publications, number of citations, the journal impact factor score, and the h-index, as well as emerging document-level metrics. Publication metrics can be used for a variety of purposes, including tenure and promotion, grant applications and renewal reports, benchmarking, recruiting efforts, and administrative purposes such as departmental or university performance reports. The authors also highlight practical applications of measuring and reporting academic productivity and impact to emphasize and promote individual investigators, grant applications, or department output. PMID:25308141
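
    Of the metrics listed, the h-index is the easiest to make concrete: it is the largest h such that the author has h papers with at least h citations each. A minimal, self-contained sketch (the citation counts are made up):

        def h_index(citations):
            """Largest h such that h papers each have at least h citations."""
            h = 0
            for rank, cites in enumerate(sorted(citations, reverse=True), start=1):
                if cites >= rank:
                    h = rank
                else:
                    break
            return h

        print(h_index([10, 8, 5, 4, 3]))   # -> 4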

  1. Semantic Metrics for Object Oriented Design

    NASA Technical Reports Server (NTRS)

    Etzkorn, Lethe

    2003-01-01

    The purpose of this proposal is to research a new suite of object-oriented (OO) software metrics, called semantic metrics, that have the potential to help software engineers identify fragile, low-quality code sections much earlier in the development cycle than is possible with traditional OO metrics. With earlier and better fault detection, software maintenance will be less time consuming and expensive, and software reusability will be improved. Because it is less costly to correct faults found earlier than to correct faults found later in the software lifecycle, the overall cost of software development will be reduced. Semantic metrics can be derived from the knowledge base of a program understanding system. A program understanding system is designed to understand a software module. Once understanding is complete, the knowledge base contains digested information about the software module. Various semantic metrics can be collected on the knowledge base. This new kind of metric measures domain complexity, or the relationship of the software to its application domain, rather than implementation complexity, which is what traditional software metrics measure. A semantic metric will thus map much more closely to qualities humans are interested in, such as cohesion and maintainability, than is possible using traditional metrics, which are calculated using only syntactic aspects of software.

  2. PRISMATIC CORE COUPLED TRANSIENT BENCHMARK

    SciTech Connect

    J. Ortensi; M.A. Pope; G. Strydom; R.S. Sen; M.D. DeHart; H.D. Gougar; C. Ellis; A. Baxter; V. Seker; T.J. Downar; K. Vierow; K. Ivanov

    2011-06-01

    The Prismatic Modular Reactor (PMR) is one of the High Temperature Reactor (HTR) design concepts that have existed for some time. Several prismatic units have operated in the world (DRAGON, Fort St. Vrain, Peach Bottom) and one unit is still in operation (HTTR). The deterministic neutronics and thermal-fluids transient analysis tools and methods currently available for the design and analysis of PMRs have lagged behind the state of the art compared to LWR reactor technologies. This has motivated the development of more accurate and efficient tools for the design and safety evaluations of the PMR. In addition to the work invested in new methods, it is essential to develop appropriate benchmarks to verify and validate the new methods in computer codes. The purpose of this benchmark is to establish a well-defined problem, based on a common given set of data, to compare methods and tools in core simulation and thermal hydraulics analysis with a specific focus on transient events. The benchmark working group is currently seeking OECD/NEA sponsorship. This benchmark is being pursued and is heavily based on the success of the PBMR-400 exercise.

  3. Taking Aims: New CASE Study Benchmarks Advancement Investments and Returns

    ERIC Educational Resources Information Center

    Goldsmith, Rae

    2012-01-01

    Advancement professionals have always been thirsty for information that will help them understand how their programs compare with those of their peers. But in recent years the demand for benchmarking data has exploded as budgets have become leaner, leaders have become more business minded, and terms like "performance metrics and return on…

  4. Arbitrary Metrics in Psychology

    ERIC Educational Resources Information Center

    Blanton, Hart; Jaccard, James

    2006-01-01

    Many psychological tests have arbitrary metrics but are appropriate for testing psychological theories. Metric arbitrariness is a concern, however, when researchers wish to draw inferences about the true, absolute standing of a group or individual on the latent psychological dimension being measured. The authors illustrate this in the context of 2…

  5. Metrics for Cosmetology.

    ERIC Educational Resources Information Center

    Cooper, Gloria S., Ed.; Magisos, Joel H., Ed.

    Designed to meet the job-related metric measurement needs of cosmetology students, this instructional package on cosmetology is part of a set of 55 packages for metric instruction in different occupations. The package is intended for students who already know the occupational terminology, measurement terms, and tools currently in use. Each of the…

  6. Introduction to Metrics.

    ERIC Educational Resources Information Center

    Edgecomb, Philip L.; Shapiro, Marion

    Addressed to vocational, or academic middle or high school students, this book reviews mathematics fundamentals using metric units of measurement. It utilizes a common-sense approach to the degree of accuracy needed in solving actual trade and every-day problems. Stress is placed on reading off metric measurements from a ruler or tape, and on…

  7. Metrics for Agricultural Mechanics.

    ERIC Educational Resources Information Center

    Cooper, Gloria S., Ed.; Magisos, Joel H., Ed.

    Designed to meet the job-related metric measurement needs of agricultural mechanics students, this instructional package is one of four for the agribusiness and natural resources occupations cluster, part of a set of 55 packages for metric instruction in different occupations. The package is intended for students who already know the occupational…

  8. What About Metric?

    ERIC Educational Resources Information Center

    Barbrow, Louis E.

    Implications of the change to the metric system in our daily lives are discussed. Advantages of the metric system are presented, especially its decimal base and ease of calculation which are demonstrated by several worked examples. Some further sources of information are listed. A world map indicates the few remaining countries that have not yet…

  9. Metrics for Food Distribution.

    ERIC Educational Resources Information Center

    Cooper, Gloria S., Ed.; Magisos, Joel H., Ed.

    Designed to meet the job-related metric measurement needs of students interested in food distribution, this instructional package is one of five for the marketing and distribution cluster, part of a set of 55 packages for metric instruction in different occupations. The package is intended for students who already know the occupational…

  10. Metrics for Transportation.

    ERIC Educational Resources Information Center

    Cooper, Gloria S., Ed.; Magisos, Joel H., Ed.

    Designed to meet the job-related metric measurement needs of students interested in transportation, this instructional package is one of five for the marketing and distribution cluster, part of a set of 55 packages for metric instruction in different occupations. The package is intended for students who already know the occupational terminology,…

  11. Results Oriented Benchmarking: The Evolution of Benchmarking at NASA from Competitive Comparisons to World Class Space Partnerships

    NASA Technical Reports Server (NTRS)

    Bell, Michael A.

    1999-01-01

    Informal benchmarking using personal or professional networks has taken place for many years at the Kennedy Space Center (KSC). The National Aeronautics and Space Administration (NASA) recognized early on the need to formalize the benchmarking process for better utilization of resources and improved benchmarking performance. The need to compete in a faster, better, cheaper environment has been the catalyst for formalizing these efforts. A pioneering benchmarking consortium was chartered at KSC in January 1994. The consortium, known as the Kennedy Benchmarking Clearinghouse (KBC), is a collaborative effort of NASA and all major KSC contractors. The charter of this consortium is to facilitate effective benchmarking and leverage the resulting quality improvements across KSC. The KBC acts as a resource with experienced facilitators and a proven process. One of the initial actions of the KBC was to develop a holistic methodology for Center-wide benchmarking. This approach to benchmarking integrates the best features of proven benchmarking models (i.e., Camp, Spendolini, Watson, and Balm). This cost-effective alternative to conventional benchmarking approaches has provided a foundation for consistent benchmarking at KSC through the development of common terminology, tools, and techniques. Through these efforts, a foundation and infrastructure have been built that allow short-duration benchmarking studies yielding results, gleaned from world-class partners, that can be readily implemented. The KBC has been recognized with the Silver Medal Award (in the applied research category) from the International Benchmarking Clearinghouse.

  12. Informatics in radiology: Efficiency metrics for imaging device productivity.

    PubMed

    Hu, Mengqi; Pavlicek, William; Liu, Patrick T; Zhang, Muhong; Langer, Steve G; Wang, Shanshan; Place, Vicki; Miranda, Rafael; Wu, Teresa Tong

    2011-01-01

    Acute awareness of the costs associated with medical imaging equipment is an ever-present aspect of the current healthcare debate. However, the monitoring of productivity associated with expensive imaging devices is likely to be labor intensive, relies on summary statistics, and lacks accepted and standardized benchmarks of efficiency. In the context of the general Six Sigma DMAIC (design, measure, analyze, improve, and control) process, a World Wide Web-based productivity tool called the Imaging Exam Time Monitor was developed to accurately and remotely monitor imaging efficiency with use of Digital Imaging and Communications in Medicine (DICOM) combined with a picture archiving and communication system. Five device efficiency metrics (examination duration, table utilization, interpatient time, appointment interval time, and interseries time) were derived from DICOM values. These metrics allow the standardized measurement of productivity, to facilitate the comparative evaluation of imaging equipment use and ongoing efforts to improve efficiency. A relational database was constructed to store patient imaging data, along with device- and examination-related data. The database provides full access to ad hoc queries and can automatically generate detailed reports for administrative and business use, thereby allowing staff to monitor data for trends and to better identify possible changes that could lead to improved productivity and reduced costs in association with imaging services. © RSNA, 2011. PMID:21257928
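
    Each of the five metrics reduces to differences of DICOM timestamps. The sketch below is a hedged illustration, not the authors' implementation: it assumes series start and end times have already been parsed from DICOM headers, and the metric definitions are simplified.

        from datetime import datetime

        def exam_duration_minutes(series_times):
            """series_times: list of (start, end) datetime pairs for one examination."""
            start = min(s for s, _ in series_times)
            end = max(e for _, e in series_times)
            return (end - start).total_seconds() / 60.0

        def interpatient_minutes(prev_exam_end, next_exam_start):
            """Idle gap between consecutive examinations on the same device."""
            return (next_exam_start - prev_exam_end).total_seconds() / 60.0

        # Illustrative values only:
        exam = [(datetime(2011, 1, 5, 9, 0), datetime(2011, 1, 5, 9, 12)),
                (datetime(2011, 1, 5, 9, 14), datetime(2011, 1, 5, 9, 25))]
        print(exam_duration_minutes(exam))                         # -> 25.0
        print(interpatient_minutes(datetime(2011, 1, 5, 9, 25),
                                   datetime(2011, 1, 5, 9, 40)))   # -> 15.0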

  13. A Seafloor Benchmark for 3-dimensional Geodesy

    NASA Astrophysics Data System (ADS)

    Chadwell, C. D.; Webb, S. C.; Nooner, S. L.

    2014-12-01

    We have developed an inexpensive, permanent seafloor benchmark to increase the longevity of seafloor geodetic measurements. The benchmark provides a physical tie to the sea floor lasting for decades (perhaps longer) on which geodetic sensors can be repeatedly placed and removed with millimeter resolution. Global coordinates estimated with seafloor geodetic techniques will remain attached to the benchmark allowing for the interchange of sensors as they fail or become obsolete, or for the sensors to be removed and used elsewhere, all the while maintaining a coherent series of positions referenced to the benchmark. The benchmark has been designed to free fall from the sea surface with transponders attached. The transponder can be recalled via an acoustic command sent from the surface to release from the benchmark and freely float to the sea surface for recovery. The duration of the sensor attachment to the benchmark will last from a few days to a few years depending on the specific needs of the experiment. The recovered sensors are then available to be reused at other locations, or again at the same site in the future. Three pins on the sensor frame mate precisely and unambiguously with three grooves on the benchmark. To reoccupy a benchmark a Remotely Operated Vehicle (ROV) uses its manipulator arm to place the sensor pins into the benchmark grooves. In June 2014 we deployed four benchmarks offshore central Oregon. We used the ROV Jason to successfully demonstrate the removal and replacement of packages onto the benchmark. We will show the benchmark design and its operational capabilities. Presently models of megathrust slip within the Cascadia Subduction Zone (CSZ) are mostly constrained by the sub-aerial GPS vectors from the Plate Boundary Observatory, a part of Earthscope. More long-lived seafloor geodetic measures are needed to better understand the earthquake and tsunami risk associated with a large rupture of the thrust fault within the Cascadia subduction zone

  14. A performance benchmark test for geodynamo simulations

    NASA Astrophysics Data System (ADS)

    Matsui, H.; Heien, E. M.

    2013-12-01

    In the last ten years, a number of numerical dynamo models have successfully represented basic characteristics of the geomagnetic field. As new models and numerical methods continue to be developed, it is important to update and extend benchmarks for testing these models. The first dynamo benchmark of Christensen et al. (2001) was applied to models based on spherical harmonic expansion methods. However, only a few groups have reported results of the dynamo benchmark using local methods (Harder and Hansen, 2005; Matsui and Okuda, 2005; Chan et al., 2007) because of the difficulty of treating magnetic boundary conditions with local methods. On the other hand, spherical harmonics expansion methods perform poorly on massively parallel computers because global data communications are required for the spherical harmonics expansions to evaluate nonlinear terms. We perform benchmark tests to assess various numerical methods for the next generation of geodynamo simulations. The purpose of this benchmark test is to assess numerical geodynamo models on a massively parallel computational platform. To compare as many numerical methods as possible, we consider the model with the insulated magnetic boundary by Christensen et al. (2001) and with the pseudo-vacuum magnetic boundary, because the pseudo-vacuum boundaries are easier to implement with the local methods than the magnetic insulated boundaries. In the present study, we consider two kinds of benchmarks, the so-called accuracy benchmark and performance benchmark. In the accuracy benchmark, we compare the dynamo models using the modest Ekman and Rayleigh numbers proposed by Christensen et al. (2001). We investigate the spatial resolution required for each dynamo code to obtain less than 1% difference from the suggested solution of the benchmark test using the two magnetic boundary conditions. In the performance benchmark, we investigate computational performance under the same computational environment. We perform these

  15. Documenting performance metrics in a building life-cycle information system

    SciTech Connect

    Hitchcock, R.J.; Piette, M.A.; Selkowitz, S.E.

    1998-08-01

    In order to produce a new generation of green buildings, it will be necessary to clearly identify their performance requirements, and to assure that these requirements are met. A long-term goal is to provide building decision-makers with the information and tools needed to cost-effectively assure the desired performance of buildings, as specified by stakeholders, across the complete life cycle of a building project. A key element required in achieving this goal is a method for explicitly documenting the building performance objectives that are of importance to stakeholders. Such a method should clearly define each objective (e.g., cost, energy use, and comfort) and its desired level of performance. This information is intended to provide quantitative benchmarks useful in evaluating alternative design solutions, commissioning the newly constructed building, and tracking and maintaining the actual performance of the occupied building over time. These quantitative benchmarks are referred to as performance metrics, and they are a principal element of information captured in the Building Life-cycle Information System (BLISS). An initial implementation of BLISS is based on the International Alliance for Interoperability's (IAI) Industry Foundation Classes (IFC), an evolving data model under development by a variety of architectural, engineering, and construction (AEC) industry firms and organizations. Within BLISS, the IFC data model has been extended to include performance metrics and a structure for archiving changing versions of the building information over time. This paper defines performance metrics, discusses the manner in which BLISS is envisioned to support a variety of activities related to assuring the desired performance of a building across its life cycle, and describes a performance metric tracking tool, called Metracker, that is based on BLISS.

  16. Relevance Metric Learning for Person Re-Identification by Exploiting Listwise Similarities.

    PubMed

    Chen, Jiaxin; Zhang, Zhaoxiang; Wang, Yunhong

    2015-12-01

    Person re-identification aims to match people across non-overlapping camera views, which is an important but challenging task in video surveillance. In order to obtain a robust metric for matching, metric learning has been introduced recently. Most existing works focus on seeking a Mahalanobis distance by employing sparse pairwise constraints, which utilize image pairs with the same person identity as positive samples, and select a small portion of those with different identities as negative samples. However, this training strategy has abandoned a large amount of discriminative information, and ignored the relative similarities. In this paper, we propose a novel relevance metric learning method with listwise constraints (RMLLC) by adopting listwise similarities, which consist of the similarity list of each image with respect to all remaining images. By virtue of listwise similarities, RMLLC could capture all pairwise similarities, and consequently learn a more discriminative metric by enforcing the metric to conserve predefined similarity lists in a low-dimensional projection subspace. Despite the performance enhancement, RMLLC using predefined similarity lists fails to capture the relative relevance information, which is often unavailable in practice. To address this problem, we further introduce a rectification term to automatically exploit the relative similarities, and develop an efficient alternating iterative algorithm to jointly learn the optimal metric and the rectification term. Extensive experiments on four publicly available benchmarking data sets are carried out and demonstrate that the proposed method is significantly superior to the state-of-the-art approaches. The results also show that the introduction of the rectification term could further boost the performance of RMLLC. PMID:26259221
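
    Methods in this family, RMLLC included, ultimately parameterize a Mahalanobis distance d(x, y) = sqrt((x - y)^T M (x - y)) with a learned positive semidefinite matrix M. The sketch below only shows how such a matrix would be applied to rank gallery images against a probe; the listwise learning of M itself is not reproduced, and the random M stands in for a learned one.

        import numpy as np

        def mahalanobis(x, y, M):
            """Distance between feature vectors x and y under a PSD matrix M."""
            d = x - y
            return float(np.sqrt(d @ M @ d))

        def rank_gallery(probe, gallery, M):
            """Return gallery indices ordered from most to least similar."""
            return np.argsort([mahalanobis(probe, g, M) for g in gallery])

        rng = np.random.default_rng(0)
        A = rng.normal(size=(5, 5))
        M = A @ A.T                       # any PSD matrix; a learned metric would go here
        probe = rng.normal(size=5)
        gallery = rng.normal(size=(10, 5))
        print(rank_gallery(probe, gallery, M))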

  17. A performance geodynamo benchmark

    NASA Astrophysics Data System (ADS)

    Matsui, H.; Heien, E. M.

    2014-12-01

    In the last ten years, a number of numerical dynamo models have successfully represented basic characteristics of the geomagnetic field. However, to approach the parameter regime of the Earth's outer core, we need a massively parallel computational environment for extremely large spatial resolutions. Local methods are expected to be more suitable for massively parallel computation because they need less data communication than the spherical harmonics expansion, but only a few groups have reported results of the dynamo benchmark using local methods (Harder and Hansen, 2005; Matsui and Okuda, 2005; Chan et al., 2007) because of the difficulty of treating magnetic boundary conditions with local methods. On the other hand, some numerical dynamo models using spherical harmonics expansion have performed successfully with thousands of processes. We perform benchmark tests to assess various numerical methods for the next generation of geodynamo simulations. The purpose of the present benchmark test is to assess numerical geodynamo models on a massively parallel computational platform. To compare as many numerical methods as possible, we consider the model with the insulated magnetic boundary by Christensen et al. (2001) and with the pseudo-vacuum magnetic boundary, because the pseudo-vacuum boundaries are easier to implement with the local method than the magnetic insulated boundaries. In the present study, we consider two kinds of benchmarks, the so-called accuracy benchmark and performance benchmark. Here, we will report the results of the performance benchmark. We run the participating dynamo models under the same computational environment (XSEDE TACC Stampede), and investigate computational performance. To simplify the problem, we choose the same model and parameter regime as the accuracy benchmark test, but perform the simulations with spatial resolutions as fine as possible to investigate computational capability (e

  18. Math Roots: The Beginnings of the Metric System

    ERIC Educational Resources Information Center

    Johnson, Art; Norris, Kit; Adams, Thomasina Lott, Ed.

    2007-01-01

    This article reviews the history of the metric system, from a proposal of a sixteenth-century mathematician to its implementation in Revolutionary France some 200 years later. Recent developments in the metric system are also discussed.

  19. Comparing Chemistry to Outcome: The Development of a Chemical Distance Metric, Coupled with Clustering and Hierarchal Visualization Applied to Macromolecular Crystallography

    PubMed Central

    Bruno, Andrew E.; Ruby, Amanda M.; Luft, Joseph R.; Grant, Thomas D.; Seetharaman, Jayaraman; Montelione, Gaetano T.; Hunt, John F.; Snell, Edward H.

    2014-01-01

    Many bioscience fields employ high-throughput methods to screen multiple biochemical conditions. The analysis of these becomes tedious without a degree of automation. Crystallization, a rate-limiting step in biological X-ray crystallography, is one of these fields. Screening of multiple potential crystallization conditions (cocktails) is the most effective method of probing a protein's phase diagram and guiding crystallization, but the interpretation of results can be time-consuming. To aid this empirical approach, a cocktail distance coefficient was developed to quantitatively compare macromolecule crystallization conditions and outcome. These coefficients were evaluated against an existing similarity metric developed for crystallization, the C6 metric, using both virtual crystallization screens and by comparison of two related 1,536-cocktail high-throughput crystallization screens. Hierarchical clustering was employed to visualize one of these screens, and the crystallization results from an exopolyphosphatase-related protein from Bacteroides fragilis (BfR192) were overlaid on this clustering. This demonstrated a strong correlation between certain chemically related clusters and crystal lead conditions. While this analysis was not used to guide the initial crystallization optimization, it led to the re-evaluation of unexplained peaks in the electron density map of the protein and to the insertion and correct placement of sodium, potassium and phosphate atoms in the structure. With these in place, the resulting structure of the putative active site demonstrated features consistent with active sites of other phosphatases which are involved in binding the phosphoryl moieties of nucleotide triphosphates. The new distance coefficient, CDcoeff, appears to be robust in this application, and coupled with hierarchical clustering and the overlay of crystallization outcome, reveals information of biological relevance. While tested with a single example the potential applications

  20. Toxicological Benchmarks for Wildlife

    SciTech Connect

    Sample, B.E.; Opresko, D.M.; Suter, G.W.

    1993-01-01

    Ecological risks of environmental contaminants are evaluated by using a two-tiered process. In the first tier, a screening assessment is performed where concentrations of contaminants in the environment are compared to no observed adverse effects level (NOAEL)-based toxicological benchmarks. These benchmarks represent concentrations of chemicals (i.e., concentrations presumed to be nonhazardous to the biota) in environmental media (water, sediment, soil, food, etc.). While exceedance of these benchmarks does not indicate any particular level or type of risk, concentrations below the benchmarks should not result in significant effects. In practice, when contaminant concentrations in food or water resources are less than these toxicological benchmarks, the contaminants may be excluded from further consideration. However, if the concentration of a contaminant exceeds a benchmark, that contaminant should be retained as a contaminant of potential concern (COPC) and investigated further. The second tier in ecological risk assessment, the baseline ecological risk assessment, may use toxicological benchmarks as part of a weight-of-evidence approach (Suter 1993). Under this approach, toxicological benchmarks are one of several lines of evidence used to support or refute the presence of ecological effects. Other sources of evidence include media toxicity tests, surveys of biota (abundance and diversity), measures of contaminant body burdens, and biomarkers. This report presents NOAEL- and lowest observed adverse effects level (LOAEL)-based toxicological benchmarks for assessment of effects of 85 chemicals on 9 representative mammalian wildlife species (short-tailed shrew, little brown bat, meadow vole, white-footed mouse, cottontail rabbit, mink, red fox, and whitetail deer) or 11 avian wildlife species (American robin, rough-winged swallow, American woodcock, wild turkey, belted kingfisher, great blue heron, barred owl, barn owl, Cooper's hawk, and red-tailed hawk
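
    The first-tier screen described here reduces to a comparison of measured media concentrations against the NOAEL-based benchmark for each chemical, retaining exceedances as contaminants of potential concern (COPCs). The sketch below is illustrative only; the chemical names and benchmark values are placeholders, not values from the report.

        # Hedged sketch of the tier-1 screen: flag chemicals whose measured
        # concentration exceeds the NOAEL-based benchmark as COPCs.
        def screen_copcs(measured, benchmarks):
            return [chem for chem, conc in measured.items()
                    if conc > benchmarks.get(chem, float("inf"))]

        measured   = {"cadmium": 0.8, "zinc": 12.0, "lead": 0.05}   # placeholder concentrations
        benchmarks = {"cadmium": 0.5, "zinc": 50.0, "lead": 0.1}    # placeholder NOAEL benchmarks
        print(screen_copcs(measured, benchmarks))   # -> ['cadmium']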

  1. Software metrics: The quantitative impact of four factors on work rates experienced during software development. [reliability engineering

    NASA Technical Reports Server (NTRS)

    Gaffney, J. E., Jr.; Judge, R. W.

    1981-01-01

    A model of a software development process is described. The software development process is seen to consist of a sequence of activities, such as 'program design' and 'module development' (or coding). A manpower estimate is made by multiplying code size by the rates (man-months per thousand lines of code) for each of the activities relevant to the particular case of interest and summing the results. The effect of four objectively determinable factors (organization, software product type, computer type, and code type) on productivity values for each of nine principal software development activities was assessed. The four factors were found to account for 39% of the observed productivity variation.
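
    The estimation procedure described, multiply code size by a per-activity rate and sum over activities, can be written down directly. The activity names and rates below are placeholders for illustration, not values from the study.

        # Hedged sketch of the manpower model: effort = sum over activities of
        # size_in_KLOC * rate[activity], with rates in man-months per KLOC.
        def estimate_effort(kloc, rates):
            return sum(kloc * rate for rate in rates.values())

        rates = {                       # illustrative rates only (man-months per KLOC)
            "program design": 1.5,
            "module development": 2.0,
            "integration and test": 1.0,
        }
        print(estimate_effort(kloc=20, rates=rates))   # -> 90.0 man-months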

  2. Evaluation Metrics for the Paragon XP/S-15

    NASA Technical Reports Server (NTRS)

    Traversat, Bernard; McNab, David; Nitzberg, Bill; Fineberg, Sam; Blaylock, Bruce T. (Technical Monitor)

    1993-01-01

    On February 17th 1993, the Numerical Aerodynamic Simulation (NAS) facility located at the NASA Ames Research Center installed a 224-node Intel Paragon XP/S-15 system. After its installation, the Paragon was found to be in a very immature state and was unable to support the NAS users' workload, composed of a wide range of development and production activities. As a first step towards addressing this problem, we implemented a set of metrics to objectively monitor the system as operating system and hardware upgrades were installed. The metrics were designed to measure four aspects of the system that we consider essential to support our workload: availability, utilization, functionality, and performance. This report presents the metrics collected from February 1993 to August 1993. Since its installation, the Paragon availability has improved from a low of 15% uptime to a high of 80%, while its utilization has remained low. Functionality and performance have improved from merely running one of the NAS Parallel Benchmarks to running all of them faster (between 1 and 2 times) than on the iPSC/860. In spite of the progress accomplished, fundamental limitations of the Paragon operating system are restricting the Paragon from supporting the NAS workload. The maximum operating system message passing (NORMA IPC) bandwidth was measured at 11 Mbytes/s, well below the peak hardware bandwidth (175 Mbytes/s), limiting overall virtual memory and Unix services (i.e., disk and HiPPI I/O) performance. The high NX application message passing latency (184 microseconds), three times that of the iPSC/860, was found to significantly degrade performance of applications relying on small message sizes. The amount of memory available for an application was found to be approximately 10 Mbytes per node, indicating that the OS is taking more space than anticipated (6 Mbytes per node).

  3. Geothermal Heat Pump Benchmarking Report

    SciTech Connect

    1997-01-17

    A benchmarking study was conducted on behalf of the Department of Energy to determine the critical factors in successful utility geothermal heat pump programs. A successful program is one that has achieved significant market penetration. Successfully marketing geothermal heat pumps has presented some major challenges to the utility industry. However, select utilities have developed programs that generate significant GHP sales. This benchmarking study concludes that there are three factors critical to the success of utility GHP marketing programs: (1) top management marketing commitment; (2) an understanding of the fundamentals of marketing and business development; and (3) an aggressive competitive posture. To generate significant GHP sales, competitive market forces must be used. However, because utilities have functioned only in a regulated arena, these companies and their leaders are unschooled in competitive business practices. Therefore, a lack of experience coupled with an intrinsically non-competitive culture yields an industry environment that impedes the generation of significant GHP sales in many, but not all, utilities.

  4. Diagnostic Algorithm Benchmarking

    NASA Technical Reports Server (NTRS)

    Poll, Scott

    2011-01-01

    A poster for the NASA Aviation Safety Program Annual Technical Meeting. It describes empirical benchmarking on diagnostic algorithms using data from the ADAPT Electrical Power System testbed and a diagnostic software framework.

  5. TWODANT benchmark. Progress report

    SciTech Connect

    Lee, Sung

    1994-01-11

    TWODANT (Two-Dimensional, Diffusion-Accelerated, Neutral-Particle Transport) code has been benchmarked against 6 critical experiments (Jezebel plutonium critical assembly) and their k effective values compared with those of KENO and MCNP codes.

  6. An Arithmetic Metric

    ERIC Educational Resources Information Center

    Dominici, Diego

    2011-01-01

    This work introduces a distance between natural numbers not based on their position on the real line but on their arithmetic properties. We prove some metric properties of this distance and consider a possible extension.

  7. A metric for success

    NASA Astrophysics Data System (ADS)

    Carver, Gary P.

    1994-05-01

    The federal agencies are working with industry to ease adoption of the metric system. The goal is to help U.S. industry compete more successfully in the global marketplace, increase exports, and create new jobs. The strategy is to use federal procurement, financial assistance, and other business-related activities to encourage voluntary conversion. Based upon the positive experiences of firms and industries that have converted, federal agencies have concluded that metric use will yield long-term benefits that are beyond any one-time costs or inconveniences. It may be time for additional steps to move the Nation out of its dual-system comfort zone and continue to progress toward metrication. This report includes 'Metric Highlights in U.S. History'.

  8. Sustainability Indicators and Metrics

    EPA Science Inventory

    Sustainability is about preserving human existence. Indicators and metrics are absolutely necessary to provide at least a semi-quantitative assessment of progress towards or away from sustainability. Otherwise, it becomes impossible to objectively assess whether progress is bei...

  9. Metrication - Our Responsibility?

    ERIC Educational Resources Information Center

    Kroner, Klaus E.

    1972-01-01

    The metric system will soon be adopted in the United States. Engineering college educators can play a major role in this revision. Various suggestions are listed for teachers, authors and others. (PS)

  10. Using quality metrics with laser range scanners

    NASA Astrophysics Data System (ADS)

    MacKinnon, David K.; Aitken, Victor; Blais, Francois

    2008-02-01

    We have developed a series of new quality metrics that are generalizable to a variety of laser range scanning systems, including those acquiring measurements in the mid-field. Moreover, these metrics can be integrated into either an automated scanning system, or a system that guides a minimally trained operator through the scanning process. In particular, we represent the quality of measurements with regard to aliasing and sampling density for mid-field measurements, two issues that have not been well addressed in contemporary literature. We also present a quality metric that addresses the issue of laser spot motion during sample acquisition. Finally, we take into account the interaction between measurement resolution and measurement uncertainty where necessary. These metrics are presented within the context of an adaptive scanning system in which quality metrics are used to minimize the number of measurements obtained during the acquisition of a single range image.

  11. Advanced Life Support System Value Metric

    NASA Technical Reports Server (NTRS)

    Jones, Harry W.; Rasky, Daniel J. (Technical Monitor)

    1999-01-01

    The NASA Advanced Life Support (ALS) Program is required to provide a performance metric to measure its progress in system development. Extensive discussions within the ALS program have led to the following approach. The Equivalent System Mass (ESM) metric has been traditionally used and provides a good summary of the weight, size, and power cost factors of space life support equipment. But ESM assumes that all the systems being traded off exactly meet a fixed performance requirement, so that the value and benefit (readiness, performance, safety, etc.) of all the different systems designs are considered to be exactly equal. This is too simplistic. Actual system design concepts are selected using many cost and benefit factors and the system specification is defined after many trade-offs. The ALS program needs a multi-parameter metric including both the ESM and a System Value Metric (SVM). The SVM would include safety, maintainability, reliability, performance, use of cross cutting technology, and commercialization potential. Another major factor in system selection is technology readiness level (TRL), a familiar metric in ALS. The overall ALS system metric that is suggested is a benefit/cost ratio, SVM/[ESM + function (TRL)], with appropriate weighting and scaling. The total value is given by SVM. Cost is represented by higher ESM and lower TRL. The paper provides a detailed description and example application of a suggested System Value Metric and an overall ALS system metric.
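
    The suggested overall metric, SVM / [ESM + f(TRL)], is a plain benefit/cost ratio. The sketch below assumes an illustrative linear TRL penalty and unit weights; the abstract leaves the actual weighting and scaling open, so every number here is an assumption.

        # Hedged sketch of the suggested ALS metric: benefit/cost = SVM / (ESM + f(TRL)).
        def als_metric(svm, esm, trl, max_trl=9, penalty_per_level=0.1):
            """svm: system value score; esm: equivalent system mass (normalized);
            trl: technology readiness level. The TRL penalty term is an assumption."""
            trl_penalty = penalty_per_level * (max_trl - trl) * esm
            return svm / (esm + trl_penalty)

        print(als_metric(svm=0.8, esm=1.0, trl=6))   # -> 0.8 / 1.3, about 0.615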

  12. Toward a perceptual video-quality metric

    NASA Astrophysics Data System (ADS)

    Watson, Andrew B.

    1998-07-01

    The advent of widespread distribution of digital video creates a need for automated methods for evaluating the visual quality of digital video. This is particularly so since most digital video is compressed using lossy methods, which involve the controlled introduction of potentially visible artifacts. Compounding the problem is the bursty nature of digital video, which requires adaptive bit allocation based on visual quality metrics, and the economic need to reduce bit-rate to the lowest level that yields acceptable quality. In previous work, we have developed visual quality metrics for evaluating, controlling, and optimizing the quality of compressed still images. These metrics incorporate simplified models of human visual sensitivity to spatial and chromatic visual signals. Here I describe a new video quality metric that is an extension of these still image metrics into the time domain. Like the still image metrics, it is based on the Discrete Cosine Transform. An effort has been made to minimize the amount of memory and computation required by the metric, in order that it might be applied in the widest range of applications. To calibrate the basic sensitivity of this metric to spatial and temporal signals we have made measurements of visual thresholds for temporally varying samples of DCT quantization noise.
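
    As a rough, hedged illustration of the DCT-domain approach (not Watson's metric itself), a frame-level score can be formed by comparing 8x8 block DCTs of a reference and a test frame and pooling the coefficient errors. Real metrics of this family weight each coefficient by measured visual sensitivity and track temporal variation; a flat weight and a single frame are used here for brevity.

        import numpy as np
        from scipy.fft import dctn

        def dct_block_error(reference, test, block=8, beta=4.0):
            """Minkowski-pooled DCT coefficient error between two grayscale frames."""
            h, w = reference.shape
            errors = []
            for i in range(0, h - h % block, block):
                for j in range(0, w - w % block, block):
                    r = dctn(reference[i:i + block, j:j + block], norm="ortho")
                    t = dctn(test[i:i + block, j:j + block], norm="ortho")
                    errors.append(np.abs(r - t))
            e = np.stack(errors)
            return float((e ** beta).mean() ** (1.0 / beta))

        rng = np.random.default_rng(0)
        ref = rng.random((64, 64))
        print(dct_block_error(ref, ref + 0.01 * rng.random((64, 64))))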

  13. Associations Between Rate of Force Development Metrics and Throwing Velocity in Elite Team Handball Players: a Short Research Report

    PubMed Central

    Marques, Mário C.; Saavedra, Francisco J.; Abrantes, Catarina; Aidar, Felipe J.

    2011-01-01

    Performance assessment has become an invaluable component of monitoring participants' development in distinct sports, yet limited and contradictory data are available in trained subjects. The purpose of this study was to examine the relationship between ball throwing velocity during a 3-step running throw in elite team handball players and selected measures of rate of force development, such as force, power, velocity, and bar displacement, during a concentric-only bench press exercise in elite male handball players. Fifteen elite senior male team handball players volunteered to participate. Each volunteer had power and bar velocity measured during a concentric-only bench press test with 25, 35, and 45 kg as well as having one-repetition maximum strength determined. Ball throwing velocity was evaluated with a standard 3-step running throw using a radar gun. The results of this study indicated significant associations between ball velocity and time at maximum rate of force development (0.66; p<0.05) and rate of force development at peak force (0.56; p<0.05) only with the 25 kg load. The current research indicated that ball velocity was only moderately associated with maximum rate of force development with light loads. A training regimen designed to improve ball-throwing velocity in elite male team handball players should emphasize the bench press movement using light loads. PMID:23487363

  14. ICSBEP Benchmarks For Nuclear Data Applications

    NASA Astrophysics Data System (ADS)

    Briggs, J. Blair

    2005-05-01

    The International Criticality Safety Benchmark Evaluation Project (ICSBEP) was initiated in 1992 by the United States Department of Energy. The ICSBEP became an official activity of the Organization for Economic Cooperation and Development (OECD) — Nuclear Energy Agency (NEA) in 1995. Representatives from the United States, United Kingdom, France, Japan, the Russian Federation, Hungary, Republic of Korea, Slovenia, Serbia and Montenegro (formerly Yugoslavia), Kazakhstan, Spain, Israel, Brazil, Poland, and the Czech Republic are now participating. South Africa, India, China, and Germany are considering participation. The purpose of the ICSBEP is to identify, evaluate, verify, and formally document a comprehensive and internationally peer-reviewed set of criticality safety benchmark data. The work of the ICSBEP is published as an OECD handbook entitled "International Handbook of Evaluated Criticality Safety Benchmark Experiments." The 2004 Edition of the Handbook contains benchmark specifications for 3331 critical or subcritical configurations that are intended for use in validation efforts and for testing basic nuclear data. New to the 2004 Edition of the Handbook is a draft criticality alarm / shielding type benchmark that should be finalized in 2005 along with two other similar benchmarks. The Handbook is being used extensively for nuclear data testing and is expected to be a valuable resource for code and data validation and improvement efforts for decades to come. Specific benchmarks that are useful for testing structural materials such as iron, chromium, nickel, and manganese; beryllium; lead; thorium; and 238U are highlighted.

  15. The KMAT: Benchmarking Knowledge Management.

    ERIC Educational Resources Information Center

    de Jager, Martha

    Provides an overview of knowledge management and benchmarking, including the benefits and methods of benchmarking (e.g., competitive, cooperative, collaborative, and internal benchmarking). Arthur Andersen's KMAT (Knowledge Management Assessment Tool) is described. The KMAT is a collaborative benchmarking tool, designed to help organizations make…

  16. Object-oriented productivity metrics

    NASA Technical Reports Server (NTRS)

    Connell, John L.; Eller, Nancy

    1992-01-01

    Software productivity metrics are useful for sizing and costing proposed software and for measuring development productivity. Estimating and measuring source lines of code (SLOC) has proven to be a bad idea because it encourages writing more lines of code and using lower level languages. Function Point Analysis is an improved software metric system, but it is not compatible with newer rapid prototyping and object-oriented approaches to software development. A process is presented here for counting object-oriented effort points, based on a preliminary object-oriented analysis. It is proposed that this approach is compatible with object-oriented analysis, design, programming, and rapid prototyping. Statistics gathered on actual projects are presented to validate the approach.

  17. Transformational Research Engineering: Research Design Metrics for In-Depth and Empowering K-12 Teacher Professional Development

    ERIC Educational Resources Information Center

    Osler, James Edward

    2013-01-01

    This paper discusses the implementation of the Tri-Squared Test as an advanced statistical measure used to verify and validate research outcomes. This type of statistical measure is ideal for teachers' professional development, as educators can create and validate instruments for educational settings. The initial research investigation published…

  18. Development of the method and U.S. normalization database for Life Cycle Impact Assessment and sustainability metrics.

    PubMed

    Bare, Jane; Gloria, Thomas; Norris, Gregory

    2006-08-15

    Normalization is an optional step within Life Cycle Impact Assessment (LCIA) that may be used to assist in the interpretation of life cycle inventory data as well as life cycle impact assessment results. Normalization transforms the magnitude of LCI and LCIA results into relative contribution by substance and life cycle impact category. Normalization thus can significantly influence LCA-based decisions when tradeoffs exist. The U.S. Environmental Protection Agency (EPA) has developed a normalization database based on the spatial scale of the 48 continental U.S. states, Hawaii, Alaska, the District of Columbia, and Puerto Rico with a one-year reference time frame. Data within the normalization database were compiled based on the impact methodologies and lists of stressors used in TRACI, the EPA's Tool for the Reduction and Assessment of Chemical and other environmental Impacts. The new normalization database published within this article may be used for LCIA case studies within the United States, and can be used to assist in the further development of a global normalization database. The underlying data analyzed for the development of this database are included to allow the development of normalization data consistent with other impact assessment methodologies as well. PMID:16955915
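
    Mechanically, normalization divides each characterized impact-category result by a reference value for that category (for example, an annual U.S. total or per-capita value). The category names and reference numbers below are placeholders, not values from the EPA database.

        # Hedged sketch of LCIA normalization: result / reference, per impact category.
        def normalize(impacts, references):
            return {category: impacts[category] / references[category] for category in impacts}

        impacts = {"global warming": 1.2e4, "acidification": 35.0}        # characterized results (placeholder)
        references = {"global warming": 7.4e12, "acidification": 1.0e9}   # normalization references (placeholder)
        print(normalize(impacts, references))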

  19. Social network analysis as a metric for the development of an interdisciplinary, inter-organizational research team.

    PubMed

    Ryan, David; Emond, Marcel; Lamontagne, Marie-Eve

    2014-01-01

    The development of an interdisciplinary and inter-organizational research team among eight of Canada's leading emergency, geriatric medicine and rehabilitation researchers affiliated with six academic centers has provided an opportunity to study the development of a distributed team of interdisciplinary researchers using the methods of social network theory and analysis, and to consider whether these methods are useful tools in the science of team science. Using traditional network analytic methods, the team of investigators were asked to rate their relationships with one another retrospectively at one year prior to the team's first meeting and contemporaneously at two subsequent yearly intervals. Network analytic statistics and visualizations of the collected data show an increase in network density and reciprocity of relationships, together with more distributed centrality, consistent with the findings of other researchers. These network development characteristics suggest that the distributed research team is developing as it should and support the assertion that network analysis is a useful research tool for the science of team science. PMID:23961974
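
    The statistics reported (density, reciprocity, and centrality) are straightforward to compute with a standard graph library. The sketch below uses networkx on a small made-up directed graph of collaboration ratings; the team's actual data are not reproduced here.

        import networkx as nx

        # Illustrative directed "I collaborate with you" graph for a few hypothetical members.
        G = nx.DiGraph()
        G.add_edges_from([(1, 2), (2, 1), (1, 3), (3, 4), (4, 3), (2, 4)])

        print(nx.density(G))               # fraction of possible ties that are present
        print(nx.reciprocity(G))           # share of ties that are reciprocated
        print(nx.degree_centrality(G))     # how centrality is distributed across members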

  20. GPS Metric Tracking Unit

    NASA Technical Reports Server (NTRS)

    2008-01-01

    As Global Positioning Satellite (GPS) applications become more prevalent for land- and air-based vehicles, GPS applications for space vehicles will also increase. The Applied Technology Directorate of Kennedy Space Center (KSC) has developed a lightweight, low-cost GPS Metric Tracking Unit (GMTU), the first of two steps in developing a lightweight, low-cost Space-Based Tracking and Command Subsystem (STACS) designed to meet Range Safety's link margin and latency requirements for vehicle command and telemetry data. The goals of STACS are to improve Range Safety operations and expand tracking capabilities for space vehicles. STACS will track the vehicle, receive commands, and send telemetry data through the space-based asset, which will dramatically reduce dependence on ground-based assets. The other step was the Low-Cost Tracking and Data Relay Satellite System (TDRSS) Transceiver (LCT2), developed by the Wallops Flight Facility (WFF), which allows the vehicle to communicate with a geosynchronous relay satellite. Although the GMTU and LCT2 were independently implemented and tested, the design collaboration of KSC and WFF engineers allowed GMTU and LCT2 to be integrated into one enclosure, leading to the final STACS. In operation, GMTU needs only a radio frequency (RF) input from a GPS antenna and outputs position and velocity data to the vehicle through a serial or pulse code modulation (PCM) interface. GMTU includes one commercial GPS receiver board and a custom board, the Command and Telemetry Processor (CTP) developed by KSC. The CTP design is based on a field-programmable gate array (FPGA) with embedded processors to support GPS functions.

  1. DOE Commercial Building Benchmark Models: Preprint

    SciTech Connect

    Torcelini, P.; Deru, M.; Griffith, B.; Benne, K.; Halverson, M.; Winiarski, D.; Crawley, D. B.

    2008-07-01

    To provide a consistent baseline of comparison and save time conducting such simulations, the U.S. Department of Energy (DOE) has developed a set of standard benchmark building models. This paper will provide an executive summary overview of these benchmark buildings, and how they can save building analysts valuable time. Fully documented and implemented to use with the EnergyPlus energy simulation program, the benchmark models are publicly available and new versions will be created to maintain compatibility with new releases of EnergyPlus. The benchmark buildings will form the basis for research on specific building technologies, energy code development, appliance standards, and measurement of progress toward DOE energy goals. Having a common starting point allows us to better share and compare research results and move forward to make more energy efficient buildings.

  2. Towards Using Transformative Education as a Benchmark for Clarifying Differences and Similarities between Environmental Education and Education for Sustainable Development

    ERIC Educational Resources Information Center

    Pavlova, Margarita

    2013-01-01

    The UN Decade of Education for Sustainable Development (DESD) charges educators with a key role in developing and "securing sustainable life chances, aspirations and futures for young people". Environmental Education (EE) and ESD share a vision of quality education and a society that lives in balance with Earth's carrying capacity,…

  3. Metric Guidelines Inservice and/or Preservice

    ERIC Educational Resources Information Center

    Granito, Dolores

    1978-01-01

    Guidelines are given for designing teacher training for going metric. The guidelines were developed from existing guidelines, journal articles, a survey of colleges, and the detailed reactions of a panel. (MN)

  4. Correlational effect size benchmarks.

    PubMed

    Bosco, Frank A; Aguinis, Herman; Singh, Kulraj; Field, James G; Pierce, Charles A

    2015-03-01

    Effect size information is essential for the scientific enterprise and plays an increasingly central role in the scientific process. We extracted 147,328 correlations and developed a hierarchical taxonomy of variables reported in Journal of Applied Psychology and Personnel Psychology from 1980 to 2010 to produce empirical effect size benchmarks at the omnibus level, for 20 common research domains, and for an even finer grained level of generality. Results indicate that the usual interpretation and classification of effect sizes as small, medium, and large bear almost no resemblance to findings in the field, because distributions of effect sizes exhibit tertile partitions at values approximately one-half to one-third those intuited by Cohen (1988). Our results offer information that can be used for research planning and design purposes, such as producing better informed non-nil hypotheses and estimating statistical power and planning sample size accordingly. We also offer information useful for understanding the relative importance of the effect sizes found in a particular study in relation to others and which research domains have advanced more or less, given that larger effect sizes indicate a better understanding of a phenomenon. Also, our study offers information about research domains for which the investigation of moderating effects may be more fruitful and provides information that is likely to facilitate the implementation of Bayesian analysis. Finally, our study offers information that practitioners can use to evaluate the relative effectiveness of various types of interventions. PMID:25314367
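
    The tertile-based benchmarking idea can be illustrated with a short sketch: given a collection of observed correlations, the empirical "small/medium/large" cut points are simply the tertile boundaries of their absolute values. The data below are synthetic stand-ins, not the authors' database.

      # Sketch of empirical effect size benchmarks: tertile cut points of |r|.
      import numpy as np

      def effect_size_benchmarks(correlations):
          """Return the 33rd and 67th percentiles of |r|, i.e. the empirical
          boundaries between 'small', 'medium', and 'large' effects."""
          abs_r = np.abs(np.asarray(correlations, dtype=float))
          small_cut, large_cut = np.percentile(abs_r, [33.3, 66.7])
          return small_cut, large_cut

      # Hypothetical correlations standing in for a meta-analytic database.
      rng = np.random.default_rng(0)
      sample_rs = rng.beta(2, 8, size=10_000)   # skewed toward small values
      print(effect_size_benchmarks(sample_rs))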

  5. Defining Sustainability Metric Targets in an Institutional Setting

    ERIC Educational Resources Information Center

    Rauch, Jason N.; Newman, Julie

    2009-01-01

    Purpose: The purpose of this paper is to expand on the development of university and college sustainability metrics by implementing an adaptable metric target strategy. Design/methodology/approach: A combined qualitative and quantitative methodology is derived that both defines what a sustainable metric target might be and describes the path a…

  6. Metric Education Curriculum Guide, Grades K-12. Bulletin 1537.

    ERIC Educational Resources Information Center

    Elliott, Emily; And Others

    This metric education curriculum guide, produced under the direction of the State of Louisiana Department of Public Education, was developed as part of a metrication plan approved by the Louisiana Board of Elementary and Secondary Education. This guide is designed to help teachers prepare students for the predominantly metric world in which they…

  7. 24 CFR 84.15 - Metric system of measurement.

    Code of Federal Regulations, 2010 CFR

    2010-04-01

    ... 24 Housing and Urban Development 1 2010-04-01 2010-04-01 false Metric system of measurement. 84.15... EDUCATION, HOSPITALS, AND OTHER NON-PROFIT ORGANIZATIONS Pre-Award Requirements § 84.15 Metric system of measurement. The Metric Conversion Act, as amended by the Omnibus Trade and Competitiveness Act (15 U.S.C....

  8. 24 CFR 84.15 - Metric system of measurement.

    Code of Federal Regulations, 2012 CFR

    2012-04-01

    ... 24 Housing and Urban Development 1 2012-04-01 2012-04-01 false Metric system of measurement. 84.15... EDUCATION, HOSPITALS, AND OTHER NON-PROFIT ORGANIZATIONS Pre-Award Requirements § 84.15 Metric system of measurement. The Metric Conversion Act, as amended by the Omnibus Trade and Competitiveness Act (15 U.S.C....

  9. 24 CFR 84.15 - Metric system of measurement.

    Code of Federal Regulations, 2011 CFR

    2011-04-01

    ... 24 Housing and Urban Development 1 2011-04-01 2011-04-01 false Metric system of measurement. 84.15... measurement. The Metric Conversion Act, as amended by the Omnibus Trade and Competitiveness Act (15 U.S.C. 205) declares that the metric system is the preferred measurement system for U.S. trade and commerce. The...

  10. 24 CFR 84.15 - Metric system of measurement.

    Code of Federal Regulations, 2013 CFR

    2013-04-01

    ... 24 Housing and Urban Development 1 2013-04-01 2013-04-01 false Metric system of measurement. 84.15... measurement. The Metric Conversion Act, as amended by the Omnibus Trade and Competitiveness Act (15 U.S.C. 205) declares that the metric system is the preferred measurement system for U.S. trade and commerce. The...

  11. 24 CFR 84.15 - Metric system of measurement.

    Code of Federal Regulations, 2014 CFR

    2014-04-01

    ... 24 Housing and Urban Development 1 2014-04-01 2014-04-01 false Metric system of measurement. 84.15... measurement. The Metric Conversion Act, as amended by the Omnibus Trade and Competitiveness Act (15 U.S.C. 205) declares that the metric system is the preferred measurement system for U.S. trade and commerce. The...

  12. Metrication of Technical Career Education. Final Report. Volume II.

    ERIC Educational Resources Information Center

    Feirer, John L.

    This second volume of the metrication study report contains the instructional materials developed to help the industrial and vocational education fields to use the metric system, primarily in the area of industrial arts from the seventh through the fourteenth year. The materials are presented in three sections. Section 1, Going Metric in…

  13. GRC GSFC TDRSS Waveform Metrics Report

    NASA Technical Reports Server (NTRS)

    Mortensen, Dale J.

    2013-01-01

    The report presents software metrics and porting metrics for the GGT Waveform. The porting was from a ground-based COTS SDR, the SDR-3000, to the CoNNeCT JPL SDR. The report does not address any of the Operating Environment (OE) software development, nor the original TDRSS waveform development at GSFC for the COTS SDR. With regard to STRS, the report presents compliance data and lessons learned.

  14. Encounters in an online brand community: development and validation of a metric for value co-creation by customers.

    PubMed

    Hsieh, Pei-Ling

    2015-05-01

    Recent developments in service marketing have demonstrated the potential of value co-creation by customers who participate in online brand communities (OBCs). Therefore, this study forecasts the co-created value by understanding the participation/behavior of customers in a multi-stakeholder OBC. This six-phase qualitative and quantitative investigation conceptualizes, constructs, refines, and tests a 12-item three-dimensional scale for measuring key factors that are related to the experience, interpersonal interactions, and social relationships that affect the value co-creation by customers in an OBC. The scale exhibits stable psychometric properties, measured using various reliability and validity tests, and can be applied across various industries. Finally, the utility implications and limitations of the proposed scale are discussed, and potential future research directions are considered. PMID:25965862

  15. Benchmarking in the Globalised World and Its Impact on South African Higher Education.

    ERIC Educational Resources Information Center

    Alt, H.

    2002-01-01

    Discusses what benchmarking is and reflects on the importance and development of benchmarking in universities on a national and international level. Offers examples of transnational benchmarking activities, including the International Benchmarking Club, in which South African higher education institutions participate. (EV)

  16. Action-Oriented Benchmarking: Using the CEUS Database to Benchmark Commercial Buildings in California

    SciTech Connect

    Mathew, Paul; Mills, Evan; Bourassa, Norman; Brook, Martha

    2008-02-01

    The 2006 Commercial End Use Survey (CEUS) database developed by the California Energy Commission is a far richer source of energy end-use data for non-residential buildings than has previously been available and opens the possibility of creating new and more powerful energy benchmarking processes and tools. In this article--Part 2 of a two-part series--we describe the methodology and selected results from an action-oriented benchmarking approach using the new CEUS database. This approach goes beyond whole-building energy benchmarking to more advanced end-use and component-level benchmarking that enables users to identify and prioritize specific energy efficiency opportunities - an improvement on benchmarking tools typically in use today.

  17. Successful Experiences in Teaching Metric.

    ERIC Educational Resources Information Center

    Odom, Jeffrey V., Ed.

    In this publication are presentations on specific experiences in teaching metrics, made at a National Bureau of Standards conference. Ideas of value to teachers and administrators are described in reports on: SI units of measure; principles and practices of teaching metric; metric and the school librarian; teaching metric through television and…

  18. Mobile phone camera benchmarking: combination of camera speed and image quality

    NASA Astrophysics Data System (ADS)

    Peltoketo, Veli-Tapani

    2014-01-01

    When a mobile phone camera is tested and benchmarked, the significance of quality metrics is widely acknowledged. There are also existing methods to evaluate camera speed; for example, ISO 15781 defines several measurements to evaluate various camera system delays. However, the speed or rapidity metrics of a mobile phone's camera system have not been combined with the quality metrics, even though camera speed has become an increasingly important performance feature. This work has several tasks. First, the most important image quality metrics are collected from standards and papers. Second, the speed-related metrics of a mobile phone's camera system are collected from standards and papers, and novel speed metrics are identified. Third, combinations of the quality and speed metrics are validated using mobile phones on the market, with measurements made through the application programming interfaces of different operating systems. Finally, the results are evaluated and conclusions are drawn. This work provides detailed benchmarking results for mobile phone camera systems on the market and proposes a combined benchmarking metric that includes both quality and speed parameters.
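
    A combined quality-and-speed score of the general kind proposed could look like the following sketch; the sub-metric names, normalization, and weights are illustrative assumptions, not the paper's actual definition.

      # Hypothetical combined camera benchmark score: weighted average of
      # normalized quality and speed sub-scores (weights are assumptions).
      def combined_camera_score(quality, speed, w_quality=0.6, w_speed=0.4):
          """quality/speed: dicts of sub-metrics already normalized to [0, 1],
          where 1 is best (e.g. shorter shot-to-shot delay maps to a higher score)."""
          q = sum(quality.values()) / len(quality)
          s = sum(speed.values()) / len(speed)
          return w_quality * q + w_speed * s

      score = combined_camera_score(
          quality={"sharpness": 0.82, "noise": 0.74, "color": 0.88},
          speed={"autofocus": 0.65, "shot_to_shot": 0.71, "startup": 0.58},
      )
      print(round(score, 3))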

  19. Metrics for Labeled Markov Systems

    NASA Technical Reports Server (NTRS)

    Desharnais, Josee; Jagadeesan, Radha; Gupta, Vineet; Panangaden, Prakash

    1999-01-01

    Partial Labeled Markov Chains are simultaneously generalizations of process algebra and of traditional Markov chains. They provide a foundation for interacting discrete probabilistic systems, the interaction being synchronization on labels as in process algebra. Existing notions of process equivalence are too sensitive to the exact probabilities of various transitions. This paper addresses contextual reasoning principles for reasoning about more robust notions of "approximate" equivalence between concurrent interacting probabilistic systems. The main results are as follows: we develop a family of metrics between partial labeled Markov chains to formalize the notion of distance between processes; we show that processes at distance zero are bisimilar; we describe a decision procedure to compute the distance between two processes; we show that reasoning about approximate equivalence can be done compositionally by showing that process combinators do not increase distance; and we introduce an asymptotic metric to capture asymptotic properties of Markov chains and show that parallel composition does not increase asymptotic distance.

  20. Toward Scalable Benchmarks for Mass Storage Systems

    NASA Technical Reports Server (NTRS)

    Miller, Ethan L.

    1996-01-01

    This paper presents guidelines for the design of a mass storage system benchmark suite, along with preliminary suggestions for programs to be included. The benchmarks will measure both peak and sustained performance of the system as well as predicting both short- and long-term behavior. These benchmarks should be both portable and scalable so they may be used on storage systems from tens of gigabytes to petabytes or more. By developing a standard set of benchmarks that reflect real user workload, we hope to encourage system designers and users to publish performance figures that can be compared with those of other systems. This will allow users to choose the system that best meets their needs and give designers a tool with which they can measure the performance effects of improvements to their systems.

  1. XWeB: The XML Warehouse Benchmark

    NASA Astrophysics Data System (ADS)

    Mahboubi, Hadj; Darmont, Jérôme

    With the emergence of XML as a standard for representing business data, new decision support applications are being developed. These XML data warehouses aim at supporting On-Line Analytical Processing (OLAP) operations that manipulate irregular XML data. To ensure feasibility of these new tools, important performance issues must be addressed. Performance is customarily assessed with the help of benchmarks. However, decision support benchmarks do not currently support XML features. In this paper, we introduce the XML Warehouse Benchmark (XWeB), which aims at filling this gap. XWeB derives from the relational decision support benchmark TPC-H. It is mainly composed of a test data warehouse that is based on a unified reference model for XML warehouses and that features XML-specific structures, and its associated XQuery decision-support workload. XWeB's usage is illustrated by experiments on several XML database management systems.

  2. Development of ambient air quality population-weighted metrics for use in time-series health studies.

    PubMed

    Ivy, Diane; Mulholland, James A; Russell, Armistead G

    2008-05-01

    A robust methodology was developed to compute population-weighted daily measures of ambient air pollution for use in time-series studies of acute health effects. Ambient data, including criteria pollutants and four fine particulate matter (PM) components, from monitors located in the 20-county metropolitan Atlanta area over the time period of 1999-2004 were normalized, spatially resolved using inverse distance-square weighting to Census tracts, denormalized using descriptive spatial models, and population-weighted. Error associated with applying this procedure with fewer than the maximum number of observations was also calculated. In addition to providing more representative measures of ambient air pollution for the health study population than provided by a central monitor alone and dampening effects of measurement error and local source impacts, results were used to evaluate spatial variability and to identify air pollutants for which ambient concentrations are poorly characterized. The decrease in correlation of daily monitor observations with daily population-weighted average values with increasing distance of the monitor from the urban center was much greater for primary pollutants than for secondary pollutants. Of the criteria pollutant gases, sulfur dioxide observations were least representative because of the failure of ambient networks to capture the spatial variability of this pollutant for which concentrations are dominated by point source impacts. Daily fluctuations in PM of particles less than 10 microm in aerodynamic diameter (PM10) mass were less well characterized than PM of particles less than 2.5 microm in aerodynamic diameter (PM2.5) mass because of a smaller number of PM10 monitors with daily observations. Of the PM2.5 components, the carbon fractions were less well spatially characterized than sulfate and nitrate both because of primary emissions of elemental and organic carbon and because of differences in measurement techniques used to assess
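
    The core spatial steps described above (inverse distance-square weighting of monitor observations to Census tracts, followed by population weighting) can be sketched as follows; the sketch omits the normalization and denormalization against descriptive spatial models, and all coordinates and values are made up.

      # Illustrative sketch: resolve daily monitor observations to tract
      # centroids with inverse distance-square weights, then population-weight
      # the tract values into a single daily metric.
      import numpy as np

      def daily_population_weighted(monitor_xy, monitor_values, tract_xy, tract_pop):
          monitor_xy = np.asarray(monitor_xy, float)   # (m, 2) monitor coordinates
          tract_xy = np.asarray(tract_xy, float)       # (t, 2) tract centroids
          values = np.asarray(monitor_values, float)   # (m,) daily observations
          pop = np.asarray(tract_pop, float)           # (t,) tract populations

          # Inverse distance-square weights from every monitor to every tract.
          d2 = ((tract_xy[:, None, :] - monitor_xy[None, :, :]) ** 2).sum(axis=2)
          w = 1.0 / np.maximum(d2, 1e-9)               # guard against zero distance
          tract_values = (w * values).sum(axis=1) / w.sum(axis=1)

          # Population weighting across tracts.
          return (tract_values * pop).sum() / pop.sum()

      print(daily_population_weighted(
          monitor_xy=[(0, 0), (10, 5)], monitor_values=[12.0, 20.0],
          tract_xy=[(1, 1), (8, 4), (5, 5)], tract_pop=[5000, 12000, 8000]))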

  3. Ideal Based Cyber Security Technical Metrics for Control Systems

    SciTech Connect

    W. F. Boyer; M. A. McQueen

    2007-10-01

    Much of the world's critical infrastructure is at risk from attack through electronic networks connected to control systems. Security metrics are important because they provide the basis for management decisions that affect the protection of the infrastructure. A cyber security technical metric is the security relevant output from an explicit mathematical model that makes use of objective measurements of a technical object. A specific set of technical security metrics are proposed for use by the operators of control systems. Our proposed metrics are based on seven security ideals associated with seven corresponding abstract dimensions of security. We have defined at least one metric for each of the seven ideals. Each metric is a measure of how nearly the associated ideal has been achieved. These seven ideals provide a useful structure for further metrics development. A case study shows how the proposed metrics can be applied to an operational control system.

  4. Assessment of proposed fighter agility metrics

    NASA Technical Reports Server (NTRS)

    Liefer, Randall K.; Valasek, John; Eggold, David P.; Downing, David R.

    1990-01-01

    This paper presents the results of an analysis of proposed metrics to assess fighter aircraft agility. A novel framework for classifying these metrics is developed and applied. A set of transient metrics intended to quantify the axial and pitch agility of fighter aircraft is evaluated with a high fidelity, nonlinear F-18 simulation. Test techniques and data reduction methods are proposed, and sensitivity to pilot-introduced errors during flight testing is investigated. Results indicate that the power onset and power loss parameters are promising candidates for quantifying axial agility, while maximum pitch-up and pitch-down rates are promising candidates for quantifying pitch agility.

  5. Cyber threat metrics.

    SciTech Connect

    Frye, Jason Neal; Veitch, Cynthia K.; Mateski, Mark Elliot; Michalski, John T.; Harris, James Mark; Trevino, Cassandra M.; Maruoka, Scott

    2012-03-01

    Threats are generally much easier to list than to describe, and much easier to describe than to measure. As a result, many organizations list threats. Fewer describe them in useful terms, and still fewer measure them in meaningful ways. This is particularly true in the dynamic and nebulous domain of cyber threats - a domain that tends to resist easy measurement and, in some cases, appears to defy any measurement. We believe the problem is tractable. In this report we describe threat metrics and models for characterizing threats consistently and unambiguously. The purpose of this report is to support the Operational Threat Assessment (OTA) phase of risk and vulnerability assessment. To this end, we focus on the task of characterizing cyber threats using consistent threat metrics and models. In particular, we address threat metrics and models for describing malicious cyber threats to US FCEB agencies and systems.

  6. The FTIO Benchmark

    NASA Technical Reports Server (NTRS)

    Fagerstrom, Frederick C.; Kuszmaul, Christopher L.; Woo, Alex C. (Technical Monitor)

    1999-01-01

    We introduce a new benchmark for measuring the performance of parallel input/output. This benchmark has flexible initialization, size, and scaling properties that allow it to satisfy seven criteria for practical parallel I/O benchmarks. We obtained performance results while running on an SGI Origin2000 computer with various numbers of processors: with 4 processors, the performance was 68.9 Mflop/s with 0.52 of the time spent on I/O; with 8 processors, 139.3 Mflop/s with 0.50 of the time spent on I/O; with 16 processors, 173.6 Mflop/s with 0.43 of the time spent on I/O; and with 32 processors, 259.1 Mflop/s with 0.47 of the time spent on I/O.

  7. Benchmarking. It's the future.

    PubMed

    Fazzi, Robert A; Agoglia, Robert V; Harlow, Lynn

    2002-11-01

    You can't go to a state conference, read a home care publication or log on to an Internet listserv ... without hearing or reading someone ... talk about benchmarking. What are your average case mix weights? How many visits are your nurses averaging per day? What is your average caseload for full time nurses in the field? What is your profit or loss per episode? The benchmark systems now available in home care potentially can serve as an early warning and partial protection for agencies. Agencies can collect data, analyze the outcomes, and through comparative benchmarking, determine where they are competitive and where they need to improve. These systems clearly provide agencies with the opportunity to be more proactive. PMID:12436898

  8. Accelerated randomized benchmarking

    NASA Astrophysics Data System (ADS)

    Granade, Christopher; Ferrie, Christopher; Cory, D. G.

    2015-01-01

    Quantum information processing offers promising advances for a wide range of fields and applications, provided that we can efficiently assess the performance of the control applied in candidate systems. That is, we must be able to determine whether we have implemented a desired gate, and refine accordingly. Randomized benchmarking reduces the difficulty of this task by exploiting symmetries in quantum operations. Here, we bound the resources required for benchmarking and show that, with prior information, we can achieve several orders of magnitude better accuracy than in traditional approaches to benchmarking. Moreover, by building on state-of-the-art classical algorithms, we reach these accuracies with near-optimal resources. Our approach requires an order of magnitude less data to achieve the same accuracies and to provide online estimates of the errors in the reported fidelities. We also show that our approach is useful for physical devices by comparing to simulations.
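
    For context, conventional randomized-benchmarking analysis fits the average sequence fidelity to an exponential decay in sequence length; the sketch below shows that standard zeroth-order fit, F(m) = A*p^m + B, on synthetic data, not the accelerated Bayesian estimation described in the abstract.

      # Standard (non-accelerated) randomized-benchmarking fit on synthetic data.
      import numpy as np
      from scipy.optimize import curve_fit

      def rb_decay(m, A, B, p):
          return A * p**m + B

      # Hypothetical measured average fidelities at several sequence lengths.
      lengths = np.array([1, 2, 4, 8, 16, 32, 64, 128])
      fidelity = (0.5 * 0.99**lengths + 0.5
                  + np.random.default_rng(1).normal(0, 0.003, lengths.size))

      (A, B, p), _ = curve_fit(rb_decay, lengths, fidelity, p0=(0.5, 0.5, 0.98))
      avg_gate_error = (1 - p) / 2       # single-qubit case (d = 2)
      print(f"decay p = {p:.4f}, average gate error = {avg_gate_error:.2e}")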

  9. Updates to the integrated protein-protein interaction benchmarks: Docking benchmark version 5 and affinity benchmark version 2

    PubMed Central

    Vreven, Thom; Moal, Iain H.; Vangone, Anna; Pierce, Brian G.; Kastritis, Panagiotis L.; Torchala, Mieczyslaw; Chaleil, Raphael; Jiménez-García, Brian; Bates, Paul A.; Fernandez-Recio, Juan; Bonvin, Alexandre M.J.J.; Weng, Zhiping

    2015-01-01

    We present an updated and integrated version of our widely used protein-protein docking and binding affinity benchmarks. The benchmarks consist of non-redundant, high quality structures of protein-protein complexes along with the unbound structures of their components. Fifty-five new complexes were added to the docking benchmark, 35 of which have experimentally-measured binding affinities. These updated docking and affinity benchmarks now contain 230 and 179 entries, respectively. In particular, the number of antibody-antigen complexes has increased significantly, by 67% and 74% in the docking and affinity benchmarks, respectively. We tested previously developed docking and affinity prediction algorithms on the new cases. Considering only the top ten docking predictions per benchmark case, a prediction accuracy of 38% is achieved on all 55 cases, and up to 50% for the 32 rigid-body cases only. Predicted affinity scores are found to correlate with experimental binding energies up to r=0.52 overall, and r=0.72 for the rigid complexes. PMID:26231283

  10. An Examination of Existing Guidelines for Programs for the Preservice and Inservice Education of Teachers in Metric Education and the Modification of These and the Development of New Ones if Deemed Necessary. Final Report of Objective No. 3.

    ERIC Educational Resources Information Center

    Granito, Dolores

    These guidelines for in-service and preservice teacher education related to the conversion to the metric system were developed from a survey of published materials, university faculty, and mathematics supervisors. The eleven guidelines fall into three major categories: (1) design of teacher training programs, (2) teacher training, and (3)…

  11. The Northeast U.S. continental shelf Energy Modeling and Analysis exercise (EMAX): Ecological network model development and basic ecosystem metrics

    NASA Astrophysics Data System (ADS)

    Link, Jason; Overholtz, William; O'Reilly, John; Green, Jack; Dow, David; Palka, Debra; Legault, Chris; Vitaliano, Joseph; Guida, Vincent; Fogarty, Michael; Brodziak, Jon; Methratta, Lisa; Stockhausen, William; Griswold, Laurel and Carolyn, Col

    2008-11-01

    During the past half-century notable changes have occurred in the Northeast U.S. (NEUS) Continental Shelf Large Marine Ecosystem (LME). To understand how the system functions as a whole, to evaluate the potential responses of this ecosystem to numerous human-induced perturbations, and to elucidate the relative magnitude of key biota and processes, the Northeast Fisheries Science Center instituted the Energy Modeling and Analysis eXercise (EMAX). The primary goal of EMAX was to establish an ecological network model (i.e., a more nuanced energy budget) of the entire Northeast U.S. food web. The EMAX work focused on the interdisciplinary development of a network model which reflected contemporary conditions (1996-2000) in four major regions of the ecosystem: Gulf of Maine, Georges Bank, Southern New England, and Middle Atlantic Bight. The model had 36 network "nodes" or biomass state variables across a broad range of the biological hierarchy within each trophic level and incorporated a wide range of key rate processes. Because this ecosystem has been relatively well studied, many of the biomass estimates were based on field measurements, and biomass estimates from the scientific literature were required only for a relatively small number of nodes. The emphasis of EMAX was to explore the particular role of small pelagic fishes in the ecosystem. Our results show that small pelagic fishes are indeed keystone species in the ecosystem. We examined a suite of novel ecosystem metrics as we compared the four regions and provided a general, system-level description of the NEUS ecosystem. The general patterns of the network properties in the four regions were similar; however, the network indices and metrics did reveal some noteworthy differences among regions reflecting their different oceanographic and faunal characteristics. The process of compiling and evaluating available data required by an ecosystem network model identified important gaps in our understanding which should

  12. Fighter agility metrics. M.S. Thesis

    NASA Technical Reports Server (NTRS)

    Liefer, Randall K.

    1990-01-01

    Fighter flying qualities and combat capabilities are currently measured and compared in terms relating to vehicle energy, angular rates and sustained acceleration. Criteria based on these measurable quantities have evolved over the past several decades and are routinely used to design aircraft structures, aerodynamics, propulsion and control systems. While these criteria, or metrics, have the advantage of being well understood, easily verified and repeatable during test, they tend to measure the steady state capability of the aircraft and not its ability to transition quickly from one state to another. Proposed new metrics to assess fighter aircraft agility are collected and analyzed. A framework for classification of these new agility metrics is developed and applied. A complete set of transient agility metrics is evaluated with a high fidelity, nonlinear F-18 simulation. Test techniques and data reduction methods are proposed. A method of providing cuing information to the pilot during flight test is discussed. The sensitivity of longitudinal and lateral agility metrics to deviations from the pilot cues is studied in detail. The metrics are shown to be largely insensitive to reasonable deviations from the nominal test pilot commands. Instrumentation required to quantify agility via flight test is also considered. With one exception, each of the proposed new metrics may be measured with instrumentation currently available.

  13. SAPHIRE 8 Quality Assurance Software Metrics Report

    SciTech Connect

    Kurt G. Vedros

    2011-08-01

    The purpose of this review of software metrics is to examine the quality of the metrics gathered in the 2010 IV&V and to set an outline for results of updated metrics runs to be performed. We find from the review that the maintenance of accepted quality standards presented in the SAPHIRE 8 initial Independent Verification and Validation (IV&V) of April 2010 is most easily achieved by continuing to utilize the tools used in that effort while adding a metric of bug tracking and resolution. Recommendations from the final IV&V were to continue periodic measurable metrics such as McCabe's complexity measure to ensure quality is maintained. The software tools used to measure quality in the IV&V were CodeHealer, Coverage Validator, Memory Validator, Performance Validator, and Thread Validator. These are evaluated based on their capabilities. We attempted to run their latest revisions with the newer Delphi 2010 based SAPHIRE 8 code that has been developed and were successful with all of the Validator series of tools on small tests. Another recommendation from the IV&V was to incorporate a bug tracking and resolution metric. To improve our capability of producing this metric, we integrated our current web reporting system with the SpiraTest test management software purchased earlier this year to track requirements traceability.

  14. Advanced Life Support System Value Metric

    NASA Technical Reports Server (NTRS)

    Jones, Harry W.; Arnold, James O. (Technical Monitor)

    1999-01-01

    The NASA Advanced Life Support (ALS) Program is required to provide a performance metric to measure its progress in system development. Extensive discussions within the ALS program have reached a consensus. The Equivalent System Mass (ESM) metric has been traditionally used and provides a good summary of the weight, size, and power cost factors of space life support equipment. But ESM assumes that all the systems being traded off exactly meet a fixed performance requirement, so that the value and benefit (readiness, performance, safety, etc.) of all the different systems designs are exactly equal. This is too simplistic. Actual system design concepts are selected using many cost and benefit factors and the system specification is then set accordingly. The ALS program needs a multi-parameter metric including both the ESM and a System Value Metric (SVM). The SVM would include safety, maintainability, reliability, performance, use of cross cutting technology, and commercialization potential. Another major factor in system selection is technology readiness level (TRL), a familiar metric in ALS. The overall ALS system metric that is suggested is a benefit/cost ratio, [SVM + TRL]/ESM, with appropriate weighting and scaling. The total value is the sum of SVM and TRL. Cost is represented by ESM. The paper provides a detailed description and example application of the suggested System Value Metric.
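
    A worked sketch of the suggested figure of merit, [SVM + TRL]/ESM, is shown below; the sub-factor ratings, weights, and scaling are invented for illustration, since the abstract gives no numerical values.

      # Sketch of the suggested ALS figure of merit, [SVM + TRL] / ESM, with
      # assumed ratings, weights, and scaling.
      def als_metric(svm_scores, trl, esm_kg, svm_weights=None, trl_scale=1.0):
          """svm_scores: dict of 0-10 ratings (safety, reliability, ...);
          trl: technology readiness level (1-9); esm_kg: equivalent system mass."""
          if svm_weights is None:
              svm_weights = {k: 1.0 for k in svm_scores}   # equal weighting assumed
          svm = (sum(svm_weights[k] * v for k, v in svm_scores.items())
                 / sum(svm_weights.values()))
          return (svm + trl_scale * trl) / esm_kg

      candidate = {"safety": 8, "maintainability": 6, "reliability": 7,
                   "performance": 7, "cross_cutting": 5, "commercial": 4}
      print(als_metric(candidate, trl=6, esm_kg=1500.0))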

  15. A Kernel Classification Framework for Metric Learning.

    PubMed

    Wang, Faqiang; Zuo, Wangmeng; Zhang, Lei; Meng, Deyu; Zhang, David

    2015-09-01

    Learning a distance metric from the given training samples plays a crucial role in many machine learning tasks, and various models and optimization algorithms have been proposed in the past decade. In this paper, we generalize several state-of-the-art metric learning methods, such as large margin nearest neighbor (LMNN) and information theoretic metric learning (ITML), into a kernel classification framework. First, doublets and triplets are constructed from the training samples, and a family of degree-2 polynomial kernel functions is proposed for pairs of doublets or triplets. Then, a kernel classification framework is established to generalize many popular metric learning methods such as LMNN and ITML. The proposed framework can also suggest new metric learning methods, which can be efficiently implemented, interestingly, using the standard support vector machine (SVM) solvers. Two novel metric learning methods, namely, doublet-SVM and triplet-SVM, are then developed under the proposed framework. Experimental results show that doublet-SVM and triplet-SVM achieve competitive classification accuracies with state-of-the-art metric learning methods but with significantly less training time. PMID:25347887
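
    The doublet idea can be loosely illustrated as follows: represent each pair of training samples by the flattened outer product of its difference vector and train a linear SVM to separate same-class from different-class pairs, which yields a Mahalanobis-like matrix. This is a simplified stand-in written against scikit-learn, not the paper's degree-2 polynomial kernel formulation or its triplet variant.

      # Loose illustrative sketch of a doublet-SVM-style metric learner.
      import numpy as np
      from itertools import combinations
      from sklearn.svm import LinearSVC

      def build_doublets(X, y, max_pairs=2000, seed=0):
          rng = np.random.default_rng(seed)
          pairs = list(combinations(range(len(X)), 2))
          rng.shuffle(pairs)
          feats, labels = [], []
          for i, j in pairs[:max_pairs]:
              d = X[i] - X[j]
              feats.append(np.outer(d, d).ravel())          # pair feature
              labels.append(1 if y[i] == y[j] else -1)      # same vs different class
          return np.array(feats), np.array(labels)

      def learn_metric(X, y):
          F, L = build_doublets(X, y)
          clf = LinearSVC(C=1.0, max_iter=10000).fit(F, L)
          M = clf.coef_.reshape(X.shape[1], X.shape[1])
          return (M + M.T) / 2                               # symmetrized matrix

      # Usage on hypothetical data:
      rng = np.random.default_rng(0)
      X = rng.normal(size=(200, 5)); y = (X[:, 0] + X[:, 1] > 0).astype(int)
      print(learn_metric(X, y).shape)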

  16. An Industry/DOE Program to Develop and Benchmark Advanced Diamond Product Drill Bits and HP/HT Drilling Fluids to Significantly Improve Rates of Penetration

    SciTech Connect

    TerraTek

    2007-06-30

    A deep drilling research program titled 'An Industry/DOE Program to Develop and Benchmark Advanced Diamond Product Drill Bits and HP/HT Drilling Fluids to Significantly Improve Rates of Penetration' was conducted at TerraTek's Drilling and Completions Laboratory. Drilling tests were run to simulate deep drilling by using high bore pressures and high confining and overburden stresses. The purpose of this testing was to gain insight into practices that would improve rates of penetration and mechanical specific energy while drilling under high pressure conditions. Thirty-seven test series were run utilizing a variety of drilling parameters which allowed analysis of the performance of drill bits and drilling fluids. Five different drill bit types or styles were tested: four-bladed polycrystalline diamond compact (PDC), 7-bladed PDC in regular and long profile, roller-cone, and impregnated. There were three different rock types used to simulate deep formations: Mancos shale, Carthage marble, and Crab Orchard sandstone. The testing also analyzed various drilling fluids and the extent to which they improved drilling. The PDC drill bits provided the best performance overall. The impregnated and tungsten carbide insert roller-cone drill bits performed poorly under the conditions chosen. The cesium formate drilling fluid outperformed all other drilling muds when drilling in the Carthage marble and Mancos shale with PDC drill bits. The oil base drilling fluid with manganese tetroxide weighting material provided the best performance when drilling the Crab Orchard sandstone.

  17. The independence of software metrics taken at different life-cycle stages

    NASA Technical Reports Server (NTRS)

    Kafura, D.; Canning, J.; Reddy, G.

    1984-01-01

    Over the past few years a large number of software metrics have been proposed and, in varying degrees, a number of these metrics have been subjected to empirical validation which demonstrated the utility of the metrics in the software development process. This work studies attempts to classify these metrics and to determine whether the metrics in these different classes appear to measure distinct attributes of the software product. Statistical analysis is used to determine the degree of relationship among the metrics.

  18. Simulation information regarding Sandia National Laboratories' Trinity capability improvement metric.

    SciTech Connect

    Agelastos, Anthony Michael; Lin, Paul T.

    2013-10-01

    Sandia National Laboratories, Los Alamos National Laboratory, and Lawrence Livermore National Laboratory each selected a representative simulation code to be used as a performance benchmark for the Trinity Capability Improvement Metric. Sandia selected SIERRA Low Mach Module: Nalu, which is a fluid dynamics code that solves many variable-density, acoustically incompressible problems of interest spanning from laminar to turbulent flow regimes, since it is fairly representative of implicit codes that have been developed under ASC. The simulations for this metric were performed on the Cielo Cray XE6 platform during dedicated application time and the chosen case utilized 131,072 Cielo cores to perform a canonical turbulent open jet simulation within an approximately 9-billion-element unstructured-hexahedral computational mesh. This report will document some of the results from these simulations as well as provide instructions to perform these simulations for comparison.

  19. Quality metrics for sensor images

    NASA Technical Reports Server (NTRS)

    Ahumada, AL

    1993-01-01

    Methods are needed for evaluating the quality of augmented visual displays (AVID). Computational quality metrics will help summarize, interpolate, and extrapolate the results of human performance tests with displays. The FLM Vision group at NASA Ames has been developing computational models of visual processing and using them to develop computational metrics for similar problems. For example, display modeling systems use metrics for comparing proposed displays, halftoning optimizing methods use metrics to evaluate the difference between the halftone and the original, and image compression methods minimize the predicted visibility of compression artifacts. The visual discrimination models take as input two arbitrary images A and B and compute an estimate of the probability that a human observer will report that A is different from B. If A is an image that one desires to display and B is the actual displayed image, such an estimate can be regarded as an image quality metric reflecting how well B approximates A. There are additional complexities associated with the problem of evaluating the quality of radar and IR enhanced displays for AVID tasks. One important problem is the question of whether intruding obstacles are detectable in such displays. Although the discrimination model can handle detection situations by making B the original image A plus the intrusion, this detection model makes the inappropriate assumption that the observer knows where the intrusion will be. Effects of signal uncertainty need to be added to our models. A pilot needs to make decisions rapidly. The models need to predict not just the probability of a correct decision, but the probability of a correct decision by the time the decision needs to be made. That is, the models need to predict latency as well as accuracy. Luce and Green have generated models for auditory detection latencies. Similar models are needed for visual detection. Most image quality models are designed for static imagery

  20. Metric reconstruction from Weyl scalars

    NASA Astrophysics Data System (ADS)

    Whiting, Bernard F.; Price, Larry R.

    2005-08-01

    The Kerr geometry has remained an elusive world in which to explore physics and delve into the more esoteric implications of general relativity. Following the discovery, by Kerr in 1963, of the metric for a rotating black hole, the most major advance has been an understanding of its Weyl curvature perturbations based on Teukolsky's discovery of separable wave equations some ten years later. In the current research climate, where experiments across the globe are preparing for the first detection of gravitational waves, a more complete understanding than concerns just the Weyl curvature is now called for. To understand precisely how comparatively small masses move in response to the gravitational waves they emit, a formalism has been developed based on a description of the whole spacetime metric perturbation in the neighbourhood of the emission region. Presently, such a description is not available for the Kerr geometry. While there does exist a prescription for obtaining metric perturbations once curvature perturbations are known, it has become apparent that there are gaps in that formalism which are still waiting to be filled. The most serious gaps include gauge inflexibility, the inability to include sources—which are essential when the emitting masses are considered—and the failure to describe the ℓ = 0 and 1 perturbation properties. Among these latter properties of the perturbed spacetime, arising from a point mass in orbit, are the perturbed mass and axial component of angular momentum, as well as the very elusive Carter constant for non-axial angular momentum. A status report is given on recent work which begins to repair these deficiencies in our current incomplete description of Kerr metric perturbations.

  1. SUSTAINABLE DEVELOPMENT AND SUSTAINABILITY METRICS

    EPA Science Inventory

    If Rachel Carson's Silent Spring, pub. 1962, can be credited for the public realization of widespread environmental degradation directly or indirectly attributable to industrial enterprise, the book, Our Common Future (WCED, 1987) which is the report of the World Commission on E...

  2. A compact effective-current model for power performance analysis on state-of-the-art technology development and benchmarking

    NASA Astrophysics Data System (ADS)

    Oh, Sangheon; Shin, Changhwan; Kwon, Wookhyun

    2015-12-01

    Advances in semiconductor technology have enabled significant performance improvements over the past several decades. However, at the current pace of the development of semiconductor technology, it is increasingly important to achieve a proper balance between performance improvement and power consumption. In this study, to quantitatively analyze the performance and power consumption of new technologies, a compact effective-current model is proposed and used for power performance analysis (PPA). The PPA is performed by separately varying several device characteristics such as drain-induced barrier lowering (DIBL), mobility, and threshold voltage (VT) to determine which options can provide more benefits and better balance for new technologies. The analysis results indicate that the performance improvement due to DIBL reduction (especially below 20 mV/V) is limited. However, VT engineering has more advantages than DIBL and mobility enhancement, unless threshold voltage scaling induces leakage current degradation. Otherwise, mobility enhancement is the most attractive method. By using the proposed compact effective-current model for PPA, we enabled the effective and quantitative estimation of the benefits in terms of performance and power consumption.

  3. Microwave stethoscope: development and benchmarking of a vital signs sensor using computer-controlled phantoms and human studies.

    PubMed

    Celik, Nuri; Gagarin, Ruthsenne; Huang, Gui Chao; Iskander, Magdy F; Berg, Benjamin W

    2014-08-01

    This paper describes a new microwave-based method and associated measurement system for monitoring multiple vital signs (VS) as well as the changes in lung water content. The measurement procedure utilizes a single microwave sensor for reflection coefficient measurements, hence the name "microwave stethoscope (MiSt)," as opposed to the two-sensor transmission method previously proposed by the authors. To compensate for the reduced sensitivity due to reflection coefficient measurements, an improved microwave sensor design with enhanced matching to the skin and broadband operation, as well as an advanced digital signal processing algorithm are used in developing the MiSt. Results from phantom experiments and human clinical trials are described. The results clearly demonstrate that MiSt provides reliable monitoring of multiple VS such as the respiration rate, heart rate, and the changes in lung water content through a single microwave measurement. In addition, information such as heart waveforms that correlates well with electrocardiogram is observed from these microwave measurements. Details of the broadband sensor design, experimental procedure, DSP algorithms used for VS extraction, and obtained results are presented and discussed. PMID:23358946
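
    One common way to extract respiration and heart rates from a slowly varying reflection-coefficient time series is to pick spectral peaks in typical physiological frequency bands; the sketch below illustrates that generic approach on synthetic data and is not the authors' DSP algorithm.

      # Generic sketch: estimate respiration and heart rates from a time series
      # by locating spectral peaks in assumed physiological frequency bands.
      import numpy as np

      def rates_from_signal(x, fs, resp_band=(0.1, 0.5), heart_band=(0.8, 3.0)):
          x = np.asarray(x, float) - np.mean(x)
          freqs = np.fft.rfftfreq(len(x), d=1.0 / fs)
          spec = np.abs(np.fft.rfft(x))

          def peak_bpm(lo, hi):
              band = (freqs >= lo) & (freqs <= hi)
              return 60.0 * freqs[band][np.argmax(spec[band])]

          return peak_bpm(*resp_band), peak_bpm(*heart_band)

      # Hypothetical 40 s record at 50 Hz: 15 breaths/min plus 72 beats/min.
      fs = 50.0
      t = np.arange(0, 40, 1 / fs)
      sig = np.sin(2 * np.pi * 0.25 * t) + 0.3 * np.sin(2 * np.pi * 1.2 * t)
      print(rates_from_signal(sig, fs))   # about (15.0, 72.0)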

  4. Benchmark integration test for the Advanced Integration Matrix (AIM)

    NASA Astrophysics Data System (ADS)

    Paul, H.; Labuda, L.

    The Advanced Integration Matrix (AIM) studies and solves systems-level integration issues for exploration missions beyond Low Earth Orbit (LEO) through the design and development of a ground-based facility for developing revolutionary integrated systems for joint human-robotic missions. This systems integration approach to addressing human capability barriers will yield validation of advanced concepts and technologies, establish baselines for further development, and help identify opportunities for system-level breakthroughs. Early ground-based testing of mission capability will identify successful system implementations and operations, hidden risks and hazards, unexpected system and operations interactions, mission mass and operational savings, and can evaluate solutions to requirements-driving questions; all of which will enable NASA to develop more effective, lower risk systems and more reliable cost estimates for future missions. This paper describes the first in the series of integration tests proposed for AIM (the Benchmark Test) which will bring in partners and technology, evaluate the study processes of the project, and develop metrics for success.

  5. Metrics and Sports.

    ERIC Educational Resources Information Center

    National Collegiate Athletic Association, Shawnee Mission, KS.

    Designed as a guide to aid the National Collegiate Athletic Association membership and others who must relate measurement of distances, weights, and volumes to athletic activity, this document presents diagrams of performance areas with measurements delineated in both imperial and metric terms. Illustrations are given for baseball, basketball,…

  6. Toll Gate Metrication Project

    ERIC Educational Resources Information Center

    Izzi, John

    1974-01-01

    The project director of the Toll Gate Metrication Project describes the project as the first structured United States public school educational experiment in implementing change toward the adoption of the International System of Units. He believes the change will simplify, rather than complicate, the educational task. (AG)

  7. Metrics of Scholarly Impact

    ERIC Educational Resources Information Center

    Cacioppo, John T.; Cacioppo, Stephanie

    2012-01-01

    Ruscio and colleagues (Ruscio, Seaman, D'Oriano, Stremlo, & Mahalchik, this issue) provide a thoughtful empirical analysis of 22 different measures of individual scholarly impact. The simplest metric is number of publications, which Simonton (1997) found to be a reasonable predictor of career trajectories. Although the assessment of the scholarly…

  8. Metric Style Guide.

    ERIC Educational Resources Information Center

    Canadian Council of Ministers of Education, Toronto (Ontario).

    This guide was designed to provide a measure of uniformity across Canada with respect to metric terminology and symbolism, and is designed to enable users to understand and apply Systeme International d'Unites (SI) to everyday life with ease and confidence. This document was written with the intent of being helpful to the greatest number of…

  9. Benchmarks: WICHE Region 2012

    ERIC Educational Resources Information Center

    Western Interstate Commission for Higher Education, 2013

    2013-01-01

    Benchmarks: WICHE Region 2012 presents information on the West's progress in improving access to, success in, and financing of higher education. The information is updated annually to monitor change over time and encourage its use as a tool for informed discussion in policy and education communities. To establish a general context for the…

  10. Python/Lua Benchmarks

    SciTech Connect

    Busby, L.

    2014-08-01

    This is an adaptation of the pre-existing Scimark benchmark code to a variety of Python and Lua implementations. It also measures performance of the Fparser expression parser and C and C++ code on a variety of simple scientific expressions.
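
    A minimal example of the kind of timing harness such benchmark adaptations rely on is sketched below: time a small relaxation kernel in pure Python and report an approximate Mflop/s figure. This is a generic illustration, not the actual SciMark port.

      # Minimal timing-harness sketch for a small numeric kernel (pure Python).
      import time

      def sor_kernel(grid, omega=1.25, iterations=50):
          """Simple SOR-style relaxation over a square list-of-lists grid."""
          n = len(grid)
          for _ in range(iterations):
              for i in range(1, n - 1):
                  for j in range(1, n - 1):
                      grid[i][j] = ((1 - omega) * grid[i][j] + omega * 0.25 *
                                    (grid[i-1][j] + grid[i+1][j] +
                                     grid[i][j-1] + grid[i][j+1]))
          return grid

      def benchmark(n=100, iterations=50):
          grid = [[float((i * j) % 7) for j in range(n)] for i in range(n)]
          start = time.perf_counter()
          sor_kernel(grid, iterations=iterations)
          elapsed = time.perf_counter() - start
          flops = iterations * (n - 2) ** 2 * 6     # rough per-update op count
          return flops / elapsed / 1e6              # Mflop/s

      print(f"{benchmark():.1f} Mflop/s (pure Python)")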

  11. Monte Carlo Benchmark

    Energy Science and Technology Software Center (ESTSC)

    2010-10-20

    The "Monte Carlo Benchmark" (MCB) is intended to model the computatiional performance of Monte Carlo algorithms on parallel architectures. It models the solution of a simple heuristic transport equation using a Monte Carlo technique. The MCB employs typical features of Monte Carlo algorithms such as particle creation, particle tracking, tallying particle information, and particle destruction. Particles are also traded among processors using MPI calls.

  12. Benchmarks Momentum on Increase

    ERIC Educational Resources Information Center

    McNeil, Michele

    2008-01-01

    No longer content with the patchwork quilt of assessments used to measure states' K-12 performance, top policy groups are pushing states toward international benchmarking as a way to better prepare students for a competitive global economy. The National Governors Association, the Council of Chief State School Officers, and the standards-advocacy…

  13. Benchmarking the World's Best

    ERIC Educational Resources Information Center

    Tucker, Marc S.

    2012-01-01

    A century ago, the United States was a world leader in industrial benchmarking. However, after World War II, once no one could compete with the U.S., it became complacent. Many industrialized countries now have higher student achievement and more equitable and efficient education systems. A higher proportion of young people in their workforces…

  14. Candidate control design metrics for an agile fighter

    NASA Technical Reports Server (NTRS)

    Murphy, Patrick C.; Bailey, Melvin L.; Ostroff, Aaron J.

    1991-01-01

    Success in the fighter combat environment of the future will certainly demand increasing capability from aircraft technology. These advanced capabilities in the form of superagility and supermaneuverability will require special design techniques which translate advanced air combat maneuvering requirements into design criteria. Control design metrics can provide some of these techniques for the control designer. This study presents an overview of control design metrics and investigates metrics for advanced fighter agility. The objectives of various metric users, such as airframe designers and pilots, are differentiated from the objectives of the control designer. Using an advanced fighter model, metric values are documented over a portion of the flight envelope through piloted simulation. These metric values provide a baseline against which future control system improvements can be compared and against which a control design methodology can be developed. Agility is measured for axial, pitch, and roll axes. Axial metrics highlight acceleration and deceleration capabilities under different flight loads and include specific excess power measurements to characterize energy maneuverability. Pitch metrics cover both body-axis and wind-axis pitch rates and accelerations. Included in pitch metrics are nose pointing metrics which highlight displacement capability between the nose and the velocity vector. Roll metrics (or torsion metrics) focus on rotational capability about the wind axis.
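
    For reference, the textbook specific excess power calculation used to characterize energy maneuverability is Ps = V(T - D)/W; the sketch below evaluates it for a hypothetical flight condition and is not taken from the study's own metric definitions.

      # Textbook specific excess power: Ps = V * (T - D) / W, in m/s.
      def specific_excess_power(velocity_mps, thrust_n, drag_n, weight_n):
          """Rate of change of energy height for the given flight condition."""
          return velocity_mps * (thrust_n - drag_n) / weight_n

      # Hypothetical condition: 200 m/s, 100 kN thrust, 60 kN drag, 180 kN weight.
      print(specific_excess_power(200.0, 100e3, 60e3, 180e3))   # about 44.4 m/s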

  15. An investigation of routes to cancer diagnosis in 10 international jurisdictions, as part of the International Cancer Benchmarking Partnership: survey development and implementation

    PubMed Central

    Weller, David; Vedsted, Peter; Anandan, Chantelle; Zalounina, Alina; Fourkala, Evangelia Ourania; Desai, Rakshit; Liston, William; Jensen, Henry; Barisic, Andriana; Gavin, Anna; Grunfeld, Eva; Lambe, Mats; Law, Rebecca-Jane; Malmberg, Martin; Neal, Richard D; Kalsi, Jatinderpal; Turner, Donna; White, Victoria; Bomb, Martine

    2016-01-01

    Objectives This paper describes the methods used in the International Cancer Benchmarking Partnership Module 4 Survey (ICBPM4) which examines time intervals and routes to cancer diagnosis in 10 jurisdictions. We present the study design with defining and measuring time intervals, identifying patients with cancer, questionnaire development, data management and analyses. Design and setting Recruitment of participants to the ICBPM4 survey is based on cancer registries in each jurisdiction. Questionnaires draw on previous instruments and have been through a process of cognitive testing and piloting in three jurisdictions followed by standardised translation and adaptation. Data analysis focuses on comparing differences in time intervals and routes to diagnosis in the jurisdictions. Participants Our target is 200 patients with symptomatic breast, lung, colorectal and ovarian cancer in each jurisdiction. Patients are approached directly or via their primary care physician (PCP). Patients’ PCPs and cancer treatment specialists (CTSs) are surveyed, and ‘data rules’ are applied to combine and reconcile conflicting information. Where CTS information is unavailable, audit information is sought from treatment records and databases. Main outcomes Reliability testing of the patient questionnaire showed that agreement was complete (κ=1) in four items and substantial (κ=0.8, 95% CI 0.333 to 1) in one item. The identification of eligible patients is sufficient to meet the targets for breast, lung and colorectal cancer. Initial patient and PCP survey response rates from the UK and Sweden are comparable with similar published surveys. Data collection was completed in early 2016 for all cancer types. Conclusion An international questionnaire-based survey of patients with cancer, PCPs and CTSs has been developed and launched in 10 jurisdictions. ICBPM4 will help to further understand international differences in cancer survival by comparing time intervals and routes to cancer

  16. Community-based benchmarking of the CMIP DECK experiments

    NASA Astrophysics Data System (ADS)

    Gleckler, P. J.

    2015-12-01

    A diversity of community-based efforts are independently developing "diagnostic packages" with little or no coordination between them. A short list of examples includes NCAR's Climate Variability Diagnostics Package (CVDP), ORNL's International Land Model Benchmarking (ILAMB), LBNL's Toolkit for Extreme Climate Analysis (TECA), PCMDI's Metrics Package (PMP), the EU EMBRACE ESMValTool, the WGNE MJO diagnostics package, and CFMIP diagnostics. The full value of these efforts cannot be realized without some coordination. As a first step, a WCRP effort has initiated a catalog to document candidate packages that could potentially be applied in a "repeat-use" fashion to all simulations contributed to the CMIP DECK (Diagnostic, Evaluation and Characterization of Klima) experiments. Some coordination of community-based diagnostics has the additional potential to improve how CMIP modeling groups analyze their simulations during model development. The fact that most modeling groups now maintain a "CMIP compliant" data stream means that in principle, without much effort, they could readily adopt a set of well-organized diagnostic capabilities specifically designed to operate on CMIP DECK experiments. Ultimately, a detailed listing of and access to analysis codes that are demonstrated to work "out of the box" with CMIP data could enable model developers (and others) to select those codes they wish to implement in-house, potentially enabling more systematic evaluation during the model development process.

  17. Note on a new class of metrics: touching metrics

    NASA Astrophysics Data System (ADS)

    Starovoitov, Valery V.

    1996-09-01

    A new class of functions is studied. They are generalizations of the little-known 'flower-shop distance'. We call them touching functions. Some of them are metrics, i.e. touching metrics (TM). Disks, circles and digital paths based on these metrics are also studied. The distance transform based on TMs is introduced and a scheme for the algorithm is given.
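
    The 'flower-shop' (post-office) distance that these touching metrics generalize is commonly defined by routing every trip between distinct points through a fixed point p: d(x, y) = |x - p| + |p - y| for x != y and d(x, x) = 0. The sketch below implements that commonly cited base case, not the note's generalizations.

      # Commonly cited flower-shop (post-office) distance through a fixed point p.
      import math

      def flower_shop_distance(x, y, p):
          if x == y:
              return 0.0
          return math.dist(x, p) + math.dist(p, y)

      shop = (0.0, 0.0)
      print(flower_shop_distance((1.0, 2.0), (3.0, -1.0), shop))
      print(flower_shop_distance((1.0, 2.0), (1.0, 2.0), shop))   # identical points -> 0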

  18. Design and Application of a Community Land Benchmarking System for Earth System Models

    NASA Astrophysics Data System (ADS)

    Mu, M.; Hoffman, F. M.; Lawrence, D. M.; Riley, W. J.; Keppel-Aleks, G.; Koven, C. D.; Kluzek, E. B.; Mao, J.; Randerson, J. T.

    2015-12-01

    Benchmarking has been widely used to assess the ability of climate models to capture the spatial and temporal variability of observations during the historical era. For the carbon cycle and terrestrial ecosystems, the design and development of an open-source community platform has been an important goal as part of the International Land Model Benchmarking (ILAMB) project. Here we developed a new benchmarking software system that enables the user to specify the models, benchmarks, and scoring metrics, so that results can be tailored to specific model intercomparison projects. Evaluation data sets included soil and aboveground carbon stocks, fluxes of energy, carbon and water, burned area, leaf area, and climate forcing and response variables. We used this system to evaluate simulations from the 5th Phase of the Coupled Model Intercomparison Project (CMIP5) with prognostic atmospheric carbon dioxide levels over the period from 1850 to 2005 (i.e., esmHistorical simulations archived on the Earth System Grid Federation). We found that the multi-model ensemble had a high bias in incoming solar radiation across Asia, likely as a consequence of incomplete representation of aerosol effects in this region, and in South America, primarily as a consequence of a low bias in mean annual precipitation. The reduced precipitation in South America had a larger influence on gross primary production than the high bias in incoming light, and as a consequence gross primary production had a low bias relative to the observations. Although model to model variations were large, the multi-model mean had a positive bias in atmospheric carbon dioxide that has been attributed in past work to weak ocean uptake of fossil emissions. In mid latitudes of the northern hemisphere, most models overestimate latent heat fluxes in the early part of the growing season, and underestimate these fluxes in mid-summer and early fall, whereas sensible heat fluxes show the opposite trend.

  19. The Kerr metric

    NASA Astrophysics Data System (ADS)

    Teukolsky, Saul A.

    2015-06-01

    This review describes the events leading up to the discovery of the Kerr metric in 1963 and the enormous impact the discovery has had in the subsequent 50 years. The review discusses the Penrose process, the four laws of black hole mechanics, uniqueness of the solution, and the no-hair theorems. It also includes Kerr perturbation theory and its application to black hole stability and quasi-normal modes. The Kerr metric's importance in the astrophysics of quasars and accreting stellar-mass black hole systems is detailed. A theme of the review is the ‘miraculous’ nature of the solution, both in describing in a simple analytic formula the most general rotating black hole, and in having unexpected mathematical properties that make many calculations tractable. Also included is a pedagogical derivation of the solution suitable for a first course in general relativity.

  20. Bibliography on metrication

    NASA Astrophysics Data System (ADS)

    Smith, C. R.; Powel, M. B.

    1990-08-01

    This is a bibliography on metrication, the conversion to the International System of Units (SI), compiled from citations dated from January 1977 through July 1989. Citations include books, conference proceedings, newspapers, periodicals, government and civilian documents and reports. Subject indices for each type of citation and an author index for the entire work are included. A variety of subject categories such as legislation, construction, avionics, consumers, engineering, education, management, standards, agriculture, marketing and many others are available.

  1. Pynamic: the Python Dynamic Benchmark

    SciTech Connect

    Lee, G L; Ahn, D H; de Supinksi, B R; Gyllenhaal, J C; Miller, P J

    2007-07-10

    Python is widely used in scientific computing to facilitate application development and to support features such as computational steering. Making full use of some of Python's popular features, which improve programmer productivity, leads to applications that access extremely high numbers of dynamically linked libraries (DLLs). As a result, some important Python-based applications severely stress a system's dynamic linking and loading capabilities and also cause significant difficulties for most development environment tools, such as debuggers. Furthermore, using the Python paradigm for large scale MPI-based applications can create significant file I/O and further stress tools and operating systems. In this paper, we present Pynamic, the first benchmark program to support configurable emulation of a wide range of the DLL usage of Python-based applications for large scale systems. Pynamic has already accurately reproduced system software and tool issues encountered by important large Python-based scientific applications on our supercomputers. Pynamic provided insight for our system software and tool vendors, and our application developers, into the impact of several design decisions. As we describe the Pynamic benchmark, we will highlight some of the issues discovered in our large scale system software and tools using Pynamic.

  2. Metrics for Energy Resilience

    SciTech Connect

    Paul E. Roege; Zachary A. Collier; James Mancillas; John A. McDonagh; Igor Linkov

    2014-09-01

    Energy forms the backbone of any advanced society and constitutes an essential prerequisite for economic growth, social order and national defense. However, there is an Achilles heel to today's energy and technology relationship; namely, a precarious intimacy between energy and the fiscal, social, and technical systems it supports. Recently, widespread and persistent disruptions in energy systems have highlighted the extent of this dependence and the vulnerability of increasingly optimized systems to changing conditions. Resilience is an emerging concept that offers to reconcile considerations of performance under dynamic environments and across multiple time frames by supplementing traditionally static system performance measures to consider behaviors under changing conditions and complex interactions among physical, information and human domains. This paper identifies metrics useful to implement guidance for energy-related planning, design, investment, and operation. Recommendations are presented using a matrix format to provide a structured and comprehensive framework of metrics relevant to a system's energy resilience. The study synthesizes previously proposed metrics and emergent resilience literature to provide a multi-dimensional model intended for use by leaders and practitioners as they transform our energy posture from one of stasis and reaction to one that is proactive and which fosters sustainable growth.

  3. Aquatic Acoustic Metrics Interface

    Energy Science and Technology Software Center (ESTSC)

    2012-12-18

    Fishes and marine mammals may suffer a range of potential effects from exposure to intense underwater sound generated by anthropogenic activities such as pile driving, shipping, sonars, and underwater blasting. Several underwater sound recording (USR) devices have been built to acquire samples of the underwater sound generated by anthropogenic activities. Software becomes indispensable for processing and analyzing the audio files recorded by these USRs. The new Aquatic Acoustic Metrics Interface Utility Software (AAMI) is specifically designed for analysis of underwater sound recordings to provide data in metrics that facilitate evaluation of the potential impacts of the sound on aquatic animals. In addition to the basic functions, such as loading and editing audio files recorded by USRs and batch processing of sound files, the software utilizes recording system calibration data to compute important parameters in physical units. The software also facilitates comparison of the noise sound sample metrics with biological measures such as audiograms of the sensitivity of aquatic animals to the sound, integrating various components into a single analytical frame.

  4. Aquatic Acoustic Metrics Interface

    SciTech Connect

    2012-12-18

    Fishes and marine mammals may suffer a range of potential effects from exposure to intense underwater sound generated by anthropogenic activities such as pile driving, shipping, sonars, and underwater blasting. Several underwater sound recording (USR) devices have been built to acquire samples of the underwater sound generated by anthropogenic activities. Software becomes indispensable for processing and analyzing the audio files recorded by these USRs. The new Aquatic Acoustic Metrics Interface Utility Software (AAMI) is specifically designed for analysis of underwater sound recordings to provide data in metrics that facilitate evaluation of the potential impacts of the sound on aquatic animals. In addition to the basic functions, such as loading and editing audio files recorded by USRs and batch processing of sound files, the software utilizes recording system calibration data to compute important parameters in physical units. The software also facilitates comparison of the noise sound sample metrics with biological measures such as audiograms of the sensitivity of aquatic animals to the sound, integrating various components into a single analytical frame.
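
    The AAMI abstract describes applying recording system calibration data to express recordings in physical units. As a rough, hypothetical illustration of that step (not the AAMI tool's actual code; the calibration factor and signal handling below are assumptions), calibrated samples can be converted to a sound pressure level in dB re 1 µPa:

```python
import numpy as np

def sound_pressure_level(samples, counts_to_pascals):
    """Convert a calibrated recording to an RMS sound pressure level.

    samples           -- raw ADC samples from the recorder
    counts_to_pascals -- hypothetical calibration factor (Pa per count)
    Returns SPL in dB re 1 uPa, the reference used for underwater sound.
    """
    pressure = np.asarray(samples, dtype=float) * counts_to_pascals   # Pa
    p_rms = np.sqrt(np.mean(pressure ** 2))                            # Pa
    return 20.0 * np.log10(p_rms / 1e-6)                               # dB re 1 uPa

# Example: a synthetic 1 kHz tone sampled at 48 kHz
t = np.arange(48000) / 48000.0
fake_recording = 2000.0 * np.sin(2 * np.pi * 1000 * t)   # counts
print(round(sound_pressure_level(fake_recording, 0.05), 1), "dB re 1 uPa")
```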

  5. Performance Metrics for Commercial Buildings

    SciTech Connect

    Fowler, Kimberly M.; Wang, Na; Romero, Rachel L.; Deru, Michael P.

    2010-09-30

    Commercial building owners and operators have requested a standard set of key performance metrics to provide a systematic way to evaluate the performance of their buildings. The performance metrics included in this document provide standard metrics for the energy, water, operations and maintenance, indoor environmental quality, purchasing, waste and recycling, and transportation impact of their building. The metrics can be used for comparative performance analysis between existing buildings and industry standards to clarify the impact of sustainably designed and operated buildings.

  6. Radiography benchmark 2014

    SciTech Connect

    Jaenisch, G.-R.; Deresch, A.; Bellon, C.; Schumm, A.; Lucet-Sanchez, F.; Guerin, P.

    2015-03-31

    The purpose of the 2014 WFNDEC RT benchmark study was to compare predictions of various models of radiographic techniques, in particular those that predict the contribution of scattered radiation. All calculations were carried out for homogeneous materials and a mono-energetic X-ray point source in the energy range between 100 keV and 10 MeV. The calculations were to include the best physics approach available considering electron binding effects. Secondary effects like X-ray fluorescence and bremsstrahlung production were to be taken into account if possible. The problem to be considered had two parts. Part I examined the spectrum and the spatial distribution of radiation behind a single iron plate. Part II considered two equally sized plates, made of iron and aluminum respectively, only evaluating the spatial distribution. Here we present the results of the above benchmark study, comparing them to MCNP as the assumed reference model. The possible origins of the observed deviations are discussed.

  7. Radiography benchmark 2014

    NASA Astrophysics Data System (ADS)

    Jaenisch, G.-R.; Deresch, A.; Bellon, C.; Schumm, A.; Lucet-Sanchez, F.; Guerin, P.

    2015-03-01

    The purpose of the 2014 WFNDEC RT benchmark study was to compare predictions of various models of radiographic techniques, in particular those that predict the contribution of scattered radiation. All calculations were carried out for homogeneous materials and a mono-energetic X-ray point source in the energy range between 100 keV and 10 MeV. The calculations were to include the best physics approach available considering electron binding effects. Secondary effects like X-ray fluorescence and bremsstrahlung production were to be taken into account if possible. The problem to be considered had two parts. Part I examined the spectrum and the spatial distribution of radiation behind a single iron plate. Part II considered two equally sized plates, made of iron and aluminum respectively, only evaluating the spatial distribution. Here we present the results of the above benchmark study, comparing them to MCNP as the assumed reference model. The possible origins of the observed deviations are discussed.

  8. Sequoia Messaging Rate Benchmark

    SciTech Connect

    Friedley, Andrew

    2008-01-22

    The purpose of this benchmark is to measure the maximal message rate of a single compute node. The first num_cores ranks are expected to reside on the 'core' compute node for which message rate is being tested. After that, the next num_nbors ranks are neighbors for the first core rank, the next set of num_nbors ranks are neighbors for the second core rank, and so on. For example, testing an 8-core node (num_cores = 8) with 4 neighbors (num_nbors = 4) requires 8 + 8 * 4 = 40 ranks. The first 8 of those 40 ranks are expected to be on the 'core' node being benchmarked, while the rest of the ranks are on separate nodes.
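
    The rank layout described above is easy to reproduce programmatically. The sketch below only illustrates the stated layout (names such as rank_layout are invented here), confirming that 8 cores with 4 neighbors each require 8 + 8 * 4 = 40 ranks:

```python
def rank_layout(num_cores, num_nbors):
    """Reproduce the rank layout described in the benchmark abstract.

    Ranks 0..num_cores-1 live on the 'core' node; each core rank then gets
    num_nbors neighbor ranks placed on other nodes.
    """
    total = num_cores + num_cores * num_nbors
    core_ranks = list(range(num_cores))
    neighbours = {
        core: list(range(num_cores + core * num_nbors,
                         num_cores + (core + 1) * num_nbors))
        for core in core_ranks
    }
    return total, neighbours

total, neighbours = rank_layout(num_cores=8, num_nbors=4)
print(total)            # 40 ranks, matching 8 + 8 * 4
print(neighbours[0])    # [8, 9, 10, 11] -- neighbor ranks of core rank 0
```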

  9. Sequoia Messaging Rate Benchmark

    Energy Science and Technology Software Center (ESTSC)

    2008-01-22

    The purpose of this benchmark is to measure the maximal message rate of a single compute node. The first num_cores ranks are expected to reside on the 'core' compute node for which message rate is being tested. After that, the next num_nbors ranks are neighbors for the first core rank, the next set of num_nbors ranks are neighbors for the second core rank, and so on. For example, testing an 8-core node (num_cores = 8) with 4 neighbors (num_nbors = 4) requires 8 + 8 * 4 = 40 ranks. The first 8 of those 40 ranks are expected to be on the 'core' node being benchmarked, while the rest of the ranks are on separate nodes.

  10. Storage-Intensive Supercomputing Benchmark Study

    SciTech Connect

    Cohen, J; Dossa, D; Gokhale, M; Hysom, D; May, J; Pearce, R; Yoo, A

    2007-10-30

    Critical data science applications requiring frequent access to storage perform poorly on today's computing architectures. This project addresses efficient computation of data-intensive problems in national security and basic science by exploring, advancing, and applying a new form of computing called storage-intensive supercomputing (SISC). Our goal is to enable applications that simply cannot run on current systems, and, for a broad range of data-intensive problems, to deliver an order of magnitude improvement in price/performance over today's data-intensive architectures. This technical report documents much of the work done under LDRD 07-ERD-063 Storage Intensive Supercomputing during the period 05/07-09/07. The following chapters describe: (1) a new file I/O monitoring tool iotrace developed to capture the dynamic I/O profiles of Linux processes; (2) an out-of-core graph benchmark for level-set expansion of scale-free graphs; (3) an entity extraction benchmark consisting of a pipeline of eight components; and (4) an image resampling benchmark drawn from the SWarp program in the LSST data processing pipeline. The performance of the graph and entity extraction benchmarks was measured in three different scenarios: data sets residing on the NFS file server and accessed over the network; data sets stored on local disk; and data sets stored on the Fusion I/O parallel NAND Flash array. The image resampling benchmark compared the performance of a software-only implementation with a GPU-accelerated one. In addition to the work reported here, an additional text processing application was developed that used an FPGA to accelerate n-gram profiling for language classification. The n-gram application will be presented at SC07 at the High Performance Reconfigurable Computing Technologies and Applications Workshop. The graph and entity extraction benchmarks were run on a Supermicro server housing the NAND Flash 40GB parallel disk array, the Fusion-io. The Fusion system specs are as follows: SuperMicro X7

  11. Metrics for Offline Evaluation of Prognostic Performance

    NASA Technical Reports Server (NTRS)

    Saxena, Abhinav; Celaya, Jose; Saha, Bhaskar; Saha, Sankalita; Goebel, Kai

    2010-01-01

    Prognostic performance evaluation has gained significant attention in the past few years. Currently, prognostics concepts lack standard definitions and suffer from ambiguous and inconsistent interpretations. This lack of standards is in part due to the varied end-user requirements for different applications, time scales, available information, domain dynamics, and other factors. The research community has used a variety of metrics largely based on convenience and their respective requirements. Very little attention has been focused on establishing a standardized approach to compare different efforts. This paper presents several new evaluation metrics tailored for prognostics that were recently introduced and were shown to effectively evaluate various algorithms as compared to other conventional metrics. Specifically, this paper presents a detailed discussion on how these metrics should be interpreted and used. These metrics have the capability of incorporating probabilistic uncertainty estimates from prognostic algorithms. In addition to quantitative assessment they also offer a comprehensive visual perspective that can be used in designing the prognostic system. Several methods are suggested to customize these metrics for different applications. Guidelines are provided to help choose one method over another based on distribution characteristics. Various issues faced by prognostics and its performance evaluation are discussed followed by a formal notational framework to help standardize subsequent developments.

  12. On Applying the Prognostic Performance Metrics

    NASA Technical Reports Server (NTRS)

    Saxena, Abhinav; Celaya, Jose; Saha, Bhaskar; Saha, Sankalita; Goebel, Kai

    2009-01-01

    Prognostics performance evaluation has gained significant attention in the past few years. As prognostics technology matures and more sophisticated methods for prognostic uncertainty management are developed, a standardized methodology for performance evaluation becomes extremely important to guide improvement efforts in a constructive manner. This paper is in continuation of previous efforts where several new evaluation metrics tailored for prognostics were introduced and were shown to effectively evaluate various algorithms as compared to other conventional metrics. Specifically, this paper presents a detailed discussion on how these metrics should be interpreted and used. Several shortcomings identified, while applying these metrics to a variety of real applications, are also summarized along with discussions that attempt to alleviate these problems. Further, these metrics have been enhanced to include the capability of incorporating probability distribution information from prognostic algorithms as opposed to evaluation based on point estimates only. Several methods have been suggested and guidelines have been provided to help choose one method over another based on probability distribution characteristics. These approaches also offer a convenient and intuitive visualization of algorithm performance with respect to some of these new metrics like prognostic horizon and alpha-lambda performance, and also quantify the corresponding performance while incorporating the uncertainty information.
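
    One of the metrics referred to above, the alpha-lambda performance, checks whether a prediction made partway through a unit's life falls within an accuracy band around the true remaining useful life. The sketch below uses one common point-estimate formulation with assumed variable names and band definition; published variants differ, particularly in how full prediction distributions rather than point estimates are scored:

```python
def alpha_lambda_pass(t_pred, rul_pred, t_start, t_eol, alpha=0.2, lam=0.5):
    """Check one common form of the alpha-lambda criterion (an assumption here).

    At the evaluation time t_lambda = t_start + lam * (t_eol - t_start), the
    predicted remaining useful life must fall within +/- alpha of the true RUL.
    """
    t_lambda = t_start + lam * (t_eol - t_start)
    true_rul = t_eol - t_lambda
    # use the prediction issued closest to t_lambda
    idx = min(range(len(t_pred)), key=lambda i: abs(t_pred[i] - t_lambda))
    return abs(rul_pred[idx] - true_rul) <= alpha * true_rul

# Hypothetical predictions issued every 10 time units for a unit failing at t = 100
times = [0, 10, 20, 30, 40, 50, 60, 70, 80, 90]
ruls = [105, 92, 78, 71, 62, 48, 41, 29, 22, 9]
print(alpha_lambda_pass(times, ruls, t_start=0, t_eol=100))  # True
```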

  13. MPI Multicore Linktest Benchmark

    Energy Science and Technology Software Center (ESTSC)

    2008-01-25

    The MPI Multicore Linktest (LinkTest) measures the aggregate bandwidth from/to a multicore node in a parallel system. It allows the user to specify a variety of different node layout and communication routine variations and reports the maximal observed bandwidth across all specified options. In particular, this benchmark is able to vary the number of tasks on the root node and thereby allows users to study the impact of multicore architectures on MPI communication performance.

  14. Benchmarking the billing office.

    PubMed

    Woodcock, Elizabeth W; Williams, A Scott; Browne, Robert C; King, Gerald

    2002-09-01

    Benchmarking data related to human and financial resources in the billing process allows an organization to allocate its resources more effectively. Analyzing human resources used in the billing process helps determine cost-effective staffing. The deployment of human resources in a billing office affects timeliness of payment and ability to maximize revenue potential. Analyzing financial resources helps an organization allocate those resources more effectively. PMID:12235973

  15. Say "Yes" to Metric Measure.

    ERIC Educational Resources Information Center

    Monroe, Eula Ewing; Nelson, Marvin N.

    2000-01-01

    Provides a brief history of the metric system. Discusses the infrequent use of the metric measurement system in the United States, why conversion from the customary system to the metric system is difficult, and the need for change. (Contains 14 resources.) (ASK)

  16. Metrication, American Style. Fastback 41.

    ERIC Educational Resources Information Center

    Izzi, John

    The purpose of this pamphlet is to provide a starting point of information on the metric system for any concerned or interested reader. The material is organized into five brief chapters: Man and Measurement; Learning the Metric System; Progress Report: Education; Recommended Sources; and Metrication, American Style. Appendixes include an…

  17. Some References on Metric Information.

    ERIC Educational Resources Information Center

    National Bureau of Standards (DOC), Washington, DC.

    This resource work lists metric information published by the U.S. Government and the American National Standards Institute. Organizations marketing metric materials for education are also given. A short table of conversions is included, as is a listing of basic metric facts for everyday living. (LS)

  18. Advanced Benchmarking for Complex Building Types: Laboratories as an Exemplar

    SciTech Connect

    Mathew, Paul A.; Clear, Robert; Kircher, Kevin; Webster, Tom; Lee, Kwang Ho; Hoyt, Tyler

    2010-08-01

    Complex buildings such as laboratories, data centers and cleanrooms present particular challenges for energy benchmarking because it is difficult to normalize special requirements such as health and safety in laboratories and reliability (i.e., system redundancy to maintain uptime) in data centers which significantly impact energy use. For example, air change requirements vary widely based on the type of work being performed in each laboratory space. We present methods and tools for energy benchmarking in laboratories, as an exemplar of a complex building type. First, we address whole building energy metrics and normalization parameters. We present empirical methods based on simple data filtering as well as multivariate regression analysis on the Labs21 database. The regression analysis showed lab type, lab-area ratio and occupancy hours to be significant variables. Yet the dataset did not allow analysis of factors such as plug loads and air change rates, both of which are critical to lab energy use. The simulation-based method uses an EnergyPlus model to generate a benchmark energy intensity normalized for a wider range of parameters. We suggest that both these methods have complementary strengths and limitations. Second, we present "action-oriented" benchmarking, which extends whole-building benchmarking by utilizing system-level features and metrics such as airflow W/cfm to quickly identify a list of potential efficiency actions which can then be used as the basis for a more detailed audit. While action-oriented benchmarking is not an "audit in a box" and is not intended to provide the same degree of accuracy afforded by an energy audit, we demonstrate how it can be used to focus and prioritize audit activity and track performance at the system level. We conclude with key principles that are more broadly applicable to other complex building types.
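
    The system-level metric cited above, fan power per unit airflow (W/cfm), is a simple ratio. A minimal illustration with hypothetical numbers:

```python
def fan_w_per_cfm(fan_power_kw, airflow_cfm):
    """System-level efficiency metric: fan power per unit airflow (W/cfm)."""
    return fan_power_kw * 1000.0 / airflow_cfm

# Hypothetical lab air handler: 30 kW of supply-fan power moving 25,000 cfm
print(round(fan_w_per_cfm(30.0, 25000.0), 2), "W/cfm")   # 1.2 W/cfm
```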

  19. Metrics for building performance assurance

    SciTech Connect

    Koles, G.; Hitchcock, R.; Sherman, M.

    1996-07-01

    This report documents part of the work performed in phase I of a Laboratory Directed Research and Development (LDRD) funded project entitled Building Performance Assurances (BPA). The focus of the BPA effort is to transform the way buildings are built and operated in order to improve building performance by facilitating or providing tools, infrastructure, and information. The efforts described herein focus on the development of metrics with which to evaluate building performance and for which information and optimization tools need to be developed. The classes of building performance metrics reviewed are (1) Building Services, (2) First Costs, (3) Operating Costs, (4) Maintenance Costs, and (5) Energy and Environmental Factors. The first category defines the direct benefits associated with buildings; the next three are different kinds of costs associated with providing those benefits; the last category includes concerns that are broader than direct costs and benefits to the building owner and building occupants. The level of detail of the various issues reflects the current state of knowledge in those scientific areas and the ability to determine that state of knowledge, rather than directly reflecting the importance of these issues; it intentionally does not specifically focus on energy issues. The report describes work in progress and is intended as a resource that can be used to indicate the areas needing more investigation. Other reports on BPA activities are also available.

  20. A Suite of Criticality Benchmarks for Validating Nuclear Data

    SciTech Connect

    Stephanie C. Frankle

    1999-04-01

    The continuous-energy neutron data library ENDF60 for use with MCNP™ was released in the fall of 1994, and was based on ENDF/B-VI evaluations through Release 2. As part of the data validation process for this library, a number of criticality benchmark calculations were performed. The original suite of nine criticality benchmarks used to test ENDF60 has now been expanded to 86 benchmarks. This report documents the specifications for the suite of 86 criticality benchmarks that have been developed for validating nuclear data.

  1. Using benchmarks for radiation testing of microprocessors and FPGAs

    SciTech Connect

    Quinn, Heather; Robinson, William H.; Rech, Paolo; Aguirre, Miguel; Barnard, Arno; Desogus, Marco; Entrena, Luis; Garcia-Valderas, Mario; Guertin, Steven M.; Kaeli, David; Kastensmidt, Fernanda Lima; Kiddie, Bradley T.; Sanchez-Clemente, Antonio; Reorda, Matteo Sonza; Sterpone, Luca; Wirthlin, Michael

    2015-12-17

    Performance benchmarks have been used over the years to compare different systems. These benchmarks can be useful for researchers trying to determine how changes to the technology, architecture, or compiler affect the system's performance. No such standard exists for systems deployed into high radiation environments, making it difficult to assess whether changes in the fabrication process, circuitry, architecture, or software affect reliability or radiation sensitivity. In this paper, we propose a benchmark suite for high-reliability systems that is designed for field-programmable gate arrays and microprocessors. As a result, we describe the development process and report neutron test data for the hardware and software benchmarks.

  2. A software quality model and metrics for risk assessment

    NASA Technical Reports Server (NTRS)

    Hyatt, L.; Rosenberg, L.

    1996-01-01

    A software quality model and its associated attributes are defined and used as the basis for a discussion of risk. Specific quality goals and attributes are selected based on their importance to a software development project and their ability to be quantified. Risks that can be determined by the model's metrics are identified. A core set of metrics relating to the software development process and its products is defined. Measurements for each metric and their usability and applicability are discussed.

  3. A Teacher's Guide to Metrics. A Series of In-Service Booklets Designed for Adult Educators.

    ERIC Educational Resources Information Center

    Wendel, Robert, Ed.; And Others

    This series of seven booklets is designed to train teachers of adults in metrication, as a prerequisite to offering metrics in adult basic education and general educational development programs. The seven booklets provide a guide representing an integration of metric teaching methods and metric materials to place the adult in an active learning…

  4. An evaluation of software testing metrics for NASA's mission control center

    NASA Technical Reports Server (NTRS)

    Stark, George E.; Durst, Robert C.; Pelnik, Tammy M.

    1991-01-01

    Software metrics are used to evaluate the software development process and the quality of the resulting product. Five metrics were used during the testing phase of the Shuttle Mission Control Center Upgrade at the NASA Johnson Space Center. All but one metric provided useful information. Based on the experience, it is recommended that metrics be used during the test phase of software development and additional candidate metrics are proposed for further study.

  5. A benchmark for reaction coordinates in the transition path ensemble

    NASA Astrophysics Data System (ADS)

    Li, Wenjin; Ma, Ao

    2016-04-01

    The molecular mechanism of a reaction is embedded in its transition path ensemble, the complete collection of reactive trajectories. Utilizing the information in the transition path ensemble alone, we developed a novel metric, which we termed the emergent potential energy, for distinguishing reaction coordinates from the bath modes. The emergent potential energy can be understood as the average energy cost for making a displacement of a coordinate in the transition path ensemble. Whereas displacing a bath mode incurs essentially no cost, moving the reaction coordinate costs significantly. Based on some general assumptions of the behaviors of reaction and bath coordinates in the transition path ensemble, we proved theoretically with statistical mechanics that the emergent potential energy could serve as a benchmark of reaction coordinates and demonstrated its effectiveness by applying it to a prototypical system of biomolecular dynamics. Using the emergent potential energy as guidance, we developed a committor-free and intuition-independent method for identifying reaction coordinates in complex systems. We expect this method to be applicable to a wide range of reaction processes in complex biomolecular systems.

  6. Benchmarking Global Food Safety Performances: The Era of Risk Intelligence.

    PubMed

    Le Vallée, Jean-Charles; Charlebois, Sylvain

    2015-10-01

    Food safety data segmentation and limitations hamper the world's ability to select, build up, monitor, and evaluate food safety performance. Currently, there is no metric that captures the entire food safety system, and performance data are not collected strategically on a global scale. Therefore, food safety benchmarking is essential not only to help monitor ongoing performance but also to inform continued food safety system design, adoption, and implementation toward more efficient and effective food safety preparedness, responsiveness, and accountability. This comparative study identifies and evaluates common elements among global food safety systems. It provides an overall world ranking of food safety performance for 17 Organisation for Economic Co-Operation and Development (OECD) countries, illustrated by 10 indicators organized across three food safety risk governance domains: risk assessment (chemical risks, microbial risks, and national reporting on food consumption), risk management (national food safety capacities, food recalls, food traceability, and radionuclides standards), and risk communication (allergenic risks, labeling, and public trust). Results show all countries have very high food safety standards, but Canada and Ireland, followed by France, earned excellent grades relative to their peers. However, any subsequent global ranking study should consider the development of survey instruments to gather adequate and comparable national evidence on food safety. PMID:26408141

  7. Towards a physics on fractals: Differential vector calculus in three-dimensional continuum with fractal metric

    NASA Astrophysics Data System (ADS)

    Balankin, Alexander S.; Bory-Reyes, Juan; Shapiro, Michael

    2016-02-01

    One way to deal with physical problems on nowhere-differentiable fractals is to map these problems into the corresponding problems for a continuum with a proper fractal metric. To this end, different definitions of the fractal metric have been suggested to account for the essential fractal features. In this work we develop the metric differential vector calculus in a three-dimensional continuum with a non-Euclidean metric. The metric differential forms and Laplacian are introduced, fundamental identities for metric differential operators are established and integral theorems are proved by employing the metric version of the quaternionic analysis for the Moisil-Teodoresco operator, which has been introduced and partially developed in this paper. The relations between the metric and conventional operators are revealed. It should be emphasized that the metric vector calculus developed in this work provides a comprehensive mathematical formalism for the continuum with any suitable definition of fractal metric. This offers a novel tool to study physics on fractals.

  8. Sensor to User - NASA/EOS Data for Coastal Zone Management Applications Developed from Integrated Analyses: Verification, Validation and Benchmark Report

    NASA Technical Reports Server (NTRS)

    Hall, Callie; Arnone, Robert

    2006-01-01

    The NASA Applied Sciences Program seeks to transfer NASA data, models, and knowledge into the hands of end-users by forming links with partner agencies and associated decision support tools (DSTs). Through the NASA REASoN (Research, Education and Applications Solutions Network) Cooperative Agreement, the Oceanography Division of the Naval Research Laboratory (NRLSSC) is developing new products through the integration of data from NASA Earth-Sun System assets with coastal ocean forecast models and other available data to enhance coastal management in the Gulf of Mexico. The recipient federal agency for this research effort is the National Oceanic and Atmospheric Administration (NOAA). The contents of this report detail the effort to further the goals of the NASA Applied Sciences Program by demonstrating the use of NASA satellite products combined with data-assimilating ocean models to provide near real-time information to maritime users and coastal managers of the Gulf of Mexico. This effort provides new and improved capabilities for monitoring, assessing, and predicting the coastal environment. Coastal managers can exploit these capabilities through enhanced DSTs at federal, state and local agencies. The project addresses three major issues facing coastal managers: 1) Harmful Algal Blooms (HABs); 2) hypoxia; and 3) freshwater fluxes to the coastal ocean. A suite of ocean products capable of describing Ocean Weather is assembled on a daily basis as the foundation for this semi-operational multiyear effort. This continuous realtime capability brings decision makers a new ability to monitor both normal and anomalous coastal ocean conditions with a steady flow of satellite and ocean model conditions. Furthermore, as the baseline data sets are used more extensively and the customer list increased, customer feedback is obtained and additional customized products are developed and provided to decision makers. Continual customer feedback and response with new improved

  9. Evaluation metrics for bone segmentation in ultrasound

    NASA Astrophysics Data System (ADS)

    Lougheed, Matthew; Fichtinger, Gabor; Ungi, Tamas

    2015-03-01

    Tracked ultrasound is a safe alternative to X-ray for imaging bones. The interpretation of bony structures is challenging as ultrasound has no specific intensity characteristic of bones. Several image segmentation algorithms have been devised to identify bony structures. We propose an open-source framework that would aid in the development and comparison of such algorithms by quantitatively measuring segmentation performance in the ultrasound images. True-positive and false-negative metrics used in the framework quantify algorithm performance based on correctly segmented bone and correctly segmented boneless regions. Ground truth for these metrics is defined manually and, along with the corresponding automatically segmented image, is used for the performance analysis. Manually created ground truth tests were generated to verify the accuracy of the analysis. Further evaluation metrics for determining average performance per slice and its standard deviation are considered. The metrics provide a means of evaluating accuracy of frames along the length of a volume. This would aid in assessing the accuracy of the volume itself and the approach to image acquisition (positioning and frequency of frames). The framework was implemented as an open-source module of the 3D Slicer platform. The ground truth tests verified that the framework correctly calculates the implemented metrics. The developed framework provides a convenient way to evaluate bone segmentation algorithms. The implementation fits in a widely used application for segmentation algorithm prototyping. Future algorithm development will benefit from monitoring the effects of adjustments to an algorithm in a standard evaluation framework.
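
    The true-positive and false-negative metrics described above compare an automatic segmentation against a manually defined ground truth, frame by frame. A generic sketch of that comparison (not the 3D Slicer module's actual code; function and variable names are invented here) might look like this:

```python
import numpy as np

def bone_segmentation_scores(ground_truth, segmented):
    """Per-frame true-positive and false-negative rates for a bone mask.

    ground_truth, segmented -- boolean arrays of the same shape, where True
    marks pixels labelled as bone.
    """
    gt = np.asarray(ground_truth, dtype=bool)
    seg = np.asarray(segmented, dtype=bool)
    tp = np.logical_and(gt, seg).sum()     # bone pixels correctly found
    fn = np.logical_and(gt, ~seg).sum()    # bone pixels missed
    bone_pixels = gt.sum()
    tp_rate = tp / bone_pixels if bone_pixels else 1.0
    fn_rate = fn / bone_pixels if bone_pixels else 0.0
    return tp_rate, fn_rate

gt = np.zeros((4, 4), dtype=bool); gt[1:3, 1:3] = True     # 4 bone pixels
seg = np.zeros((4, 4), dtype=bool); seg[1:3, 1:2] = True   # 2 of them found
print(bone_segmentation_scores(gt, seg))                   # (0.5, 0.5)
```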

  10. Metrics for the NASA Airspace Systems Program

    NASA Technical Reports Server (NTRS)

    Smith, Jeremy C.; Neitzke, Kurt W.

    2009-01-01

    This document defines an initial set of metrics for use by the NASA Airspace Systems Program (ASP). ASP consists of the NextGen-Airspace Project and the NextGen-Airportal Project. The work in each project is organized along multiple, discipline-level Research Focus Areas (RFAs). Each RFA is developing future concept elements in support of the Next Generation Air Transportation System (NextGen), as defined by the Joint Planning and Development Office (JPDO). In addition, a single, system-level RFA is responsible for integrating concept elements across RFAs in both projects and for assessing system-wide benefits. The primary purpose of this document is to define a common set of metrics for measuring National Airspace System (NAS) performance before and after the introduction of ASP-developed concepts for NextGen as the system handles increasing traffic. The metrics are directly traceable to NextGen goals and objectives as defined by the JPDO and hence will be used to measure the progress of ASP research toward reaching those goals. The scope of this document is focused on defining a common set of metrics for measuring NAS capacity, efficiency, robustness, and safety at the system-level and at the RFA-level. Use of common metrics will focus ASP research toward achieving system-level performance goals and objectives and enable the discipline-level RFAs to evaluate the impact of their concepts at the system level.

  11. Optical metrics and projective equivalence

    SciTech Connect

    Casey, Stephen; Dunajski, Maciej; Gibbons, Gary; Warnick, Claude

    2011-04-15

    Trajectories of light rays in a static spacetime are described by unparametrized geodesics of the Riemannian optical metric associated with the Lorentzian spacetime metric. We investigate the uniqueness of this structure and demonstrate that two different observers, moving relative to one another, who both see the Universe as static may determine the geometry of the light rays differently. More specifically, we classify Lorentzian metrics admitting more than one hyper-surface orthogonal timelike Killing vector and analyze the projective equivalence of the resulting optical metrics. These metrics are shown to be projectively equivalent up to diffeomorphism if the static Killing vectors generate a group SL(2,R), but not projectively equivalent in general. We also consider the cosmological C metrics in Einstein-Maxwell theory and demonstrate that optical metrics corresponding to different values of the cosmological constant are projectively equivalent.
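
    For a static spacetime, the optical (Fermat) metric referred to above is conventionally obtained by rescaling the spatial metric by the square of the lapse; in the notation below (an assumption, since the paper's own conventions are not reproduced here), light rays are unparametrized geodesics of h_ij:

```latex
ds^2 = -V^2(x)\,dt^2 + g_{ij}(x)\,dx^i dx^j
\qquad\Longrightarrow\qquad
h_{ij} = \frac{g_{ij}}{V^{2}} .
```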

  12. Building a Metric

    NASA Technical Reports Server (NTRS)

    Spencer, Shakira

    2007-01-01

    The Launch Services Program is a Kennedy Space Center based program whose job is to undertake all the roles required to successfully launch Expendable Launch Vehicles. This project was designed to help the Launch Services Program accurately report how successful it has been at launching missions on time (within +/- 2 days of the scheduled launch date) and, when launches were not on time, why. This information will be displayed in the form of a metric that answers these questions in a clear and accurate way.
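
    The on-time launch metric described above reduces to counting launches within the +/- 2 day window and recording reasons for the rest. A minimal sketch with invented field names and hypothetical dates:

```python
from datetime import date

def on_time_metric(launches, window_days=2):
    """Fraction of launches occurring within +/- window_days of schedule.

    launches -- list of (scheduled_date, actual_date, reason) tuples; the
    reason field supports the 'why not' part of the metric.
    """
    late = [(s, a, why) for s, a, why in launches
            if abs((a - s).days) > window_days]
    on_time = 1.0 - len(late) / len(launches)
    return on_time, late

history = [
    (date(2007, 3, 1), date(2007, 3, 2), ""),
    (date(2007, 6, 10), date(2007, 6, 18), "range conflict"),  # hypothetical
    (date(2007, 9, 5), date(2007, 9, 5), ""),
]
score, slips = on_time_metric(history)
print(f"{score:.0%} on time")   # 67% on time
for s, a, why in slips:
    print("slipped", (a - s).days, "days:", why)
```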

  13. SI (Metric) handbook

    NASA Technical Reports Server (NTRS)

    Artusa, Elisa A.

    1994-01-01

    This guide provides information for an understanding of SI units, symbols, and prefixes; style and usage in documentation in both the US and in the international business community; conversion techniques; limits, fits, and tolerance data; and drawing and technical writing guidelines. Also provided is information on SI usage for specialized applications like data processing and computer programming, science, engineering, and construction. Related information in the appendixes includes legislative documents, historical and biographical data, a list of metric documentation, rules for determining significant digits and rounding, conversion factors, shorthand notation, and a unit index.

  14. The software product assurance metrics study: JPL's software systems quality and productivity

    NASA Technical Reports Server (NTRS)

    Bush, Marilyn W.

    1989-01-01

    The findings are reported of the Jet Propulsion Laboratory (JPL)/Software Product Assurance (SPA) Metrics Study, conducted as part of a larger JPL effort to improve software quality and productivity. Until recently, no comprehensive data had been assembled on how JPL manages and develops software-intensive systems. The first objective was to collect data on software development from as many projects and for as many years as possible. Results from five projects are discussed. These results reflect 15 years of JPL software development, representing over 100 data points (systems and subsystems), over a third of a billion dollars, over four million lines of code and 28,000 person months. Analysis of this data provides a benchmark for gauging the effectiveness of past, present and future software development work. In addition, the study is meant to encourage projects to record existing metrics data and to gather future data. The SPA long term goal is to integrate the collection of historical data and ongoing project data with future project estimations.

  15. Shielding Integral Benchmark Archive and Database (SINBAD)

    SciTech Connect

    Kirk, Bernadette Lugue; Grove, Robert E; Kodeli, I.; Sartori, Enrico; Gulliford, J.

    2011-01-01

    The Shielding Integral Benchmark Archive and Database (SINBAD) collection of benchmarks was initiated in the early 1990s. SINBAD is an international collaboration between the Organization for Economic Cooperation and Development's Nuclear Energy Agency Data Bank (OECD/NEADB) and the Radiation Safety Information Computational Center (RSICC) at Oak Ridge National Laboratory (ORNL). SINBAD is a major attempt to compile experiments and corresponding computational models with the goal of preserving institutional knowledge and expertise that need to be handed down to future scientists. SINBAD is also a learning tool for university students and scientists who need to design experiments or gain expertise in modeling and simulation. The SINBAD database is currently divided into three categories: fission, fusion, and accelerator benchmarks. Where possible, each experiment is described and analyzed using deterministic or probabilistic (Monte Carlo) radiation transport software.

  16. Energy benchmarking of South Australian WWTPs.

    PubMed

    Krampe, J

    2013-01-01

    Optimising the energy consumption and energy generation of wastewater treatment plants (WWTPs) is a topic with increasing importance for water utilities in times of rising energy costs and pressures to reduce greenhouse gas (GHG) emissions. Assessing the energy efficiency and energy optimisation of a WWTP are difficult tasks as most plants vary greatly in size, process layout and other influencing factors. To overcome these limits it is necessary to compare energy efficiency with a statistically relevant base to identify shortfalls and optimisation potential. Such energy benchmarks have been successfully developed and used in central Europe over the last two decades. This paper demonstrates how the latest available energy benchmarks from Germany have been applied to 24 WWTPs in South Australia. It shows how energy benchmarking can be used to identify shortfalls in current performance, prioritise detailed energy assessments and help inform decisions on capital investment. PMID:23656950
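
    The German-style energy benchmarks referred to above typically normalise a plant's annual electricity consumption by its load, e.g. kilowatt-hours per population equivalent and year; the exact normalisation used for the South Australian comparison is an assumption here. A minimal sketch with hypothetical plant data:

```python
def specific_energy_kwh_per_pe(total_kwh_per_year, population_equivalent):
    """Specific energy demand in kWh per population equivalent and year,
    the kind of normalised figure used in central European benchmarks
    (assumed normalisation, illustrative only)."""
    return total_kwh_per_year / population_equivalent

plants = {"Plant A": (2.1e6, 60000), "Plant B": (9.5e5, 18000)}  # hypothetical
for name, (kwh, pe) in plants.items():
    print(name, round(specific_energy_kwh_per_pe(kwh, pe), 1), "kWh/(PE*a)")
```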

  17. Standardized benchmarking in the quest for orthologs.

    PubMed

    Altenhoff, Adrian M; Boeckmann, Brigitte; Capella-Gutierrez, Salvador; Dalquen, Daniel A; DeLuca, Todd; Forslund, Kristoffer; Huerta-Cepas, Jaime; Linard, Benjamin; Pereira, Cécile; Pryszcz, Leszek P; Schreiber, Fabian; da Silva, Alan Sousa; Szklarczyk, Damian; Train, Clément-Marie; Bork, Peer; Lecompte, Odile; von Mering, Christian; Xenarios, Ioannis; Sjölander, Kimmen; Jensen, Lars Juhl; Martin, Maria J; Muffato, Matthieu; Gabaldón, Toni; Lewis, Suzanna E; Thomas, Paul D; Sonnhammer, Erik; Dessimoz, Christophe

    2016-05-01

    Achieving high accuracy in orthology inference is essential for many comparative, evolutionary and functional genomic analyses, yet the true evolutionary history of genes is generally unknown and orthologs are used for very different applications across phyla, requiring different precision-recall trade-offs. As a result, it is difficult to assess the performance of orthology inference methods. Here, we present a community effort to establish standards and an automated web-based service to facilitate orthology benchmarking. Using this service, we characterize 15 well-established inference methods and resources on a battery of 20 different benchmarks. Standardized benchmarking provides a way for users to identify the most effective methods for the problem at hand, sets a minimum requirement for new tools and resources, and guides the development of more accurate orthology inference methods. PMID:27043882
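
    The precision-recall trade-off mentioned above can be illustrated by scoring predicted ortholog pairs against a reference set. This is a generic sketch, not the benchmark service's scoring code; gene names and sets are invented:

```python
def precision_recall(predicted_pairs, true_pairs):
    """Precision and recall of predicted ortholog pairs against a reference.

    Both arguments are sets of frozensets {geneA, geneB}.
    """
    tp = len(predicted_pairs & true_pairs)
    precision = tp / len(predicted_pairs) if predicted_pairs else 0.0
    recall = tp / len(true_pairs) if true_pairs else 0.0
    return precision, recall

truth = {frozenset(p) for p in [("hsA", "mmA"), ("hsB", "mmB"), ("hsC", "mmC")]}
calls = {frozenset(p) for p in [("hsA", "mmA"), ("hsB", "mmX")]}
print(precision_recall(calls, truth))   # (0.5, 0.333...)
```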

  18. Benchmarks for the point kinetics equations

    SciTech Connect

    Ganapol, B.; Picca, P.; Previti, A.; Mostacci, D.

    2013-07-01

    A new numerical algorithm is presented for the solution to the point kinetics equations (PKEs), whose accurate solution has been sought for over 60 years. The method couples the simplest of finite difference methods, backward Euler, with Richardson extrapolation, also known as acceleration. From this coupling, a series of benchmarks has emerged. These include cases from the literature as well as several new ones. The novelty of this presentation lies in the breadth of reactivity insertions considered, covering both prescribed and feedback reactivities, and the extreme 8- to 9-digit accuracy achievable. The benchmarks presented are to provide guidance to those who wish to develop further numerical improvements. (authors)
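
    The algorithm described above couples backward Euler with Richardson extrapolation. The sketch below illustrates that coupling on a one-delayed-group point kinetics problem with assumed parameter values; it is not the authors' code and omits feedback reactivity:

```python
import numpy as np

# One-delayed-group point kinetics, used only to illustrate the
# backward-Euler-plus-Richardson idea (parameter values are assumptions).
BETA, LAMBDA_GEN, LAM = 0.0065, 1.0e-4, 0.08   # beta, Lambda, decay constant

def be_step(y, rho, h):
    """One backward Euler step: solve (I - h*A) y_new = y for the linear PKE."""
    A = np.array([[(rho - BETA) / LAMBDA_GEN, LAM],
                  [BETA / LAMBDA_GEN,        -LAM]])
    return np.linalg.solve(np.eye(2) - h * A, y)

def richardson_step(y, rho, h):
    """Combine one h step with two h/2 steps; cancels the O(h) error term."""
    coarse = be_step(y, rho, h)
    fine = be_step(be_step(y, rho, h / 2), rho, h / 2)
    return 2.0 * fine - coarse

# Equilibrium initial condition, then a 0.1 dollar step reactivity insertion
y = np.array([1.0, BETA / (LAMBDA_GEN * LAM)])
t, h, rho = 0.0, 1.0e-3, 0.1 * BETA
while t < 1.0:
    y = richardson_step(y, rho, h)
    t += h
print("n(1 s) ~", round(y[0], 4))
```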

  19. Benchmarking: implementing the process in practice.

    PubMed

    Stark, Sheila; MacHale, Anita; Lennon, Eileen; Shaw, Lynne

    Government guidance and policy promotes the use of benchmarks as measures against which practice and care can be measured. This provides the motivation for practitioners to make changes to improve patient care. Adopting a systematic approach, practitioners can implement changes in practice quickly. The process requires motivation and communication between professionals of all disciplines. It provides a forum for sharing good practice and developing a support network. In this article the authors outline the initial steps taken by three PCGs in implementing the benchmarking process as they move towards primary care trust status. PMID:12212335

  20. Evaluation metrics for biostatistical and epidemiological collaborations.

    PubMed

    Rubio, Doris McGartland; Del Junco, Deborah J; Bhore, Rafia; Lindsell, Christopher J; Oster, Robert A; Wittkowski, Knut M; Welty, Leah J; Li, Yi-Ju; Demets, Dave

    2011-10-15

    Increasing demands for evidence-based medicine and for the translation of biomedical research into individual and public health benefit have been accompanied by the proliferation of special units that offer expertise in biostatistics, epidemiology, and research design (BERD) within academic health centers. Objective metrics that can be used to evaluate, track, and improve the performance of these BERD units are critical to their successful establishment and sustainable future. To develop a set of reliable but versatile metrics that can be adapted easily to different environments and evolving needs, we consulted with members of BERD units from the consortium of academic health centers funded by the Clinical and Translational Science Award Program of the National Institutes of Health. Through a systematic process of consensus building and document drafting, we formulated metrics that covered the three identified domains of BERD practices: the development and maintenance of collaborations with clinical and translational science investigators, the application of BERD-related methods to clinical and translational research, and the discovery of novel BERD-related methodologies. In this article, we describe the set of metrics and advocate their use for evaluating BERD practices. The routine application, comparison of findings across diverse BERD units, and ongoing refinement of the metrics will identify trends, facilitate meaningful changes, and ultimately enhance the contribution of BERD activities to biomedical research. PMID:21284015

  1. Metrics for assessing improvements in primary health care.

    PubMed

    Stange, Kurt C; Etz, Rebecca S; Gullett, Heidi; Sweeney, Sarah A; Miller, William L; Jaén, Carlos Roberto; Crabtree, Benjamin F; Nutting, Paul A; Glasgow, Russell E

    2014-01-01

    Metrics focus attention on what is important. Balanced metrics of primary health care inform purpose and aspiration as well as performance. Purpose in primary health care is about improving the health of people and populations in their community contexts. It is informed by metrics that include long-term, meaning- and relationship-focused perspectives. Aspirational uses of metrics inspire evolving insights and iterative improvement, using a collaborative, developmental perspective. Performance metrics assess the complex interactions among primary care tenets of accessibility, a whole-person focus, integration and coordination of care, and ongoing relationships with individuals, families, and communities; primary health care principles of inclusion and equity, a focus on people's needs, multilevel integration of health, collaborative policy dialogue, and stakeholder participation; basic and goal-directed health care, prioritization, development, and multilevel health outcomes. Environments that support reflection, development, and collaborative action are necessary for metrics to advance health and minimize unintended consequences. PMID:24641561

  2. Benchmarking emerging logic devices

    NASA Astrophysics Data System (ADS)

    Nikonov, Dmitri

    2014-03-01

    As complementary metal-oxide-semiconductor field-effect transistors (CMOS FET) are being scaled to ever smaller sizes by the semiconductor industry, the demand is growing for emerging logic devices to supplement CMOS in various special functions. Research directions and concepts of such devices are overviewed. They include tunneling, graphene based, spintronic devices etc. The methodology to estimate future performance of emerging (beyond CMOS) devices and simple logic circuits based on them is explained. Results of benchmarking are used to identify more promising concepts and to map pathways for improvement of beyond CMOS computing.

  3. Pure Lovelock Kasner metrics

    NASA Astrophysics Data System (ADS)

    Camanho, Xián O.; Dadhich, Naresh; Molina, Alfred

    2015-09-01

    We study pure Lovelock vacuum and perfect fluid equations for Kasner-type metrics. These equations correspond to a single Nth order Lovelock term in the action in d=2N+1,2N+2 dimensions, and they capture the relevant gravitational dynamics when approaching the big-bang singularity within the Lovelock family of theories. Pure Lovelock gravity also bears out the general feature that vacuum in the critical odd dimension, d=2N+1, is kinematic, i.e. we may define an analogue Lovelock-Riemann tensor that vanishes in vacuum for d=2N+1, yet the Riemann curvature is non-zero. We completely classify isotropic and vacuum Kasner metrics for this class of theories in several isotropy types. The different families can be characterized by means of certain higher-order 4th-rank tensors. We also analyze in detail the space of vacuum solutions for five- and six-dimensional pure Gauss-Bonnet theory. It possesses an interesting and illuminating geometric structure and symmetries that carry over to the general case. We also comment on a closely related family of exponential solutions and on the possibility of solutions with complex Kasner exponents. We show that the latter imply the existence of closed timelike curves in the geometry.
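
    For reference, the Kasner-type ansatz studied above is of the familiar form below, written here in standard notation; the Einstein-gravity vacuum conditions on the exponents are shown for orientation, and the pure Lovelock analysis generalizes them:

```latex
ds^2 = -dt^2 + \sum_{i=1}^{d-1} t^{2p_i}\,(dx^i)^2 ,
\qquad
\text{Einstein vacuum: } \sum_i p_i = \sum_i p_i^2 = 1 .
```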

  4. Benchmarking Multipacting Simulations in VORPAL

    SciTech Connect

    C. Nieter, C. Roark, P. Stoltz, K. Tian

    2009-05-01

    We will present the results of benchmarking simulations run to test the ability of VORPAL to model multipacting processes in Superconducting Radio Frequency structures. VORPAL is an electromagnetic (FDTD) particle-in-cell simulation code originally developed for applications in plasma and beam physics. The addition of conformal boundaries and algorithms for secondary electron emission allow VORPAL to be applied to multipacting processes. We start with simulations of multipacting between parallel plates where there are well understood theoretical predictions for the frequency bands where multipacting is expected to occur. We reproduce the predicted multipacting bands and demonstrate departures from the theoretical predictions when a more sophisticated model of secondary emission is used. Simulations of existing cavity structures developed at Jefferson National Laboratories will also be presented where we compare results from VORPAL to experimental data.

  5. The LSST Metrics Analysis Framework (MAF)

    NASA Astrophysics Data System (ADS)

    Jones, R. Lynne; Yoachim, Peter; Chandrasekharan, Srinivasan; Connolly, Andrew J.; Cook, Kem H.; Ivezic, Zeljko; Krughoff, K. Simon; Petry, Catherine E.; Ridgway, Stephen T.

    2015-01-01

    Studying potential observing strategies or cadences for the Large Synoptic Survey Telescope (LSST) is a complicated but important problem. To address this, LSST has created an Operations Simulator (OpSim) to create simulated surveys, including realistic weather and sky conditions. Analyzing the results of these simulated surveys for the wide variety of science cases to be considered for LSST is, however, difficult. We have created a Metric Analysis Framework (MAF), an open-source python framework, to be a user-friendly, customizable and easily extensible tool to help analyze the outputs of the OpSim. MAF reads the pointing history of the LSST generated by the OpSim, then enables the subdivision of these pointings based on position on the sky (RA/Dec, etc.) or the characteristics of the observations (e.g. airmass or sky brightness) and a calculation of how well these observations meet a specified science objective (or metric). An example simple metric could be the mean single visit limiting magnitude for each position in the sky; a more complex metric might be the expected astrometric precision. The output of these metrics can be generated for a full survey, for specified time intervals, or for regions of the sky, and can be easily visualized using a web interface. An important goal for MAF is to facilitate analysis of the OpSim outputs for a wide variety of science cases. A user can often write a new metric to evaluate OpSim for new science goals in less than a day once they are familiar with the framework. Some of these new metrics are illustrated in the accompanying poster, "Analyzing Simulated LSST Survey Performance With MAF". While MAF has been developed primarily for application to OpSim outputs, it can be applied to any dataset. The most obvious examples are examining pointing histories of other survey projects or telescopes, such as CFHT.
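
    The slice-then-compute pattern described above — divide the pointing history by sky position or observation properties, then evaluate a metric on each subset — can be mimicked in a few lines. The sketch below is a toy stand-in with invented column names and a crude RA/Dec gridding, not the MAF API:

```python
import numpy as np

# A toy pointing history: (ra_deg, dec_deg, five-sigma limiting magnitude)
pointings = np.array([
    (10.1, -30.2, 24.1),
    (10.3, -30.1, 24.4),
    (85.0,  -5.2, 23.8),
    (85.2,  -5.1, 24.0),
], dtype=[("ra", float), ("dec", float), ("m5", float)])

def slice_by_cell(data, cell_deg=1.0):
    """Group pointings into crude ra/dec cells (a stand-in for sky slicing)."""
    cells = {}
    for row in data:
        key = (int(row["ra"] // cell_deg), int(row["dec"] // cell_deg))
        cells.setdefault(key, []).append(row["m5"])
    return cells

def mean_depth_metric(cells):
    """Example metric: mean single-visit limiting magnitude per sky cell."""
    return {cell: float(np.mean(vals)) for cell, vals in cells.items()}

print(mean_depth_metric(slice_by_cell(pointings)))
```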

  6. Measurable Control System Security through Ideal Driven Technical Metrics

    SciTech Connect

    Miles McQueen; Wayne Boyer; Sean McBride; Marie Farrar; Zachary Tudor

    2008-01-01

    The Department of Homeland Security National Cyber Security Division supported development of a small set of security ideals as a framework to establish measurable control systems security. Based on these ideals, a draft set of proposed technical metrics was developed to allow control systems owner-operators to track improvements or degradations in their individual control systems security posture. The technical metrics development effort included review and evaluation of over thirty metrics-related documents. On the basis of complexity, ambiguity, or misleading and distorting effects, the metrics identified during the reviews were determined to be weaker than necessary to aid defense against the myriad threats posed by cyber-terrorism to human safety, as well as to economic prosperity. Using the results of our metrics review and the set of security ideals as a starting point for metrics development, we identified thirteen potential technical metrics - with at least one metric supporting each ideal. Two case study applications of the ideals and thirteen metrics to control systems were then performed to establish potential difficulties in applying both the ideals and the metrics. The case studies resulted in no changes to the ideals, and only a few deletions and refinements to the thirteen potential metrics. This led to a final proposed set of ten core technical metrics. To further validate the security ideals, the modifications made to the original thirteen potential metrics, and the final proposed set of ten core metrics, seven separate control systems security assessments performed over the past three years were reviewed for findings and recommended mitigations. These findings and mitigations were then mapped to the security ideals and metrics to assess gaps in their coverage. The mappings indicated that there are no gaps in the security ideals and that the ten core technical metrics provide significant coverage of standard security issues with 87% coverage. Based

  7. Benchmarking Year Five Students' Reading Abilities

    ERIC Educational Resources Information Center

    Lim, Chang Kuan; Eng, Lin Siew; Mohamed, Abdul Rashid

    2014-01-01

    Reading and understanding a written text is one of the most important skills in English learning. This study attempts to benchmark Year Five students' reading abilities of fifteen rural schools in a district in Malaysia. The objectives of this study are to develop a set of standardised written reading comprehension and a set of indicators to inform…

  8. Core Benchmarks Descriptions

    SciTech Connect

    Pavlovichev, A.M.

    2001-05-24

    Current regulations for the design of new fuel cycles for nuclear power installations require a calculational justification performed with certified computer codes. This guarantees that the calculational results obtained will be within the limits of the declared uncertainties indicated in a certificate issued by Gosatomnadzor of the Russian Federation (GAN) for the corresponding computer code. A formal justification of the declared uncertainties is the comparison of calculational results obtained with a commercial code against the results of experiments, or of calculational tests computed with a defined uncertainty by certified precision codes of the MCU type or similar. The current level of international cooperation is enlarging the bank of experimental and calculational benchmarks acceptable for certifying commercial codes used in the design of fuel loadings with MOX fuel. In particular, work is practically complete on forming the list of calculational benchmarks for certification of the TVS-M code as applied to MOX fuel assembly calculations. The results of these activities are presented.

  9. [Clinical trial data management and quality metrics system].

    PubMed

    Chen, Zhao-hua; Huang, Qin; Deng, Ya-zhong; Zhang, Yue; Xu, Yu; Yu, Hao; Liu, Zong-fan

    2015-11-01

    A data quality management system is essential to ensure accurate, complete, consistent, and reliable data collection in clinical research. This paper is devoted to various choices of data quality metrics. They are categorized by study status, e.g. study start-up, conduct, and close-out. In each category, metrics for different purposes are listed according to ALCOA+ principles such as completeness, accuracy, timeliness, and traceability. Some frequently used general quality metrics are also introduced. The paper provides as much detail as possible for each metric, including its definition, purpose, evaluation, referenced benchmark, and recommended target, in support of real-world practice. It is important that sponsors and data management service providers establish a robust, integrated clinical trial data quality management system to ensure sustainably high quality of clinical trial deliverables. Such a system will also support enterprise-level data evaluation and benchmarking of data quality across projects, sponsors, and data management service providers, using objective metrics from real clinical trials. We hope this will be a significant input toward accelerating the improvement of clinical trial data quality in the industry. PMID:26911027
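
    A minimal sketch of one conduct-phase metric of the kind cataloged above, query resolution timeliness (fraction of data queries closed within a target window). The 14-day target and the record fields are illustrative assumptions, not values from the paper.

        # Illustrative conduct-phase data quality metric: query resolution timeliness.
        # The 14-day target and field names are assumptions, not from the paper.
        from dataclasses import dataclass
        from datetime import date
        from typing import List, Optional

        @dataclass
        class Query:
            opened: date
            closed: Optional[date]  # None means the query is still open

        def query_timeliness(queries: List[Query], target_days: int = 14) -> float:
            """Fraction of queries resolved within `target_days` of being opened."""
            if not queries:
                return float("nan")
            on_time = sum(
                1 for q in queries
                if q.closed is not None and (q.closed - q.opened).days <= target_days
            )
            return on_time / len(queries)

        queries = [
            Query(date(2015, 3, 1), date(2015, 3, 10)),   # closed in 9 days
            Query(date(2015, 3, 5), date(2015, 4, 2)),    # closed in 28 days
            Query(date(2015, 3, 20), None),               # still open
        ]
        print(f"on-time query resolution: {query_timeliness(queries):.0%}")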

  10. Handbook of aircraft noise metrics

    NASA Astrophysics Data System (ADS)

    Bennett, R. L.; Pearsons, K. S.

    1981-03-01

    Information is presented on 22 noise metrics that are associated with the measurement and prediction of the effects of aircraft noise. Some of the instantaneous frequency weighted sound level measures, such as A-weighted sound level, are used to provide multiple assessment of the aircraft noise level. Other multiple event metrics, such as day-night average sound level, were designed to relate sound levels measured over a period of time to subjective responses in an effort to determine compatible land uses and aid in community planning. The various measures are divided into: (1) instantaneous sound level metrics; (2) duration corrected single event metrics; (3) multiple event metrics; and (4) speech communication metrics. The scope of each measure is examined in terms of its: definition, purpose, background, relationship to other measures, calculation method, example, equipment, references, and standards.
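
    As a concrete example of one of the multiple-event metrics described above, a sketch of the day-night average sound level (DNL) calculation: an energy average of 24 hourly levels with a 10 dB penalty applied to nighttime hours (22:00-07:00). The hourly levels below are invented for illustration.

        # Day-night average sound level (DNL/Ldn) from 24 hourly equivalent levels,
        # with a 10 dB nighttime penalty.  Hourly values are invented placeholders.
        import math

        def day_night_level(hourly_leq_db):
            """DNL from 24 hourly equivalent sound levels (index 0 = midnight hour)."""
            assert len(hourly_leq_db) == 24
            total = 0.0
            for hour, level in enumerate(hourly_leq_db):
                penalty = 10.0 if (hour >= 22 or hour < 7) else 0.0
                total += 10.0 ** ((level + penalty) / 10.0)
            return 10.0 * math.log10(total / 24.0)

        hourly = [55] * 7 + [65] * 15 + [58] * 2  # quiet night, busier day (dB)
        print(f"DNL = {day_night_level(hourly):.1f} dB")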

  11. Handbook of aircraft noise metrics

    NASA Technical Reports Server (NTRS)

    Bennett, R. L.; Pearsons, K. S.

    1981-01-01

    Information is presented on 22 noise metrics that are associated with the measurement and prediction of the effects of aircraft noise. Some of the instantaneous frequency weighted sound level measures, such as A-weighted sound level, are used to provide multiple assessment of the aircraft noise level. Other multiple event metrics, such as day-night average sound level, were designed to relate sound levels measured over a period of time to subjective responses in an effort to determine compatible land uses and aid in community planning. The various measures are divided into: (1) instantaneous sound level metrics; (2) duration corrected single event metrics; (3) multiple event metrics; and (4) speech communication metrics. The scope of each measure is examined in terms of its: definition, purpose, background, relationship to other measures, calculation method, example, equipment, references, and standards.

  12. Characterizing Hurricane Tracks Using Multiple Statistical Metrics

    NASA Astrophysics Data System (ADS)

    Hui, K. L.; Emanuel, K.; Ravela, S.

    2015-12-01

    Historical tropical cyclone tracks reveal a wide range of shapes and speeds over different ocean basins. However, they have only been accurately recorded in the last few decades, limiting their representativeness to only a subset of possible tracks in a changing large-scale environment. Taking into account various climate conditions, synthetic tracks can be generated to produce a much larger sample of cyclone tracks to understand variability of cyclone activity and assess future changes. To evaluate how well the synthetic tracks capture the characteristics of the historical tracks, several statistical metrics have been developed to characterize and compare their shapes and movements. In one metric, the probability density functions of storm locations are estimated by modeling the position of the storms as a Markov chain. Another metric is constructed to capture the mutual information between two variables such as velocity and curvature. These metrics are then applied to the synthetic and historical tracks to determine if the latter are plausibly a subset of the former. Bootstrap sampling is used in applying the metrics to the synthetic tracks to accurately compare them with the historical tracks given the large sample size difference. If we confirm that the synthetic tracks capture the variability of the historical ones, high confidence intervals can be determined from the much larger set of synthetic tracks to look for highly unusual tracks and to assess their probability of occurrence.
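
    A rough sketch of the mutual-information style metric mentioned above, estimated here by discretizing two track variables (e.g. translation speed and curvature) and computing a plug-in estimate from their joint histogram. The synthetic data and bin count are assumptions for illustration only.

        # Plug-in mutual information estimate between two discretized track variables.
        # Synthetic data; not the authors' actual metric implementation.
        import numpy as np

        def mutual_information(x, y, bins=10):
            """Estimate I(X;Y) in nats from a 2-D histogram."""
            joint, _, _ = np.histogram2d(x, y, bins=bins)
            pxy = joint / joint.sum()
            px = pxy.sum(axis=1, keepdims=True)
            py = pxy.sum(axis=0, keepdims=True)
            nonzero = pxy > 0
            return float(np.sum(pxy[nonzero] * np.log(pxy[nonzero] / (px @ py)[nonzero])))

        rng = np.random.default_rng(1)
        speed = rng.gamma(shape=2.0, scale=3.0, size=5000)            # e.g. m/s
        curvature = 0.1 / (speed + 1.0) + rng.normal(0, 0.01, 5000)   # correlated variable
        print(f"I(speed; curvature) ≈ {mutual_information(speed, curvature):.3f} nats")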

  13. Do-It-Yourself Metrics

    ERIC Educational Resources Information Center

    Klubeck, Martin; Langthorne, Michael; Padgett, Don

    2006-01-01

    Something new is on the horizon, and depending on one's role on campus, it might be storm clouds or a cleansing shower. Either way, no matter how hard one tries to avoid it, sooner rather than later he/she will have to deal with metrics. Metrics do not have to cause fear and resistance. Metrics can, and should, be a powerful tool for improvement.…

  14. Performance metrics for the evaluation of hyperspectral chemical identification systems

    NASA Astrophysics Data System (ADS)

    Truslow, Eric; Golowich, Steven; Manolakis, Dimitris; Ingle, Vinay

    2016-02-01

    Remote sensing of chemical vapor plumes is a difficult but important task for many military and civilian applications. Hyperspectral sensors operating in the long-wave infrared regime have well-demonstrated detection capabilities. However, the identification of a plume's chemical constituents, based on a chemical library, is a multiple hypothesis testing problem which standard detection metrics do not fully describe. We propose using an additional performance metric for identification based on the so-called Dice index. Our approach partitions and weights a confusion matrix to develop both the standard detection metrics and identification metric. Using the proposed metrics, we demonstrate that the intuitive system design of a detector bank followed by an identifier is indeed justified when incorporating performance information beyond the standard detection metrics.
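
    The Dice index itself is the standard overlap measure 2|A∩B|/(|A|+|B|); a minimal sketch applying it to the set of chemicals reported by an identifier versus the set actually present in a plume (the example chemical names are invented):

        # Dice index between the identified chemical set and the true plume contents.
        # Example sets are invented; the partitioned-confusion-matrix weighting of
        # the paper is not reproduced here.
        def dice_index(identified, truth):
            """Dice coefficient 2|A∩B| / (|A| + |B|); 1.0 means perfect agreement."""
            identified, truth = set(identified), set(truth)
            if not identified and not truth:
                return 1.0
            return 2.0 * len(identified & truth) / (len(identified) + len(truth))

        truth = {"SF6", "NH3"}
        identified = {"SF6", "NH3", "CH4"}  # one false alarm
        print(f"Dice index = {dice_index(identified, truth):.2f}")  # 0.80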

  15. The metric system: An introduction

    SciTech Connect

    Lumley, S.M.

    1995-05-01

    On July 13, 1992, Deputy Director Duane Sewell restated the Laboratory's policy on conversion to the metric system which was established in 1974. Sewell's memo announced the Laboratory's intention to continue metric conversion on a reasonable and cost effective basis. Copies of the 1974 and 1992 Administrative Memos are contained in the Appendix. There are three primary reasons behind the Laboratory's conversion to the metric system. First, Public Law 100-418, passed in 1988, states that by the end of fiscal year 1992 the Federal Government must begin using metric units in grants, procurements, and other business transactions. Second, on July 25, 1991, President George Bush signed Executive Order 12770 which urged Federal agencies to expedite conversion to metric units. Third, the contract between the University of California and the Department of Energy calls for the Laboratory to convert to the metric system. Thus, conversion to the metric system is a legal requirement and a contractual mandate with the University of California. Public Law 100-418 and Executive Order 12770 are discussed in more detail later in this section, but first they examine the reasons behind the nation's conversion to the metric system. The second part of this report is on applying the metric system.

  16. The metric system: An introduction

    NASA Astrophysics Data System (ADS)

    Lumley, Susan M.

    On 13 Jul. 1992, Deputy Director Duane Sewell restated the Laboratory's policy on conversion to the metric system which was established in 1974. Sewell's memo announced the Laboratory's intention to continue metric conversion on a reasonable and cost effective basis. Copies of the 1974 and 1992 Administrative Memos are contained in the Appendix. There are three primary reasons behind the Laboratory's conversion to the metric system. First, Public Law 100-418, passed in 1988, states that by the end of fiscal year 1992 the Federal Government must begin using metric units in grants, procurements, and other business transactions. Second, on 25 Jul. 1991, President George Bush signed Executive Order 12770 which urged Federal agencies to expedite conversion to metric units. Third, the contract between the University of California and the Department of Energy calls for the Laboratory to convert to the metric system. Thus, conversion to the metric system is a legal requirement and a contractual mandate with the University of California. Public Law 100-418 and Executive Order 12770 are discussed in more detail later in this section, but first they examine the reasons behind the nation's conversion to the metric system. The second part of this report is on applying the metric system.

  17. Computing and Using Metrics in the ADS

    NASA Astrophysics Data System (ADS)

    Henneken, E. A.; Accomazzi, A.; Kurtz, M. J.; Grant, C. S.; Thompson, D.; Luker, J.; Chyla, R.; Holachek, A.; Murray, S. S.

    2015-04-01

    Finding measures for research impact, be it for individuals, institutions, instruments, or projects, has gained a lot of popularity. There are more papers written than ever on new impact measures, and problems with existing measures are being pointed out on a regular basis. Funding agencies require impact statistics in their reports, job candidates incorporate them in their resumes, and publication metrics have even been used in at least one recent court case. To support this need for research impact indicators, the SAO/NASA Astrophysics Data System (ADS) has developed a service that provides a broad overview of various impact measures. In this paper we discuss how the ADS can be used to quench the thirst for impact measures. We will also discuss a couple of the lesser-known indicators in the metrics overview and the main issues to be aware of when compiling publication-based metrics in the ADS, namely author name ambiguity and citation incompleteness.
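
    As a small, concrete example of the publication-based indicators discussed above, one widely reported measure is the h-index (the largest h such that at least h papers have at least h citations each). The sketch below uses invented citation counts and is not code from the ADS service.

        # h-index from a list of per-paper citation counts (invented numbers).
        def h_index(citation_counts):
            counts = sorted(citation_counts, reverse=True)
            h = 0
            for rank, cites in enumerate(counts, start=1):
                if cites >= rank:
                    h = rank
                else:
                    break
            return h

        print(h_index([42, 17, 9, 6, 3, 1]))  # prints 4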

  18. Software metrics: Software quality metrics for distributed systems. [reliability engineering

    NASA Technical Reports Server (NTRS)

    Post, J. V.

    1981-01-01

    Software quality metrics was extended to cover distributed computer systems. Emphasis is placed on studying embedded computer systems and on viewing them within a system life cycle. The hierarchy of quality factors, criteria, and metrics was maintained. New software quality factors were added, including survivability, expandability, and evolvability.

  19. Implementing the Metric System in Business Occupations. Metric Implementation Guide.

    ERIC Educational Resources Information Center

    Retzer, Kenneth A.; And Others

    Addressed to the business education teacher, this guide is intended to provide appropriate information, viewpoints, and attitudes regarding the metric system and to make suggestions regarding presentation of the material in the classroom. An introductory section on teaching suggestions emphasizes the need for a "think metric" approach made up of…

  20. Performance Metrics Research Project - Final Report

    SciTech Connect

    Deru, M.; Torcellini, P.

    2005-10-01

    NREL began work for DOE on this project to standardize the measurement and characterization of building energy performance. NREL's primary research objectives were to determine which performance metrics have greatest value for determining energy performance and to develop standard definitions and methods of measuring and reporting that performance.

  1. Environmental Decision Support with Consistent Metrics

    EPA Science Inventory

    One of the most effective ways to pursue environmental progress is through the use of consistent metrics within a decision making framework. The US Environmental Protection Agency’s Sustainable Technology Division has developed TRACI, the Tool for the Reduction and Assessment of...

  2. Benchmarking Evaluation Results for Prototype Extravehicular Activity Gloves

    NASA Technical Reports Server (NTRS)

    Aitchison, Lindsay; McFarland, Shane

    2012-01-01

    The Space Suit Assembly (SSA) Development Team at NASA Johnson Space Center has invested heavily in the advancement of rear-entry planetary exploration suit design but largely deferred development of extravehicular activity (EVA) glove designs, and accepted the risk of using the current flight gloves, Phase VI, for unique mission scenarios outside the Space Shuttle and International Space Station (ISS) Program realm of experience. However, as design reference missions mature, the risks of using heritage hardware have highlighted the need for developing robust new glove technologies. To address the technology gap, the NASA Game-Changing Technology group provided start-up funding for the High Performance EVA Glove (HPEG) Project in the spring of 2012. The overarching goal of the HPEG Project is to develop a robust glove design that increases human performance during EVA and creates a pathway for the future implementation of emergent technologies, with specific aims of increasing pressurized mobility to 60% of barehanded capability, increasing durability by 100%, and decreasing the potential of gloves to cause injury during use. The HPEG Project focused initial efforts on identifying potential new technologies and benchmarking the performance of current state-of-the-art gloves to identify trends in design and fit, and to establish standards and metrics against which emerging technologies can be assessed at both the component and assembly levels. The first of the benchmarking tests evaluated the quantitative mobility performance and subjective fit of four prototype gloves developed by Flagsuit LLC, Final Frontier Designs, LLC Dover, and David Clark Company as compared to the Phase VI. All of the companies were asked to design and fabricate gloves to the same set of NASA-provided hand measurements (which corresponded to a single size of Phase VI glove) and focus their efforts on improving mobility in the metacarpal phalangeal and carpometacarpal joints. Four test

  3. Benchmarking foreign electronics technologies

    SciTech Connect

    Bostian, C.W.; Hodges, D.A.; Leachman, R.C.; Sheridan, T.B.; Tsang, W.T.; White, R.M.

    1994-12-01

    This report has been drafted in response to a request from the Japanese Technology Evaluation Center's (JTEC) Panel on Benchmarking Select Technologies. Since April 1991, the Competitive Semiconductor Manufacturing (CSM) Program at the University of California at Berkeley has been engaged in a detailed study of quality, productivity, and competitiveness in semiconductor manufacturing worldwide. The program is a joint activity of the College of Engineering, the Haas School of Business, and the Berkeley Roundtable on the International Economy, under sponsorship of the Alfred P. Sloan Foundation, and with the cooperation of semiconductor producers from Asia, Europe and the United States. Professors David A. Hodges and Robert C. Leachman are the project's Co-Directors. The present report for JTEC is primarily based on data and analysis drawn from that continuing program. The CSM program is being conducted by faculty, graduate students and research staff from UC Berkeley's Schools of Engineering and Business, and Department of Economics. Many of the participating firms are represented on the program's Industry Advisory Board. The Board played an important role in defining the research agenda. A pilot study was conducted in 1991 with the cooperation of three semiconductor plants. The research plan and survey documents were thereby refined. The main phase of the CSM benchmarking study began in mid-1992 and will continue at least through 1997. Reports are presented on the manufacture of integrated circuits; data storage; wireless technology; human-machine interfaces; and optoelectronics. Selected papers are indexed separately for inclusion in the Energy Science and Technology Database.

  4. Internal Benchmarking for Institutional Effectiveness

    ERIC Educational Resources Information Center

    Ronco, Sharron L.

    2012-01-01

    Internal benchmarking is an established practice in business and industry for identifying best in-house practices and disseminating the knowledge about those practices to other groups in the organization. Internal benchmarking can be done with structures, processes, outcomes, or even individuals. In colleges or universities with multicampuses or a…

  5. Benchmarking. A Guide for Educators.

    ERIC Educational Resources Information Center

    Tucker, Sue

    This book offers strategies for enhancing a school's teaching and learning by using benchmarking, a team-research and data-driven process for increasing school effectiveness. Benchmarking enables professionals to study and know their systems and continually improve their practices. The book is designed to lead a team step by step through the…

  6. Best-case performance of quantum annealers on native spin-glass benchmarks: How chaos can affect success probabilities

    NASA Astrophysics Data System (ADS)

    Zhu, Zheng; Ochoa, Andrew J.; Schnabel, Stefan; Hamze, Firas; Katzgraber, Helmut G.

    2016-01-01

    Recent tests performed on the D-Wave Two quantum annealer have revealed no clear evidence of speedup over conventional silicon-based technologies. Here we present results from classical parallel-tempering Monte Carlo simulations combined with isoenergetic cluster moves of the archetypal benchmark problem—an Ising spin glass—on the native chip topology. Using realistic uncorrelated noise models for the D-Wave Two quantum annealer, we study the best-case resilience, i.e., the probability that the ground-state configuration is not affected by random fields and random-bond fluctuations found on the chip. We thus compute classical upper-bound success probabilities for different types of disorder used in the benchmarks and predict that an increase in the number of qubits will require either error correction schemes or a drastic reduction of the intrinsic noise found in these devices. We restrict this study to the exact ground state; however, the approach can be trivially extended to include excited states if the success metric is relaxed. We outline strategies to develop robust as well as hard benchmarks for quantum annealing devices, and for any other (black-box) computing paradigm affected by noise.
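
    A brute-force miniature of the "best-case resilience" idea described above: perturb the couplings of a tiny Ising instance with Gaussian noise and count how often the unperturbed ground-state configuration remains optimal. The random instance, noise level, and exhaustive enumeration over only eight spins are illustrative assumptions, not the authors' parallel-tempering setup.

        # Best-case success probability for a toy Ising instance under coupling noise.
        # Exhaustive enumeration only works at this toy size; parameters are invented.
        import itertools
        import numpy as np

        def ising_energy(spins, J):
            return -0.5 * spins @ J @ spins   # J symmetric, zero diagonal

        def ground_state(J):
            n = J.shape[0]
            best, best_e = None, np.inf
            for bits in itertools.product([-1, 1], repeat=n):
                s = np.array(bits)
                e = ising_energy(s, J)
                if e < best_e - 1e-12:
                    best, best_e = s, e
            return best, best_e

        rng = np.random.default_rng(2)
        n = 8
        J = np.triu(rng.choice([-1.0, 1.0], size=(n, n)), k=1)
        J = J + J.T                              # symmetric +/-1 couplings
        gs, _ = ground_state(J)

        trials, sigma, survived = 200, 0.05, 0
        for _ in range(trials):
            noise = np.triu(rng.normal(0.0, sigma, size=(n, n)), k=1)
            Jp = J + noise + noise.T
            _, e_p = ground_state(Jp)
            # the original configuration "survives" if it is still a ground state
            if np.isclose(ising_energy(gs, Jp), e_p):
                survived += 1

        print(f"best-case success probability ≈ {survived / trials:.2f}")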

  7. A new numerical benchmark of a freshwater lens

    NASA Astrophysics Data System (ADS)

    Stoeckl, L.; Walther, M.; Graf, T.

    2016-04-01

    A numerical benchmark for 2-D variable-density flow and solute transport in a freshwater lens is presented. The benchmark is based on results of laboratory experiments conducted by Stoeckl and Houben (2012) using a sand tank on the meter scale. This benchmark describes the formation and degradation of a freshwater lens over time as it can be found under real-world islands. An error analysis gave the appropriate spatial and temporal discretization of 1 mm and 8.64 s, respectively. The calibrated parameter set was obtained using the parameter estimation tool PEST. Comparing density-coupled and density-uncoupled results showed that the freshwater-saltwater interface position is strongly dependent on density differences. A benchmark that adequately represents saltwater intrusion and that includes realistic features of coastal aquifers or freshwater lenses was lacking. This new benchmark was thus developed and is demonstrated to be suitable to test variable-density groundwater models applied to saltwater intrusion investigations.

  8. Applications of Integral Benchmark Data

    SciTech Connect

    Giuseppe Palmiotti; Teruhiko Kugo; Fitz Trumble; Albert C. Kahler; Dale Lancaster

    2014-10-09

    The International Reactor Physics Experiment Evaluation Project (IRPhEP) and the International Criticality Safety Benchmark Evaluation Project (ICSBEP) provide evaluated integral benchmark data that may be used for validation of reactor physics / nuclear criticality safety analytical methods and data, nuclear data testing, advanced modeling and simulation, and safety analysis licensing activities. The handbooks produced by these programs are used in over 30 countries. Five example applications are presented in this paper: (1) Use of IRPhEP Data in Uncertainty Analyses and Cross Section Adjustment, (2) Uncertainty Evaluation Methods for Reactor Core Design at JAEA Using Reactor Physics Experimental Data, (3) Application of Benchmarking Data to a Broad Range of Criticality Safety Problems, (4) Cross Section Data Testing with ICSBEP Benchmarks, and (5) Use of the International Handbook of Evaluated Reactor Physics Benchmark Experiments to Support the Power Industry.

  9. FireHose Streaming Benchmarks

    Energy Science and Technology Software Center (ESTSC)

    2015-01-27

    The FireHose Streaming Benchmarks are a suite of stream-processing benchmarks defined to enable comparison of streaming software and hardware, both quantitatively vis-a-vis the rate at which they can process data, and qualitatively by judging the effort involved to implement and run the benchmarks. Each benchmark has two parts. The first is a generator which produces and outputs datums at a high rate in a specific format. The second is an analytic which reads the stream of datums and is required to perform a well-defined calculation on the collection of datums, typically to find anomalous datums that have been created in the stream by the generator. The FireHose suite provides code for the generators, sample code for the analytics (which users are free to re-implement in their own custom frameworks), and a precise definition of each benchmark calculation.

  10. FireHose Streaming Benchmarks

    SciTech Connect

    Karl Anderson, Steve Plimpton

    2015-01-27

    The FireHose Streaming Benchmarks are a suite of stream-processing benchmarks defined to enable comparison of streaming software and hardware, both quantitatively vis-a-vis the rate at which they can process data, and qualitatively by judging the effort involved to implement and run the benchmarks. Each benchmark has two parts. The first is a generator which produces and outputs datums at a high rate in a specific format. The second is an analytic which reads the stream of datums and is required to perform a well-defined calculation on the collection of datums, typically to find anomalous datums that have been created in the stream by the generator. The FireHose suite provides code for the generators, sample code for the analytics (which users are free to re-implement in their own custom frameworks), and a precise definition of each benchmark calculation.
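
    A toy generator/analytic pair mirroring the two-part structure described above. The datum format (a key and a numeric value) and the anomaly rule (a value exceeding a threshold) are invented for illustration; they are not the actual FireHose specification.

        # Toy generator/analytic pair in the spirit of the two-part benchmark design.
        # Datum format and anomaly rule are invented, NOT the FireHose spec.
        import random

        def generator(n_datums, anomaly_rate=0.01, seed=0):
            """Emit (key, value) datums, occasionally injecting an anomalous value."""
            rng = random.Random(seed)
            for i in range(n_datums):
                value = rng.gauss(0.0, 1.0)
                if rng.random() < anomaly_rate:
                    value += 10.0          # inject an anomalous datum
                yield (f"key{i % 100}", value)

        def analytic(stream, threshold=5.0):
            """Consume the stream and report datums that look anomalous."""
            flagged = []
            for key, value in stream:
                if abs(value) > threshold:
                    flagged.append((key, value))
            return flagged

        anomalies = analytic(generator(10_000))
        print(f"flagged {len(anomalies)} anomalous datums")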

  11. Benchmark Evaluation of HTR-PROTEUS Pebble Bed Experimental Program

    SciTech Connect

    Bess, John D.; Montierth, Leland; Köberl, Oliver; Snoj, Luka

    2014-10-09

    Benchmark models were developed to evaluate 11 critical core configurations of the HTR-PROTEUS pebble bed experimental program. Various additional reactor physics measurements were performed as part of this program; currently only a total of 37 absorber rod worth measurements have been evaluated as acceptable benchmark experiments for Cores 4, 9, and 10. Dominant uncertainties in the experimental keff for all core configurations come from uncertainties in the ²³⁵U enrichment of the fuel, impurities in the moderator pebbles, and the density and impurity content of the radial reflector. Calculations of keff with MCNP5 and ENDF/B-VII.0 neutron nuclear data are greater than the benchmark values but within 1% and also within the 3σ uncertainty, except for Core 4, which is the only randomly packed pebble configuration. Repeated calculations of keff with MCNP6.1 and ENDF/B-VII.1 are lower than the benchmark values and within 1% (~3σ) except for Cores 5 and 9, which calculate lower than the benchmark eigenvalues within 4σ. The primary difference between the two nuclear data libraries is the adjustment of the absorption cross section of graphite. Simulations of the absorber rod worth measurements are within 3σ of the benchmark experiment values. The complete benchmark evaluation details are available in the 2014 edition of the International Handbook of Evaluated Reactor Physics Benchmark Experiments.

  12. Benchmark Evaluation of HTR-PROTEUS Pebble Bed Experimental Program

    DOE PAGESBeta

    Bess, John D.; Montierth, Leland; Köberl, Oliver; Snoj, Luka

    2014-10-09

    Benchmark models were developed to evaluate 11 critical core configurations of the HTR-PROTEUS pebble bed experimental program. Various additional reactor physics measurements were performed as part of this program; currently only a total of 37 absorber rod worth measurements have been evaluated as acceptable benchmark experiments for Cores 4, 9, and 10. Dominant uncertainties in the experimental keff for all core configurations come from uncertainties in the ²³⁵U enrichment of the fuel, impurities in the moderator pebbles, and the density and impurity content of the radial reflector. Calculations of keff with MCNP5 and ENDF/B-VII.0 neutron nuclear data are greater than the benchmark values but within 1% and also within the 3σ uncertainty, except for Core 4, which is the only randomly packed pebble configuration. Repeated calculations of keff with MCNP6.1 and ENDF/B-VII.1 are lower than the benchmark values and within 1% (~3σ) except for Cores 5 and 9, which calculate lower than the benchmark eigenvalues within 4σ. The primary difference between the two nuclear data libraries is the adjustment of the absorption cross section of graphite. Simulations of the absorber rod worth measurements are within 3σ of the benchmark experiment values. The complete benchmark evaluation details are available in the 2014 edition of the International Handbook of Evaluated Reactor Physics Benchmark Experiments.
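
    The comparisons quoted above ("within 1%" and "within 3σ") amount to a simple check of whether a calculated keff lies inside an uncertainty band around the benchmark value; a minimal sketch follows. The numbers are illustrative placeholders, not values from the handbook.

        # Is a calculated k_eff within n sigma of the benchmark and within a relative
        # bound?  Numbers are illustrative placeholders, not handbook values.
        def compare_keff(calc, benchmark, sigma, n_sigma=3.0, rel_bound=0.01):
            within_sigma = abs(calc - benchmark) <= n_sigma * sigma
            within_rel = abs(calc - benchmark) / benchmark <= rel_bound
            return within_sigma, within_rel

        calc, benchmark, sigma = 1.0052, 1.0000, 0.0030
        ok_sigma, ok_rel = compare_keff(calc, benchmark, sigma)
        print(f"within 3 sigma: {ok_sigma}, within 1%: {ok_rel}")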

  13. The Nature and Predictive Validity of a Benchmark Assessment Program in an American Indian School District

    ERIC Educational Resources Information Center

    Payne, Beverly J. R.

    2013-01-01

    This mixed methods study explored the nature of a benchmark assessment program and how well the benchmark assessments predicted End-of-Grade (EOG) and End-of-Course (EOC) test scores in an American Indian school district. Five major themes were identified and used to develop a Dimensions of Benchmark Assessment Program Effectiveness model:…

  14. Metric Supplement to Technical Drawing.

    ERIC Educational Resources Information Center

    Henschel, Mark

    This manual is intended for use in training persons whose vocations involve technical drawing to use the metric system of measurement. It could be used in a short course designed for that purpose or for individual study. The manual begins with a brief discussion of the rationale for conversion to the metric system. It then provides a…

  15. Inching toward the Metric System.

    ERIC Educational Resources Information Center

    Moore, Randy

    1989-01-01

    Provides an overview and description of the metric system. Discusses the evolution of measurement systems and their early cultures, the beginnings of metric measurement, the history of measurement systems in the United States, the International System of Units, its general style and usage, and supplementary units. (RT)

  16. Metric Activities, Grades K-6.

    ERIC Educational Resources Information Center

    Draper, Bob, Comp.

    This pamphlet presents worksheets for use in fifteen activities or groups of activities designed for teaching the metric system to children in grades K through 6. The approach taken in several of the activities is one of conversion between metric and English units. The majority of the activities concern length, area, volume, and capacity. A…

  17. What About Metric? Revised Edition.

    ERIC Educational Resources Information Center

    Barbrow, Louis E.

    Described are the advantages of using the metric system over the English system. The most common units of both systems are listed and compared. Pictures are used to exhibit use of the metric system in connection with giving prices or sizes of common items. Several examples provide computations of area, total weight of several objects, and volume;…

  18. Conversion to the Metric System

    ERIC Educational Resources Information Center

    Crunkilton, John C.; Lee, Jasper S.

    1974-01-01

    The authors discuss background information about the metric system and explore the effect of metrication of agriculture in areas such as equipment calibration, chemical measurement, and marketing of agricultural products. Suggestions are given for possible leadership roles and approaches that agricultural education might take in converting to the…

  19. Metrics for Soft Goods Merchandising.

    ERIC Educational Resources Information Center

    Cooper, Gloria S., Ed.; Magisos, Joel H., Ed.

    Designed to meet the job-related metric measurement needs of students interested in soft goods merchandising, this instructional package is one of five for the marketing and distribution cluster, part of a set of 55 packages for metric instruction in different occupations. The package is intended for students who already know the occupational…

  20. Metrics for Hard Goods Merchandising.

    ERIC Educational Resources Information Center

    Cooper, Gloria S., Ed.; Magisos, Joel H., Ed.

    Designed to meet the job-related metric measurement needs of students interested in hard goods merchandising, this instructional package is one of five for the marketing and distribution cluster, part of a set of 55 packages for metric instruction in different occupations. The package is intended for students who already know the occupational…