Science.gov

Sample records for metric development benchmarking

  1. Issues in Benchmark Metric Selection

    NASA Astrophysics Data System (ADS)

    Crolotte, Alain

    It is true that a metric can influence a benchmark, but will esoteric metrics create more problems than they solve? We answer this question affirmatively by examining the case of the TPC-D metric, which used the much debated geometric mean for the single-stream test. We will show how a simple choice influenced the benchmark and its conduct and, to some extent, DBMS development. After examining other alternatives, our conclusion is that the “real” measure for a decision-support benchmark is the arithmetic mean.
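
    A small illustration of the point at issue (the query timings below are made up, not TPC-D results): the geometric mean rewards proportional speedups on already-fast queries, while the arithmetic mean tracks total elapsed work, so the two can rank the same tuning effort very differently.

    ```python
    # Illustrative only: made-up per-query timings (seconds) for a decision-support run.
    # The geometric mean rewards shaving time off already-fast queries, while the
    # arithmetic mean tracks total elapsed work; that is the distinction debated for TPC-D.
    from math import prod

    query_times = [2.0, 3.0, 5.0, 40.0, 350.0]   # hypothetical single-stream timings
    tuned_times = [0.5, 0.7, 1.2, 40.0, 350.0]   # same workload, only the short queries tuned

    def arithmetic_mean(xs):
        return sum(xs) / len(xs)

    def geometric_mean(xs):
        return prod(xs) ** (1.0 / len(xs))

    for label, xs in [("baseline", query_times), ("tuned", tuned_times)]:
        print(f"{label}: arithmetic={arithmetic_mean(xs):8.2f}  geometric={geometric_mean(xs):8.2f}")
    # The arithmetic mean barely moves (~80 -> ~78), while the geometric mean
    # drops from ~13 to ~6, rewarding tuning of the short queries only.
    ```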

  2. Conceptual Soundness, Metric Development, Benchmarking, and Targeting for PATH Subprogram Evaluation

    SciTech Connect

    Mosey, G.; Doris, E.; Coggeshall, C.; Antes, M.; Ruch, J.; Mortensen, J.

    2009-01-01

    The objective of this study is to evaluate the conceptual soundness of the U.S. Department of Housing and Urban Development (HUD) Partnership for Advancing Technology in Housing (PATH) program's revised goals and establish and apply a framework to identify and recommend metrics that are the most useful for measuring PATH's progress. This report provides an evaluative review of PATH's revised goals, outlines a structured method for identifying and selecting metrics, proposes metrics and benchmarks for a sampling of individual PATH programs, and discusses other metrics that potentially could be developed that may add value to the evaluation process. The framework and individual program metrics can be used for ongoing management improvement efforts and to inform broader program-level metrics for government reporting requirements.

  3. Self-benchmarking Guide for Data Centers: Metrics, Benchmarks, Actions

    SciTech Connect

    Mathew, Paul; Ganguly, Srirupa; Greenberg, Steve; Sartor, Dale

    2009-07-13

    This guide describes energy efficiency metrics and benchmarks that can be used to track the performance of and identify potential opportunities to reduce energy use in data centers. This guide is primarily intended for personnel who have responsibility for managing energy use in existing data centers - including facilities managers, energy managers, and their engineering consultants. Additionally, data center designers may also use the metrics and benchmarks described in this guide for goal-setting in new construction or major renovation. This guide provides the following information: (1) A step-by-step outline of the benchmarking process. (2) A set of performance metrics for the whole building as well as individual systems. For each metric, the guide provides a definition, performance benchmarks, and potential actions that can be inferred from evaluating this metric. (3) A list and descriptions of the data required for computing the metrics. This guide is complemented by spreadsheet templates for data collection and for computing the benchmarking metrics. This guide builds on prior data center benchmarking studies supported by the California Energy Commission. Much of the benchmarking data are drawn from the LBNL data center benchmarking database that was developed from these studies. Additional benchmark data were obtained from engineering experts including facility designers and energy managers. This guide also builds on recent research supported by the U.S. Department of Energy's Save Energy Now program.
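
    As a minimal sketch of the kind of whole-facility metric such a guide covers, the snippet below computes Power Usage Effectiveness (total facility energy divided by IT equipment energy). The annual kWh figures are hypothetical; the guide's own metric definitions and benchmark values should be consulted for actual use.

    ```python
    # Minimal sketch of a whole-facility data center metric: Power Usage Effectiveness.
    # PUE = total facility energy / IT equipment energy; values approaching 1.0 mean
    # less overhead (cooling, fans, power conversion) per unit of IT load.
    # The annual kWh figures below are hypothetical.

    def pue(total_facility_kwh: float, it_equipment_kwh: float) -> float:
        if it_equipment_kwh <= 0:
            raise ValueError("IT equipment energy must be positive")
        return total_facility_kwh / it_equipment_kwh

    annual_it_kwh = 4_200_000      # hypothetical metered IT load
    annual_total_kwh = 7_100_000   # hypothetical whole-building energy
    print(f"PUE = {pue(annual_total_kwh, annual_it_kwh):.2f}")   # ~1.69
    ```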

  4. Self-benchmarking Guide for Laboratory Buildings: Metrics, Benchmarks, Actions

    SciTech Connect

    Mathew, Paul; Greenberg, Steve; Sartor, Dale

    2009-07-13

    This guide describes energy efficiency metrics and benchmarks that can be used to track the performance of and identify potential opportunities to reduce energy use in laboratory buildings. This guide is primarily intended for personnel who have responsibility for managing energy use in existing laboratory facilities - including facilities managers, energy managers, and their engineering consultants. Additionally, laboratory planners and designers may also use the metrics and benchmarks described in this guide for goal-setting in new construction or major renovation. This guide provides the following information: (1) A step-by-step outline of the benchmarking process. (2) A set of performance metrics for the whole building as well as individual systems. For each metric, the guide provides a definition, performance benchmarks, and potential actions that can be inferred from evaluating this metric. (3) A list and descriptions of the data required for computing the metrics. This guide is complemented by spreadsheet templates for data collection and for computing the benchmarking metrics. This guide builds on prior research supported by the national Laboratories for the 21st Century (Labs21) program, supported by the U.S. Department of Energy and the U.S. Environmental Protection Agency. Much of the benchmarking data are drawn from the Labs21 benchmarking database and technical guides. Additional benchmark data were obtained from engineering experts including laboratory designers and energy managers.

  5. Self-benchmarking Guide for Cleanrooms: Metrics, Benchmarks, Actions

    SciTech Connect

    Mathew, Paul; Sartor, Dale; Tschudi, William

    2009-07-13

    This guide describes energy efficiency metrics and benchmarks that can be used to track the performance of and identify potential opportunities to reduce energy use in laboratory buildings. This guide is primarily intended for personnel who have responsibility for managing energy use in existing laboratory facilities - including facilities managers, energy managers, and their engineering consultants. Additionally, laboratory planners and designers may also use the metrics and benchmarks described in this guide for goal-setting in new construction or major renovation. This guide provides the following information: (1) A step-by-step outline of the benchmarking process. (2) A set of performance metrics for the whole building as well as individual systems. For each metric, the guide provides a definition, performance benchmarks, and potential actions that can be inferred from evaluating this metric. (3) A list and descriptions of the data required for computing the metrics. This guide is complemented by spreadsheet templates for data collection and for computing the benchmarking metrics. This guide builds on prior research supported by the national Laboratories for the 21st Century (Labs21) program, supported by the U.S. Department of Energy and the U.S. Environmental Protection Agency. Much of the benchmarking data are drawn from the Labs21 benchmarking database and technical guides. Additional benchmark data were obtained from engineering experts including laboratory designers and energy managers.

  6. Metrics and Benchmarks for Visualization

    NASA Technical Reports Server (NTRS)

    Uselton, Samuel P.; Lasinski, T. A. (Technical Monitor)

    1995-01-01

    What is a "good" visualization? How can the quality of a visualization be measured? How can one tell whether one visualization is "better" than another? I claim that the true quality of a visualization can only be measured in the context of a particular purpose. The same image generated from the same data may be excellent for one purpose and abysmal for another. A good measure of visualization quality will correspond to the performance of users in accomplishing the intended purpose, so the "gold standard" is user testing. As a user of visualization software (or at least a consultant to such users) I don't expect visualization software to have been tested in this way for every possible use. In fact, scientific visualization (as distinct from more "production oriented" uses of visualization) will continually encounter new data, new questions and new purposes; user testing can never keep up. Users need software they can trust, and advice on appropriate visualizations for particular purposes. Considering the following four processes, and their impact on visualization trustworthiness, reveals important work needed to create worthwhile metrics and benchmarks for visualization. These four processes are (1) complete system testing (user-in-loop), (2) software testing, (3) software design and (4) information dissemination. Additional information is contained in the original extended abstract.

  7. Beyond Benchmarking: Value-Adding Metrics

    ERIC Educational Resources Information Center

    Fitz-enz, Jac

    2007-01-01

    HR metrics has grown up a bit over the past two decades, moving away from simple benchmarking practices and toward a more inclusive approach to measuring institutional performance and progress. In this article, the acknowledged "father" of human capital performance benchmarking provides an overview of several aspects of today's HR metrics…

  8. Cleanroom Energy Efficiency: Metrics and Benchmarks

    SciTech Connect

    International SEMATECH Manufacturing Initiative; Mathew, Paul A.; Tschudi, William; Sartor, Dale; Beasley, James

    2010-07-07

    Cleanrooms are among the most energy-intensive types of facilities. This is primarily due to the cleanliness requirements that result in high airflow rates and system static pressures, as well as process requirements that result in high cooling loads. Various studies have shown that there is a wide range of cleanroom energy efficiencies and that facility managers may not be aware of how energy efficient their cleanroom facility can be relative to other cleanroom facilities with the same cleanliness requirements. Metrics and benchmarks are an effective way to compare one facility to another and to track the performance of a given facility over time. This article presents the key metrics and benchmarks that facility managers can use to assess, track, and manage their cleanroom energy efficiency or to set energy efficiency targets for new construction. These include system-level metrics such as air change rates, air handling W/cfm, and filter pressure drops. Operational data are presented from over 20 different cleanrooms that were benchmarked with these metrics and that are part of the cleanroom benchmark dataset maintained by Lawrence Berkeley National Laboratory (LBNL). Overall production efficiency metrics for cleanrooms in 28 semiconductor manufacturing facilities in the United States and recorded in the Fabs21 database are also presented.
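
    For illustration, the air change rate named above can be computed from recirculated airflow and room volume as sketched below; the flow and room dimensions are hypothetical, not values from the LBNL dataset.

    ```python
    # Illustrative air change rate (ACR) calculation for a cleanroom: recirculated
    # airflow divided by room volume, expressed in air changes per hour (ACH).
    # The airflow and room dimensions below are hypothetical.
    recirculation_airflow_cfm = 120_000   # hypothetical recirculated supply air
    room_area_ft2 = 10_000
    ceiling_height_ft = 10.0

    room_volume_ft3 = room_area_ft2 * ceiling_height_ft
    ach = recirculation_airflow_cfm * 60.0 / room_volume_ft3
    print(f"Air change rate: {ach:.0f} ACH")   # ~72 ACH for these numbers
    ```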

  9. Metrics and Benchmarks for Energy Efficiency in Laboratories

    SciTech Connect

    Rumsey Engineers; Mathew, Paul; Greenberg, Steve; Sartor, Dale; Rumsey, Peter; Weale, John

    2008-04-10

    A wide spectrum of laboratory owners, ranging from universities to federal agencies, have explicit goals for energy efficiency in their facilities. For example, the Energy Policy Act of 2005 (EPACT 2005) requires all new federal buildings to exceed ASHRAE 90.1-2004 [1] by at least 30%. A new laboratory is much more likely to meet energy efficiency goals if quantitative metrics and targets are specified in programming documents and tracked during the course of the delivery process. If not, any additional capital costs or design time associated with attaining higher efficiencies can be difficult to justify. This article describes key energy efficiency metrics and benchmarks for laboratories, which have been developed and applied to several laboratory buildings--both for design and operation. In addition to traditional whole building energy use metrics (e.g. BTU/ft²·yr, kWh/m²·yr), the article describes HVAC system metrics (e.g. ventilation W/cfm, W/(L·s⁻¹)), which can be used to identify the presence or absence of energy features and opportunities during design and operation.
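
    A worked example of the ventilation W/cfm system metric mentioned above, under the assumption that supply and exhaust fan power are metered and total airflow is known; all numbers are hypothetical.

    ```python
    # Hedged worked example of the ventilation system metric (W/cfm) mentioned above:
    # total fan power divided by total airflow. All numbers are hypothetical.

    supply_fan_kw = 55.0        # hypothetical supply fan power
    exhaust_fan_kw = 40.0       # hypothetical exhaust fan power
    total_airflow_cfm = 90_000

    watts_per_cfm = (supply_fan_kw + exhaust_fan_kw) * 1000.0 / total_airflow_cfm
    print(f"Ventilation efficiency: {watts_per_cfm:.2f} W/cfm")   # ~1.06 W/cfm
    ```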

  10. Collected notes from the Benchmarks and Metrics Workshop

    NASA Technical Reports Server (NTRS)

    Drummond, Mark E.; Kaelbling, Leslie P.; Rosenschein, Stanley J.

    1991-01-01

    In recent years there has been a proliferation of proposals in the artificial intelligence (AI) literature for integrated agent architectures. Each architecture offers an approach to the general problem of constructing an integrated agent. Unfortunately, the ways in which one architecture might be considered better than another are not always clear. There has been a growing realization that many of the positive and negative aspects of an architecture become apparent only when experimental evaluation is performed and that to progress as a discipline, we must develop rigorous experimental methods. In addition to the intrinsic intellectual interest of experimentation, rigorous performance evaluation of systems is also a crucial practical concern to our research sponsors. DARPA, NASA, and AFOSR (among others) are actively searching for better ways of experimentally evaluating alternative approaches to building intelligent agents. One tool for experimental evaluation involves testing systems on benchmark tasks in order to assess their relative performance. As part of a joint DARPA and NASA funded project, NASA-Ames and Teleos Research are carrying out a research effort to establish a set of benchmark tasks and evaluation metrics by which the performance of agent architectures may be determined. As part of this project, we held a workshop on Benchmarks and Metrics at the NASA Ames Research Center on June 25, 1990. The objective of the workshop was to foster early discussion on this important topic. We did not achieve a consensus, nor did we expect to. Collected here is some of the information that was exchanged at the workshop. Given here is an outline of the workshop, a list of the participants, notes taken on the white-board during open discussions, position papers/notes from some participants, and copies of slides used in the presentations.

  11. A Web Resource for Standardized Benchmark Datasets, Metrics, and Rosetta Protocols for Macromolecular Modeling and Design

    PubMed Central

    Pache, Roland A.; Ollikainen, Noah; Kundert, Kale; O'Meara, Matthew J.; Smith, Colin A.; Kortemme, Tanja

    2015-01-01

    The development and validation of computational macromolecular modeling and design methods depend on suitable benchmark datasets and informative metrics for comparing protocols. In addition, if a method is intended to be adopted broadly in diverse biological applications, there needs to be information on appropriate parameters for each protocol, as well as metrics describing the expected accuracy compared to experimental data. In certain disciplines, there exist established benchmarks and public resources where experts in a particular methodology are encouraged to supply their most efficient implementation of each particular benchmark. We aim to provide such a resource for protocols in macromolecular modeling and design. We present a freely accessible web resource (https://kortemmelab.ucsf.edu/benchmarks) to guide the development of protocols for protein modeling and design. The site provides benchmark datasets and metrics to compare the performance of a variety of modeling protocols using different computational sampling methods and energy functions, providing a “best practice” set of parameters for each method. Each benchmark has an associated downloadable benchmark capture archive containing the input files, analysis scripts, and tutorials for running the benchmark. The captures may be run with any suitable modeling method; we supply command lines for running the benchmarks using the Rosetta software suite. We have compiled initial benchmarks for the resource spanning three key areas: prediction of energetic effects of mutations, protein design, and protein structure prediction, each with associated state-of-the-art modeling protocols. With the help of the wider macromolecular modeling community, we hope to expand the variety of benchmarks included on the website and continue to evaluate new iterations of current methods as they become available. PMID:26335248

  12. A Web Resource for Standardized Benchmark Datasets, Metrics, and Rosetta Protocols for Macromolecular Modeling and Design.

    PubMed

    Ó Conchúir, Shane; Barlow, Kyle A; Pache, Roland A; Ollikainen, Noah; Kundert, Kale; O'Meara, Matthew J; Smith, Colin A; Kortemme, Tanja

    2015-01-01

    The development and validation of computational macromolecular modeling and design methods depend on suitable benchmark datasets and informative metrics for comparing protocols. In addition, if a method is intended to be adopted broadly in diverse biological applications, there needs to be information on appropriate parameters for each protocol, as well as metrics describing the expected accuracy compared to experimental data. In certain disciplines, there exist established benchmarks and public resources where experts in a particular methodology are encouraged to supply their most efficient implementation of each particular benchmark. We aim to provide such a resource for protocols in macromolecular modeling and design. We present a freely accessible web resource (https://kortemmelab.ucsf.edu/benchmarks) to guide the development of protocols for protein modeling and design. The site provides benchmark datasets and metrics to compare the performance of a variety of modeling protocols using different computational sampling methods and energy functions, providing a "best practice" set of parameters for each method. Each benchmark has an associated downloadable benchmark capture archive containing the input files, analysis scripts, and tutorials for running the benchmark. The captures may be run with any suitable modeling method; we supply command lines for running the benchmarks using the Rosetta software suite. We have compiled initial benchmarks for the resource spanning three key areas: prediction of energetic effects of mutations, protein design, and protein structure prediction, each with associated state-of-the-art modeling protocols. With the help of the wider macromolecular modeling community, we hope to expand the variety of benchmarks included on the website and continue to evaluate new iterations of current methods as they become available. PMID:26335248

  13. Metrics and Benchmarks for Energy Efficiency in Laboratories

    SciTech Connect

    Mathew, Paul

    2007-10-26

    A wide spectrum of laboratory owners, ranging from universities to federal agencies, have explicit goals for energy efficiency in their facilities. For example, the Energy Policy Act of 2005 (EPACT 2005) requires all new federal buildings to exceed ASHRAE 90.1-2004 [1] by at least 30 percent. The University of California Regents Policy requires all new construction to exceed California Title 24 [2] by at least 20 percent. A new laboratory is much more likely to meet energy efficiency goals if quantitative metrics and targets are explicitly specified in programming documents and tracked during the course of the delivery process. If efficiency targets are not explicitly and properly defined, any additional capital costs or design time associated with attaining higher efficiencies can be difficult to justify. The purpose of this guide is to provide guidance on how to specify and compute energy efficiency metrics and benchmarks for laboratories, at the whole building as well as the system level. The information in this guide can be used to incorporate quantitative metrics and targets into the programming of new laboratory facilities. Many of these metrics can also be applied to evaluate existing facilities. For information on strategies and technologies to achieve energy efficiency, the reader is referred to Labs21 resources, including technology best practice guides, case studies, and the design guide (available at www.labs21century.gov/toolkit).

  14. Synthetic neuronal datasets for benchmarking directed functional connectivity metrics

    PubMed Central

    Andrade, Alexandre

    2015-01-01

    Background. Datasets consisting of synthetic neural data generated with quantifiable and controlled parameters are a valuable asset in the process of testing and validating directed functional connectivity metrics. Considering the recent debate in the neuroimaging community concerning the use of these metrics for fMRI data, synthetic datasets that emulate the BOLD signal dynamics have played a central role by supporting claims that argue in favor or against certain choices. Generative models often used in studies that simulate neuronal activity, with the aim of gaining insight into specific brain regions and functions, have different requirements from the generative models for benchmarking datasets. Even though the latter must be realistic, there is a tradeoff between realism and computational demand that needs to be contemplated and simulations that efficiently mimic the real behavior of single neurons or neuronal populations are preferred, instead of more cumbersome and marginally precise ones. Methods. This work explores how simple generative models are able to produce neuronal datasets, for benchmarking purposes, that reflect the simulated effective connectivity and, how these can be used to obtain synthetic recordings of EEG and fMRI BOLD signals. The generative models covered here are AR processes, neural mass models consisting of linear and nonlinear stochastic differential equations and populations with thousands of spiking units. Forward models for EEG consist in the simple three-shell head model while the fMRI BOLD signal is modeled with the Balloon-Windkessel model or by convolution with a hemodynamic response function. Results. The simulated datasets are tested for causality with the original spectral formulation for Granger causality. Modeled effective connectivity can be detected in the generated data for varying connection strengths and interaction delays. Discussion. All generative models produce synthetic neuronal data with detectable causal
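
    The AR-process family mentioned above lends itself to a very compact ground-truth generator. The sketch below is a simplification, not the paper's models: it simulates a bivariate autoregressive process in which channel x drives channel y with a one-sample delay and then checks that a Granger-style index recovers that direction. Coefficient values and the crude least-squares check are arbitrary choices for illustration.

    ```python
    import numpy as np

    # Minimal sketch (not the paper's models): a bivariate AR(1) process in which
    # x drives y with a one-sample delay, giving a known ground-truth direction
    # that a Granger-type connectivity metric should recover from the data.
    rng = np.random.default_rng(0)
    n_samples = 5000
    coupling = 0.6          # hypothetical x -> y coupling strength

    x = np.zeros(n_samples)
    y = np.zeros(n_samples)
    for t in range(1, n_samples):
        x[t] = 0.5 * x[t - 1] + rng.standard_normal()
        y[t] = 0.5 * y[t - 1] + coupling * x[t - 1] + rng.standard_normal()

    # Crude check: past x improves prediction of y (x -> y), but not the reverse.
    def residual_var(target, own_past, other_past=None):
        cols = [own_past] if other_past is None else [own_past, other_past]
        X = np.column_stack([np.ones_like(target)] + cols)
        beta, *_ = np.linalg.lstsq(X, target, rcond=None)
        return np.var(target - X @ beta)

    gc_xy = np.log(residual_var(y[1:], y[:-1]) / residual_var(y[1:], y[:-1], x[:-1]))
    gc_yx = np.log(residual_var(x[1:], x[:-1]) / residual_var(x[1:], x[:-1], y[:-1]))
    print(f"Granger-style index x->y: {gc_xy:.3f}, y->x: {gc_yx:.3f}")
    ```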

  15. A screening life cycle metric to benchmark the environmental sustainability of waste management systems.

    PubMed

    Kaufman, Scott M; Krishnan, Nikhil; Themelis, Nickolas J

    2010-08-01

    The disposal of municipal solid waste (MSW) can lead to significant environmental burdens. The implementation of effective waste management practices, however, requires the ability to benchmark alternative systems from an environmental sustainability perspective. Existing metrics--such as recycling and generation rates, or the emissions of individual pollutants--often are not goal-oriented, are not readily comparable, and may not provide insight into the most effective options for improvement. Life cycle assessment (LCA) is an effective approach to quantify and compare systems, but full LCA comparisons typically involve significant expenditure of resources and time. In this work we develop a metric called the Resource Conservation Efficiency (RCE) that is based on a screening-LCA approach, and that can be used to rapidly and effectively benchmark (on a screening level) the ecological sustainability of waste management practices across multiple locations. We first demonstrate that this measure is an effective proxy by comparing RCE results with existing LCA inventory and impact assessment methods. We then demonstrate the use of the RCE metric by benchmarking the sustainability of waste management practices in two U.S. cities: San Francisco and Honolulu. The results show that while San Francisco does an excellent job recovering recyclable materials, adding a waste to energy (WTE) facility to their infrastructure would most beneficially impact the environmental performance of their waste management system. Honolulu would achieve the greatest gains by increasing the capture of easily recycled materials not currently being recovered. Overall results also highlight how the RCE metric may be used to provide insight into effective actions cities can take to boost the environmental performance of their waste management practices. PMID:20666561

  16. Evaluation of mobile phone camera benchmarking using objective camera speed and image quality metrics

    NASA Astrophysics Data System (ADS)

    Peltoketo, Veli-Tapani

    2014-11-01

    When a mobile phone camera is tested and benchmarked, the significance of image quality metrics is widely acknowledged. There are also existing methods to evaluate the camera speed. However, the speed or rapidity metrics of the mobile phone's camera system have not been used together with the quality metrics, even though camera speed has become an increasingly important camera performance feature. There are several tasks in this work. First, the most important image quality and speed-related metrics of a mobile phone's camera system are collected from standards and published papers, and novel speed metrics are identified. Second, combinations of the quality and speed metrics are validated using mobile phones on the market. The measurements are made through the application programming interfaces of the different operating systems. Finally, the results are evaluated and conclusions are made. The paper defines a solution for combining different image quality and speed metrics into a single benchmarking score. A proposal for the combined benchmarking metric is evaluated using measurements of 25 mobile phone cameras on the market. The paper is a continuation of previous benchmarking work, expanded with visual noise measurement and updates for the latest mobile phone versions.
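
    One plausible shape for such a combined score, shown only as a sketch and not as the paper's actual formula, is to normalize each quality and speed metric against a reference range and take a weighted sum; the metric names, ranges, and weights below are hypothetical.

    ```python
    # One plausible way (not necessarily the paper's formula) to fold heterogeneous
    # quality and speed metrics into a single benchmarking score: normalize each
    # metric to 0-1 against a reference range, then take a weighted sum.
    # Metric names, reference ranges, and weights below are hypothetical.

    METRIC_RANGES = {                   # (worst, best) reference values
        "sharpness_mtf50": (0.10, 0.50),
        "visual_noise": (10.0, 1.0),    # lower is better, handled by the reversed range
        "shot_to_shot_s": (3.0, 0.3),
        "autofocus_s": (1.5, 0.2),
    }
    WEIGHTS = {"sharpness_mtf50": 0.3, "visual_noise": 0.3,
               "shot_to_shot_s": 0.2, "autofocus_s": 0.2}

    def normalize(value, worst, best):
        score = (value - worst) / (best - worst)
        return min(max(score, 0.0), 1.0)          # clamp to [0, 1]

    def benchmark_score(measured: dict) -> float:
        return sum(WEIGHTS[m] * normalize(measured[m], *METRIC_RANGES[m])
                   for m in WEIGHTS)

    phone = {"sharpness_mtf50": 0.32, "visual_noise": 4.0,
             "shot_to_shot_s": 0.9, "autofocus_s": 0.6}
    print(f"Combined score: {benchmark_score(phone):.2f}")   # 0..1, higher is better
    ```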

  17. Improved product energy intensity benchmarking metrics for thermally concentrated food products.

    PubMed

    Walker, Michael E; Arnold, Craig S; Lettieri, David J; Hutchins, Margot J; Masanet, Eric

    2014-10-21

    Product energy intensity (PEI) metrics allow industry and policymakers to quantify manufacturing energy requirements on a product-output basis. However, complexities can arise for benchmarking of thermally concentrated products, particularly in the food processing industry, due to differences in outlet composition, feed material composition, and processing technology. This study analyzes tomato paste as a typical, high-volume concentrated product using a thermodynamics-based model. Results show that PEI for tomato pastes and purees varies from 1200 to 9700 kJ/kg over the range of 8%-40% outlet solids concentration for a 3-effect evaporator, and 980-7000 kJ/kg for a 5-effect evaporator. Further, the PEI for producing paste at 31% outlet solids concentration in a 3-effect evaporator varies from 13,000 kJ/kg at 3% feed solids concentration to 5900 kJ/kg at 6%; for a 5-effect evaporator, the variation is from 9200 kJ/kg at 3%, to 4300 kJ/kg at 6%. Methods to compare the PEI of different product concentrations on a standard basis are evaluated. This paper also presents methods to develop PEI benchmark values for multiple plants. These results focus on the case of a tomato paste processing facility, but can be extended to other products and industries that utilize thermal concentration. PMID:25215537
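
    The dependence of PEI on feed and outlet solids follows from a simple solids mass balance, sketched below with rough assumptions (a fixed latent heat and a per-effect steam economy). The output illustrates the trend only; it does not reproduce the paper's reported figures, which come from a fuller thermodynamic model.

    ```python
    # Back-of-the-envelope sketch of why PEI depends on feed and outlet solids:
    # a solids mass balance fixes how much water must evaporate per kg of paste,
    # and multiple evaporator effects reuse vapor. The steam economy rule of thumb
    # (~0.85 kg evaporated per kg steam per effect) and the latent heat are rough
    # assumptions, not the paper's thermodynamic model; output shows the trend only.
    H_FG = 2260.0   # kJ per kg of steam, approximate latent heat of vaporization

    def evaporation_per_kg_product(feed_solids: float, outlet_solids: float) -> float:
        feed_mass = outlet_solids / feed_solids   # kg feed per kg product (solids balance)
        return feed_mass - 1.0                    # kg water evaporated per kg product

    def pei_estimate(feed_solids, outlet_solids, n_effects, steam_economy_per_effect=0.85):
        evaporated = evaporation_per_kg_product(feed_solids, outlet_solids)
        steam_needed = evaporated / (steam_economy_per_effect * n_effects)
        return steam_needed * H_FG                # kJ per kg product

    for feed in (0.03, 0.06):
        print(f"feed {feed:.0%}: 3-effect ~{pei_estimate(feed, 0.31, 3):,.0f} kJ/kg, "
              f"5-effect ~{pei_estimate(feed, 0.31, 5):,.0f} kJ/kg")
    ```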

  18. Developing Financial Benchmarks for Critical Access Hospitals

    PubMed Central

    Pink, George H.; Holmes, George M.; Slifkin, Rebecca T.; Thompson, Roger E.

    2009-01-01

    This study developed and applied benchmarks for five indicators included in the CAH Financial Indicators Report, an annual, hospital-specific report distributed to all critical access hospitals (CAHs). An online survey of Chief Executive Officers and Chief Financial Officers was used to establish benchmarks. Indicator values for 2004, 2005, and 2006 were calculated for 421 CAHs and hospital performance was compared to the benchmarks. Although many hospitals performed better than benchmark on one indicator in 1 year, very few performed better than benchmark on all five indicators in all 3 years. The probability of performing better than benchmark differed among peer groups. PMID:19544935

  19. Metrics for antibody therapeutics development.

    PubMed

    Reichert, Janice M

    2010-01-01

    A wide variety of full-size monoclonal antibodies (mAbs) and therapeutics derived from alternative antibody formats can be produced through genetic and biological engineering techniques. These molecules are now filling the preclinical and clinical pipelines of every major pharmaceutical company and many biotechnology firms. Metrics for the development of antibody therapeutics, including averages for the number of candidates entering clinical study and development phase lengths for mAbs approved in the United States, were derived from analysis of a dataset of over 600 therapeutic mAbs that entered clinical study sponsored, at least in part, by commercial firms. The results presented provide an overview of the field and context for the evaluation of on-going and prospective mAb development programs. The expansion of therapeutic antibody use through supplemental marketing approvals and the increase in the study of therapeutics derived from alternative antibody formats are discussed. PMID:20930555

  20. How Does Your Data Center Measure Up? Energy Efficiency Metrics and Benchmarks for Data Center Infrastructure Systems

    SciTech Connect

    Mathew, Paul; Greenberg, Steve; Ganguly, Srirupa; Sartor, Dale; Tschudi, William

    2009-04-01

    Data centers are among the most energy intensive types of facilities, and they are growing dramatically in terms of size and intensity [EPA 2007]. As a result, in the last few years there has been increasing interest from stakeholders - ranging from data center managers to policy makers - to improve the energy efficiency of data centers, and there are several industry and government organizations that have developed tools, guidelines, and training programs. There are many opportunities to reduce energy use in data centers and benchmarking studies reveal a wide range of efficiency practices. Data center operators may not be aware of how efficient their facility may be relative to their peers, even for the same levels of service. Benchmarking is an effective way to compare one facility to another, and also to track the performance of a given facility over time. Toward that end, this article presents the key metrics that facility managers can use to assess, track, and manage the efficiency of the infrastructure systems in data centers, and thereby identify potential efficiency actions. Most of the benchmarking data presented in this article are drawn from the data center benchmarking database at Lawrence Berkeley National Laboratory (LBNL). The database was developed from studies commissioned by the California Energy Commission, Pacific Gas and Electric Co., the U.S. Department of Energy and the New York State Energy Research and Development Authority.

  1. ImQual: a web-service dedicated to image quality evaluation and metrics benchmark

    NASA Astrophysics Data System (ADS)

    Nauge, Michael; Larabi, Mohamed-Chaker; Fernandez-Maloigne, Christine

    2011-01-01

    Quality assessment is becoming an important issue in the framework of image and video processing. Images are generally intended to be viewed by human observers and thus consideration of visual perception is an intrinsic aspect of the effective assessment of image quality. This observation has been made for different application domains such as printing, compression, transmission, and so on. Recently, hundreds of research papers have proposed objective quality metrics dedicated to several image and video applications. With this abundance of quality tools, it is more than ever important to have a set of rules/methods for assessing the efficiency of a given metric. In this direction, technical groups such as VQEG (Video Quality Experts Group) or JPEG AIC (Advanced Image Coding) have focused their interest on the definition of test-plans to measure the impact of a metric. Following this wave in the image and video community, we propose in this paper a web-service or a web-application dedicated to the benchmark of quality metrics for image compression and open to all possible extensions. This application is intended to be the reference tool for the JPEG committee in order to ease the evaluation of new compression technologies. It is also seen as a broader aid for the community, helping researchers save time when evaluating their algorithms for watermarking, compression, enhancement, and so on. As an illustration of the web-application, we propose a benchmark of many well-known metrics on several image databases to provide a small overview of its possible uses.

  2. Benchmarking Learning and Teaching: Developing a Method

    ERIC Educational Resources Information Center

    Henderson-Smart, Cheryl; Winning, Tracey; Gerzina, Tania; King, Shalinie; Hyde, Sarah

    2006-01-01

    Purpose: To develop a method for benchmarking teaching and learning in response to an institutional need to validate a new program in Dentistry at the University of Sydney, Australia. Design/methodology/approach: After a collaborative partner, University of Adelaide, was identified, the areas of teaching and learning to be benchmarked, PBL…

  3. Achieving palliative care research efficiency through defining and benchmarking performance metrics

    PubMed Central

    Lodato, Jordan E.; Aziz, Noreen; Bennett, Rachael E.; Abernethy, Amy P.; Kutner, Jean S.

    2014-01-01

    Purpose of Review Research efficiency is gaining increasing attention in the research enterprise, including palliative care research. The importance of generating meaningful findings and translating these scientific advances to improved patient care creates urgency in the field to address well-documented system inefficiencies. The Palliative Care Research Cooperative Group (PCRC) provides useful examples for ensuring research efficiency in palliative care. Recent Findings Literature on maximizing research efficiency focuses on the importance of clearly delineated process maps, working instructions, and standard operating procedures (SOPs) in creating synchronicity in expectations across research sites. Examples from the PCRC support these objectives and suggest that early creation and employment of performance metrics aligned with these processes are essential to generate clear expectations and identify benchmarks. These benchmarks are critical in effective monitoring and ultimately the generation of high quality findings that are translatable to clinical populations. Prioritization of measurable goals and tasks to ensure that activities align with programmatic aims is critical. Summary Examples from the PCRC affirm and expand the existing literature on research efficiency, providing a palliative care focus. Operating procedures, performance metrics, prioritization, and monitoring for success should all be informed by and inform the process map to achieve maximum research efficiency. PMID:23080309

  4. Developing integrated benchmarks for DOE performance measurement

    SciTech Connect

    Barancik, J.I.; Kramer, C.F.; Thode, H.C., Jr.

    1992-09-30

    The objectives of this task were to describe and evaluate selected existing sources of information on occupational safety and health with emphasis on hazard and exposure assessment, abatement, training, reporting, and control, identifying exposure and outcome factors in preparation for developing DOE performance benchmarks. Existing resources and methodologies were assessed for their potential use as practical performance benchmarks. Strengths and limitations of current data resources were identified. Guidelines were outlined for developing new or improved performance factors, which then could become the basis for selecting performance benchmarks. Data bases for non-DOE comparison populations were identified so that DOE performance could be assessed relative to non-DOE occupational and industrial groups. Systems approaches were described which can be used to link hazards and exposure, event occurrence, and adverse outcome factors, as needed to generate valid, reliable, and predictive performance benchmarks. Data bases were identified which contain information relevant to one or more performance assessment categories. A list of 72 potential performance benchmarks was prepared to illustrate the kinds of information that can be produced through a benchmark development program. Current information resources which may be used to develop potential performance benchmarks are limited. There is a need to develop an occupational safety and health information and data system in DOE, which is capable of incorporating demonstrated and documented performance benchmarks prior to, or concurrent with, the development of hardware and software. A key to the success of this systems approach is rigorous development and demonstration of performance benchmark equivalents to users of such data before system hardware and software commitments are institutionalized.

  5. Performance Benchmarks for Scholarly Metrics Associated with Fisheries and Wildlife Faculty

    PubMed Central

    Swihart, Robert K.; Sundaram, Mekala; Höök, Tomas O.; DeWoody, J. Andrew; Kellner, Kenneth F.

    2016-01-01

    Research productivity and impact are often considered in professional evaluations of academics, and performance metrics based on publications and citations increasingly are used in such evaluations. To promote evidence-based and informed use of these metrics, we collected publication and citation data for 437 tenure-track faculty members at 33 research-extensive universities in the United States belonging to the National Association of University Fisheries and Wildlife Programs. For each faculty member, we computed 8 commonly used performance metrics based on numbers of publications and citations, and recorded covariates including academic age (time since Ph.D.), sex, percentage of appointment devoted to research, and the sub-disciplinary research focus. Standardized deviance residuals from regression models were used to compare faculty after accounting for variation in performance due to these covariates. We also aggregated residuals to enable comparison across universities. Finally, we tested for temporal trends in citation practices to assess whether the “law of constant ratios”, used to enable comparison of performance metrics between disciplines that differ in citation and publication practices, applied to fisheries and wildlife sub-disciplines when mapped to Web of Science Journal Citation Report categories. Our regression models reduced deviance by ¼ to ½. Standardized residuals for each faculty member, when combined across metrics as a simple average or weighted via factor analysis, produced similar results in terms of performance based on percentile rankings. Significant variation was observed in scholarly performance across universities, after accounting for the influence of covariates. In contrast to findings for other disciplines, normalized citation ratios for fisheries and wildlife sub-disciplines increased across years. Increases were comparable for all sub-disciplines except ecology. We discuss the advantages and limitations of our methods
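
    The covariate-adjustment step can be sketched compactly: fit a count regression of a performance metric on covariates and score each faculty member by a standardized deviance residual. The snippet below uses simulated data and a plain Poisson GLM rather than the paper's exact model specification; column names are hypothetical.

    ```python
    import numpy as np
    import pandas as pd
    import statsmodels.api as sm

    # Sketch of the covariate-adjustment idea (not the paper's exact models):
    # regress a count-based performance metric on covariates, then compare
    # faculty by standardized deviance residuals. Data below are simulated.
    rng = np.random.default_rng(1)
    n = 200
    df = pd.DataFrame({
        "academic_age": rng.uniform(2, 35, n),    # years since Ph.D.
        "pct_research": rng.uniform(20, 80, n),   # % of appointment devoted to research
    })
    expected = np.exp(0.8 + 0.05 * df["academic_age"] + 0.01 * df["pct_research"])
    df["publications"] = rng.poisson(expected)

    X = sm.add_constant(df[["academic_age", "pct_research"]])
    fit = sm.GLM(df["publications"], X, family=sm.families.Poisson()).fit()

    # Deviance residuals, standardized, serve as a covariate-adjusted performance score.
    resid = fit.resid_deviance
    df["adjusted_score"] = (resid - resid.mean()) / resid.std()
    print(df["adjusted_score"].describe())
    ```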

  6. Performance Benchmarks for Scholarly Metrics Associated with Fisheries and Wildlife Faculty.

    PubMed

    Swihart, Robert K; Sundaram, Mekala; Höök, Tomas O; DeWoody, J Andrew; Kellner, Kenneth F

    2016-01-01

    Research productivity and impact are often considered in professional evaluations of academics, and performance metrics based on publications and citations increasingly are used in such evaluations. To promote evidence-based and informed use of these metrics, we collected publication and citation data for 437 tenure-track faculty members at 33 research-extensive universities in the United States belonging to the National Association of University Fisheries and Wildlife Programs. For each faculty member, we computed 8 commonly used performance metrics based on numbers of publications and citations, and recorded covariates including academic age (time since Ph.D.), sex, percentage of appointment devoted to research, and the sub-disciplinary research focus. Standardized deviance residuals from regression models were used to compare faculty after accounting for variation in performance due to these covariates. We also aggregated residuals to enable comparison across universities. Finally, we tested for temporal trends in citation practices to assess whether the "law of constant ratios", used to enable comparison of performance metrics between disciplines that differ in citation and publication practices, applied to fisheries and wildlife sub-disciplines when mapped to Web of Science Journal Citation Report categories. Our regression models reduced deviance by ¼ to ½. Standardized residuals for each faculty member, when combined across metrics as a simple average or weighted via factor analysis, produced similar results in terms of performance based on percentile rankings. Significant variation was observed in scholarly performance across universities, after accounting for the influence of covariates. In contrast to findings for other disciplines, normalized citation ratios for fisheries and wildlife sub-disciplines increased across years. Increases were comparable for all sub-disciplines except ecology. We discuss the advantages and limitations of our methods

  7. Enhanced Accident Tolerant LWR Fuels: Metrics Development

    SciTech Connect

    Shannon Bragg-Sitton; Lori Braase; Rose Montgomery; Chris Stanek; Robert Montgomery; Lance Snead; Larry Ott; Mike Billone

    2013-09-01

    The Department of Energy (DOE) Fuel Cycle Research and Development (FCRD) Advanced Fuels Campaign (AFC) is conducting research and development on enhanced Accident Tolerant Fuels (ATF) for light water reactors (LWRs). This mission emphasizes the development of novel fuel and cladding concepts to replace the current zirconium alloy-uranium dioxide (UO2) fuel system. The overall mission of the ATF research is to develop advanced fuels/cladding with improved performance, reliability and safety characteristics during normal operations and accident conditions, while minimizing waste generation. The initial effort will focus on implementation in operating reactors or reactors with design certifications. To initiate the development of quantitative metrics for ATR, a LWR Enhanced Accident Tolerant Fuels Metrics Development Workshop was held in October 2012 in Germantown, MD. This paper summarizes the outcome of that workshop and the current status of metrics development for LWR ATF.

  8. Proposing Metrics for Benchmarking Novel EEG Technologies Towards Real-World Measurements

    PubMed Central

    Oliveira, Anderson S.; Schlink, Bryan R.; Hairston, W. David; König, Peter; Ferris, Daniel P.

    2016-01-01

    Recent advances in electroencephalographic (EEG) acquisition allow for recordings using wet and dry sensors during whole-body motion. The large variety of commercially available EEG systems contrasts with the lack of established methods for objectively describing their performance during whole-body motion. Therefore, the aim of this study was to introduce methods for benchmarking the suitability of new EEG technologies for that context. Subjects performed an auditory oddball task using three different EEG systems (Biosemi wet—BSM, Cognionics Wet—Cwet, Cognionics Dry—Cdry). Nine subjects performed the oddball task while seated and walking on a treadmill. We calculated EEG epoch rejection rate, pre-stimulus noise (PSN), signal-to-noise ratio (SNR) and EEG amplitude variance across the P300 event window (CVERP) from a subset of 12 channels common to all systems. We also calculated test-retest reliability and the subjects’ level of comfort while using each system. Our results showed that using the traditional 75 μV rejection threshold, BSM and Cwet epoch rejection rates are ~25% and ~47% in the seated and walking conditions, respectively. However, this threshold rejects ~63% of epochs for Cdry in the seated condition and excludes 100% of epochs for the majority of subjects during walking. BSM showed predominantly no statistical differences between the seated and walking conditions for all metrics, whereas Cwet showed increases in PSN and CVERP, as well as reduced SNR in the walking condition. Data quality from Cdry in seated conditions was predominantly inferior in comparison to the wet systems. Test-retest reliability was mostly moderate/good for these variables, especially in seated conditions. In addition, subjects felt less discomfort and were motivated for longer recording periods while using wet EEG systems in comparison to the dry system. The proposed method was successful in identifying differences across systems that are mostly caused by motion
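
    The epoch rejection rate metric is straightforward to compute; the sketch below applies the 75 μV threshold to simulated epochs (any channel exceeding the threshold at any sample rejects the epoch), with array shapes and noise levels chosen arbitrarily rather than taken from the study.

    ```python
    import numpy as np

    # Minimal sketch of the epoch rejection rate metric: an epoch is rejected if any
    # channel exceeds the +/-75 microvolt threshold at any sample. Data are simulated;
    # real use would apply this to band-passed, baseline-corrected ERP epochs.
    rng = np.random.default_rng(2)
    n_epochs, n_channels, n_samples = 300, 12, 256
    epochs_uv = rng.normal(0.0, 15.0, size=(n_epochs, n_channels, n_samples))
    epochs_uv[rng.random(n_epochs) < 0.3] += 120.0   # contaminate ~30% with a large offset

    THRESHOLD_UV = 75.0
    rejected = np.any(np.abs(epochs_uv) > THRESHOLD_UV, axis=(1, 2))
    print(f"Epoch rejection rate: {rejected.mean():.1%}")
    ```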

  9. Proposing Metrics for Benchmarking Novel EEG Technologies Towards Real-World Measurements.

    PubMed

    Oliveira, Anderson S; Schlink, Bryan R; Hairston, W David; König, Peter; Ferris, Daniel P

    2016-01-01

    Recent advances in electroencephalographic (EEG) acquisition allow for recordings using wet and dry sensors during whole-body motion. The large variety of commercially available EEG systems contrasts with the lack of established methods for objectively describing their performance during whole-body motion. Therefore, the aim of this study was to introduce methods for benchmarking the suitability of new EEG technologies for that context. Subjects performed an auditory oddball task using three different EEG systems (Biosemi wet-BSM, Cognionics Wet-Cwet, Cognionics Dry-Cdry). Nine subjects performed the oddball task while seated and walking on a treadmill. We calculated EEG epoch rejection rate, pre-stimulus noise (PSN), signal-to-noise ratio (SNR) and EEG amplitude variance across the P300 event window (CVERP) from a subset of 12 channels common to all systems. We also calculated test-retest reliability and the subjects' level of comfort while using each system. Our results showed that using the traditional 75 μV rejection threshold, BSM and Cwet epoch rejection rates are ~25% and ~47% in the seated and walking conditions, respectively. However, this threshold rejects ~63% of epochs for Cdry in the seated condition and excludes 100% of epochs for the majority of subjects during walking. BSM showed predominantly no statistical differences between the seated and walking conditions for all metrics, whereas Cwet showed increases in PSN and CVERP, as well as reduced SNR in the walking condition. Data quality from Cdry in seated conditions was predominantly inferior in comparison to the wet systems. Test-retest reliability was mostly moderate/good for these variables, especially in seated conditions. In addition, subjects felt less discomfort and were motivated for longer recording periods while using wet EEG systems in comparison to the dry system. The proposed method was successful in identifying differences across systems that are mostly caused by motion-related artifacts and

  10. Benchmarking the performance of fixed-image receptor digital radiography systems. Part 2: system performance metric.

    PubMed

    Lee, Kam L; Bernardo, Michael; Ireland, Timothy A

    2016-06-01

    This is part two of a two-part study in benchmarking system performance of fixed digital radiographic systems. The study compares the system performance of seven fixed digital radiography systems based on quantitative metrics like modulation transfer function (sMTF), normalised noise power spectrum (sNNPS), detective quantum efficiency (sDQE) and entrance surface air kerma (ESAK). It was found that the most efficient image receptors (greatest sDQE) were not necessarily operating at the lowest ESAK. In part one of this study, sMTF is shown to depend on system configuration while sNNPS is shown to be relatively consistent across systems. Systems are ranked on their signal-to-noise ratio efficiency (sDQE) and their ESAK. Systems using the same equipment configuration do not necessarily have the same system performance. This implies radiographic practice at the site will have an impact on the overall system performance. In general, systems are more dose efficient at low dose settings. PMID:27222199
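
    For readers unfamiliar with the quantities involved, a detective quantum efficiency of this kind is conventionally obtained from the measured MTF, normalized NPS, and incident photon fluence as DQE(f) = MTF(f)^2 / (q · NNPS(f)). The sketch below uses placeholder curves and an assumed beam-quality conversion factor, not measurements from this study.

    ```python
    import numpy as np

    # Hedged sketch of the usual detective-quantum-efficiency relation behind such
    # system metrics: DQE(f) = MTF(f)^2 / (q * NNPS(f)), where q is the incident
    # photon fluence derived from the measured air kerma and a beam-quality-dependent
    # conversion factor. Arrays and constants below are illustrative placeholders.
    spatial_freq = np.linspace(0.05, 3.0, 60)        # cycles/mm
    smtf = np.exp(-0.6 * spatial_freq)               # placeholder presampled MTF
    snnps = 2.0e-5 * (1.0 + 0.2 * spatial_freq)      # placeholder normalized NPS, mm^2

    esak_ugy = 2.5                                   # entrance surface air kerma, microGy
    photons_per_mm2_per_ugy = 30174.0                # beam-quality factor (assumed, RQA5-like)
    q = esak_ugy * photons_per_mm2_per_ugy           # photons per mm^2

    sdqe = smtf**2 / (q * snnps)
    print(f"sDQE at {spatial_freq[0]:.2f} cycles/mm: {sdqe[0]:.2f}")
    ```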

  11. Understanding Acceptance of Software Metrics--A Developer Perspective

    ERIC Educational Resources Information Center

    Umarji, Medha

    2009-01-01

    Software metrics are measures of software products and processes. Metrics are widely used by software organizations to help manage projects, improve product quality and increase efficiency of the software development process. However, metrics programs tend to have a high failure rate in organizations, and developer pushback is one of the sources…

  12. Metrics. [measurement for effective software development and management

    NASA Technical Reports Server (NTRS)

    Mcgarry, Frank

    1991-01-01

    A development status evaluation is presented for practical software performance measurement, or 'metrics', in which major innovations have recently occurred. Metrics address such aspects of software performance as whether a software project is on schedule, how many errors can be expected from it, whether the methodology being used is effective and the relative quality of the software employed. Metrics may be characterized as explicit, analytical, and subjective. Attention is given to the bases for standards and the conduct of metrics research.

  13. Structural Life and Reliability Metrics: Benchmarking and Verification of Probabilistic Life Prediction Codes

    NASA Technical Reports Server (NTRS)

    Litt, Jonathan S.; Soditus, Sherry; Hendricks, Robert C.; Zaretsky, Erwin V.

    2002-01-01

    Over the past two decades there has been considerable effort by NASA Glenn and others to develop probabilistic codes to predict with reasonable engineering certainty the life and reliability of critical components in rotating machinery and, more specifically, in the rotating sections of airbreathing and rocket engines. These codes have, to a very limited extent, been verified with relatively small bench rig type specimens under uniaxial loading. Because of the small and very narrow database the acceptance of these codes within the aerospace community has been limited. An alternate approach to generating statistically significant data under complex loading and environments simulating aircraft and rocket engine conditions is to obtain, catalog and statistically analyze actual field data. End users of the engines, such as commercial airlines and the military, record and store operational and maintenance information. This presentation describes a cooperative program between the NASA GRC, United Airlines, USAF Wright Laboratory, U.S. Army Research Laboratory and Australian Aeronautical & Maritime Research Laboratory to obtain and analyze these airline data for selected components such as blades, disks and combustors. These airline data will be used to benchmark and compare existing life prediction codes.
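
    The abstract does not name the statistical treatment, but a common way to analyze such field life data is a two-parameter Weibull fit whose percentiles (for example the L10 life) can then be compared against probabilistic code predictions. The sketch below uses simulated removal times and ignores censoring for brevity.

    ```python
    import numpy as np
    from scipy import stats

    # Hedged sketch: one common way to analyze field life data of the kind described
    # (component hours at removal) is a two-parameter Weibull fit, whose percentiles
    # (e.g., L10 life) can be compared against probabilistic code predictions.
    # Hours below are simulated, and censoring is ignored for brevity.
    rng = np.random.default_rng(3)
    hours_at_removal = stats.weibull_min.rvs(c=2.2, scale=18_000, size=120, random_state=rng)

    shape, loc, scale = stats.weibull_min.fit(hours_at_removal, floc=0)  # fix location at 0
    l10 = stats.weibull_min.ppf(0.10, shape, loc=0, scale=scale)
    print(f"Weibull shape={shape:.2f}, characteristic life={scale:,.0f} h, L10={l10:,.0f} h")
    ```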

  14. Structural Life and Reliability Metrics: Benchmarking and Verification of Probabilistic Life Prediction Codes

    NASA Astrophysics Data System (ADS)

    Litt, Jonathan S.; Soditus, Sherry; Hendricks, Robert C.; Zaretsky, Erwin V.

    2002-10-01

    Over the past two decades there has been considerable effort by NASA Glenn and others to develop probabilistic codes to predict with reasonable engineering certainty the life and reliability of critical components in rotating machinery and, more specifically, in the rotating sections of airbreathing and rocket engines. These codes have, to a very limited extent, been verified with relatively small bench rig type specimens under uniaxial loading. Because of the small and very narrow database the acceptance of these codes within the aerospace community has been limited. An alternate approach to generating statistically significant data under complex loading and environments simulating aircraft and rocket engine conditions is to obtain, catalog and statistically analyze actual field data. End users of the engines, such as commercial airlines and the military, record and store operational and maintenance information. This presentation describes a cooperative program between the NASA GRC, United Airlines, USAF Wright Laboratory, U.S. Army Research Laboratory and Australian Aeronautical & Maritime Research Laboratory to obtain and analyze these airline data for selected components such as blades, disks and combustors. These airline data will be used to benchmark and compare existing life prediction codes.

  15. Can Human Capital Metrics Effectively Benchmark Higher Education with For-Profit Companies?

    ERIC Educational Resources Information Center

    Hagedorn, Kathy; Forlaw, Blair

    2007-01-01

    Last fall, Saint Louis University participated in St. Louis, Missouri's, first Human Capital Performance Study alongside several of the region's largest for-profit employers. The university also participated this year in the benchmarking of employee engagement factors conducted by the St. Louis Business Journal in its effort to quantify and select…

  16. A Question of Accountability: Looking beyond Federal Mandates for Metrics That Accurately Benchmark Community College Success

    ERIC Educational Resources Information Center

    Joch, Alan

    2014-01-01

    The need for increased accountability in higher education, and specifically in the nation's community colleges, is something most educators can agree on. The challenge has been, and continues to be, finding a system of metrics that meets the unique needs of two-year institutions versus their four-year counterparts. Last summer, President Obama unveiled…

  17. Developing Benchmarks to Measure Teacher Candidates' Performance

    ERIC Educational Resources Information Center

    Frazier, Laura Corbin; Brown-Hobbs, Stacy; Palmer, Barbara Martin

    2013-01-01

    This paper traces the development of teacher candidate benchmarks at one liberal arts institution. Begun as a classroom assessment activity over ten years ago, the benchmarks, through collaboration with professional development school partners, now serve as a primary measure of teacher candidates' performance in the final phases of the…

  18. Development of a California commercial building benchmarking database

    SciTech Connect

    Kinney, Satkartar; Piette, Mary Ann

    2002-05-17

    Building energy benchmarking is a useful starting point for commercial building owners and operators to target energy savings opportunities. There are a number of tools and methods for benchmarking energy use. Benchmarking based on regional data can provide more relevant information for California buildings than national tools such as Energy Star. This paper discusses issues related to benchmarking commercial building energy use and the development of Cal-Arch, a building energy benchmarking database for California. Currently Cal-Arch uses existing survey data from California's Commercial End Use Survey (CEUS), a largely underutilized wealth of information collected by California's major utilities. DOE's Commercial Building Energy Consumption Survey (CBECS) is used by a similar tool, Arch, and by a number of other benchmarking tools. Future versions of Arch/Cal-Arch will utilize additional data sources including modeled data and individual buildings to expand the database.

  19. Developing a Security Metrics Scorecard for Healthcare Organizations.

    PubMed

    Elrefaey, Heba; Borycki, Elizabeth; Kushniruk, Andrea

    2015-01-01

    In healthcare, information security is a key aspect of protecting a patient's privacy and ensuring systems availability to support patient care. Security managers need to measure the performance of security systems and this can be achieved by using evidence-based metrics. In this paper, we describe the development of an evidence-based security metrics scorecard specific to healthcare organizations. Study participants were asked to comment on the usability and usefulness of a prototype of a security metrics scorecard that was developed based on current research in the area of general security metrics. Study findings revealed that scorecards need to be customized for the healthcare setting in order for the security information to be useful and usable in healthcare organizations. The study findings resulted in the development of a security metrics scorecard that matches the healthcare security experts' information requirements. PMID:26718256

  20. Developing Metrics in Systems Integration (ISS Program COTS Integration Model)

    NASA Technical Reports Server (NTRS)

    Lueders, Kathryn

    2007-01-01

    This viewgraph presentation reviews some of the complications in developing metrics for systems integration. Specifically it reviews a case study of how two programs within NASA try to develop and measure performance while meeting the encompassing organizational goals.

  1. Hospital Energy Benchmarking Guidance - Version 1.0

    SciTech Connect

    Singer, Brett C.

    2009-09-08

    This document describes an energy benchmarking framework for hospitals. The document is organized as follows. The introduction provides a brief primer on benchmarking and its application to hospitals. The next two sections discuss special considerations including the identification of normalizing factors. The presentation of metrics is preceded by a description of the overall framework and the rationale for the grouping of metrics. Following the presentation of metrics, a high-level protocol is provided. The next section presents draft benchmarks for some metrics; benchmarks are not available for many metrics owing to a lack of data. This document ends with a list of research needs for further development.
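
    As a hedged illustration of the kind of whole-building metric such a framework typically includes (the guide defines its own metric set), the sketch below computes a site energy use intensity normalized by floor area and compares it to a draft benchmark; the function names and all numbers are invented placeholders.

      # Hypothetical example: a floor-area-normalized energy use intensity (EUI)
      # compared against a draft benchmark value. All inputs are placeholders.
      def energy_use_intensity(annual_site_energy_kbtu: float, gross_floor_area_sqft: float) -> float:
          """Return site EUI in kBtu per square foot per year."""
          return annual_site_energy_kbtu / gross_floor_area_sqft

      def benchmark_ratio(eui: float, benchmark_eui: float) -> float:
          """A ratio above 1.0 suggests the hospital uses more energy than the benchmark."""
          return eui / benchmark_eui

      eui = energy_use_intensity(annual_site_energy_kbtu=52_000_000, gross_floor_area_sqft=220_000)
      print(f"Site EUI: {eui:.1f} kBtu/ft2-yr; ratio to benchmark: {benchmark_ratio(eui, 230.0):.2f}")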

  2. Advanced Life Support Research and Technology Development Metric

    NASA Technical Reports Server (NTRS)

    Hanford, A. J.

    2004-01-01

    The Metric is one of several measures employed by NASA to assess the Agency's progress as mandated by the United States Congress and the Office of Management and Budget. Because any measure must have a reference point, whether explicitly defined or implied, the Metric is a comparison between a selected ALS Project life support system and an equivalently detailed life support system using technology from the Environmental Control and Life Support System (ECLSS) for the International Space Station (ISS). This document provides the official calculation of the Advanced Life Support (ALS) Research and Technology Development Metric (the Metric) for Fiscal Year 2004. The values are primarily based on Systems Integration, Modeling, and Analysis (SIMA) Element approved software tools or reviewed and approved reference documents. For Fiscal Year 2004, the Advanced Life Support Research and Technology Development Metric value is 2.03 for an Orbiting Research Facility and 1.62 for an Independent Exploration Mission.
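
    The ratio at the heart of the Metric can be written out in a few lines. The sketch below is only illustrative: the ESM inputs are invented placeholders, not values from the official calculation, which is produced with SIMA-approved tools.

      # The Metric as defined above: ESM of the ISS-ECLSS-based reference system
      # divided by ESM of the equivalent ALS system. Inputs here are placeholders.
      def als_metric(esm_iss_eclss_kg: float, esm_als_kg: float) -> float:
          """Larger values mean the ALS technologies are lighter and less resource intensive."""
          return esm_iss_eclss_kg / esm_als_kg

      print(als_metric(esm_iss_eclss_kg=60_000, esm_als_kg=29_500))  # ~2.03 with these made-up inputs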

  3. Development of Technology Transfer Economic Growth Metrics

    NASA Technical Reports Server (NTRS)

    Mastrangelo, Christina M.

    1998-01-01

    The primary objective of this project is to determine the feasibility of producing technology transfer metrics that answer the question: Do NASA/MSFC technical assistance activities impact economic growth? The data for this project resides in a 7800-record database maintained by Tec-Masters, Incorporated. The technology assistance data results from survey responses from companies and individuals who have interacted with NASA via a Technology Transfer Agreement, or TTA. The goal of this project was to determine if the existing data could provide indications of increased wealth. This work demonstrates that there is evidence that companies that used NASA technology transfer have a higher job growth rate than the rest of the economy. It also shows that the jobs being supported are jobs in higher wage SIC codes, and this indicates improvements in personal wealth. Finally, this work suggests that with correct data, the wealth issue may be addressed.

  4. Developing scheduling benchmark tests for the Space Network

    NASA Technical Reports Server (NTRS)

    Moe, Karen L.; Happell, Nadine; Brady, Sean

    1993-01-01

    A set of benchmark tests was developed to analyze and measure Space Network scheduling characteristics and to assess the potential benefits of a proposed flexible scheduling concept. This paper discusses the role of the benchmark tests in evaluating alternative flexible scheduling approaches and defines a set of performance measurements. The paper describes the rationale for the benchmark tests as well as the benchmark components, which include models of the Tracking and Data Relay Satellite (TDRS), mission spacecraft, their orbital data, and flexible requests for communication services. Parameters which vary in the tests address the degree of request flexibility, the request resource load, and the number of events to schedule. Test results are evaluated based on time to process and schedule quality. Preliminary results and lessons learned are addressed.

  5. Developing Metrics for Managing Soybean Aphids

    Technology Transfer Automated Retrieval System (TEKTRAN)

    Stage-specific economic injury levels form the basis of integrated pest management for soybean aphid (Aphis glycines Matsumura) in soybean (Glycine max L.). Experimental objectives were to develop a procedure for calculating economic injury levels of the soybean aphid specific to the R2 (full bloom...

  6. Metrics in Urban Health: Current Developments and Future Prospects.

    PubMed

    Prasad, Amit; Gray, Chelsea Bettina; Ross, Alex; Kano, Megumi

    2016-01-01

    The research community has shown increasing interest in developing and using metrics to determine the relationships between urban living and health. In particular, we have seen a recent exponential increase in efforts aiming to investigate and apply metrics for urban health, especially the health impacts of the social and built environments as well as air pollution. A greater recognition of the need to investigate the impacts and trends of health inequities is also evident through more recent literature. Data availability and accuracy have improved through new affordable technologies for mapping, geographic information systems (GIS), and remote sensing. However, less research has been conducted in low- and middle-income countries where quality data are not always available, and capacity for analyzing available data may be limited. For this increased interest in research and development of metrics to be meaningful, the best available evidence must be accessible to decision makers to improve health impacts through urban policies. PMID:26789382

  7. Measures and metrics for software development

    NASA Technical Reports Server (NTRS)

    1984-01-01

    The evaluations of and recommendations for the use of software development measures based on the practical and analytical experience of the Software Engineering Laboratory are discussed. The basic concepts of measurement and system of classification for measures are described. The principal classes of measures defined are explicit, analytic, and subjective. Some of the major software measurement schemes appearing in the literature are derived. The applications of specific measures in a production environment are explained. These applications include prediction and planning, review and assessment, and evaluation and selection.

  9. Development of a Benchmark Example for Delamination Fatigue Growth Prediction

    NASA Technical Reports Server (NTRS)

    Krueger, Ronald

    2010-01-01

    The development of a benchmark example for cyclic delamination growth prediction is presented and demonstrated for a commercial code. The example is based on a finite element model of a Double Cantilever Beam (DCB) specimen, which is independent of the analysis software used and allows the assessment of the delamination growth prediction capabilities in commercial finite element codes. First, the benchmark result was created for the specimen. Second, starting from an initially straight front, the delamination was allowed to grow under cyclic loading in a finite element model of a commercial code. The number of cycles to delamination onset and the number of cycles during stable delamination growth for each growth increment were obtained from the analysis. In general, good agreement between the results obtained from the growth analysis and the benchmark results could be achieved by selecting the appropriate input parameters. Overall, the results are encouraging but further assessment for mixed-mode delamination is required.

  10. Developing a Benchmark Tool for Sustainable Consumption: An Iterative Process

    ERIC Educational Resources Information Center

    Heiskanen, E.; Timonen, P.; Nissinen, A.; Gronroos, J.; Honkanen, A.; Katajajuuri, J. -M.; Kettunen, J.; Kurppa, S.; Makinen, T.; Seppala, J.; Silvenius, F.; Virtanen, Y.; Voutilainen, P.

    2007-01-01

    This article presents the development process of a consumer-oriented, illustrative benchmarking tool enabling consumers to use the results of environmental life cycle assessment (LCA) to make informed decisions. LCA provides a wealth of information on the environmental impacts of products, but its results are very difficult to present concisely…

  11. The Applicability of Proposed Object-Oriented Metrics to Developer Feedback in Time to Impact Development

    NASA Technical Reports Server (NTRS)

    Neal, Ralph D.

    1996-01-01

    This paper looks closely at each of the software metrics generated by the McCabe Object-Oriented Tool(TM) and its ability to convey timely information to developers. The metrics are examined for meaningfulness in terms of the scale assignable to the metric by the rules of measurement theory and the software dimension being measured. Recommendations are made as to the proper use of each metric and its ability to influence development at an early stage. The metrics of the McCabe Object-Oriented Tool(TM) set were selected because of the tool's use in a couple of NASA IV&V projects.

  12. Development of Technology Readiness Level (TRL) Metrics and Risk Measures

    SciTech Connect

    Engel, David W.; Dalton, Angela C.; Anderson, K. K.; Sivaramakrishnan, Chandrika; Lansing, Carina

    2012-10-01

    This is an internal project milestone report to document the CCSI Element 7 team's progress on developing Technology Readiness Level (TRL) metrics and risk measures. In this report, we provide a brief overview of the current technology readiness assessment research, document the development of technology readiness levels (TRLs) specific to carbon capture technologies, describe the risk measures and uncertainty quantification approaches used in our research, and conclude by discussing the next steps that the CCSI Task 7 team aims to accomplish.

  13. Development of Management Metrics for Research and Technology

    NASA Technical Reports Server (NTRS)

    Sheskin, Theodore J.

    2003-01-01

    Professor Ted Sheskin from CSU will be tasked to research and investigate metrics that can be used to determine the technical progress for advanced development and research tasks. These metrics will be implemented in a software environment that hosts engineering design, analysis and management tools to be used to support power system and component research work at GRC. Professor Sheskin is an Industrial Engineer and has been involved in issues related to management of engineering tasks and will use his knowledge from this area to allow extrapolation into the research and technology management area. Over the course of the summer, Professor Sheskin will develop a bibliography of management papers covering current management methods that may be applicable to research management. At the completion of the summer work we expect to have him recommend a metric system to be reviewed prior to implementation in the software environment. This task has been discussed with Professor Sheskin and some review material has already been given to him.

  14. Pragmatic quality metrics for evolutionary software development models

    NASA Technical Reports Server (NTRS)

    Royce, Walker

    1990-01-01

    Due to the large number of product, project, and people parameters which impact large custom software development efforts, measurement of software product quality is a complex undertaking. Furthermore, the absolute perspective from which quality is measured (customer satisfaction) is intangible. While we probably can't say what the absolute quality of a software product is, we can determine the relative quality, the adequacy of this quality with respect to pragmatic considerations, and identify good and bad trends during development. While no two software engineers will ever agree on an optimum definition of software quality, they will agree that the most important perspective of software quality is its ease of change. We can call this flexibility, adaptability, or some other vague term, but the critical characteristic of software is that it is soft. The easier the product is to modify, the easier it is to achieve any other software quality perspective. This paper presents objective quality metrics derived from consistent lifecycle perspectives of rework which, when used in concert with an evolutionary development approach, can provide useful insight to produce better quality per unit cost/schedule or to achieve adequate quality more efficiently. The usefulness of these metrics is evaluated by applying them to a large, real world, Ada project.

  15. Learning Through Benchmarking: Developing a Relational, Prospective Approach to Benchmarking ICT in Learning and Teaching

    ERIC Educational Resources Information Center

    Ellis, Robert A.; Moore, Roger R.

    2006-01-01

    This study discusses benchmarking the use of information and communication technologies (ICT) in teaching and learning between two universities with different missions: one an Australian campus-based metropolitan university and the other a British distance-education provider. It argues that the differences notwithstanding, it is possible to…

  16. Verification and validation benchmarks.

    SciTech Connect

    Oberkampf, William Louis; Trucano, Timothy Guy

    2007-02-01

    Verification and validation (V&V) are the primary means to assess the accuracy and reliability of computational simulations. V&V methods and procedures have fundamentally improved the credibility of simulations in several high-consequence fields, such as nuclear reactor safety, underground nuclear waste storage, and nuclear weapon safety. Although the terminology is not uniform across engineering disciplines, code verification deals with assessing the reliability of the software coding, and solution verification deals with assessing the numerical accuracy of the solution to a computational model. Validation addresses the physics modeling accuracy of a computational simulation by comparing the computational results with experimental data. Code verification benchmarks and validation benchmarks have been constructed for a number of years in every field of computational simulation. However, no comprehensive guidelines have been proposed for the construction and use of V&V benchmarks. For example, the field of nuclear reactor safety has not focused on code verification benchmarks, but it has placed great emphasis on developing validation benchmarks. Many of these validation benchmarks are closely related to the operations of actual reactors at near-safety-critical conditions, as opposed to being more fundamental-physics benchmarks. This paper presents recommendations for the effective design and use of code verification benchmarks based on manufactured solutions, classical analytical solutions, and highly accurate numerical solutions. In addition, this paper presents recommendations for the design and use of validation benchmarks, highlighting the careful design of building-block experiments, the estimation of experimental measurement uncertainty for both inputs and outputs to the code, validation metrics, and the role of model calibration in validation. It is argued that the understanding of predictive capability of a computational model is built on the level of
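
    One concrete calculation that code-verification benchmarks built on manufactured or analytical solutions support is the observed order of accuracy. A minimal sketch, with invented error values:

      import math

      def observed_order(error_coarse: float, error_fine: float, refinement_ratio: float) -> float:
          """Estimate p in error ~ h**p from discretization errors on two grid levels."""
          return math.log(error_coarse / error_fine) / math.log(refinement_ratio)

      # e.g. halving the mesh spacing drops the error from 4.1e-3 to 1.05e-3
      print(observed_order(4.1e-3, 1.05e-3, refinement_ratio=2.0))  # ~1.97, close to formal 2nd order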

  17. Career performance trajectories of Olympic swimmers: benchmarks for talent development.

    PubMed

    Allen, Sian V; Vandenbogaerde, Tom J; Hopkins, William G

    2014-01-01

    The age-related progression of elite athletes to their career-best performances can provide benchmarks for talent development. The purpose of this study was to model career performance trajectories of Olympic swimmers to develop these benchmarks. We searched the Web for annual best times of swimmers who were top 16 in pool events at the 2008 or 2012 Olympics, from each swimmer's earliest available competitive performance through to 2012. There were 6959 times in the 13 events for each sex, for 683 swimmers, with 10 ± 3 performances per swimmer (mean ± s). Progression to peak performance was tracked with individual quadratic trajectories derived using a mixed linear model that included adjustments for better performance in Olympic years and for the use of full-body polyurethane swimsuits in 2009. Analysis of residuals revealed appropriate fit of quadratic trends to the data. The trajectories provided estimates of age of peak performance and the duration of the age window of trivial improvement and decline around the peak. Men achieved peak performance later than women (24.2 ± 2.1 vs. 22.5 ± 2.4 years), while peak performance occurred at later ages for the shorter distances for both sexes (∼1.5-2.0 years between sprint and distance-event groups). Men and women had a similar duration in the peak-performance window (2.6 ± 1.5 years) and similar progressions to peak performance over four years (2.4 ± 1.2%) and eight years (9.5 ± 4.8%). These data provide performance targets for swimmers aiming to achieve elite-level performance. PMID:24597644
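
    A toy version of the individual quadratic trajectories is sketched below: fit annual best times against age and read the age of peak performance off the vertex. The data points are fabricated, and the study itself used a mixed linear model with Olympic-year and swimsuit adjustments rather than a plain least-squares fit.

      import numpy as np

      age = np.array([16, 17, 18, 19, 20, 21, 22, 23, 24, 25], dtype=float)
      best_time = np.array([55.8, 54.9, 54.3, 53.8, 53.5, 53.3, 53.2, 53.25, 53.4, 53.6])  # seconds, invented

      a, b, c = np.polyfit(age, best_time, deg=2)  # best_time ~ a*age**2 + b*age + c
      peak_age = -b / (2 * a)                      # vertex of the fitted parabola
      print(f"Estimated age of peak performance: {peak_age:.1f} years")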

  18. Developing a Metrics-Based Online Strategy for Libraries

    ERIC Educational Resources Information Center

    Pagano, Joe

    2009-01-01

    Purpose: The purpose of this paper is to provide an introduction to the various web metrics tools that are available, and to indicate how these might be used in libraries. Design/methodology/approach: The paper describes ways in which web metrics can be used to inform strategic decision making in libraries. Findings: A framework of possible web…

  19. Hospital readiness for health information exchange: development of metrics associated with successful collaboration for quality improvement

    PubMed Central

    Korst, Lisa M.; Aydin, Carolyn E.; Signer, Jordana M. K.; Fink, Arlene

    2011-01-01

    Objective The development of readiness metrics for organizational participation in health information exchange is critical for monitoring progress toward, and achievement of, successful inter-organizational collaboration. In preparation for the development of a tool to measure readiness for data-sharing, we tested whether organizational capacities known to be related to readiness were associated with successful participation in an American data-sharing collaborative for quality improvement. Design Cross-sectional design, using an on-line survey of hospitals in a large, mature data-sharing collaborative organized for benchmarking and improvement in nursing care quality. Measurements Factor analysis was used to identify salient constructs, and identified factors were analyzed with respect to “successful” participation. “Success” was defined as the incorporation of comparative performance data into the hospital dashboard. Results The most important factor in predicting success included survey items measuring the strength of organizational leadership in fostering a culture of quality improvement (QI Leadership): 1) presence of a supportive hospital executive; 2) the extent to which a hospital values data; 3) the presence of leaders’ vision for how the collaborative advances the hospital’s strategic goals; 4) hospital use of the collaborative data to track quality outcomes; and 5) staff recognition of a strong mandate for collaborative participation (α = 0.84, correlation with Success 0.68 [P < 0.0001]). Conclusion The data emphasize the importance of hospital QI Leadership in collaboratives that aim to share data for QI or safety purposes. Such metrics should prove useful in the planning and development of this complex form of inter-organizational collaboration. PMID:21330191
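
    The internal-consistency statistic quoted above (α = 0.84 for the QI Leadership factor) is Cronbach's alpha. A minimal sketch of that calculation on invented survey responses (rows are hospitals, columns are the five Likert-scaled items):

      import numpy as np

      def cronbach_alpha(items) -> float:
          """items: 2-D array, rows = respondents, columns = survey items."""
          items = np.asarray(items, dtype=float)
          k = items.shape[1]
          item_variances = items.var(axis=0, ddof=1).sum()
          total_variance = items.sum(axis=1).var(ddof=1)
          return (k / (k - 1)) * (1 - item_variances / total_variance)

      responses = np.array([[4, 5, 4, 4, 5],
                            [3, 3, 4, 3, 3],
                            [5, 5, 5, 4, 5],
                            [2, 3, 2, 3, 2],
                            [4, 4, 5, 4, 4]])
      print(f"alpha = {cronbach_alpha(responses):.2f}")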

  20. 40 CFR 141.540 - Who has to develop a disinfection benchmark?

    Code of Federal Regulations, 2010 CFR

    2010-07-01

    ... Title 40, Protection of Environment (2010-07-01 edition), ENVIRONMENTAL PROTECTION AGENCY (CONTINUED)... Disinfection-Systems Serving Fewer Than 10,000 People, Disinfection Benchmark, § 141.540: Who has to develop a disinfection benchmark?...

  1. Benchmarks and Quality Assurance for Online Course Development in Higher Education

    ERIC Educational Resources Information Center

    Wang, Hong

    2008-01-01

    As online education has entered the main stream of the U.S. higher education, quality assurance in online course development has become a critical topic in distance education. This short article summarizes the major benchmarks related to online course development, listing and comparing the benchmarks of the National Education Association (NEA),…

  2. Development of a Benchmark Hydroclimate Data Library for N. America

    NASA Astrophysics Data System (ADS)

    Lall, U.; Cook, E.

    2001-12-01

    This poster presents the recommendations of an international workshop held May 24-25, 2001, at the Lamont-Doherty Earth Observatory, Palisades, New York. The purpose of the workshop was to: (1) Identify the needs for a continental and eventually global benchmark hydroclimatic dataset; (2) Evaluate how they are currently being met in the 3 countries of N. America; and (3) Identify the main scientific and institutional challenges in improving access, and associated implementation strategies to improve the data elements and access. An initial focus on N. American streamflow was suggested. The estimation of streamflow (or its specific statistics) at ungaged, poorly gaged locations or locations with a substantial modification of the hydrologic regime was identified as a priority. The potential for the use of extended (to 1856) climate records and of tree rings and other proxies (that may go back multiple centuries) for the reconstruction of a comprehensive data set of concurrent hydrologic and climate fields was considered. Specific recommendations for the implementation of a research program to support the development and enhance availability of the products in conjunction with the major federal and state agencies in the three countries of continental N. America were made. The implications of these recommendations for the Hydrologic Information Systems initiative of the Consortium of Universities for the Advancement of Hydrologic Science are discussed.

  3. Metrics Evolution in an Energy Research & Development Program

    SciTech Connect

    Brent Dixon

    2011-08-01

    All technology programs progress through three phases: Discovery, Definition, and Deployment. The form and application of program metrics needs to evolve with each phase. During the discovery phase, the program determines what is achievable. A set of tools is needed to define program goals, to analyze credible technical options, and to ensure that the options are compatible and meet the program objectives. A metrics system that scores the potential performance of technical options is part of this system of tools, supporting screening of concepts and aiding in the overall definition of objectives. During the definition phase, the program defines what specifically is wanted. What is achievable is translated into specific systems and specific technical options are selected and optimized. A metrics system can help with the identification of options for optimization and the selection of the option for deployment. During the deployment phase, the program shows that the selected system works. Demonstration projects are established and classical systems engineering is employed. During this phase, the metrics communicate system performance. This paper discusses an approach to metrics evolution within the Department of Energy's Nuclear Fuel Cycle R&D Program, which is working to improve the sustainability of nuclear energy.

  4. Development and application of freshwater sediment-toxicity benchmarks for currently used pesticides.

    PubMed

    Nowell, Lisa H; Norman, Julia E; Ingersoll, Christopher G; Moran, Patrick W

    2016-04-15

    Sediment-toxicity benchmarks are needed to interpret the biological significance of currently used pesticides detected in whole sediments. Two types of freshwater sediment benchmarks for pesticides were developed using spiked-sediment bioassay (SSB) data from the literature. These benchmarks can be used to interpret sediment-toxicity data or to assess the potential toxicity of pesticides in whole sediment. The Likely Effect Benchmark (LEB) defines a pesticide concentration in whole sediment above which there is a high probability of adverse effects on benthic invertebrates, and the Threshold Effect Benchmark (TEB) defines a concentration below which adverse effects are unlikely. For compounds without available SSBs, benchmarks were estimated using equilibrium partitioning (EqP). When a sediment sample contains a pesticide mixture, benchmark quotients can be summed for all detected pesticides to produce an indicator of potential toxicity for that mixture. Benchmarks were developed for 48 pesticide compounds using SSB data and 81 compounds using the EqP approach. In an example application, data for pesticides measured in sediment from 197 streams across the United States were evaluated using these benchmarks, and compared to measured toxicity from whole-sediment toxicity tests conducted with the amphipod Hyalella azteca (28-d exposures) and the midge Chironomus dilutus (10-d exposures). Amphipod survival, weight, and biomass were significantly and inversely related to summed benchmark quotients, whereas midge survival, weight, and biomass showed no relationship to benchmarks. Samples with LEB exceedances were rare (n=3), but all were toxic to amphipods (i.e., significantly different from control). Significant toxicity to amphipods was observed for 72% of samples exceeding one or more TEBs, compared to 18% of samples below all TEBs. Factors affecting toxicity below TEBs may include the presence of contaminants other than pesticides, physical/chemical characteristics

  5. Development and application of freshwater sediment-toxicity benchmarks for currently used pesticides

    USGS Publications Warehouse

    Nowell, Lisa H.; Norman, Julia E.; Ingersoll, Christopher G.; Moran, Patrick W.

    2016-01-01

    Sediment-toxicity benchmarks are needed to interpret the biological significance of currently used pesticides detected in whole sediments. Two types of freshwater sediment benchmarks for pesticides were developed using spiked-sediment bioassay (SSB) data from the literature. These benchmarks can be used to interpret sediment-toxicity data or to assess the potential toxicity of pesticides in whole sediment. The Likely Effect Benchmark (LEB) defines a pesticide concentration in whole sediment above which there is a high probability of adverse effects on benthic invertebrates, and the Threshold Effect Benchmark (TEB) defines a concentration below which adverse effects are unlikely. For compounds without available SSBs, benchmarks were estimated using equilibrium partitioning (EqP). When a sediment sample contains a pesticide mixture, benchmark quotients can be summed for all detected pesticides to produce an indicator of potential toxicity for that mixture. Benchmarks were developed for 48 pesticide compounds using SSB data and 81 compounds using the EqP approach. In an example application, data for pesticides measured in sediment from 197 streams across the United States were evaluated using these benchmarks, and compared to measured toxicity from whole-sediment toxicity tests conducted with the amphipod Hyalella azteca (28-d exposures) and the midge Chironomus dilutus (10-d exposures). Amphipod survival, weight, and biomass were significantly and inversely related to summed benchmark quotients, whereas midge survival, weight, and biomass showed no relationship to benchmarks. Samples with LEB exceedances were rare (n = 3), but all were toxic to amphipods (i.e., significantly different from control). Significant toxicity to amphipods was observed for 72% of samples exceeding one or more TEBs, compared to 18% of samples below all TEBs. Factors affecting toxicity below TEBs may include the presence of contaminants other than pesticides, physical
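
    The summed benchmark-quotient indicator described in both records above reduces to dividing each detected concentration by its Threshold Effect Benchmark and summing across the mixture. A sketch with invented concentrations and TEB values, not the published benchmarks:

      # Minimal sketch of a summed benchmark quotient for a pesticide mixture.
      # Concentrations and TEB values below are placeholders, not USGS benchmarks.
      def summed_benchmark_quotient(concentrations_ug_kg: dict, tebs_ug_kg: dict) -> float:
          """Sum of concentration/TEB over all detected pesticides in a sample."""
          return sum(c / tebs_ug_kg[name] for name, c in concentrations_ug_kg.items())

      sample = {"bifenthrin": 4.2, "chlorpyrifos": 1.1}   # ug/kg dry weight (invented)
      teb = {"bifenthrin": 3.0, "chlorpyrifos": 5.0}      # invented TEB values
      sbq = summed_benchmark_quotient(sample, teb)
      print(f"Summed TEB quotient: {sbq:.2f}  (values above 1 suggest potential toxicity)")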

  6. Developing Image Processing Meta-Algorithms with Data Mining of Multiple Metrics

    PubMed Central

    Cunha, Alexandre; Toga, A. W.; Parker, D. Stott

    2014-01-01

    People often use multiple metrics in image processing, but here we take a novel approach of mining the values of batteries of metrics on image processing results. We present a case for extending image processing methods to incorporate automated mining of multiple image metric values. Here by a metric we mean any image similarity or distance measure, and in this paper we consider intensity-based and statistical image measures and focus on registration as an image processing problem. We show how it is possible to develop meta-algorithms that evaluate different image processing results with a number of different metrics and mine the results in an automated fashion so as to select the best results. We show that the mining of multiple metrics offers a variety of potential benefits for many image processing problems, including improved robustness and validation. PMID:24653748
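
    A toy illustration of the meta-algorithm idea, under the assumption that several candidate registration results are scored with a battery of metrics and the result most metrics prefer is kept; the images, metric choices, and candidates are all synthetic.

      import numpy as np

      def mse(a, b):
          return float(np.mean((a - b) ** 2))

      def neg_ncc(a, b):
          """Negative normalized cross-correlation, so that lower is better for every metric."""
          a, b = a - a.mean(), b - b.mean()
          return float(-(a * b).sum() / (np.linalg.norm(a) * np.linalg.norm(b) + 1e-12))

      rng = np.random.default_rng(1)
      reference = rng.random((32, 32))
      candidates = {"result_a": reference + rng.normal(0, 0.05, (32, 32)),
                    "result_b": reference + rng.normal(0, 0.15, (32, 32))}

      votes = {name: 0 for name in candidates}
      for metric in (mse, neg_ncc):
          best = min(candidates, key=lambda n: metric(reference, candidates[n]))
          votes[best] += 1
      print("metric votes:", votes)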

  7. Benchmarking University Community Engagement: Developing a National Approach in Australia

    ERIC Educational Resources Information Center

    Garlick, Steve; Langworthy, Anne

    2008-01-01

    This article provides the background and describes the processes involved in establishing a national approach to benchmarking the way universities engage with their local and regional communities in Australia. Local and regional community engagement is a rapidly expanding activity in Australian public universities and is increasingly being seen as…

  8. A rationale for developing benchmarks for the treatment of muscle-invasive bladder cancer.

    PubMed

    Lee, Cheryl T

    2007-01-01

    Benchmarks are established standards of operation developed by a given group or industry generally designed to improve outcomes. The health care industry is increasingly required to develop such standards and document adherence to meet demands of regulatory bodies. Although established practice patterns exist for the treatment of invasive bladder cancer, there is significant treatment variation. This article provides a rationale for the development of benchmarks in the treatment of invasive bladder cancer. Such benchmarks may permit advances in treatment application and potentially improve patient outcomes. PMID:17208141

  9. Development of a perceptually calibrated objective metric of noise

    NASA Astrophysics Data System (ADS)

    Keelan, Brian W.; Jin, Elaine W.; Prokushkin, Sergey

    2011-01-01

    A system simulation model was used to create scene-dependent noise masks that reflect current performance of mobile phone cameras. Stimuli with different overall magnitudes of noise and with varying mixtures of red, green, blue, and luminance noises were included in the study. Eleven treatments in each of ten pictorial scenes were evaluated by twenty observers using the softcopy ruler method. In addition to determining the quality loss function in just noticeable differences (JNDs) for the average observer and scene, transformations for different combinations of observer sensitivity and scene susceptibility were derived. The psychophysical results were used to optimize an objective metric of isotropic noise based on system noise power spectra (NPS), which were integrated over a visual frequency weighting function to yield perceptually relevant variances and covariances in CIE L*a*b* space. Because the frequency weighting function is expressed in terms of cycles per degree at the retina, it accounts for display pixel size and viewing distance effects, so application-specific predictions can be made. Excellent results were obtained using only L* and a* variances and L*a* covariance, with relative weights of 100, 5, and 12, respectively. The positive a* weight suggests that the luminance (photopic) weighting is slightly narrow on the long wavelength side for predicting perceived noisiness. The L*a* covariance term, which is normally negative, reflects masking between L* and a* noise, as confirmed in informal evaluations. Test targets in linear sRGB and rendered L*a*b* spaces for each treatment are available at http://www.aptina.com/ImArch/ to enable other researchers to test metrics of their own design and calibrate them to JNDs of quality loss without performing additional observer experiments. Such JND-calibrated noise metrics are particularly valuable for comparing the impact of noise and other attributes, and for computing overall image quality.
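
    The final combination step reported above (L* and a* variances plus their covariance, weighted 100, 5, and 12) can be sketched as follows. The visual-frequency weighting of the noise power spectra is omitted here, so the synthetic residual images simply stand in for the perceptually filtered noise.

      import numpy as np

      def objective_noise_metric(l_noise, a_noise) -> float:
          """l_noise, a_noise: zero-mean noise residuals in CIE L*a*b* units."""
          var_l = np.var(l_noise)
          var_a = np.var(a_noise)
          cov_la = np.cov(l_noise.ravel(), a_noise.ravel())[0, 1]
          return 100.0 * var_l + 5.0 * var_a + 12.0 * cov_la

      rng = np.random.default_rng(0)
      l = rng.normal(0.0, 1.2, size=(64, 64))
      a = 0.4 * l + rng.normal(0.0, 0.8, size=(64, 64))   # partially correlated chroma noise
      print(f"Objective noise value: {objective_noise_metric(l, a):.1f}")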

  10. Performance metric development for a group state estimator in airborne UHF GMTI applications

    NASA Astrophysics Data System (ADS)

    Elwell, Ryan A.

    2013-05-01

    This paper describes the development and implementation of evaluation metrics for group state estimator (GSE, i.e. group tracking) algorithms. Key differences between group tracker metrics and individual tracker metrics are the method used for track-to-truth association and the characterization of group raid size. Another significant contribution of this work is the incorporation of measured radar performance in assessing tracker performance. The result of this work is a set of measures of performance derived from canonical individual target tracker metrics, extended to characterize the additional information provided by a group tracker. The paper discusses additional considerations in group tracker evaluation, including the definition of a group and group-to-group confusion. Metrics are computed on real field data to provide examples of real-world analysis, demonstrating an approach which provides characterization of group tracker performance, independent of the sensor's performance.

  11. Development of a Quantitative Decision Metric for Selecting the Most Suitable Discretization Method for SN Transport Problems

    NASA Astrophysics Data System (ADS)

    Schunert, Sebastian

    In this work we develop a quantitative decision metric for spatial discretization methods of the SN equations. The quantitative decision metric utilizes performance data from selected test problems for computing a fitness score that is used for the selection of the most suitable discretization method for a particular SN transport application. The fitness score is aggregated as a weighted geometric mean of single performance indicators representing various performance aspects relevant to the user. Thus, the fitness function can be adjusted to the particular needs of the code practitioner by adding/removing single performance indicators or changing their importance via the supplied weights. Within this work a special, broad class of methods is considered, referred to as nodal methods. This class is naturally comprised of the DGFEM methods of all function space families. Within this work it is also shown that the Higher Order Diamond Difference (HODD) method is a nodal method. Building on earlier findings that the Arbitrarily High Order Method of the Nodal type (AHOTN) is also a nodal method, a generalized finite-element framework is created to yield as special cases various methods that were developed independently using profoundly different formalisms. A selection of test problems, each related to a certain performance aspect, is considered: a Method of Manufactured Solutions (MMS) test suite for assessing accuracy and execution time, Lathrop's test problem for assessing resilience against occurrence of negative fluxes, and a simple, homogeneous cube test problem to verify if a method possesses the thick diffusive limit. The contending methods are implemented as efficiently as possible under a common SN transport code framework to level the playing field for a fair comparison of their computational load. Numerical results are presented for all three test problems and a qualitative rating of each method's performance is provided for each aspect: accuracy
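
    The aggregation rule stated above, a weighted geometric mean of single performance indicators, is easy to make concrete. The indicator names, scores, and weights below are invented for illustration:

      import math

      def fitness_score(indicators: dict, weights: dict) -> float:
          """Weighted geometric mean: prod(x_i ** w_i) ** (1 / sum(w_i))."""
          total_w = sum(weights[k] for k in indicators)
          return math.prod(indicators[k] ** weights[k] for k in indicators) ** (1.0 / total_w)

      scores = {"accuracy": 0.9, "runtime": 0.6, "robustness": 0.8}    # normalized 0-1, higher is better
      weights = {"accuracy": 3.0, "runtime": 1.0, "robustness": 2.0}   # user-supplied priorities
      print(f"fitness = {fitness_score(scores, weights):.3f}")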

  12. Development of Benchmark Examples for Quasi-Static Delamination Propagation and Fatigue Growth Predictions

    NASA Technical Reports Server (NTRS)

    Krueger, Ronald

    2012-01-01

    The development of benchmark examples for quasi-static delamination propagation and cyclic delamination onset and growth prediction is presented and demonstrated for Abaqus/Standard. The example is based on a finite element model of a Double-Cantilever Beam specimen. The example is independent of the analysis software used and allows the assessment of the automated delamination propagation, onset and growth prediction capabilities in commercial finite element codes based on the virtual crack closure technique (VCCT). First, a quasi-static benchmark example was created for the specimen. Second, based on the static results, benchmark examples for cyclic delamination growth were created. Third, the load-displacement relationship from a propagation analysis and the benchmark results were compared, and good agreement could be achieved by selecting the appropriate input parameters. Fourth, starting from an initially straight front, the delamination was allowed to grow under cyclic loading. The number of cycles to delamination onset and the number of cycles during delamination growth for each growth increment were obtained from the automated analysis and compared to the benchmark examples. Again, good agreement between the results obtained from the growth analysis and the benchmark results could be achieved by selecting the appropriate input parameters. The benchmarking procedure proved valuable by highlighting the issues associated with choosing the input parameters of the particular implementation. Selecting the appropriate input parameters, however, was not straightforward and often required an iterative procedure. Overall the results are encouraging, but further assessment for mixed-mode delamination is required.

  13. Development and Application of Benchmark Examples for Mode II Static Delamination Propagation and Fatigue Growth Predictions

    NASA Technical Reports Server (NTRS)

    Krueger, Ronald

    2011-01-01

    The development of benchmark examples for static delamination propagation and cyclic delamination onset and growth prediction is presented and demonstrated for a commercial code. The example is based on a finite element model of an End-Notched Flexure (ENF) specimen. The example is independent of the analysis software used and allows the assessment of the automated delamination propagation, onset and growth prediction capabilities in commercial finite element codes based on the virtual crack closure technique (VCCT). First, static benchmark examples were created for the specimen. Second, based on the static results, benchmark examples for cyclic delamination growth were created. Third, the load-displacement relationship from a propagation analysis and the benchmark results were compared, and good agreement could be achieved by selecting the appropriate input parameters. Fourth, starting from an initially straight front, the delamination was allowed to grow under cyclic loading. The number of cycles to delamination onset and the number of cycles during delamination growth for each growth increment were obtained from the automated analysis and compared to the benchmark examples. Again, good agreement between the results obtained from the growth analysis and the benchmark results could be achieved by selecting the appropriate input parameters. The benchmarking procedure proved valuable by highlighting the issues associated with choosing the input parameters of the particular implementation. Selecting the appropriate input parameters, however, was not straightforward and often required an iterative procedure. Overall the results are encouraging, but further assessment for mixed-mode delamination is required.

  14. Development of Benchmark Examples for Static Delamination Propagation and Fatigue Growth Predictions

    NASA Technical Reports Server (NTRS)

    Kruger, Ronald

    2011-01-01

    The development of benchmark examples for static delamination propagation and cyclic delamination onset and growth prediction is presented and demonstrated for a commercial code. The example is based on a finite element model of an End-Notched Flexure (ENF) specimen. The example is independent of the analysis software used and allows the assessment of the automated delamination propagation, onset and growth prediction capabilities in commercial finite element codes based on the virtual crack closure technique (VCCT). First, static benchmark examples were created for the specimen. Second, based on the static results, benchmark examples for cyclic delamination growth were created. Third, the load-displacement relationship from a propagation analysis and the benchmark results were compared, and good agreement could be achieved by selecting the appropriate input parameters. Fourth, starting from an initially straight front, the delamination was allowed to grow under cyclic loading. The number of cycles to delamination onset and the number of cycles during stable delamination growth for each growth increment were obtained from the automated analysis and compared to the benchmark examples. Again, good agreement between the results obtained from the growth analysis and the benchmark results could be achieved by selecting the appropriate input parameters. The benchmarking procedure proved valuable by highlighting the issues associated with the input parameters of the particular implementation. Selecting the appropriate input parameters, however, was not straightforward and often required an iterative procedure. Overall, the results are encouraging but further assessment for mixed-mode delamination is required.

  15. Advanced Life Support Research and Technology Development Metric: Fiscal Year 2003

    NASA Technical Reports Server (NTRS)

    Hanford, A. J.

    2004-01-01

    This document provides the official calculation of the Advanced Life Support (ALS) Research and Technology Development Metric (the Metric) for Fiscal Year 2003. As such, the values herein are primarily based on Systems Integration, Modeling, and Analysis (SIMA) Element approved software tools or reviewed and approved reference documents. The Metric is one of several measures employed by the National Aeronautics and Space Administration (NASA) to assess the Agency's progress as mandated by the United States Congress and the Office of Management and Budget. Because any measure must have a reference point, whether explicitly defined or implied, the Metric is a comparison between a selected ALS Project life support system and an equivalently detailed life support system using technology from the Environmental Control and Life Support System (ECLSS) for the International Space Station (ISS). More specifically, the Metric is the ratio defined by the equivalent system mass (ESM) of a life support system for a specific mission using the ISS ECLSS technologies divided by the ESM for an equivalent life support system using the best ALS technologies. As defined, the Metric should increase in value as the ALS technologies become lighter, less power intensive, and require less volume. For Fiscal Year 2003, the Advanced Life Support Research and Technology Development Metric value is 1.47 for an Orbiting Research Facility and 1.36 for an Independent Exploration Mission.

  16. Developing Common Metrics for the Clinical and Translational Science Awards (CTSAs): Lessons Learned.

    PubMed

    Rubio, Doris M; Blank, Arthur E; Dozier, Ann; Hites, Lisle; Gilliam, Victoria A; Hunt, Joe; Rainwater, Julie; Trochim, William M

    2015-10-01

    The National Institutes of Health (NIH) Roadmap for Medical Research initiative, funded by the NIH Common Fund and offered through the Clinical and Translational Science Award (CTSA) program, developed more than 60 unique models for achieving the NIH goal of accelerating discoveries toward better public health. The variety of these models enabled participating academic centers to experiment with different approaches to fit their research environment. A central challenge related to the diversity of approaches is the ability to determine the success and contribution of each model. This paper describes the effort by the Evaluation Key Function Committee to develop and test a methodology for identifying a set of common metrics to assess the efficiency of clinical research processes and for pilot testing these processes for collecting and analyzing metrics. The project involved more than one-fourth of all CTSAs and resulted in useful information regarding the challenges in developing common metrics, the complexity and costs of acquiring data for the metrics, and limitations on the utility of the metrics in assessing clinical research performance. The results of this process led to the identification of lessons learned and recommendations for development and use of common metrics to evaluate the CTSA effort. PMID:26073891

  17. Thermal Performance Benchmarking (Presentation)

    SciTech Connect

    Moreno, G.

    2014-11-01

    This project will benchmark the thermal characteristics of automotive power electronics and electric motor thermal management systems. Recent vehicle systems will be benchmarked to establish baseline metrics, evaluate advantages and disadvantages of different thermal management systems, and identify areas of improvement to advance the state-of-the-art.

  18. Development and Implementation of a Design Metric for Systems Containing Long-Term Fluid Loops

    NASA Technical Reports Server (NTRS)

    Steele, John W.

    2016-01-01

    John Steele, a chemist and technical fellow from United Technologies Corporation, provided a water quality module to assist engineers and scientists with a metric tool to evaluate risks associated with the design of space systems with fluid loops. This design metric is a methodical, quantitative, lessons-learned based means to evaluate the robustness of a long-term fluid loop system design. The tool was developed by engineers from a cross-section of disciplines with decades of experience in problem resolution.

  19. A framework for benchmarking land models

    SciTech Connect

    Luo, Yiqi; Randerson, J.; Abramowitz, G.; Bacour, C.; Blyth, E.; Carvalhais, N.; Ciais, Philippe; Dalmonech, D.; Fisher, J.B.; Fisher, R.; Friedlingstein, P.; Hibbard, Kathleen A.; Hoffman, F. M.; Huntzinger, Deborah; Jones, C.; Koven, C.; Lawrence, David M.; Li, D.J.; Mahecha, M.; Niu, S.L.; Norby, Richard J.; Piao, S.L.; Qi, X.; Peylin, P.; Prentice, I.C.; Riley, William; Reichstein, M.; Schwalm, C.; Wang, Y.; Xia, J. Y.; Zaehle, S.; Zhou, X. H.

    2012-10-09

    Land models, which have been developed by the modeling community in the past few decades to predict future states of ecosystems and climate, have to be critically evaluated for their skill in simulating ecosystem responses and feedbacks to climate change. Benchmarking is an emerging procedure to measure performance of models against a set of defined standards. This paper proposes a benchmarking framework for evaluation of land model performance and, meanwhile, highlights major challenges at this infant stage of benchmark analysis. The framework includes (1) targeted aspects of model performance to be evaluated, (2) a set of benchmarks as defined references to test model performance, (3) metrics to measure and compare performance skills among models so as to identify model strengths and deficiencies, and (4) model improvement. Land models are required to simulate exchange of water, energy, carbon and sometimes other trace gases between the atmosphere and land surface, and should be evaluated for their simulations of biophysical processes, biogeochemical cycles, and vegetation dynamics in response to climate change across broad temporal and spatial scales. Thus, one major challenge is to select and define a limited number of benchmarks to effectively evaluate land model performance. The second challenge is to develop metrics for measuring mismatches between models and benchmarks. The metrics may include (1) a priori thresholds of acceptable model performance and (2) a scoring system to combine data–model mismatches for various processes at different temporal and spatial scales. The benchmark analyses should identify clues of weak model performance to guide future development, thus enabling improved predictions of future states of ecosystems and climate. The near-future research effort should be on development of a set of widely acceptable benchmarks that can be used to objectively, effectively, and reliably evaluate fundamental properties of land models
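
    A hedged sketch of the kind of mismatch metric and scoring system the framework calls for (the paper leaves the specific design open): a per-variable skill score from normalized RMSE, combined with user weights. The observations, simulations, and weights are invented.

      import numpy as np

      def skill_score(obs, sim) -> float:
          """1 / (1 + RMSE / std(obs)): 1 for a perfect match, approaching 0 for poor fits."""
          obs, sim = np.asarray(obs, dtype=float), np.asarray(sim, dtype=float)
          rmse = np.sqrt(np.mean((sim - obs) ** 2))
          return 1.0 / (1.0 + rmse / np.std(obs))

      benchmarks = {                                          # variable: (observed, simulated)
          "gpp": ([8.1, 7.4, 6.9, 9.0], [7.6, 7.9, 6.5, 8.4]),
          "lai": ([3.2, 3.0, 2.8, 3.5], [3.4, 2.9, 2.6, 3.8]),
      }
      weights = {"gpp": 2.0, "lai": 1.0}
      overall = sum(weights[v] * skill_score(o, s) for v, (o, s) in benchmarks.items()) / sum(weights.values())
      print(f"Overall benchmark score: {overall:.2f}")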

  20. Developing a Benchmarking Process in Perfusion: A Report of the Perfusion Downunder Collaboration

    PubMed Central

    Baker, Robert A.; Newland, Richard F.; Fenton, Carmel; McDonald, Michael; Willcox, Timothy W.; Merry, Alan F.

    2012-01-01

    Abstract: Improving and understanding clinical practice is an appropriate goal for the perfusion community. The Perfusion Downunder Collaboration has established a multi-center perfusion focused database aimed at achieving these goals through the development of quantitative quality indicators for clinical improvement through benchmarking. Data were collected using the Perfusion Downunder Collaboration database from procedures performed in eight Australian and New Zealand cardiac centers between March 2007 and February 2011. At the Perfusion Downunder Meeting in 2010, it was agreed by consensus, to report quality indicators (QI) for glucose level, arterial outlet temperature, and pCO2 management during cardiopulmonary bypass. The values chosen for each QI were: blood glucose ≥4 mmol/L and ≤10 mmol/L; arterial outlet temperature ≤37°C; and arterial blood gas pCO2 ≥ 35 and ≤45 mmHg. The QI data were used to derive benchmarks using the Achievable Benchmark of Care (ABC™) methodology to identify the incidence of QIs at the best performing centers. Five thousand four hundred and sixty-five procedures were evaluated to derive QI and benchmark data. The incidence of the blood glucose QI ranged from 37–96% of procedures, with a benchmark value of 90%. The arterial outlet temperature QI occurred in 16–98% of procedures with the benchmark of 94%; while the arterial pCO2 QI occurred in 21–91%, with the benchmark value of 80%. We have derived QIs and benchmark calculations for the management of several key aspects of cardiopulmonary bypass to provide a platform for improving the quality of perfusion practice. PMID:22730861
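
    A simplified illustration of deriving per-centre QI incidence and a best-performer benchmark in the spirit of the Achievable Benchmark of Care approach; the case counts are invented, and the real ABC method includes a Bayesian adjustment and a minimum-patient-fraction rule that are omitted here.

      centres = {                  # centre: (procedures meeting the QI, total procedures) -- invented
          "A": (460, 500),
          "B": (310, 400),
          "C": (255, 300),
          "D": (120, 320),
      }

      incidence = {c: met / total for c, (met, total) in centres.items()}

      # Aggregate rate of the top-performing centres (here simply the best two).
      best = sorted(centres, key=incidence.get, reverse=True)[:2]
      met = sum(centres[c][0] for c in best)
      total = sum(centres[c][1] for c in best)
      print({c: round(incidence[c], 2) for c in centres}, "benchmark:", round(met / total, 2))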

  1. Metrics for Developing an Endorsed Set of Radiographic Threat Surrogates for JINII/CAARS

    SciTech Connect

    Wurtz, R; Walston, S; Dietrich, D; Martz, H

    2009-02-11

    CAARS (Cargo Advanced Automated Radiography System) is developing x-ray dual energy and x-ray backscatter methods to automatically detect materials that are greater than Z=72 (hafnium). This works well for simple geometry materials, where most of the radiographic path is through one material. However, this is usually not the case. Instead, the radiographic path includes many materials of different lengths. Single energy can be used to compute μy_l which is related to areal density (mass per unit area) while dual energy yields more information. This report describes a set of metrics suitable and sufficient for characterizing the appearance of assemblies as detected by x-ray radiographic imaging systems, such as those being tested by Joint Integrated Non-Intrusive Inspection (JINII) or developed under CAARS. These metrics will be simulated both for threat assemblies and surrogate threat assemblies (such as are found in Roney et al. 2007) using geometrical and compositional information of the assemblies. The imaging systems are intended to distinguish assemblies containing high-Z material from those containing low-Z material, regardless of thickness, density, or compounds and mixtures. The systems in question operate on the principle of comparing images obtained by using two different x-ray end-point energies--so-called 'dual energy' imaging systems. At the direction of the DHS JINII sponsor, this report does not cover metrics that implement scattering, in the form of either forward-scattered radiation or high-Z detection systems operating on the principle of backscatter detection. Such methods and effects will be covered in a later report. The metrics described here are to be used to compare assemblies and not x-ray radiography systems. We intend to use these metrics to determine whether two assemblies do or do not look the same. We are tasked to develop a set of assemblies whose appearance using this class of detection systems is indistinguishable from the

  2. Development of a benchmarking model for lithium battery electrodes

    NASA Astrophysics Data System (ADS)

    Bergholz, Timm; Korte, Carsten; Stolten, Detlef

    2016-07-01

    This paper presents a benchmarking model to enable systematic selection of anode and cathode materials for lithium batteries in stationary applications, hybrid and battery electric vehicles. The model incorporates parameters for energy density, power density, safety, lifetime, costs and raw materials. Carbon, Li4Ti5O12 or TiO2 anodes combined with LiFePO4 cathodes are interesting candidates for application in hybrid power trains. Higher cost and raw material prioritization of stationary applications hinders the breakthrough of Li4Ti5O12, while a combination of TiO2 and LiFePO4 is suggested. The favored combinations resemble state-of-the-art materials, whereas novel cell chemistries must be optimized for cells in battery electric vehicles. In contrast to current research efforts, sulfur as a cathode material is excluded due to its low volumetric energy density and its known lifetime and safety issues. Lithium as an anode material is discarded due to safety issues linked to electrode melting and dendrite formation. A high capacity composite Li2MnO3·LiNi0.5Co0.5O2 and high voltage spinel LiNi0.5Mn1.5O4 cathode with silicon as anode material promise high energy densities with sufficient lifetime and safety properties if electrochemical and thermal stabilization of the electrolyte/electrode interfaces and bulk materials is achieved. The model allows a systematic top-down orientation of research on lithium batteries.
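
    The selection logic amounts to multi-criteria scoring of electrode combinations under application-specific weights. A minimal sketch, with invented normalized scores and weights rather than the paper's parameter values:

      CRITERIA = ["energy", "power", "safety", "lifetime", "cost", "raw_materials"]

      combinations = {   # normalized 0-1 scores, higher is better; purely illustrative
          "graphite + LiFePO4":  {"energy": 0.6, "power": 0.7, "safety": 0.9,
                                  "lifetime": 0.8, "cost": 0.8, "raw_materials": 0.9},
          "Li4Ti5O12 + LiFePO4": {"energy": 0.4, "power": 0.9, "safety": 1.0,
                                  "lifetime": 0.9, "cost": 0.5, "raw_materials": 0.8},
      }

      # Example weighting for a stationary application, which the abstract says
      # prioritizes cost and raw materials; weights are invented.
      weights = {"energy": 1, "power": 1, "safety": 2, "lifetime": 2, "cost": 3, "raw_materials": 3}

      def score(combo: dict) -> float:
          return sum(weights[c] * combo[c] for c in CRITERIA) / sum(weights.values())

      for name, combo in combinations.items():
          print(f"{name}: {score(combo):.2f}")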

  3. International E-Benchmarking: Flexible Peer Development of Authentic Learning Principles in Higher Education

    ERIC Educational Resources Information Center

    Leppisaari, Irja; Vainio, Leena; Herrington, Jan; Im, Yeonwook

    2011-01-01

    More and more, social technologies and virtual work methods are facilitating new ways of crossing boundaries in professional development and international collaborations. This paper examines the peer development of higher education teachers through the experiences of the IVBM project (International Virtual Benchmarking, 2009-2010). The…

  4. Metric transition

    NASA Technical Reports Server (NTRS)

    1992-01-01

    This report describes NASA's metric transition in terms of seven major program elements. Six are technical areas involving research, technology development, and operations; they are managed by specific Program Offices at NASA Headquarters. The final program element, Institutional Management, covers both NASA-wide functional management under control of NASA Headquarters and metric capability development at the individual NASA Field Installations. This area addresses issues common to all NASA program elements, including: Federal, state, and local coordination; standards; private industry initiatives; public-awareness initiatives; and employee training. The concluding section identifies current barriers and impediments to metric transition; NASA has no specific recommendations for consideration by the Congress.

  5. Using Web Metric Software to Drive: Mobile Website Development

    ERIC Educational Resources Information Center

    Tidal, Junior

    2011-01-01

    Many libraries have developed mobile versions of their websites. In order to understand their users, web developers have conducted both usability tests and focus groups, yet analytical software and web server logs can also be used to better understand users. Using data collected from these tools, the Ursula C. Schwerin Library has made informed…

  6. Using Participatory Action Research to Study the Implementation of Career Development Benchmarks at a New Zealand University

    ERIC Educational Resources Information Center

    Furbish, Dale S.; Bailey, Robyn; Trought, David

    2016-01-01

    Benchmarks for career development services at tertiary institutions have been developed by Careers New Zealand. The benchmarks are intended to provide standards derived from international best practices to guide career development services. A new career development service was initiated at a large New Zealand university just after the benchmarks…

  7. Benchmarking Organizational Career Development in the United States.

    ERIC Educational Resources Information Center

    Simonsen, Peggy

    Career development has evolved from the mid-1970s, when it was rarely linked with the word "organizational," to Walter Storey's work in organizational career development at General Electric in 1978. Its evolution has continued with career development workshops in organizations in the early 1980s to implementation of Corning's organizational career…

  8. Development of an Objective Space Suit Mobility Performance Metric Using Metabolic Cost and Functional Tasks

    NASA Technical Reports Server (NTRS)

    McFarland, Shane M.; Norcross, Jason

    2016-01-01

    Existing methods for evaluating EVA suit performance and mobility have historically concentrated on isolated joint range of motion and torque. However, these techniques do little to evaluate how well a suited crewmember can actually perform during an EVA. An alternative method of characterizing suited mobility through measurement of metabolic cost to the wearer has been evaluated at Johnson Space Center over the past several years. The most recent study involved six test subjects completing multiple trials of various functional tasks in each of three different space suits; the results indicated it was often possible to discern between different suit designs on the basis of metabolic cost alone. However, other variables may have an effect on real-world suited performance; namely, completion time of the task, the gravity field in which the task is completed, etc. While previous results have analyzed completion time, metabolic cost, and metabolic cost normalized to system mass individually, it is desirable to develop a single metric comprising these (and potentially other) performance metrics. This paper outlines the background upon which this single-score metric is determined to be feasible, and initial efforts to develop such a metric. Forward work includes variable coefficient determination and verification of the metric through repeated testing.

  9. Benchmarking infrastructure for mutation text mining

    PubMed Central

    2014-01-01

    Background Experimental research on the automatic extraction of information about mutations from texts is greatly hindered by the lack of consensus evaluation infrastructure for the testing and benchmarking of mutation text mining systems. Results We propose a community-oriented annotation and benchmarking infrastructure to support development, testing, benchmarking, and comparison of mutation text mining systems. The design is based on semantic standards, where RDF is used to represent annotations, an OWL ontology provides an extensible schema for the data, and SPARQL is used to compute various performance metrics, so that in many cases no programming is needed to analyze results from a text mining system. While large benchmark corpora for biological entity and relation extraction are focused mostly on genes, proteins, diseases, and species, our benchmarking infrastructure fills the gap for mutation information. The core infrastructure comprises (1) an ontology for modelling annotations, (2) SPARQL queries for computing performance metrics, and (3) a sizeable collection of manually curated documents that can support mutation grounding and mutation impact extraction experiments. Conclusion We have developed the principal infrastructure for the benchmarking of mutation text mining tasks. The use of RDF and OWL as the representation for corpora ensures extensibility. The infrastructure is suitable for out-of-the-box use in several important scenarios and is ready, in its current state, for initial community adoption. PMID:24568600
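
    The infrastructure computes its performance metrics with SPARQL queries over RDF annotations; the sketch below shows the same precision/recall/F1 arithmetic in plain Python over hypothetical gold and predicted mutation mentions, as a rough illustration of what such queries evaluate.

    ```python
    # Plain-Python sketch of the precision/recall/F1 arithmetic that the
    # benchmarking infrastructure computes via SPARQL over RDF annotations.
    # The document IDs and mutation mentions below are hypothetical.

    gold = {            # manually curated annotations: doc id -> mutations
        "doc1": {"p.V600E", "c.35G>A"},
        "doc2": {"p.R175H"},
    }
    predicted = {       # output of a hypothetical mutation text mining system
        "doc1": {"p.V600E"},
        "doc2": {"p.R175H", "p.G12D"},
    }

    def evaluate(gold, predicted):
        tp = fp = fn = 0
        for doc in gold.keys() | predicted.keys():
            g, p = gold.get(doc, set()), predicted.get(doc, set())
            tp += len(g & p)
            fp += len(p - g)
            fn += len(g - p)
        precision = tp / (tp + fp) if tp + fp else 0.0
        recall = tp / (tp + fn) if tp + fn else 0.0
        f1 = (2 * precision * recall / (precision + recall)
              if precision + recall else 0.0)
        return precision, recall, f1

    print("precision=%.2f recall=%.2f F1=%.2f" % evaluate(gold, predicted))
    ```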

  10. Development of Adherence Metrics for Caloric Restriction Interventions

    PubMed Central

    Pieper, Carl F.; Redman, Leanne M.; Bapkar, Manju; Roberts, Susan B.; Racette, Susan B.; Rochon, James; Martin, Corby K.; Kraus, William E.; Das, Sai; Williamson, Donald; Ravussin, Eric

    2011-01-01

    Background Objective measures are needed to quantify dietary adherence during caloric restriction (CR) while participants are free-living. One method to monitor adherence is to compare observed weight loss to the expected weight loss during a prescribed level of CR. Normograms (graphs) of expected weight loss can be created from mathematical modeling of weight change to a given level of CR, conditional on the individual's set of baseline characteristics. These normograms can then be used by counselors to help the participant adhere to their caloric target. Purpose (1) To develop models of weight loss over a year of caloric restriction given demographics (age and sex) and well-defined measurements of Body Mass Index, total daily energy expenditure (TDEE) and %CR. (2) To utilize these models to develop normograms given the level of caloric restriction and measures of these variables. Methods Seventy-seven individuals completing a 6-12 month CR intervention (CALERIE) had body weight and body composition measured frequently. Energy intake (and %CR) was estimated from TDEE (by doubly labeled water) and body composition (by DXA) at baseline and months 1, 3, 6 and 12. Body weight was modeled to determine the predictors and distribution of the expected trajectory of percent weight change over 12 months of caloric restriction. Results As expected, CR was related to change in body weight. Controlling for time-varying measures, initially simple models of the functional form indicated that the trajectory of percent weight change was predicted by a non-linear function of initial age, TDEE, %CR, and sex. Using these estimates, normograms for the weight change expected during a 25%CR were developed. Our model estimates that the mean weight loss (% change from baseline weight) for an individual adherent to a 25% CR regimen is -10.9±6.3% for females and -13.9±6.4% for men after 12 months. Limitations There are several limitations. Sample sizes are small (n=77), and, by design
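
    As a rough illustration of how such normograms could support adherence monitoring, the sketch below compares an observed percent weight change against a hypothetical expected trajectory and the reported 12-month standard deviation. The expected-trajectory function is a made-up placeholder; the actual CALERIE model is a nonlinear function of age, sex, TDEE and %CR and is not reproduced here.

    ```python
    # Sketch of a normogram-based adherence check: compare a participant's
    # observed percent weight change with an expected trajectory. The
    # trajectory function below is a made-up placeholder; only the 12-month
    # mean and SD for females at 25% CR are taken from the abstract.

    import math

    def expected_pct_weight_change(month, plateau=-10.9, rate=0.35):
        """Hypothetical expected % weight change under 25% CR (females),
        approaching the reported 12-month mean."""
        return plateau * (1.0 - math.exp(-rate * month))

    def adherence_flag(month, observed_pct_change, sd=6.3):
        """Flag 'below expectation' if the observed loss falls more than one
        SD short of the expected trajectory (SD from the 12-month value)."""
        expected = expected_pct_weight_change(month)
        return "on track" if observed_pct_change <= expected + sd else "below expectation"

    if __name__ == "__main__":
        for month, observed in [(3, -4.0), (6, -3.0), (12, -11.5)]:
            print(f"{month} months: {observed}% -> {adherence_flag(month, observed)}")
    ```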

  11. Developing Student Character through Disciplinary Curricula: An Analysis of UK QAA Subject Benchmark Statements

    ERIC Educational Resources Information Center

    Quinlan, Kathleen M.

    2016-01-01

    What aspects of student character are expected to be developed through disciplinary curricula? This paper examines the UK written curriculum through an analysis of the Quality Assurance Agency's subject benchmark statements for the most popular subjects studied in the UK. It explores the language, principles and intended outcomes that suggest…

  12. Developing Evidence for Action on the Postgraduate Experience: An Effective Local Instrument to Move beyond Benchmarking

    ERIC Educational Resources Information Center

    Sampson, K. A.; Johnston, L.; Comer, K.; Brogt, E.

    2016-01-01

    Summative and benchmarking surveys to measure the postgraduate student research experience are well reported in the literature. While useful, we argue that local instruments that provide formative resources with an academic development focus are also required. If higher education institutions are to move beyond the identification of issues and…

  13. Benchmarking and Threshold Standards in Higher Education. Staff and Educational Development Series.

    ERIC Educational Resources Information Center

    Smith, Helen, Ed.; Armstrong, Michael, Ed.; Brown, Sally, Ed.

    This book explores the issues involved in developing standards in higher education, examining the practical issues involved in benchmarking and offering a critical analysis of the problems associated with this developmental tool. The book focuses primarily on experience in the United Kingdom (UK), but looks also at international activity in this…

  14. Developing of Indicators of an E-Learning Benchmarking Model for Higher Education Institutions

    ERIC Educational Resources Information Center

    Sae-Khow, Jirasak

    2014-01-01

    This study was the development of e-learning indicators used as an e-learning benchmarking model for higher education institutes. Specifically, it aimed to: 1) synthesize the e-learning indicators; 2) examine content validity by specialists; and 3) explore appropriateness of the e-learning indicators. Review of related literature included…

  15. Development of oil product toxicity benchmarks using SSDs

    EPA Science Inventory

    Determining the sensitivity of a diversity of species to spilled oil and chemically dispersed oil continues to be a significant challenge in spill response and impact assessment. We developed species sensitivity distributions (SSDs) of acute toxicity values using standardized te...

  16. Defining Exercise Performance Metrics for Flight Hardware Development

    NASA Technical Reports Server (NTRS)

    Beyene, Nahon M.

    2004-01-01

    The space industry has prevailed over numerous design challenges in the spirit of exploration. Manned space flight entails creating products for use by humans, and the Johnson Space Center has pioneered this effort as NASA's center for manned space flight. NASA astronauts use a suite of flight exercise hardware to maintain strength for extravehicular activities and to minimize losses in muscle mass and bone mineral density. With a cycle ergometer, treadmill, and the Resistive Exercise Device available on the International Space Station (ISS), the Space Medicine community aspires to reproduce physical loading schemes that match exercise performance in Earth's gravity. The resistive exercise device presents the greatest challenge with the duty of accommodating 20 different exercises and many variations on the core set of exercises. This paper presents a methodology for capturing engineering parameters that can quantify proper resistive exercise performance techniques. For each specified exercise, the method provides engineering parameters on hand spacing, foot spacing, and positions of the point of load application at the starting point, midpoint, and end point of the exercise. As humans vary in height and fitness levels, the methodology presents values as ranges. In addition, this method shows engineers the proper load application regions on the human body. The methodology applies to resistive exercise in general and is in use for the current development of a Resistive Exercise Device. Exercise hardware systems must remain available for use and conducive to proper exercise performance as a contributor to mission success. The astronauts depend on exercise hardware to support extended stays aboard the ISS. Future plans towards exploration of Mars and beyond acknowledge the necessity of exercise. Continuous improvement in technology and our understanding of human health maintenance in space will allow us to support the exploration of Mars and the future of space

  17. Benchmark Dose Software Development and Maintenance Ten Berge Cxt Models

    EPA Science Inventory

    This report is intended to provide an overview of beta version 1.0 of the implementation of a concentration-time (CxT) model originally programmed and provided by Wil ten Berge (referred to hereafter as the ten Berge model). The recoding and development described here represent ...

  18. Development of PE Metrics Elementary Assessments for National Physical Education Standard 1

    ERIC Educational Resources Information Center

    Dyson, Ben; Placek, Judith H.; Graber, Kim C.; Fisette, Jennifer L.; Rink, Judy; Zhu, Weimo; Avery, Marybell; Franck, Marian; Fox, Connie; Raynes, De; Park, Youngsik

    2011-01-01

    This article describes how assessments in PE Metrics were developed following six steps: (a) determining test blueprint, (b) writing assessment tasks and scoring rubrics, (c) establishing content validity, (d) piloting assessments, (e) conducting item analysis, and (f) modifying the assessments based on analysis and expert opinion. A task force,…

  19. IBI METRIC DEVELOPMENT FOR STREAMS AND RIVERS IN WESTERN FORESTED MOUNTAINS AND ARID LANDS

    EPA Science Inventory

    In the western USA, development of metrics and indices of vertebrate assemblage condition in streams and rivers is challenged by low species richness, by strong natural gradients, by human impact gradients that co-vary with natural gradients, and by a shortage of minimally-distur...

  20. USING BROAD-SCALE METRICS TO DEVELOP INDICATORS OF WATERSHED VULNERABILITY IN THE OZARK MOUNTAINS (USA)

    EPA Science Inventory

    Multiple broad-scale landscape metrics were tested as potential indicators of total phosphorus (TP) concentration, total ammonia (TA) concentration, and Escherichia coli (E. coli) bacteria count, among 244 sub-watersheds in the Ozark Mountains (USA). Indicator models were develop...

  1. Performance Metrics Development Analysis for Information and Communications Technology Outsourcing: A Case Study

    ERIC Educational Resources Information Center

    Travis, James L., III

    2014-01-01

    This study investigated how and to what extent the development and use of the OV-5a operational architecture decomposition tree (OADT) from the Department of Defense (DoD) Architecture Framework (DoDAF) affects requirements analysis with respect to complete performance metrics for performance-based services acquisition of ICT under rigid…

  2. Developing and Benchmarking Native Linux Applications on Android

    NASA Astrophysics Data System (ADS)

    Batyuk, Leonid; Schmidt, Aubrey-Derrick; Schmidt, Hans-Gunther; Camtepe, Ahmet; Albayrak, Sahin

    Smartphones are becoming increasingly popular as more and more smartphone platforms emerge. Special attention has been gained by the open source platform Android, which was presented by the Open Handset Alliance (OHA), whose members include Google, Motorola, and HTC. Android uses a Linux kernel and a stripped-down userland with a custom Java VM set on top. The resulting system joins the advantages of both environments, while third parties are intended to develop only Java applications at the moment.

  3. Benchmarking for the Learning and Skills Sector.

    ERIC Educational Resources Information Center

    Owen, Jane

    This document is designed to introduce practitioners in the United Kingdom's learning and skills sector to the principles and practice of benchmarking. The first section defines benchmarking and differentiates metric, diagnostic, and process benchmarking. The remainder of the booklet details the following steps of the benchmarking process: (1) get…

  4. Benchmarks of fairness for health care reform: a policy tool for developing countries.

    PubMed Central

    Daniels, N.; Bryant, J.; Castano, R. A.; Dantes, O. G.; Khan, K. S.; Pannarunothai, S.

    2000-01-01

    Teams of collaborators from Colombia, Mexico, Pakistan, and Thailand have adapted a policy tool originally developed for evaluating health insurance reforms in the United States into "benchmarks of fairness" for assessing health system reform in developing countries. We describe briefly the history of the benchmark approach, the tool itself, and the uses to which it may be put. Fairness is a wide term that includes exposure to risk factors, access to all forms of care, and to financing. It also includes efficiency of management and resource allocation, accountability, and patient and provider autonomy. The benchmarks standardize the criteria for fairness. Reforms are then evaluated by scoring according to the degree to which they improve the situation, i.e. on a scale of -5 to 5, with zero representing the status quo. The object is to promote discussion about fairness across the disciplinary divisions that keep policy analysts and the public from understanding how trade-offs between different effects of reforms can affect the overall fairness of the reform. The benchmarks can be used at both national and provincial or district levels, and we describe plans for such uses in the collaborating sites. A striking feature of the adaptation process is that there was wide agreement on this ethical framework among the collaborating sites despite their large historical, political and cultural differences. PMID:10916911

  5. Development of a Computer-based Benchmarking and Analytical Tool. Benchmarking and Energy & Water Savings Tool in Dairy Plants (BEST-Dairy)

    SciTech Connect

    Xu, Tengfang; Flapper, Joris; Ke, Jing; Kramer, Klaas; Sathaye, Jayant

    2012-02-01

    The overall goal of the project is to develop a computer-based benchmarking and energy and water savings tool (BEST-Dairy) for use in the California dairy industry – including four dairy processes – cheese, fluid milk, butter, and milk powder.

  6. Development of Benchmark Examples for Delamination Onset and Fatigue Growth Prediction

    NASA Technical Reports Server (NTRS)

    Krueger, Ronald

    2011-01-01

    An approach for assessing the delamination propagation and growth capabilities in commercial finite element codes was developed and demonstrated for the Virtual Crack Closure Technique (VCCT) implementations in ABAQUS. The Double Cantilever Beam (DCB) specimen was chosen as an example. First, benchmark results to assess delamination propagation capabilities under static loading were created using models simulating specimens with different delamination lengths. For each delamination length modeled, the load and displacement at the load point were monitored. The mixed-mode strain energy release rate components were calculated along the delamination front across the width of the specimen. A failure index was calculated by correlating the results with the mixed-mode failure criterion of the graphite/epoxy material. The calculated critical loads and critical displacements for delamination onset for each delamination length modeled were used as a benchmark. The load/displacement relationship computed during automatic propagation should closely match the benchmark case. Second, starting from an initially straight front, the delamination was allowed to propagate based on the algorithms implemented in the commercial finite element software. The load-displacement relationship obtained from the propagation analysis results and the benchmark results were compared. Good agreements could be achieved by selecting the appropriate input parameters, which were determined in an iterative procedure.

  7. Development of a HEX-Z Partially Homogenized Benchmark Model for the FFTF Isothermal Physics Measurements

    SciTech Connect

    John D. Bess

    2012-05-01

    A series of isothermal physics measurements were performed as part of an acceptance testing program for the Fast Flux Test Facility (FFTF). A HEX-Z partially-homogenized benchmark model of the FFTF fully-loaded core configuration was developed for evaluation of these measurements. Evaluated measurements include the critical eigenvalue of the fully-loaded core, two neutron spectra, 32 reactivity effects measurements, an isothermal temperature coefficient, and low-energy gamma and electron spectra. Dominant uncertainties in the critical configuration include the placement of radial shielding around the core, reactor core assembly pitch, composition of the stainless steel components, plutonium content in the fuel pellets, and boron content in the absorber pellets. Calculations of criticality, reactivity effects measurements, and the isothermal temperature coefficient using MCNP5 and ENDF/B-VII.0 cross sections with the benchmark model are in good agreement with the benchmark experiment measurements. There is only some correlation between calculated and measured spectral measurements; homogenization of many of the core components may have impacted computational assessment of these measurements. This benchmark evaluation has been added to the IRPhEP Handbook.

  8. Coral growth on three reefs: development of recovery benchmarks using a space for time approach

    NASA Astrophysics Data System (ADS)

    Done, T. J.; Devantier, L. M.; Turak, E.; Fisk, D. A.; Wakeford, M.; van Woesik, R.

    2010-12-01

    This 14-year study (1989-2003) develops recovery benchmarks based on a period of very strong coral recovery in Acropora-dominated assemblages on the Great Barrier Reef (GBR) following major setbacks from the predatory sea-star Acanthaster planci in the early 1980s. A space for time approach was used in developing the benchmarks, made possible by the choice of three study reefs (Green Island, Feather Reef and Rib Reef), spread along 3 degrees of latitude (300 km) of the GBR. The sea-star outbreaks progressed north to south, causing death of corals that reached maximum levels in the years 1980 (Green), 1982 (Feather) and 1984 (Rib). The reefs were initially surveyed in 1989, 1990, 1993 and 1994, which represent recovery years 5-14 in the space for time protocol. Benchmark trajectories for coral abundance, colony sizes, coral cover and diversity were plotted against nominal recovery time (years 5-14) and defined as non-linear functions. A single survey of the same three reefs was conducted in 2003, when the reefs were nominally 1, 3 and 5 years into a second recovery period, following further Acanthaster impacts and coincident coral bleaching events around the turn of the century. The 2003 coral cover was marginally above the benchmark trajectory, but colony density (colonies.m-2) was an order of magnitude lower than the benchmark, and size structure was biased toward larger colonies that survived the turn of the century disturbances. The under-representation of small size classes in 2003 suggests that mass recruitment of corals had been suppressed, reflecting low regional coral abundance and depression of coral fecundity by recent bleaching events. The marginally higher cover and large colonies of 2003 were thus indicative of a depleted and aging assemblage not yet rejuvenated by a strong cohort of recruits.
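
    The benchmark trajectories described above are non-linear functions of nominal recovery time fitted to the space-for-time survey data. The sketch below illustrates that general approach with a saturating recovery curve fitted to invented coral-cover values; it does not reproduce the study's data or functional forms.

    ```python
    # Sketch of deriving a non-linear benchmark trajectory from survey data:
    # fit a saturating recovery curve (cover vs. years since disturbance) and
    # use it to judge a later survey. All numbers below are invented.

    import numpy as np
    from scipy.optimize import curve_fit

    def recovery_curve(t, cover_max, k):
        """Saturating recovery: percent coral cover after t years."""
        return cover_max * (1.0 - np.exp(-k * t))

    # Hypothetical space-for-time observations (recovery year, % coral cover)
    years = np.array([5, 6, 9, 10, 13, 14], dtype=float)
    cover = np.array([12, 16, 28, 31, 38, 40], dtype=float)

    params, _ = curve_fit(recovery_curve, years, cover, p0=[50.0, 0.1])
    cover_max, k = params

    # Benchmark check for a later survey at, e.g., nominal recovery year 5
    observed_cover = 15.0   # hypothetical % cover
    expected = recovery_curve(5.0, cover_max, k)
    print(f"fitted: cover_max={cover_max:.1f}%, k={k:.2f} per yr")
    print(f"year-5 benchmark={expected:.1f}%, observed={observed_cover:.1f}%")
    ```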

  9. Measuring in Metric.

    ERIC Educational Resources Information Center

    Sorenson, Juanita S.

    Eight modules for an in-service course on metric education for elementary teachers are included in this document. The modules are on an introduction to the metric system, length and basic prefixes, volume, mass, temperature, relationships within the metric system, and metric and English system relationships. The eighth one is on developing a…

  10. The relationship between settlement population size and sustainable development measured by two sustainability metrics

    SciTech Connect

    O'Regan, Bernadette Morrissey, John; Foley, Walter; Moles, Richard

    2009-04-15

    This paper reports on a study of the relative sustainability of 79 Irish villages, towns and a small city (collectively called 'settlements') classified by population size. Quantitative data on more than 300 economic, social and environmental attributes of each settlement were assembled into a database. Two aggregated metrics were selected to model the relative sustainability of settlements: Ecological Footprint (EF) and Sustainable Development Index (SDI). Subsequently these were aggregated to create a single Combined Sustainable Development Index. Creation of this database meant that metric calculations did not rely on proxies, and were therefore considered to be robust. Methods employed provided values for indicators at various stages of the aggregation process. This allowed both the first reported empirical analysis of the relationship between settlement sustainability and population size, and the elucidation of information provided at different stages of aggregation. At the highest level of aggregation, settlement sustainability increased with population size, but important differences amongst individual settlements were masked by aggregation. EF and SDI metrics ranked settlements in differing orders of relative sustainability. Aggregation of indicators to provide Ecological Footprint values was found to be especially problematic, and this metric was inadequately sensitive to distinguish amongst the relative sustainability achieved by all settlements. Many authors have argued that, for policy makers to be able to inform planning decisions using sustainability indicators, it is necessary that they adopt a toolkit of aggregated indicators. Here it is argued that to interpret correctly each aggregated metric value, policy makers also require a hierarchy of disaggregated component indicator values, each explained fully. Possible implications for urban planning are briefly reviewed.

  11. The State of Energy and Performance Benchmarking for Enterprise Servers

    NASA Astrophysics Data System (ADS)

    Fanara, Andrew; Haines, Evan; Howard, Arthur

    To address the server industry’s marketing focus on performance, benchmarking organizations have played a pivotal role in developing techniques to determine the maximum achievable performance level of a system. Generally missing has been an assessment of energy use to achieve that performance. The connection between performance and energy consumption is becoming necessary information for designers and operators as they grapple with power constraints in the data center. While industry and policy makers continue to strategize about a universal metric to holistically measure IT equipment efficiency, existing server benchmarks for various workloads could provide an interim proxy to assess the relative energy efficiency of general servers. This paper discusses ideal characteristics a future energy-performance benchmark might contain, suggests ways in which current benchmarks might be adapted to provide a transitional step to this end, and notes the need for multiple workloads to provide a holistic proxy for a universal metric.
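
    The interim proxy suggested here amounts to relating an existing benchmark's performance score to the power drawn while achieving it. A minimal sketch, with entirely hypothetical servers and numbers:

    ```python
    # Simple sketch of an interim energy-performance proxy: relate a
    # benchmark's performance score to measured power draw. Server names,
    # scores and wattages are hypothetical, not from any real benchmark.

    servers = [
        # (name, benchmark score in ops/s, average active power in watts)
        ("server_a", 180_000, 320.0),
        ("server_b", 150_000, 210.0),
        ("server_c", 220_000, 450.0),
    ]

    def performance_per_watt(score, watts):
        return score / watts

    ranked = sorted(servers, key=lambda s: performance_per_watt(s[1], s[2]),
                    reverse=True)
    for name, score, watts in ranked:
        print(f"{name}: {performance_per_watt(score, watts):,.0f} ops/s per watt")
    ```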

  12. HPGMG 1.0: A Benchmark for Ranking High Performance Computing Systems

    SciTech Connect

    Adams, Mark; Brown, Jed; Shalf, John; Straalen, Brian Van; Strohmaier, Erich; Williams, Sam

    2014-05-05

    This document provides an overview of the benchmark HPGMG for ranking large scale general purpose computers for use on the Top500 list [8]. We provide a rationale for the need for a replacement for the current metric HPL, some background of the Top500 list and the challenges of developing such a metric; we discuss our design philosophy and methodology, and an overview of the specification of the benchmark. The primary documentation with maintained details on the specification can be found at hpgmg.org and the Wiki and benchmark code itself can be found in the repository https://bitbucket.org/hpgmg/hpgmg.

  13. Development and Applications of Benchmark Examples for Static Delamination Propagation Predictions

    NASA Technical Reports Server (NTRS)

    Krueger, Ronald

    2013-01-01

    The development and application of benchmark examples for the assessment of quasi-static delamination propagation capabilities was demonstrated for ANSYS® and Abaqus/Standard®. The examples selected were based on finite element models of Double Cantilever Beam (DCB) and Mixed-Mode Bending (MMB) specimens. First, quasi-static benchmark results were created based on an approach developed previously. Second, the delamination was allowed to propagate under quasi-static loading from its initial location using the automated procedure implemented in ANSYS® and Abaqus/Standard®. Input control parameters were varied to study the effect on the computed delamination propagation. Overall, the benchmarking procedure proved valuable by highlighting the issues associated with choosing the appropriate input parameters for the VCCT implementations in ANSYS® and Abaqus/Standard®. However, further assessment for mixed-mode delamination fatigue onset and growth is required. Additionally, studies should include the assessment of the propagation capabilities in more complex specimens and on a structural level.

  14. Design and development of a community carbon cycle benchmarking system for CMIP5 models

    NASA Astrophysics Data System (ADS)

    Mu, M.; Hoffman, F. M.; Lawrence, D. M.; Riley, W. J.; Keppel-Aleks, G.; Randerson, J. T.

    2013-12-01

    Benchmarking has been widely used to assess the ability of atmosphere, ocean, sea ice, and land surface models to capture the spatial and temporal variability of observations during the historical period. For the carbon cycle and terrestrial ecosystems, the design and development of an open-source community platform has been an important goal as part of the International Land Model Benchmarking (ILAMB) project. Here we designed and developed a software system that enables the user to specify the models, benchmarks, and scoring systems so that results can be tailored to specific model intercomparison projects. We used this system to evaluate the performance of CMIP5 Earth system models (ESMs). Our scoring system used information from four different aspects of climate, including the climatological mean spatial pattern of gridded surface variables, seasonal cycle dynamics, the amplitude of interannual variability, and long-term decadal trends. We used this system to evaluate burned area, global biomass stocks, net ecosystem exchange, gross primary production, and ecosystem respiration from CMIP5 historical simulations. Initial results indicated that the multi-model mean often performed better than many of the individual models for most of the observational constraints.
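
    A scoring system of the kind described, covering the mean state, seasonal cycle, interannual variability and long-term trend, can be sketched for a single variable as below. The error-to-score mapping and the synthetic data are placeholders, not the ILAMB formulas or CMIP5 output.

    ```python
    # Sketch of a multi-aspect model-benchmark scoring scheme in the spirit
    # of the system described above: score mean state, seasonal amplitude,
    # interannual variability and trend against observations, then average.
    # The exact scoring formulas are not reproduced; the data are synthetic.

    import numpy as np

    def relative_error_score(model_value, obs_value):
        """Map a relative error onto a 0-1 score (1 = perfect agreement)."""
        return float(np.exp(-abs(model_value - obs_value) / (abs(obs_value) + 1e-12)))

    def score_model(model, obs, months_per_year=12):
        model, obs = np.asarray(model, float), np.asarray(obs, float)
        m_ann = model.reshape(-1, months_per_year).mean(axis=1)   # annual means
        o_ann = obs.reshape(-1, months_per_year).mean(axis=1)
        scores = {
            "mean_state": relative_error_score(model.mean(), obs.mean()),
            "seasonal_amplitude": relative_error_score(
                np.ptp(model.reshape(-1, months_per_year).mean(axis=0)),
                np.ptp(obs.reshape(-1, months_per_year).mean(axis=0))),
            "interannual_variability": relative_error_score(m_ann.std(), o_ann.std()),
            "trend": relative_error_score(
                np.polyfit(np.arange(m_ann.size), m_ann, 1)[0],
                np.polyfit(np.arange(o_ann.size), o_ann, 1)[0]),
        }
        scores["overall"] = float(np.mean(list(scores.values())))
        return scores

    if __name__ == "__main__":
        t = np.arange(20 * 12)  # 20 years of monthly values
        obs = 5.0 + 2.0 * np.sin(2 * np.pi * t / 12) + 0.010 * t
        model = 5.5 + 1.6 * np.sin(2 * np.pi * t / 12) + 0.012 * t
        for name, value in score_model(model, obs).items():
            print(f"{name}: {value:.2f}")
    ```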

  15. International land Model Benchmarking (ILAMB) Package v002.00

    DOE Data Explorer

    Collier, Nathaniel [Oak Ridge National Laboratory; Hoffman, Forrest M. [Oak Ridge National Laboratory; Mu, Mingquan [University of California, Irvine; Randerson, James T. [University of California, Irvine; Riley, William J. [Lawrence Berkeley National Laboratory

    2016-05-09

    As a contribution to International Land Model Benchmarking (ILAMB) Project, we are providing new analysis approaches, benchmarking tools, and science leadership. The goal of ILAMB is to assess and improve the performance of land models through international cooperation and to inform the design of new measurement campaigns and field studies to reduce uncertainties associated with key biogeochemical processes and feedbacks. ILAMB is expected to be a primary analysis tool for CMIP6 and future model-data intercomparison experiments. This team has developed initial prototype benchmarking systems for ILAMB, which will be improved and extended to include ocean model metrics and diagnostics.

  16. International land Model Benchmarking (ILAMB) Package v001.00

    DOE Data Explorer

    Mu, Mingquan [University of California, Irvine; Randerson, James T. [University of California, Irvine; Riley, William J. [Lawrence Berkeley National Laboratory; Hoffman, Forrest M. [Oak Ridge National Laboratory

    2016-05-02

    As a contribution to International Land Model Benchmarking (ILAMB) Project, we are providing new analysis approaches, benchmarking tools, and science leadership. The goal of ILAMB is to assess and improve the performance of land models through international cooperation and to inform the design of new measurement campaigns and field studies to reduce uncertainties associated with key biogeochemical processes and feedbacks. ILAMB is expected to be a primary analysis tool for CMIP6 and future model-data intercomparison experiments. This team has developed initial prototype benchmarking systems for ILAMB, which will be improved and extended to include ocean model metrics and diagnostics.

  17. The NAS parallel benchmarks

    NASA Technical Reports Server (NTRS)

    Bailey, David (Editor); Barton, John (Editor); Lasinski, Thomas (Editor); Simon, Horst (Editor)

    1993-01-01

    A new set of benchmarks was developed for the performance evaluation of highly parallel supercomputers. These benchmarks consist of a set of kernels, the 'Parallel Kernels,' and a simulated application benchmark. Together they mimic the computation and data movement characteristics of large scale computational fluid dynamics (CFD) applications. The principal distinguishing feature of these benchmarks is their 'pencil and paper' specification - all details of these benchmarks are specified only algorithmically. In this way many of the difficulties associated with conventional benchmarking approaches on highly parallel systems are avoided.

  18. Development and Analysis of Psychomotor Skills Metrics for Procedural Skills Decay.

    PubMed

    Parthiban, Chembian; Ray, Rebecca; Rutherford, Drew; Zinn, Mike; Pugh, Carla

    2016-01-01

    In this paper we develop and analyze the metrics associated with a force production task involving a stationary target with the help of advanced VR and a Force Dimension Omega 6 haptic device. We study the effects of force magnitude and direction on various metrics, namely path length, movement smoothness, velocity and acceleration patterns, reaction time, and overall error in achieving the target. Data were collected from 47 participants, all of whom were residents. Results show a positive correlation between the maximum force applied and both the deflection error and the velocity, while forces of higher magnitude reduced path length and increased smoothness, demonstrating the stabilizing characteristics of higher-magnitude forces. This approach paves the way to assess and model procedural skills decay. PMID:27046593
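
    Two of the kinematic metrics named above, path length and movement smoothness, can be computed from a sampled trajectory as in the sketch below. The smoothness measure shown is a log dimensionless jerk, which is one common choice and not necessarily the study's exact definition, and the trajectory is synthetic.

    ```python
    # Sketch of two kinematic metrics computed from a sampled tool-tip
    # trajectory: total path length and a jerk-based smoothness measure.
    # The trajectory below is synthetic, not data from the study.

    import numpy as np

    def path_length(positions):
        """Total distance travelled along an (N, 3) array of positions."""
        steps = np.diff(np.asarray(positions, float), axis=0)
        return float(np.linalg.norm(steps, axis=1).sum())

    def log_dimensionless_jerk(positions, dt):
        """Negative log of the dimensionless squared jerk; higher = smoother."""
        p = np.asarray(positions, float)
        vel = np.gradient(p, dt, axis=0)
        jerk = np.gradient(np.gradient(vel, dt, axis=0), dt, axis=0)
        duration = dt * (len(p) - 1)
        v_peak = np.linalg.norm(vel, axis=1).max()
        integral = np.trapz(np.linalg.norm(jerk, axis=1) ** 2, dx=dt)
        return float(-np.log(integral * duration ** 3 / v_peak ** 2))

    if __name__ == "__main__":
        dt = 0.01
        t = np.arange(0.0, 2.0, dt)
        # Smooth synthetic reach toward a stationary target plus small tremor
        x = 0.1 * (1 - np.cos(np.pi * t / 2))
        traj = np.column_stack([x, 0.02 * np.sin(2 * np.pi * t), np.zeros_like(t)])
        print("path length (m):", round(path_length(traj), 3))
        print("smoothness (LDLJ):", round(log_dimensionless_jerk(traj, dt), 2))
    ```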

  19. A newly developed dispersal metric indicates the succession of benthic invertebrates in restored rivers.

    PubMed

    Li, Fengqing; Sundermann, Andrea; Stoll, Stefan; Haase, Peter

    2016-11-01

    Dispersal capacity plays a fundamental role in the riverine benthic invertebrate colonization of new habitats that emerges following flash floods or restoration. However, an appropriate measure of dispersal capacity for benthic invertebrates is still lacking. The dispersal of benthic invertebrates occurs mainly during the aquatic (larval) and aerial (adult) life stages, and the dispersal of each stage can be further subdivided into active and passive modes. Based on these four possible dispersal modes, we first developed a metric (which is very similar to the well-known and widely used saprobic index) to estimate the dispersal capacity for 802 benthic invertebrate taxa by incorporating a weight for each mode. Second, we tested this metric using benthic invertebrate community data from a) 23 large restored river sites with substantial improvements of river bottom habitats dating back 1 to 10 years, b) 23 unrestored sites very close to the restored sites, and c) 298 adjacent surrounding sites (mean±standard deviation: 13.0±9.5 per site) within a distance of up to 5 km for each restored site in the low mountain and lowland areas of Germany. We hypothesize that our metric will reflect the temporal succession process of benthic invertebrate communities colonizing the restored sites, whereas no temporal changes are expected in the unrestored and surrounding sites. By applying our metric to these three river treatment categories, we found that the average dispersal capacity of benthic invertebrate communities in the restored sites significantly decreased in the early years following restoration, whereas there were no changes in either the unrestored or the surrounding sites. After all taxa had been divided into quartiles representing weak to strong dispersers, this pattern became even more obvious; strong dispersers colonized the restored sites during the first year after restoration and then significantly decreased over time, whereas weak dispersers continued to increase
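
    The metric's general form, a weighted combination of the four dispersal modes per taxon aggregated as an abundance-weighted community mean in the manner of the saprobic index, is sketched below. The mode weights, taxon scores and abundances are invented; the paper's values for the 802 taxa are not reproduced.

    ```python
    # Sketch of an abundance-weighted community index of the kind described
    # above (analogous to the saprobic index): each taxon gets a dispersal
    # capacity from weighted aquatic/aerial, active/passive components, and
    # the community value is the abundance-weighted mean. All numbers are
    # invented placeholders.

    MODE_WEIGHTS = {"aquatic_active": 0.2, "aquatic_passive": 0.2,
                    "aerial_active": 0.4, "aerial_passive": 0.2}  # hypothetical

    TAXON_MODE_SCORES = {   # per-mode dispersal scores on an arbitrary 0-10 scale
        "Baetis sp.":      {"aquatic_active": 4, "aquatic_passive": 7,
                            "aerial_active": 6, "aerial_passive": 3},
        "Gammarus sp.":    {"aquatic_active": 6, "aquatic_passive": 5,
                            "aerial_active": 0, "aerial_passive": 0},
        "Hydropsyche sp.": {"aquatic_active": 3, "aquatic_passive": 6,
                            "aerial_active": 8, "aerial_passive": 2},
    }

    def taxon_dispersal_capacity(mode_scores, weights=MODE_WEIGHTS):
        return sum(weights[m] * mode_scores[m] for m in weights)

    def community_dispersal_capacity(abundances):
        """Abundance-weighted mean dispersal capacity of a sample."""
        total = sum(abundances.values())
        return sum(a * taxon_dispersal_capacity(TAXON_MODE_SCORES[t])
                   for t, a in abundances.items()) / total

    sample = {"Baetis sp.": 120, "Gammarus sp.": 40, "Hydropsyche sp.": 15}
    print(round(community_dispersal_capacity(sample), 2))
    ```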

  20. Development of a reference dose for BDE-47, 99, and 209 using benchmark dose methods.

    PubMed

    Li, Lu Xi; Chen, Li; Cao, Dan; Chen, Bing Heng; Zhao, Yan; Meng, Xiang Zhou; Xie, Chang Ming; Zhang, Yun Hui

    2014-09-01

    Eleven recently completed toxicological studies were critically reviewed to identify toxicologically significant endpoints and dose-response information. Dose-response data were compiled and entered into the USEPA's benchmark dose software (BMDS) for calculation of a benchmark dose (BMD) and a benchmark dose low (BMDL). After assessing 91 endpoints across the nine studies, a total of 23 of these endpoints were identified for BMD modeling, and BMDL estimates corresponding to various dose-response models were compiled for these separate endpoints. Thyroid, neurobehavioral and reproductive endpoints for BDE-47, -99, and -209 were quantitatively evaluated. According to the methods and features of each study, different uncertainty factor (UF) values were chosen, and reference doses (RfDs) were subsequently proposed. Consistent with USEPA practice, the lowest BMDLs of 2.10, 81.77, and 1698 µg/kg were used to develop RfDs for BDE-47, -99, and -209, respectively. The RfDs for BDE-99 and BDE-209 were comparable to EPA results; however, the RfD for BDE-47 was much lower than the EPA value, which may result from the facts that reproductive/developmental endpoints prove to be more sensitive than neurobehavioral ones for BDE-47 and that the principal study used very-low-dose exposure. PMID:25256863
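
    As a rough illustration of the benchmark-dose step behind these calculations, the sketch below fits a simple continuous dose-response model to synthetic data and solves for the dose giving a 10% change from the fitted control response. It shows the BMD only; a BMDL additionally requires a lower confidence limit (e.g., by profile likelihood), and neither the data nor the model choice come from the reviewed studies or from USEPA BMDS.

    ```python
    # Sketch of the benchmark-dose idea: fit a dose-response model to
    # synthetic continuous data and find the dose giving a 10% change from
    # the fitted control response. Illustrative only; not BMDS.

    import numpy as np
    from scipy.optimize import curve_fit

    def exponential_model(dose, control, slope):
        """Monotonic continuous dose-response: control * exp(slope * dose)."""
        return control * np.exp(slope * dose)

    # Hypothetical endpoint (e.g. relative hormone level) vs dose (mg/kg/d)
    dose = np.array([0.0, 0.1, 0.3, 1.0, 3.0, 10.0])
    response = np.array([1.00, 0.99, 0.96, 0.90, 0.74, 0.38])

    (control, slope), _ = curve_fit(exponential_model, dose, response, p0=[1.0, -0.1])

    benchmark_response = 0.10   # 10% relative deviation from control
    bmd = np.log(1.0 - benchmark_response) / slope   # solves exp(slope*BMD) = 0.9
    print(f"fitted control={control:.3f}, slope={slope:.3f} per unit dose")
    print(f"BMD (10% relative deviation) ~ {bmd:.2f} mg/kg/d")
    ```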

  1. Deriving phenological metrics from NDVI through an open source tool developed in QGIS

    NASA Astrophysics Data System (ADS)

    Duarte, Lia; Teodoro, A. C.; Gonçalves, Hernãni

    2014-10-01

    Vegetation indices have been commonly used over the past 30 years for studying vegetation characteristics using images collected by remote sensing satellites. One of the most commonly used is the Normalized Difference Vegetation Index (NDVI). The various stages that green vegetation undergoes during a complete growing season can be summarized through time-series analysis of NDVI data. The analysis of such time-series allows for extracting key phenological variables or metrics of a particular season. These characteristics may not necessarily correspond directly to conventional, ground-based phenological events, but do provide indications of ecosystem dynamics. A complete list of the phenological metrics that can be extracted from smoothed, time-series NDVI data is available in the USGS online resources (http://phenology.cr.usgs.gov/methods_deriving.php). This work aims to develop an open source application to automatically extract these phenological metrics from a set of satellite input data. The main advantage of QGIS for this specific application lies in the ease and speed of developing new plug-ins, using the Python language, based on the experience of the research group in other related works. QGIS has its own application programming interface (API) with functionalities and programs to develop new features. The toolbar developed for this application was implemented using the plug-in NDVIToolbar.py. The user introduces the raster files as input and obtains a plot and a report with the metrics. The report includes the following eight metrics: SOST (Start Of Season - Time) corresponding to the day of the year identified as having a consistent upward trend in the NDVI time series; SOSN (Start Of Season - NDVI) corresponding to the NDVI value associated with SOST; EOST (End of Season - Time) which corresponds to the day of year identified at the end of a consistent downward trend in the NDVI time series; EOSN (End of Season - NDVI) corresponding to the NDVI value
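
    A minimal sketch of extracting the start- and end-of-season metrics (SOST/SOSN and EOST/EOSN) from a smoothed NDVI series follows, using a simple mid-amplitude threshold-crossing rule on synthetic data; the plug-in's and USGS's actual trend-based algorithms are not reproduced.

    ```python
    # Sketch of extracting SOST/SOSN and EOST/EOSN from a smoothed NDVI
    # series with a threshold-crossing rule. The series is synthetic (one
    # value per day of year); this is not the NDVIToolbar.py implementation.

    import numpy as np

    def season_metrics(ndvi, threshold=None):
        """Return (SOST, SOSN, EOST, EOSN) from a 1-D smoothed NDVI series
        indexed by day of year, using a mid-amplitude threshold by default."""
        ndvi = np.asarray(ndvi, float)
        if threshold is None:
            threshold = ndvi.min() + 0.5 * (ndvi.max() - ndvi.min())
        above = ndvi >= threshold
        sost = int(np.argmax(above))                          # first day at/above threshold
        eost = int(len(ndvi) - 1 - np.argmax(above[::-1]))    # last day at/above threshold
        return sost, float(ndvi[sost]), eost, float(ndvi[eost])

    if __name__ == "__main__":
        doy = np.arange(365)
        # Synthetic growing season peaking around day 200
        ndvi = 0.2 + 0.5 * np.exp(-0.5 * ((doy - 200) / 45.0) ** 2)
        sost, sosn, eost, eosn = season_metrics(ndvi)
        print(f"SOST={sost}, SOSN={sosn:.2f}, EOST={eost}, EOSN={eosn:.2f}")
    ```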

  2. Pollutant Emissions and Energy Efficiency under Controlled Conditions for Household Biomass Cookstoves and Implications for Metrics Useful in Setting International Test Standards

    EPA Science Inventory

    Realistic metrics and methods for testing household biomass cookstoves are required to develop standards needed by international policy makers, donors, and investors. Application of consistent test practices allows emissions and energy efficiency performance to be benchmarked and...

  3. NASA metric transition plan

    NASA Astrophysics Data System (ADS)

    NASA science publications have used the metric system of measurement since 1970. Although NASA has maintained a metric use policy since 1979, practical constraints have restricted actual use of metric units. In 1988, an amendment to the Metric Conversion Act of 1975 required the Federal Government to adopt the metric system except where impractical. In response to Public Law 100-418 and Executive Order 12770, NASA revised its metric use policy and developed this Metric Transition Plan. NASA's goal is to use the metric system for program development and functional support activities to the greatest practical extent by the end of 1995. The introduction of the metric system into new flight programs will determine the pace of the metric transition. Transition of institutional capabilities and support functions will be phased to enable use of the metric system in flight program development and operations. Externally oriented elements of this plan will introduce and actively support use of the metric system in education, public information, and small business programs. The plan also establishes a procedure for evaluating and approving waivers and exceptions to the required use of the metric system for new programs. Coordination with other Federal agencies and departments (through the Interagency Council on Metric Policy) and industry (directly and through professional societies and interest groups) will identify sources of external support and minimize duplication of effort.

  4. NASA metric transition plan

    NASA Technical Reports Server (NTRS)

    1992-01-01

    NASA science publications have used the metric system of measurement since 1970. Although NASA has maintained a metric use policy since 1979, practical constraints have restricted actual use of metric units. In 1988, an amendment to the Metric Conversion Act of 1975 required the Federal Government to adopt the metric system except where impractical. In response to Public Law 100-418 and Executive Order 12770, NASA revised its metric use policy and developed this Metric Transition Plan. NASA's goal is to use the metric system for program development and functional support activities to the greatest practical extent by the end of 1995. The introduction of the metric system into new flight programs will determine the pace of the metric transition. Transition of institutional capabilities and support functions will be phased to enable use of the metric system in flight program development and operations. Externally oriented elements of this plan will introduce and actively support use of the metric system in education, public information, and small business programs. The plan also establishes a procedure for evaluating and approving waivers and exceptions to the required use of the metric system for new programs. Coordination with other Federal agencies and departments (through the Interagency Council on Metric Policy) and industry (directly and through professional societies and interest groups) will identify sources of external support and minimize duplication of effort.

  5. Benchmarking for Higher Education.

    ERIC Educational Resources Information Center

    Jackson, Norman, Ed.; Lund, Helen, Ed.

    The chapters in this collection explore the concept of benchmarking as it is being used and developed in higher education (HE). Case studies and reviews show how universities in the United Kingdom are using benchmarking to aid in self-regulation and self-improvement. The chapters are: (1) "Introduction to Benchmarking" (Norman Jackson and Helen…

  6. International small dam safety assurance policy benchmarks to avoid dam failure flood disasters in developing countries

    NASA Astrophysics Data System (ADS)

    Pisaniello, John D.; Dam, Tuyet Thi; Tingey-Holyoak, Joanne L.

    2015-12-01

    In developing countries small dam failure disasters are common, yet research on their dam safety management is lacking. This paper reviews available small dam safety assurance policy benchmarks from the international literature, synthesises them for applicability in developing countries, and provides an example application through a case study of Vietnam. Generic models from 'minimum' to 'best' practice (Pisaniello, 1997) are synthesised with the World Bank's 'essential' and 'desirable' elements (Bradlow et al., 2002), leading to novel policy analysis and design criteria for developing countries. The case study involved 22 on-site dam surveys, finding micro-level physical and management inadequacies which indicate that macro-level dam safety management policy performs far below the minimum benchmark in Vietnam. Moving assurance policy towards 'best practice' is necessary to improve the safety of Vietnam's considerable number of hazardous dams to acceptable community standards, but firstly achieving 'minimum practice' per the developed guidance is essential. The policy analysis/design process provides an exemplar for other developing countries to follow for avoiding dam failure flood disasters.

  7. Millennium development health metrics: where do Africa’s children and women of childbearing age live?

    PubMed Central

    2013-01-01

    The Millennium Development Goals (MDGs) have prompted an expansion in approaches to deriving health metrics to measure progress toward their achievement. Accurate measurements should take into account the high degrees of spatial heterogeneity in health risks across countries, and this has prompted the development of sophisticated cartographic techniques for mapping and modeling risks. Conversion of these risks to relevant population-based metrics requires equally detailed information on the spatial distribution and attributes of the denominator populations. However, spatial information on age and sex composition over large areas is lacking, prompting many influential studies that have rigorously accounted for health risk heterogeneities to overlook the substantial demographic variations that exist subnationally and merely apply national-level adjustments. Here we outline the development of high resolution age- and sex-structured spatial population datasets for Africa in 2000-2015 built from over a million measurements from more than 20,000 subnational units, increasing input data detail from previous studies by over 400-fold. We analyze the large spatial variations seen within countries and across the continent for key MDG indicator groups, focusing on children under 5 and women of childbearing age, and find that substantial differences in health and development indicators can result through using only national level statistics, compared to accounting for subnational variation. Progress toward meeting the MDGs will be measured through national-level indicators that mask substantial inequalities and heterogeneities across nations. Cartographic approaches are providing opportunities for quantitative assessments of these inequalities and the targeting of interventions, but demographic spatial datasets to support such efforts remain reliant on coarse and outdated input data for accurately locating risk groups. We have shown here that sufficient data exist to map the

  8. Millennium development health metrics: where do Africa's children and women of childbearing age live?

    PubMed

    Tatem, Andrew J; Garcia, Andres J; Snow, Robert W; Noor, Abdisalan M; Gaughan, Andrea E; Gilbert, Marius; Linard, Catherine

    2013-01-01

    The Millennium Development Goals (MDGs) have prompted an expansion in approaches to deriving health metrics to measure progress toward their achievement. Accurate measurements should take into account the high degrees of spatial heterogeneity in health risks across countries, and this has prompted the development of sophisticated cartographic techniques for mapping and modeling risks. Conversion of these risks to relevant population-based metrics requires equally detailed information on the spatial distribution and attributes of the denominator populations. However, spatial information on age and sex composition over large areas is lacking, prompting many influential studies that have rigorously accounted for health risk heterogeneities to overlook the substantial demographic variations that exist subnationally and merely apply national-level adjustments. Here we outline the development of high resolution age- and sex-structured spatial population datasets for Africa in 2000-2015 built from over a million measurements from more than 20,000 subnational units, increasing input data detail from previous studies by over 400-fold. We analyze the large spatial variations seen within countries and across the continent for key MDG indicator groups, focusing on children under 5 and women of childbearing age, and find that substantial differences in health and development indicators can result through using only national level statistics, compared to accounting for subnational variation. Progress toward meeting the MDGs will be measured through national-level indicators that mask substantial inequalities and heterogeneities across nations. Cartographic approaches are providing opportunities for quantitative assessments of these inequalities and the targeting of interventions, but demographic spatial datasets to support such efforts remain reliant on coarse and outdated input data for accurately locating risk groups. We have shown here that sufficient data exist to map the

  9. Restaurant Energy Use Benchmarking Guideline

    SciTech Connect

    Hedrick, R.; Smith, V.; Field, K.

    2011-07-01

    A significant operational challenge for food service operators is defining energy use benchmark metrics to compare against the performance of individual stores. Without metrics, multiunit operators and managers have difficulty identifying which stores in their portfolios require extra attention to bring their energy performance in line with expectations. This report presents a method whereby multiunit operators may use their own utility data to create suitable metrics for evaluating their operations.
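
    In practice such a metric is often a normalized energy use intensity computed per store from utility bills and compared across the portfolio. The sketch below illustrates that idea with invented store data; it is not the report's method or thresholds.

    ```python
    # Sketch of portfolio benchmarking from utility data: compute a
    # normalized energy metric per store and flag stores that fall well
    # outside the portfolio distribution. All figures are invented.

    import statistics

    stores = [
        # (store id, annual site energy use in kBtu, floor area in ft^2)
        ("store_01", 1_950_000, 3_200),
        ("store_02", 2_400_000, 3_400),
        ("store_03", 1_700_000, 3_100),
        ("store_04", 3_900_000, 3_300),
        ("store_05", 2_050_000, 3_250),
    ]

    def energy_use_intensity(annual_kbtu, area_ft2):
        """Site energy use intensity in kBtu per square foot per year."""
        return annual_kbtu / area_ft2

    euis = {sid: energy_use_intensity(kbtu, area) for sid, kbtu, area in stores}
    mean_eui = statistics.mean(euis.values())
    stdev_eui = statistics.stdev(euis.values())

    for sid, eui in sorted(euis.items(), key=lambda kv: kv[1], reverse=True):
        flag = "REVIEW" if eui > mean_eui + stdev_eui else "ok"
        print(f"{sid}: {eui:7.1f} kBtu/ft2-yr  {flag}")
    ```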

  10. Benchmarking short sequence mapping tools

    PubMed Central

    2013-01-01

    Background The development of next-generation sequencing instruments has led to the generation of millions of short sequences in a single run. The process of aligning these reads to a reference genome is time consuming and demands the development of fast and accurate alignment tools. However, the current proposed tools make different compromises between the accuracy and the speed of mapping. Moreover, many important aspects are overlooked while comparing the performance of a newly developed tool to the state of the art. Therefore, there is a need for an objective evaluation method that covers all the aspects. In this work, we introduce a benchmarking suite to extensively analyze sequencing tools with respect to various aspects and provide an objective comparison. Results We applied our benchmarking tests on 9 well known mapping tools, namely, Bowtie, Bowtie2, BWA, SOAP2, MAQ, RMAP, GSNAP, Novoalign, and mrsFAST (mrFAST) using synthetic data and real RNA-Seq data. MAQ and RMAP are based on building hash tables for the reads, whereas the remaining tools are based on indexing the reference genome. The benchmarking tests reveal the strengths and weaknesses of each tool. The results show that no single tool outperforms all others in all metrics. However, Bowtie maintained the best throughput for most of the tests while BWA performed better for longer read lengths. The benchmarking tests are not restricted to the mentioned tools and can be further applied to others. Conclusion The mapping process is still a hard problem that is affected by many factors. In this work, we provided a benchmarking suite that reveals and evaluates the different factors affecting the mapping process. Still, there is no tool that outperforms all of the others in all the tests. Therefore, the end user should clearly specify his needs in order to choose the tool that provides the best results. PMID:23758764

  11. Metric Madness

    ERIC Educational Resources Information Center

    Kroon, Cindy D.

    2007-01-01

    Created for a Metric Day activity, Metric Madness is a board game for two to four players. Students review and practice metric vocabulary, measurement, and calculations by playing the game. Playing time is approximately twenty to thirty minutes.

  12. Benchmarking and Its Relevance to the Library and Information Sector. Interim Findings of "Best Practice Benchmarking in the Library and Information Sector," a British Library Research and Development Department Project.

    ERIC Educational Resources Information Center

    Kinnell, Margaret; Garrod, Penny

    This British Library Research and Development Department study assesses current activities and attitudes toward quality management in library and information services (LIS) in the academic sector as well as the commercial/industrial sector. Definitions and types of benchmarking are described, and the relevance of benchmarking to LIS is evaluated.…

  13. Development of water quality criteria and screening benchmarks for 2,4,6 trinitrotoluene

    SciTech Connect

    Talmage, S.S.; Opresko, D.M.

    1995-12-31

    Munitions compounds and their degradation products are present at many Army Ammunition Plant Superfund sites. Neither Water Quality Criteria (WQC) for aquatic organisms nor safe soil levels for terrestrial plants and animals have been developed for munitions compounds including trinitrotoluene (TNT). Data are available for the calculation of an acute WQC for TNT according to US EPA guidelines but are insufficient to calculate a chronic criterion. However, available data can be used to determine a Secondary Chronic Value (SCV) and to determine lowest chronic values for fish and daphnids (used by EPA in the absence of criteria). Based on data from eight genera of aquatic organisms, an acute WQC of 0.566 mg/L was calculated. Using available data, a SCV of 0.137 mg/L was calculated. Lowest chronic values for fish and for daphnids are 0.04 mg/L and 1.03 mg/L, respectively. The lowest concentration that affected the growth of aquatic plants was 1.0 mg/L. For terrestrial animals, data from studies of laboratory animals can be extrapolated to derive screening benchmarks in the same way in which human toxicity values are derived from laboratory animal data. For terrestrial animals, a no-observed-adverse-effect-level (NOAEL) for reproductive effects of 1.60 mg/kg/day was determined from a subchronic laboratory feeding study with rats. By scaling the test NOAEL on the basis of differences in body size, screening benchmarks were calculated for oral intake for selected mammalian wildlife species. Screening benchmarks were also derived for protection of benthic organisms in sediment, for soil invertebrates, and for terrestrial plants.
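
    A species-sensitivity-distribution style calculation related to the acute value derivation can be sketched as follows: fit a lognormal distribution to genus mean acute values and take a low percentile as a protective concentration. This is an illustration only, with invented toxicity values; it is not the US EPA guideline procedure, which uses a specific calculation based on the most sensitive genera.

    ```python
    # Sketch of a species-sensitivity-distribution (SSD) style calculation:
    # fit a lognormal distribution to genus mean acute values and take the
    # 5th percentile (HC5). Toxicity values below are invented.

    import math
    import statistics
    from statistics import NormalDist

    genus_mean_acute_values = [0.8, 1.2, 2.5, 3.1, 4.8, 7.9, 12.0, 20.0]  # mg/L, hypothetical

    log_values = [math.log10(v) for v in genus_mean_acute_values]
    mu = statistics.mean(log_values)
    sigma = statistics.stdev(log_values)

    hc5 = 10 ** NormalDist(mu, sigma).inv_cdf(0.05)   # 5th percentile concentration
    print(f"HC5 (5th percentile of fitted SSD): {hc5:.3f} mg/L")
    ```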

  14. [Development of lead benchmarks for soil based on human blood lead level in China].

    PubMed

    Zhang, Hong-zhen; Luo, Yong-ming; Zhang, Hai-bo; Song, Jing; Xia, Jia-qi; Zhao, Qi-guo

    2009-10-15

    Lead benchmarks for soil are mainly established based on the blood lead concentrations of children, because lead plays a dramatically negative role in children's cognitive development and intellectual performance and soil lead is therefore considered a main lead exposure source for children. Based on an extensive collection of available domestic data, lead levels in air and drinking water are 0.12-1.0 microg x m(-3) and 2-10 microg x L(-1), respectively; ingestion of lead from food by children of 0-6 years old is 10-25 microg x d(-1); and the geometric mean blood lead concentration of women of childbearing age is 4.79 microg x dL(-1), with a GSD of 1.48. Lead benchmarks for soil were calculated with the Integrated Exposure Uptake Biokinetic Model (IEUBK) and the Adult Lead Model (ALM). The results showed that the lead criterion values for residential land and commercial/industrial land were 282 mg x kg(-1) and 627 mg x kg(-1), respectively, which is slightly lower than the corresponding values in the U.S.A. and U.K. Parameter sensitivity analysis indicated that the lead exposure scenario of children in China is significantly different from that of children in developed countries, and that children's lead exposure levels in China are markedly higher. Urgent work is required on the relationship between lead exposure scenarios and children's blood lead levels and on the establishment of a risk assessment guideline for lead-contaminated soil based on human blood lead levels. PMID:19968127

  15. Development of Methodologies, Metrics, and Tools for Investigating Human-Robot Interaction in Space Robotics

    NASA Technical Reports Server (NTRS)

    Ezer, Neta; Zumbado, Jennifer Rochlis; Sandor, Aniko; Boyer, Jennifer

    2011-01-01

    Human-robot systems are expected to have a central role in future space exploration missions that extend beyond low-earth orbit [1]. As part of a directed research project funded by NASA's Human Research Program (HRP), researchers at the Johnson Space Center have started to use a variety of techniques, including literature reviews, case studies, knowledge capture, field studies, and experiments to understand critical human-robot interaction (HRI) variables for current and future systems. Activities accomplished to date include observations of the International Space Station's Special Purpose Dexterous Manipulator (SPDM), Robonaut, and Space Exploration Vehicle (SEV), as well as interviews with robotics trainers, robot operators, and developers of gesture interfaces. A survey of methods and metrics used in HRI was completed to identify those most applicable to space robotics. These methods and metrics included techniques and tools associated with task performance, the quantification of human-robot interactions and communication, usability, human workload, and situation awareness. The need for more research in areas such as natural interfaces, compensations for loss of signal and poor video quality, psycho-physiological feedback, and common HRI testbeds was identified. The initial findings from these activities and planned future research are discussed.

  16. Recognition and Assessment of Eosinophilic Esophagitis: The Development of New Clinical Outcome Metrics

    PubMed Central

    Nguyen, Nathalie; Menard-Katcher, Calies

    2015-01-01

    Eosinophilic esophagitis (EoE) is a chronic, food-allergic disease manifest by symptoms of esophageal dysfunction and dense esophageal eosinophilia in which other causes have been excluded. Treatments include dietary restriction of the offending allergens, topical corticosteroids, and dilation of strictures. EoE has become increasingly prevalent over the past decade and has been increasingly recognized as a major health concern. Advancements in research and clinical needs have led to the development of novel pediatric- and adult-specific clinical outcome metrics (COMs). These COMs provide ways to measure clinically relevant features in EoE and set the stage for measuring outcomes in future therapeutic trials. In this article, we review novel symptom measurement assessments, the use of radiographic imaging to serve as a metric for therapeutic interventions, recently developed standardized methods for endoscopic assessment, novel techniques to evaluate esophageal mucosal inflammation, and methods for functional assessment of the esophagus. These advancements, in conjunction with current consensus recommendations, will improve the clinical assessment of patients with EoE. PMID:27330494

  17. Development and evaluation of aperture-based complexity metrics using film and EPID measurements of static MLC openings

    SciTech Connect

    Götstedt, Julia; Karlsson Hauer, Anna; Bäck, Anna

    2015-07-15

    Purpose: Complexity metrics have been suggested as a complement to measurement-based quality assurance for intensity modulated radiation therapy (IMRT) and volumetric modulated arc therapy (VMAT). However, these metrics have not yet been sufficiently validated. This study develops and evaluates new aperture-based complexity metrics in the context of static multileaf collimator (MLC) openings and compares them to previously published metrics. Methods: This study develops the converted aperture metric and the edge area metric. The converted aperture metric is based on small and irregular parts within the MLC opening that are quantified as measured distances between MLC leaves. The edge area metric is based on the relative size of the region around the edges defined by the MLC. Another metric suggested in this study is the circumference/area ratio. Earlier defined aperture-based complexity metrics—the modulation complexity score, the edge metric, the ratio monitor units (MU)/Gy, the aperture area, and the aperture irregularity—are compared to the newly proposed metrics. A set of small and irregular static MLC openings are created which simulate individual IMRT/VMAT control points of various complexities. These are measured with both an amorphous silicon electronic portal imaging device and EBT3 film. The differences between calculated and measured dose distributions are evaluated using a pixel-by-pixel comparison with two global dose difference criteria of 3% and 5%. The extent of the dose differences, expressed in terms of pass rate, is used as a measure of the complexity of the MLC openings and used for the evaluation of the metrics compared in this study. The different complexity scores are calculated for each created static MLC opening. The correlation between the calculated complexity scores and the extent of the dose differences (pass rate) are analyzed in scatter plots and using Pearson’s r-values. Results: The complexity scores calculated by the edge
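
    As a concrete illustration of the kind of aperture-based scoring compared in this study, the sketch below computes a simple circumference/area ratio for binary MLC-opening masks and correlates the scores with pass rates using Pearson's r. The mask construction and pass-rate values are invented for illustration and do not reproduce the authors' metric definitions.

        # Illustrative sketch: a circumference/area complexity score for binary aperture masks,
        # correlated with measured pass rates via Pearson's r. Masks and pass rates are made up.
        import numpy as np
        from scipy.stats import pearsonr

        def circumference_area_ratio(mask):
            """Approximate perimeter/area for a binary aperture mask (1 = open, 0 = blocked)."""
            mask = mask.astype(bool)
            padded = np.pad(mask, 1, mode="constant", constant_values=False)
            # Count open-pixel faces that border a blocked pixel (4-connectivity) as the perimeter.
            perimeter = sum(np.sum(mask & ~np.roll(padded, shift, axis)[1:-1, 1:-1])
                            for shift, axis in [(1, 0), (-1, 0), (1, 1), (-1, 1)])
            return perimeter / mask.sum()

        # Three toy openings of increasing irregularity.
        square = np.zeros((20, 20), int); square[5:15, 5:15] = 1
        strip  = np.zeros((20, 20), int); strip[9:11, 2:18] = 1
        comb   = np.zeros((20, 20), int); comb[5:15, 5:15:2] = 1

        scores = [circumference_area_ratio(m) for m in (square, strip, comb)]
        pass_rates = [99.2, 96.5, 88.0]  # hypothetical % of pixels within the dose-difference criterion
        r, p = pearsonr(scores, pass_rates)
        print("complexity scores:", np.round(scores, 2), " Pearson r vs. pass rate:", round(r, 2))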

  18. Software development predictors, error analysis, reliability models and software metric analysis

    NASA Technical Reports Server (NTRS)

    Basili, Victor

    1983-01-01

    The use of dynamic characteristics as predictors for software development was studied. It was found that there are some significant factors that could be useful as predictors. From a study on software errors and complexity, it was shown that meaningful results can be obtained which allow insight into software traits and the environment in which it is developed. Reliability models were studied. The research included the field of program testing because the validity of some reliability models depends on the answers to some unanswered questions about testing. In studying software metrics, data collected from seven software engineering laboratory (FORTRAN) projects were examined and three effort reporting accuracy checks were applied to demonstrate the need to validate a data base. Results are discussed.

  19. Development of a Computer Program for Analyzing Preliminary Aircraft Configurations in Relationship to Emerging Agility Metrics

    NASA Technical Reports Server (NTRS)

    Bauer, Brent

    1993-01-01

    This paper discusses the development of a FORTRAN computer code to perform agility analysis on aircraft configurations. This code is to be part of the NASA-Ames ACSYNT (AirCraft SYNThesis) design code. This paper begins with a discussion of contemporary agility research in the aircraft industry and a survey of a few agility metrics. The methodology, techniques and models developed for the code are then presented. Finally, example trade studies using the agility module along with ACSYNT are illustrated. These trade studies were conducted using a Northrop F-20 Tigershark aircraft model. The studies show that the agility module is effective in analyzing the influence of common parameters such as thrust-to-weight ratio and wing loading on agility criteria. The module can compare the agility potential between different configurations. In addition, one study illustrates the module's ability to optimize a configuration's agility performance.

  20. Translating diagnostic assays from the laboratory to the clinic: analytical and clinical metrics for device development and evaluation.

    PubMed

    Borysiak, Mark D; Thompson, Matthew J; Posner, Jonathan D

    2016-04-21

    As lab-on-a-chip health diagnostic technologies mature, there is a push to translate them from the laboratory to the clinic. For these diagnostics to achieve maximum impact on patient care, scientists and engineers developing the tests should understand the analytical and clinical statistical metrics that determine the efficacy of the test. Appreciating and using these metrics will benefit test developers by providing consistent measures to evaluate analytical and clinical test performance, as well as guide the design of tests that will most benefit clinicians and patients. This paper is broken into four sections that discuss metrics related to general stages of development including: (1) laboratory assay development (analytical sensitivity, limit of detection, analytical selectivity, and trueness/precision), (2) pre-clinical development (diagnostic sensitivity, diagnostic specificity, clinical cutoffs, and receiver-operator curves), (3) clinical use (prevalence, predictive values, and likelihood ratios), and (4) case studies from existing clinical data for tests relevant to the lab-on-a-chip community (HIV, group A strep, and chlamydia). Each section contains definitions of recommended statistical measures, as well as examples demonstrating the importance of these metrics at various stages of the development process. Increasing the use of these metrics in lab-on-a-chip research will improve the rigor of diagnostic performance reporting and provide a better understanding of how to design tests that will ultimately meet clinical needs. PMID:27043204
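
    The analytical and clinical measures listed above follow directly from a 2x2 confusion table. The sketch below computes diagnostic sensitivity, specificity, predictive values, and likelihood ratios from hypothetical counts; the numbers are invented and are not taken from the HIV, group A strep, or chlamydia case studies.

        # Illustrative sketch: clinical performance metrics from a hypothetical 2x2 table.
        TP, FN, FP, TN = 90, 10, 20, 880   # invented counts: 100 disease-positive, 900 disease-negative subjects

        sensitivity = TP / (TP + FN)                     # diagnostic sensitivity
        specificity = TN / (TN + FP)                     # diagnostic specificity
        prevalence  = (TP + FN) / (TP + FN + FP + TN)    # prevalence in this hypothetical population
        ppv = TP / (TP + FP)                             # positive predictive value (depends on prevalence)
        npv = TN / (TN + FN)                             # negative predictive value
        lr_pos = sensitivity / (1 - specificity)         # positive likelihood ratio
        lr_neg = (1 - sensitivity) / specificity         # negative likelihood ratio

        print(f"Se={sensitivity:.2f} Sp={specificity:.2f} prev={prevalence:.2f} "
              f"PPV={ppv:.2f} NPV={npv:.2f} LR+={lr_pos:.1f} LR-={lr_neg:.2f}")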

  1. Benchmark Development in Support of Generation-IV Reactor Validation (IRPhEP 2010 Handbook)

    SciTech Connect

    John D. Bess; J. Blair Briggs

    2010-06-01

    The March 2010 edition of the International Reactor Physics Experiment Evaluation Project (IRPhEP) Handbook includes additional benchmark data that can be implemented in the validation of data and methods for Generation IV (GEN-IV) reactor designs. Evaluations supporting sodium-cooled fast reactor (SFR) efforts include the initial isothermal tests of the Fast Flux Test Facility (FFTF) at the Hanford Site, the Zero Power Physics Reactor (ZPPR) 10B and 10C experiments at the Idaho National Laboratory (INL), and the burn-up reactivity coefficient of Japan’s JOYO reactor. An assessment of Russia’s BFS-61 assemblies at the Institute of Physics and Power Engineering (IPPE) provides additional information for lead-cooled fast reactor (LFR) systems. Benchmarks in support of the very high temperature reactor (VHTR) project include evaluations of the HTR-PROTEUS experiments performed at the Paul Scherrer Institut (PSI) in Switzerland and the start-up core physics tests of Japan’s High Temperature Engineering Test Reactor. The critical configuration of the Power Burst Facility (PBF) at the INL which used ternary ceramic fuel, U(18)O2-CaO-ZrO2, is of interest for fuel cycle research and development (FCR&D) and has some similarities to “inert-matrix” fuels that are of interest in GEN-IV advanced reactor design. Two additional evaluations were revised to include additional evaluated experimental data, in support of light water reactor (LWR) and heavy water reactor (HWR) research; these include reactor physics experiments at Brazil’s IPEN/MB-01 Research Reactor Facility and the French High Flux Reactor (RHF), respectively. The IRPhEP Handbook now includes data from 45 experimental series (representing 24 reactor facilities) and represents contributions from 15 countries. These experimental measurements represent large investments of infrastructure, experience, and cost that have been evaluated and preserved as benchmarks for the validation of methods and collection of

  2. Toward the Development of Cognitive Task Difficulty Metrics to Support Intelligence Analysis Research

    SciTech Connect

    Greitzer, Frank L.

    2005-08-08

    Intelligence analysis is a cognitively complex task that is the subject of considerable research aimed at developing methods and tools to aid the analysis process. To support such research, it is necessary to characterize the difficulty or complexity of intelligence analysis tasks in order to facilitate assessments of the impact or effectiveness of tools that are being considered for deployment. A number of informal accounts of "what makes intelligence analysis hard" are available, but there has been no attempt to establish a more rigorous characterization with well-defined difficulty factors or dimensions. This paper takes an initial step in this direction by describing a set of proposed difficulty metrics based on cognitive principles.

  3. Benchmarking Using Basic DBMS Operations

    NASA Astrophysics Data System (ADS)

    Crolotte, Alain; Ghazal, Ahmad

    The TPC-H benchmark proved to be successful in the decision support area. Many commercial database vendors and their related hardware vendors used this benchmark to show the superiority and competitive edge of their products. However, over time, TPC-H became less representative of industry trends as vendors kept tuning their databases to this benchmark-specific workload. In this paper, we present XMarq, a simple benchmark framework that can be used to compare various software/hardware combinations. Our benchmark model is currently composed of 25 queries that measure the performance of basic operations such as scans, aggregations, joins and index access. This benchmark model is based on the TPC-H data model due to its maturity and well-understood data generation capability. We also propose metrics to evaluate single-system performance and compare two systems. Finally we illustrate the effectiveness of this model by showing experimental results comparing two systems under different conditions.

  4. Stakeholder insights on the planning and development of an independent benchmark standard for responsible food marketing.

    PubMed

    Cairns, Georgina; Macdonald, Laura

    2016-06-01

    A mixed methods qualitative survey investigated stakeholder responses to the proposal to develop an independently defined, audited and certifiable set of benchmark standards for responsible food marketing. Its purpose was to inform the policy planning and development process. A majority of respondents were supportive of the proposal. A majority also viewed the engagement and collaboration of a broad base of stakeholders in its planning and development as potentially beneficial. Positive responses were associated with views that policy controls can and should be extended to include all forms of marketing, that the prevention and control of obesity and non-communicable diseases is a shared responsibility and an urgent policy priority, and with prior experience of independent standardisation as a policy lever for good practice. Strong policy leadership, demonstrable utilisation of the evidence base in its development and deployment, and a conceptually clear communications plan were identified as priority targets for future policy planning. Future research priorities include generating more evidence on the feasibility of developing an effective community of practice and theory of change, the strengths and limitations of these, and developing an evidence-based step-wise communications strategy. PMID:27085486

  5. Development and Implementation of a Metric Inservice Program for Teachers at Samuel Morse Elementary School.

    ERIC Educational Resources Information Center

    Butler, Thelma R.

    A model for organizing an introductory in-service workshop for elementary school teachers in the basic fundamentals and contents of the metric system is presented. Data collected from various questionnaires and tests suggest that the program improved the teacher's performance in presenting the metric system and that this improvement had a positive…

  6. Subsystem Details for the Fiscal Year 2004 Advanced Life Support Research and Technology Development Metric

    NASA Technical Reports Server (NTRS)

    Hanford, Anthony J.

    2004-01-01

    This document provides values at the assembly level for the subsystems described in the Fiscal Year 2004 Advanced Life Support Research and Technology Development Metric (Hanford, 2004). Hanford (2004) summarizes the subordinate computational values for the Advanced Life Support Research and Technology Development (ALS R&TD) Metric at the subsystem level, while this manuscript provides a summary at the assembly level. Hanford (2004) lists mass, volume, power, cooling, and crewtime for each mission examined by the ALS R&TD Metric according to the nominal organization for the Advanced Life Support (ALS) elements. The values in the tables below, Table 2.1 through Table 2.8, list the assemblies, using the organization and names within the Advanced Life Support Sizing Analysis Tool (ALSSAT) for each ALS element. These tables specifically detail mass, volume, power, cooling, and crewtime. Additionally, mass and volume are designated in terms of values associated with initial hardware and resupplied hardware just as they are within ALSSAT. The overall subsystem values are listed on the line following each subsystem entry. These values are consistent with those reported in Hanford (2004) for each listed mission. Any deviations between these values and those in Hanford (2004) arise from differences in when individual numerical values are rounded within each report, and therefore the resulting minor differences should not concern even a careful reader. Hanford (2004) uses the units kW(sub e) and kW(sub th) for power and cooling, respectively, while the nomenclature below uses W(sub e) and W(sub th), which is consistent with the native units within ALSSAT. The assemblies, as specified within ALSSAT, are listed in bold below their respective subsystems. When recognizable assembly components are not listed within ALSSAT, a summary of the assembly is provided on the same line as the entry for the assembly. Assemblies with one or more recognizable components are further

  7. Evaluation of Two Crew Module Boilerplate Tests Using Newly Developed Calibration Metrics

    NASA Technical Reports Server (NTRS)

    Horta, Lucas G.; Reaves, Mercedes C.

    2012-01-01

    The paper discusses an application of multi-dimensional calibration metrics to evaluate pressure data from water drop tests of the Max Launch Abort System (MLAS) crew module boilerplate. Specifically, three metrics are discussed: 1) a metric to assess the probability of enveloping the measured data with the model, 2) a multi-dimensional orthogonality metric to assess model adequacy between test and analysis, and 3) a prediction error metric to conduct sensor placement to minimize pressure prediction errors. Data from similar (nearly repeated) capsule drop tests show significant variability in the measured pressure responses. When compared to the expected variability from model predictions, it is demonstrated that the measured variability cannot be explained by the model under the current uncertainty assumptions.

  8. Benchmarking HRD.

    ERIC Educational Resources Information Center

    Ford, Donald J.

    1993-01-01

    Discusses benchmarking, the continuous process of measuring one's products, services, and practices against those recognized as leaders in that field to identify areas for improvement. Examines ways in which benchmarking can benefit human resources functions. (JOW)

  9. Benchmarking progress in tackling the challenges of intellectual property, and access to medicines in developing countries.

    PubMed Central

    Musungu, Sisule F.

    2006-01-01

    The impact of intellectual property protection in the pharmaceutical sector on developing countries has been a central issue in the fierce debate during the past 10 years in a number of international fora, particularly the World Trade Organization (WTO) and WHO. The debate centres on whether the intellectual property system is: (1) providing sufficient incentives for research and development into medicines for diseases that disproportionately affect developing countries; and (2) restricting access to existing medicines for these countries. The Doha Declaration was adopted at WTO in 2001 and the Commission on Intellectual Property, Innovation and Public Health was established at WHO in 2004, but their respective contributions to tackling intellectual property-related challenges are disputed. Objective parameters are needed to measure whether a particular series of actions, events, decisions or processes contribute to progress in this area. This article proposes six possible benchmarks for intellectual property-related challenges with regard to the development of medicines and ensuring access to medicines in developing countries. PMID:16710545

  10. A Locally Weighted Fixation Density-Based Metric for Assessing the Quality of Visual Saliency Predictions.

    PubMed

    Gide, Milind S; Karam, Lina J

    2016-08-01

    With the increased focus on visual attention (VA) in the last decade, a large number of computational visual saliency methods have been developed over the past few years. These models are traditionally evaluated by using performance evaluation metrics that quantify the match between predicted saliency and fixation data obtained from eye-tracking experiments on human observers. Though a considerable number of such metrics have been proposed in the literature, there are notable problems in them. In this paper, we discuss shortcomings in the existing metrics through illustrative examples and propose a new metric that uses local weights based on fixation density, which overcomes these flaws. To compare the performance of our proposed metric at assessing the quality of saliency prediction with other existing metrics, we construct a ground-truth subjective database in which saliency maps obtained from 17 different VA models are evaluated by 16 human observers on a five-point categorical scale in terms of their visual resemblance with corresponding ground-truth fixation density maps obtained from eye-tracking data. The metrics are evaluated by correlating metric scores with the human subjective ratings. The correlation results show that the proposed evaluation metric outperforms all other popular existing metrics. In addition, the constructed database and corresponding subjective ratings provide an insight into which of the existing metrics and future metrics are better at estimating the quality of saliency prediction and can be used as a benchmark. PMID:27295671

  11. Developing meaningful metrics of clinical productivity for military treatment facility anesthesiology departments and operative services.

    PubMed

    Mongan, Paul D; Van der Schuur, L T Brian; Damiano, Louis A; Via, Darin K

    2003-11-01

    Comparing clinical productivity is important for strategic planning and the evaluation of resource allocation in any large organization. This process of benchmarking performance allows for the comparison of groups with similar characteristics. However, this process is often difficult when comparing the operative service productivity of large and small military treatment facilities because of the significant heterogeneity in mission focus and case complexity. However, in this article, we describe the application of a new method of benchmarking operative service productivity based on normalizing data for operating room sites, cases, and total American Society of Anesthesiologists units produced per hour. We demonstrate how these benchmarks allow for valid comparisons of operative service productivity among these military treatment facilities and how the data could be used in expanding or contracting operating locations. In addition, these benchmarks are compared with those derived from the use of this system in the civilian sector. PMID:14680041
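
    A minimal sketch of the kind of normalization described above: total ASA units produced divided by staffed operating room hours, aggregated per site. The case data are invented, and the calculation is a plain reading of "total American Society of Anesthesiologists units produced per hour" rather than the authors' exact benchmarking method.

        # Illustrative sketch: total ASA units produced per staffed OR hour, by site.
        # Case records are invented; each tuple is (ASA base units, OR hours) for one case.
        cases_by_site = {
            "facility_A": [(5, 1.5), (8, 3.0), (4, 1.0), (10, 4.5)],
            "facility_B": [(3, 0.8), (6, 2.0), (7, 2.5)],
        }

        for site, cases in cases_by_site.items():
            total_units = sum(units for units, _ in cases)
            total_hours = sum(hours for _, hours in cases)
            print(f"{site}: {len(cases)} cases, {total_units / total_hours:.2f} ASA units per OR hour")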

  12. Development and Application of Benchmark Examples for Mixed-Mode I/II Quasi-Static Delamination Propagation Predictions

    NASA Technical Reports Server (NTRS)

    Krueger, Ronald

    2012-01-01

    The development of benchmark examples for quasi-static delamination propagation prediction is presented and demonstrated for a commercial code. The examples are based on finite element models of the Mixed-Mode Bending (MMB) specimen. The examples are independent of the analysis software used and allow the assessment of the automated delamination propagation prediction capability in commercial finite element codes based on the virtual crack closure technique (VCCT). First, quasi-static benchmark examples were created for the specimen. Second, starting from an initially straight front, the delamination was allowed to propagate under quasi-static loading. Third, the load-displacement relationship from a propagation analysis and the benchmark results were compared, and good agreement could be achieved by selecting the appropriate input parameters. Good agreement between the results obtained from the automated propagation analysis and the benchmark results could be achieved by selecting input parameters that had previously been determined during analyses of mode I Double Cantilever Beam and mode II End Notched Flexure specimens. The benchmarking procedure proved valuable by highlighting the issues associated with choosing the input parameters of the particular implementation. Overall the results are encouraging, but further assessment for mixed-mode delamination fatigue onset and growth is required.

  13. Development and Application of Benchmark Examples for Mixed-Mode I/II Quasi-Static Delamination Propagation Predictions

    NASA Technical Reports Server (NTRS)

    Krueger, Ronald

    2012-01-01

    The development of benchmark examples for quasi-static delamination propagation prediction is presented. The example is based on a finite element model of the Mixed-Mode Bending (MMB) specimen for 50% mode II. The benchmarking is demonstrated for Abaqus/Standard, however, the example is independent of the analysis software used and allows the assessment of the automated delamination propagation prediction capability in commercial finite element codes based on the virtual crack closure technique (VCCT). First, a quasi-static benchmark example was created for the specimen. Second, starting from an initially straight front, the delamination was allowed to propagate under quasi-static loading. Third, the load-displacement as well as delamination length versus applied load/displacement relationships from a propagation analysis and the benchmark results were compared, and good agreement could be achieved by selecting the appropriate input parameters. The benchmarking procedure proved valuable by highlighting the issues associated with choosing the input parameters of the particular implementation. Overall, the results are encouraging, but further assessment for mixed-mode delamination fatigue onset and growth is required.

  14. Public Interest Energy Research (PIER) Program Development of a Computer-based Benchmarking and Analytical Tool. Benchmarking and Energy & Water Savings Tool in Dairy Plants (BEST-Dairy)

    SciTech Connect

    Xu, Tengfang; Flapper, Joris; Ke, Jing; Kramer, Klaas; Sathaye, Jayant

    2012-02-01

    The overall goal of the project is to develop a computer-based benchmarking and energy and water savings tool (BEST-Dairy) for use in the California dairy industry - including four dairy processes - cheese, fluid milk, butter, and milk powder. The BEST-Dairy tool developed in this project provides three options for the user to benchmark each of the dairy products included in the tool, with each option differentiated by the level of detail of the process or plant, i.e., 1) plant level; 2) process-group level, and 3) process-step level. For each detail level, the tool accounts for differences in production and other variables affecting energy use in dairy processes. The dairy products include cheese, fluid milk, butter, milk powder, etc. The BEST-Dairy tool can be applied to a wide range of dairy facilities to provide energy and water savings estimates, which are based upon comparisons with the best available reference cases established by reviewing information from international and national samples. We have performed and completed alpha- and beta-testing (field testing) of the BEST-Dairy tool, through which feedback from voluntary users in the U.S. dairy industry was gathered to validate and improve the tool's functionality. BEST-Dairy v1.2 was formally published in May 2011, and has been made available for free download from the internet (i.e., http://best-dairy.lbl.gov). A user's manual has been developed and published as the companion documentation for use with the BEST-Dairy tool. In addition, we also carried out technology transfer activities by engaging the dairy industry in the process of tool development and testing, including field testing, technical presentations, and technical assistance throughout the project. To date, users from more than ten countries in addition to those in the U.S. have downloaded BEST-Dairy from the LBNL website. It is expected that use of the BEST-Dairy tool will advance understanding of energy and water

  15. Analysis of urban development by means of multi-temporal fragmentation metrics from LULC data

    NASA Astrophysics Data System (ADS)

    Sapena, M.; Ruiz, L. A.

    2015-04-01

    The monitoring and modelling of the evolution of urban areas is increasingly attracting the attention of land managers and administrations. New data, tools and methods are being developed and made available for a better understanding of these dynamic areas. We study and analyse the concept of landscape fragmentation by means of GIS and remote sensing techniques, with a particular focus on urban areas. Using LULC data obtained from the European Urban Atlas dataset developed by the local component of the Copernicus Land Monitoring Services (scale 1:10,000), the urban fragmentation of the province of Rome is studied for 2006 and 2012. A selection of indices able to measure the land cover fragmentation level in the landscape is obtained employing a tool called IndiFrag, using LULC data in vector format as input. In order to monitor urban morphological changes and growth patterns, a new module with additional multi-temporal metrics has been developed for this purpose. These urban fragmentation and multi-temporal indices have been applied to the municipalities and districts of Rome, then analysed and interpreted to characterise the quantity, spatial distribution and structure of urban change. This methodology is applicable to different regions, affording a dynamic quantification of urban spatial patterns and urban sprawl. The results show that urban form monitoring with multi-temporal data using these techniques highlights urbanization trends and has great potential to quantify and model the geographic development of metropolitan areas and to analyse its relationship with socioeconomic factors over time.

  16. Degree-Day Benchmarks for Sparganothis sulfureana (Lepidoptera: Tortricidae) Development in Cranberries.

    PubMed

    Deutsch, Annie E; Rodriguez-Saona, Cesar R; Kyryczenko-Roth, Vera; Sojka, Jayne; Zalapa, Juan E; Steffan, Shawn A

    2014-12-01

    Sparganothis sulfureana Clemens is a severe pest of cranberries in the Midwest and northeast United States. Timing for insecticide applications has relied primarily on calendar dates and pheromone trap-catch; however, abiotic conditions can vary greatly, rendering such methods unreliable as indicators of optimal treatment timing. Phenology models based on degree-day (DD) accrual represent a proven, superior approach to assessing the development of insect populations, particularly for larvae. Previous studies of S. sulfureana development showed that the lower and upper temperature thresholds for larval development were 10.0 and 29.9°C (49.9 and 85.8°F), respectively. We used these thresholds to generate DD accumulations specific to S. sulfureana, and then linked these DD accumulations to discrete biological events observed during S. sulfureana development in Wisconsin and New Jersey cranberries. Here, we provide the DDs associated with flight initiation, peak flight, flight termination, adult life span, preovipositional period, ovipositional period, and egg hatch. These DD accumulations represent key developmental benchmarks, allowing for the creation of a phenology model that facilitates wiser management of S. sulfureana in the cranberry system. PMID:26470078
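
    The sketch below accumulates degree-days with the 10.0°C lower and 29.9°C upper developmental thresholds reported above, using the simple daily-average method with a horizontal cutoff at the upper threshold. The temperature series is invented, and the original study may have used a different DD formula (e.g., single sine).

        # Illustrative sketch: degree-day (DD) accumulation for S. sulfureana using the
        # simple-average method with a horizontal cutoff at the upper threshold.
        LOWER_C, UPPER_C = 10.0, 29.9   # developmental thresholds from the abstract

        def daily_degree_days(t_min, t_max, lower=LOWER_C, upper=UPPER_C):
            t_avg = (t_min + t_max) / 2.0
            return max(0.0, min(t_avg, upper) - lower)

        # Invented daily (min, max) temperatures in Celsius.
        daily_temps = [(8, 18), (11, 23), (14, 27), (16, 31), (15, 28), (12, 22)]

        accumulated = 0.0
        for day, (t_min, t_max) in enumerate(daily_temps, start=1):
            accumulated += daily_degree_days(t_min, t_max)
            print(f"day {day}: cumulative DD = {accumulated:.1f} C-days")
        # A phenology model would compare the accumulated DD against benchmarks such as
        # flight initiation or egg hatch (values given in the cited study, not reproduced here).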

  17. Physical Model Development and Benchmarking for MHD Flows in Blanket Design

    SciTech Connect

    Ramakanth Munipalli; P.-Y.Huang; C.Chandler; C.Rowell; M.-J.Ni; N.Morley; S.Smolentsev; M.Abdou

    2008-06-05

    An advanced simulation environment to model incompressible MHD flows relevant to blanket conditions in fusion reactors has been developed at HyPerComp in research collaboration with TEXCEL. The goals of this phase-II project are two-fold: The first is the incorporation of crucial physical phenomena such as induced magnetic field modeling, and extending the capabilities beyond fluid flow prediction to model heat transfer with natural convection and mass transfer including tritium transport and permeation. The second is the design of a sequence of benchmark tests to establish code competence for several classes of physical phenomena in isolation as well as in select (termed here as "canonical") combinations. No previous attempts to develop such a comprehensive MHD modeling capability exist in the literature, and this study represents essentially uncharted territory. During the course of this Phase-II project, a significant breakthrough was achieved in modeling liquid metal flows at high Hartmann numbers. We developed a unique mathematical technique to accurately compute the fluid flow in complex geometries at extremely high Hartmann numbers (10,000 and greater), thus extending the state of the art of liquid metal MHD modeling relevant to fusion reactors at the present time. These developments have been published in noted international journals. A sequence of theoretical and experimental results was used to verify and validate the results obtained. The code was applied to a complete DCLL module simulation study with promising results.

  18. Color Metric.

    ERIC Educational Resources Information Center

    Illinois State Office of Education, Springfield.

    This booklet was designed to convey metric information in pictoral form. The use of pictures in the coloring book enables the more mature person to grasp the metric message instantly, whereas the younger person, while coloring the picture, will be exposed to the metric information long enough to make the proper associations. Sheets of the booklet…

  19. SAT Benchmarks: Development of a College Readiness Benchmark and Its Relationship to Secondary and Postsecondary School Performance. Research Report 2011-5

    ERIC Educational Resources Information Center

    Wyatt, Jeffrey; Kobrin, Jennifer; Wiley, Andrew; Camara, Wayne J.; Proestler, Nina

    2011-01-01

    The current study was part of an ongoing effort at the College Board to establish college readiness benchmarks on the SAT[R], PSAT/NMSQT[R], and ReadiStep[TM] as well as to provide schools, districts, and states with a view of their students' college readiness. College readiness benchmarks were established based on SAT performance, using a…

  20. Process for the development of image quality metrics for underwater electro-optic sensors

    NASA Astrophysics Data System (ADS)

    Taylor, James S., Jr.; Cordes, Brett

    2003-09-01

    Electro-optic identification (EOID) sensors have been demonstrated as an important tool in the identification of bottom sea mines and are transitioning to the fleet. These sensors produce two- and three-dimensional images that will be used by operators and algorithms to make the all-important decision regarding use of neutralization systems against sonar contacts classified as mine-like. The quality of EOID images produced can vary dramatically depending on system design, operating parameters, and ocean environment, necessitating a common scale of image quality or interpretability as a basic measure of the information content of the output images and the expected performance that they provide. Two candidate approaches have been identified for the development of an image quality metric. The first approach is the development of a modified National Imagery Interpretability Rating Scale (NIIRS) based on the EOID tasks. Coupled with this new scale would be a modified form of the General Image Quality Equation (GIQE) to provide a bridge from the system parameters to the NIIRS scale. The other approach is based on the Target Acquisition Model (TAM) that has foundations in Johnson's criteria and a set of tasks. The following paper presents these two approaches along with an explanation of their application to the EOID problem.

  1. Benchmarking Model Variants in Development of a Hardware-in-the-Loop Simulation System

    NASA Technical Reports Server (NTRS)

    Aretskin-Hariton, Eliot D.; Zinnecker, Alicia M.; Kratz, Jonathan L.; Culley, Dennis E.; Thomas, George L.

    2016-01-01

    Distributed engine control architecture presents a significant increase in complexity over traditional implementations when viewed from the perspective of system simulation and hardware design and test. Even if the overall function of the control scheme remains the same, the hardware implementation can have a significant effect on the overall system performance due to differences in the creation and flow of data between control elements. A Hardware-in-the-Loop (HIL) simulation system is under development at NASA Glenn Research Center that enables the exploration of these hardware dependent issues. The system is based on, but not limited to, the Commercial Modular Aero-Propulsion System Simulation 40k (C-MAPSS40k). This paper describes the step-by-step conversion from the self-contained baseline model to the hardware in the loop model, and the validation of each step. As the control model hardware fidelity was improved during HIL system development, benchmarking simulations were performed to verify that engine system performance characteristics remained the same. The results demonstrate the goal of the effort; the new HIL configurations have similar functionality and performance compared to the baseline C-MAPSS40k system.

  2. Alquimia: Exposing mature biogeochemistry capabilities for easier benchmarking and development of next-generation subsurface codes

    NASA Astrophysics Data System (ADS)

    Johnson, J. N.; Molins, S.

    2015-12-01

    The complexity of subsurface models is increasing in order to address pressing scientific questions in hydrology and climate science. In particular, models that attempt to explore the coupling between microbial metabolic activity and hydrology at larger scales need an accurate representation of their underlying biogeochemical systems. These systems tend to be very complicated, and they result in large nonlinear systems that have to be coupled with flow and transport algorithms in reactive transport codes. The complexity inherent in implementing a robust treatment of biogeochemistry is a significant obstacle in the development of new codes. Alquimia is an open-source software library intended to help developers of these codes overcome this obstacle by exposing tried-and-true biogeochemical capabilities in existing software. It provides an interface through which a reactive transport code can access and evolve a chemical system, using one of several supported geochemical "engines." We will describe Alquimia's current capabilities, and how they can be used for benchmarking reactive transport codes. We will also discuss upcoming features that will facilitate the coupling of biogeochemistry to other processes in new codes.

  3. Metrics That Matter.

    PubMed

    Prentice, Julia C; Frakt, Austin B; Pizer, Steven D

    2016-04-01

    Increasingly, performance metrics are seen as key components for accurately measuring and improving health care value. Disappointment in the ability of chosen metrics to meet these goals is exemplified in a recent Institute of Medicine report that argues for a consensus-building process to determine a simplified set of reliable metrics. Overall health care goals should be defined and then metrics to measure these goals should be considered. If appropriate data for the identified goals are not available, they should be developed. We use examples from our work in the Veterans Health Administration (VHA) on validating waiting time and mental health metrics to highlight other key issues for metric selection and implementation. First, we focus on the need for specification and predictive validation of metrics. Second, we discuss strategies to maintain the fidelity of the data used in performance metrics over time. These strategies include using appropriate incentives and data sources, using composite metrics, and ongoing monitoring. Finally, we discuss the VA's leadership in developing performance metrics through a planned upgrade in its electronic medical record system to collect more comprehensive VHA and non-VHA data, increasing the ability to comprehensively measure outcomes. PMID:26951272

  4. Conceptual Framework for Developing Resilience Metrics for the Electricity, Oil, and Gas Sectors in the United States

    SciTech Connect

    Watson, Jean-Paul; Guttromson, Ross; Silva-Monroy, Cesar; Jeffers, Robert; Jones, Katherine; Ellison, James; Rath, Charles; Gearhart, Jared; Jones, Dean; Corbet, Tom; Hanley, Charles; Walker, La Tonya

    2014-09-01

    This report has been written for the Department of Energy’s Energy Policy and Systems Analysis Office to inform their writing of the Quadrennial Energy Review in the area of energy resilience. The topics of measuring and increasing energy resilience are addressed, including definitions, means of measuring, and analytic methodologies that can be used to make decisions for policy, infrastructure planning, and operations. A risk-based framework is presented which provides a standard definition of a resilience metric. Additionally, a process is identified which explains how the metrics can be applied. Research and development is articulated that will further accelerate the resilience of energy infrastructures.

  5. A Strategy for Developing a Common Metric in Item Response Theory when Parameter Posterior Distributions Are Known

    ERIC Educational Resources Information Center

    Baldwin, Peter

    2011-01-01

    Growing interest in fully Bayesian item response models begs the question: To what extent can model parameter posterior draws enhance existing practices? One practice that has traditionally relied on model parameter point estimates but may be improved by using posterior draws is the development of a common metric for two independently calibrated…

  6. How to Advance TPC Benchmarks with Dependability Aspects

    NASA Astrophysics Data System (ADS)

    Almeida, Raquel; Poess, Meikel; Nambiar, Raghunath; Patil, Indira; Vieira, Marco

    Transactional systems are the core of the information systems of most organizations. Although there is general acknowledgement that failures in these systems often entail significant impact both on the proceeds and reputation of companies, the benchmarks developed and managed by the Transaction Processing Performance Council (TPC) still maintain their focus on reporting bare performance. Each TPC benchmark has to pass a list of dependability-related tests (to verify ACID properties), but not all benchmarks require measuring the performance of those recovery mechanisms. While TPC-E measures the recovery time of some system failures, TPC-H and TPC-C only require functional correctness of such recovery. Consequently, systems used in TPC benchmarks are tuned mostly for performance. In this paper we argue that nowadays systems should be tuned for a more comprehensive suite of dependability tests, and that a dependability metric should be part of TPC benchmark publications. The paper discusses WHY and HOW this can be achieved. Two approaches are introduced and discussed: augmenting each TPC benchmark in a customized way, by extending each specification individually; and pursuing a more unified approach, defining a generic specification that could be adjoined to any TPC benchmark.

  7. Cone beam computed tomography: Development of system characterization metrics and applications

    NASA Astrophysics Data System (ADS)

    Betancourt Benitez, Jose Ricardo

    Cone beam computed tomography has emerged as a promising medical imaging tool due to its short scanning time, large volume coverage and isotropic spatial resolution in three dimensions, among other characteristics. However, due to its inherent three-dimensionality, it is important to understand and characterize its physical characteristics in order to improve its performance and extend its applications in medical imaging. One of the main components of a cone beam computed tomography system is its flat panel detector. Its physical characteristics were evaluated in terms of spatial resolution, linearity, image lag, noise power spectrum and detective quantum efficiency. After evaluating the physical performance of the flat panel detector, metrics to evaluate the image quality of the system were developed and used to evaluate the system's image quality. In particular, the modulation transfer function and the noise power spectrum were characterized and evaluated for a PaxScan 4030CB FPD-based cone beam computed tomography system. Finally, novel applications using cone beam computed tomography images were suggested and evaluated for their practical application. For example, the characterization of breast density was evaluated and further studies were suggested that could impact the health system in relation to breast cancer. Another novel application was the utilization of cone beam computed tomography for orthopedic imaging. In this thesis, an initial assessment of its practical application was performed. Overall, three cone beam computed tomography systems were evaluated and utilized for different novel applications that would advance the field of medical imaging.

  8. Making Benchmark Testing Work

    ERIC Educational Resources Information Center

    Herman, Joan L.; Baker, Eva L.

    2005-01-01

    Many schools are moving to develop benchmark tests to monitor their students' progress toward state standards throughout the academic year. Benchmark tests can provide the ongoing information that schools need to guide instructional programs and to address student learning problems. The authors discuss six criteria that educators can use to…

  9. DNS benchmark solution of the fully developed turbulent channel flow with heat transfer

    NASA Astrophysics Data System (ADS)

    Jaszczur, M.

    2014-08-01

    In the present paper, direct numerical simulation (DNS) of fully developed turbulent non-isothermal channel flow has been studied for Reτ=150 and Pr=1.0. The focus is on the role of the type of thermal boundary condition on the results. Various types of thermal boundary conditions presented in the literature have been considered in this work: isoflux wall boundary conditions, symmetrical isoflux wall boundary conditions, and isothermal boundary conditions, also in combination with an adiabatic or isothermal second wall. Turbulence statistics for the fluid flow and thermal field, as well as turbulence structures, are presented and compared. Numerical analysis assuming both zero and non-zero temperature fluctuations at the wall, and zero and non-zero temperature gradient in the channel centre, shows that thermal structures may differ depending on the case and region. The results show that the type of thermal boundary condition significantly influences the temperature fluctuations, while the mean temperature is not affected. Differences in the temperature fluctuations generate differences in the turbulent heat fluxes. The results are prepared in the form of benchmark solution data and will be available in digital form on the website http://home.agh.edu.pl/jaszczur.

  10. Developing chemical criteria for wildlife: The benchmark dose versus NOAEL approach

    SciTech Connect

    Linder, G.

    1995-12-31

    Wildlife may be exposed to a wide variety of chemicals in their environment, and various strategies for evaluating wildlife risk for these chemicals have been developed. One, a "no-observable-adverse-effects-level" or NOAEL approach, has increasingly been applied to develop chemical criteria for wildlife. In this approach, the NOAEL represents the highest experimental concentration at which there is no statistically significant change in some toxicity endpoint relative to a control. Another, the "benchmark dose" or BMD approach, relies on the lower confidence limit for a concentration that corresponds to a small, but statistically significant, change in effect over some reference condition. Rather than corresponding to a single experimental concentration as the NOAEL does, the BMD approach considers the full concentration-response curve for derivation of the BMD. Here, using a variety of vertebrates and an assortment of chemicals (including carbofuran, paraquat, methylmercury, cadmium, zinc, and copper), the NOAEL approach will be critically evaluated relative to the BMD approach. Statistical models used in the BMD approach suggest that these methods could potentially eliminate safety factors in risk calculations. A reluctance to recommend this, however, stems from the uncertainty associated with the shape of concentration-response curves at low concentrations. Also, with existing data, the derivation of BMDs has shortcomings when sample size is small (10 or fewer animals per treatment). The success of BMD models clearly depends upon the continued collection of wildlife data in the field and laboratory, the design of toxicity studies sufficient for BMD calculations, and complete reporting of these results in the literature. Overall, the BMD approach for developing chemical criteria for wildlife should be given further consideration, since it more fully evaluates concentration-response data.
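
    To make the contrast concrete, the sketch below fits a simple one-hit (exponential) concentration-response model to invented quantal data and derives a benchmark dose at 10% extra risk. The model choice, the 10% benchmark response, and the data are illustrative assumptions; a full BMD analysis would also report a lower confidence limit (BMDL) from profile likelihood or bootstrapping.

        # Illustrative sketch: benchmark dose (BMD) from a one-hit model,
        # p(d) = p0 + (1 - p0) * (1 - exp(-k * d)),
        # versus simply reading off the NOAEL as the highest no-effect test concentration.
        import numpy as np
        from scipy.optimize import curve_fit

        doses     = np.array([0.0, 0.5, 1.0, 2.0, 4.0, 8.0])   # invented test concentrations
        n_animals = np.array([10, 10, 10, 10, 10, 10])
        affected  = np.array([1, 1, 2, 3, 6, 9])               # invented responses
        observed  = affected / n_animals

        def one_hit(d, p0, k):
            return p0 + (1.0 - p0) * (1.0 - np.exp(-k * d))

        (p0_hat, k_hat), _ = curve_fit(one_hit, doses, observed, p0=[0.1, 0.1], bounds=([0, 0], [1, 10]))

        bmr = 0.10                                   # 10% extra risk over background
        bmd = -np.log(1.0 - bmr) / k_hat             # solve (p(d) - p0) / (1 - p0) = bmr for d
        print(f"fitted background p0={p0_hat:.2f}, k={k_hat:.2f}, BMD10 ~ {bmd:.2f} (same units as dose)")
        # A BMDL (lower confidence limit on the BMD) would normally be reported as the benchmark value.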

  11. Benchmarking B-Cell Epitope Prediction with Quantitative Dose-Response Data on Antipeptide Antibodies: Towards Novel Pharmaceutical Product Development

    PubMed Central

    Caoili, Salvador Eugenio C.

    2014-01-01

    B-cell epitope prediction can enable novel pharmaceutical product development. However, a mechanistically framed consensus has yet to emerge on benchmarking such prediction, thus presenting an opportunity to establish standards of practice that circumvent epistemic inconsistencies of casting the epitope prediction task as a binary-classification problem. As an alternative to conventional dichotomous qualitative benchmark data, quantitative dose-response data on antibody-mediated biological effects are more meaningful from an information-theoretic perspective in the sense that such effects may be expressed as probabilities (e.g., of functional inhibition by antibody) for which the Shannon information entropy (SIE) can be evaluated as a measure of informativeness. Accordingly, half-maximal biological effects (e.g., at median inhibitory concentrations of antibody) correspond to maximally informative data while undetectable and maximal biological effects correspond to minimally informative data. This applies to benchmarking B-cell epitope prediction for the design of peptide-based immunogens that elicit antipeptide antibodies with functionally relevant cross-reactivity. Presently, the Immune Epitope Database (IEDB) contains relatively few quantitative dose-response data on such cross-reactivity. Only a small fraction of these IEDB data is maximally informative, and many more of them are minimally informative (i.e., with zero SIE). Nevertheless, the numerous qualitative data in IEDB suggest how to overcome the paucity of informative benchmark data. PMID:24949474
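
    The informativeness argument above can be stated in one line: for an antibody-mediated effect expressed as a probability p, the Shannon information entropy is maximal at p = 0.5 (half-maximal effect) and vanishes as p approaches 0 or 1. A minimal sketch:

        # Shannon information entropy (in bits) of a binary outcome with probability p,
        # as a measure of how informative a dose-response observation is.
        import math

        def shannon_entropy(p):
            if p <= 0.0 or p >= 1.0:
                return 0.0  # undetectable or maximal effects carry minimal information
            return -(p * math.log2(p) + (1.0 - p) * math.log2(1.0 - p))

        for p in (0.0, 0.05, 0.5, 0.95, 1.0):   # e.g., fractional inhibition at a given antibody dose
            print(f"effect probability {p:.2f} -> SIE = {shannon_entropy(p):.3f} bits")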

  12. Development of a new, robust and accurate, spectroscopic metric for scatterer size estimation in optical coherence tomography (OCT) images

    NASA Astrophysics Data System (ADS)

    Kassinopoulos, Michalis; Pitris, Costas

    2016-03-01

    The modulations appearing on the backscattering spectrum originating from a scatterer are related to its diameter as described by Mie theory for spherical particles. Many metrics for Spectroscopic Optical Coherence Tomography (SOCT) take advantage of this observation in order to enhance the contrast of Optical Coherence Tomography (OCT) images. However, none of these metrics has achieved high accuracy when calculating the scatterer size. In this work, Mie theory was used to further investigate the relationship between the degree of modulation in the spectrum and the scatterer size. From this study, a new spectroscopic metric, the bandwidth of the Correlation of the Derivative (COD) was developed which is more robust and accurate, compared to previously reported techniques, in the estimation of scatterer size. The self-normalizing nature of the derivative and the robustness of the first minimum of the correlation as a measure of its width, offer significant advantages over other spectral analysis approaches especially for scatterer sizes above 3 μm. The feasibility of this technique was demonstrated using phantom samples containing 6, 10 and 16 μm diameter microspheres as well as images of normal and cancerous human colon. The results are very promising, suggesting that the proposed metric could be implemented in OCT spectral analysis for measuring nuclear size distribution in biological tissues. A technique providing such information would be of great clinical significance since it would allow the detection of nuclear enlargement at the earliest stages of precancerous development.
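
    The sketch below shows one plausible reading of the metric described above: differentiate the backscattering spectrum, autocorrelate the derivative, and take the lag of the first minimum of the correlation as a bandwidth measure. The synthetic spectrum and the exact definition are illustrative assumptions rather than the authors' implementation.

        # Illustrative sketch of a "correlation of the derivative" style bandwidth measure
        # applied to a synthetic modulated backscattering spectrum (not real Mie data).
        import numpy as np

        wavenumber = np.linspace(7.0, 9.0, 512)                       # 1/um, arbitrary spectral band
        spectrum = 1.0 + 0.3 * np.cos(2 * np.pi * 4.0 * wavenumber)   # synthetic spectral modulation

        derivative = np.gradient(spectrum, wavenumber)                # self-normalizing w.r.t. overall amplitude
        derivative -= derivative.mean()
        corr = np.correlate(derivative, derivative, mode="full")[len(derivative) - 1:]
        corr /= corr[0]                                               # normalized autocorrelation, lag >= 0

        # Bandwidth proxy: lag (in samples) of the first local minimum of the autocorrelation.
        first_min = next(i for i in range(1, len(corr) - 1) if corr[i] < corr[i - 1] and corr[i] <= corr[i + 1])
        print(f"first autocorrelation minimum at lag {first_min} samples "
              f"({first_min * (wavenumber[1] - wavenumber[0]):.3f} 1/um)")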

  13. Developing and Using Benchmarks for Eddy Current Simulation Codes Validation to Address Industrial Issues

    NASA Astrophysics Data System (ADS)

    Mayos, M.; Buvat, F.; Costan, V.; Moreau, O.; Gilles-Pascaud, C.; Reboud, C.; Foucher, F.

    2011-06-01

    To achieve performance demonstration, which is a legal requirement for the qualification of NDE processes applied on French nuclear power plants, modeling tools provide valuable support, provided that the employed models have been previously validated. To achieve this, in particular for eddy current modeling, a validation methodology based on the use of specific benchmarks close to the actual industrial issue has to be defined. Nonetheless, considering the high variability in code origin and complexity, feedback from experience on actual cases has shown that it is critical to define simpler generic and public benchmarks in order to perform a preliminary selection. A specific Working Group has been launched in the frame of COFREND, the French Association for NDE, resulting in the definition of several benchmark problems. This action is now ready for mutualization with similar international approaches.

  14. Benchmarking the CRBLASTER Computational Framework on the 350-MHz 49-core Maestro Development Board

    NASA Astrophysics Data System (ADS)

    Mighell, K. J.

    2012-09-01

    I describe the performance of the CRBLASTER computational framework on a 350-MHz 49-core Maestro Development Board (MBD). The 49-core Interim Test Chip (ITC) was developed by the U.S. Government and is based on the intellectual property of the 64-core TILE64 processor of the Tilera Corporation. The Maestro processor is intended for use in the high radiation environments found in space; the ITC was fabricated using IBM 90-nm CMOS 9SF technology and Radiation-Hardening-by-Design (RHBD) rules. CRBLASTER is a parallel-processing cosmic-ray rejection application based on a simple computational framework that uses the high-performance computing industry standard Message Passing Interface (MPI) library. CRBLASTER was designed to be used by research scientists to easily port image-analysis programs based on embarrassingly-parallel algorithms to a parallel-processing environment such as a multi-node Beowulf cluster or multi-core processors using MPI. I describe my experience of porting CRBLASTER to the 64-core TILE64 processor, the Maestro simulator, and finally the 49-core Maestro processor itself. Performance comparisons using the ITC are presented between emulating all floating-point operations in software and doing all floating point operations with hardware assist from an IEEE-754 compliant Aurora FPU (floating point unit) that is attached to each of the 49 cores. Benchmarking of the CRBLASTER computational framework using the memory-intensive L.A.COSMIC cosmic ray rejection algorithm and a computation-intensive Poisson noise generator reveals subtleties of the Maestro hardware design. Lastly, I describe the importance of using real scientific applications during the testing phase of next-generation computer hardware; complex real-world scientific applications can stress hardware in novel ways that may not necessarily be revealed while executing simple applications or unit tests.

  15. Benchmarking Diagnostic Algorithms on an Electrical Power System Testbed

    NASA Technical Reports Server (NTRS)

    Kurtoglu, Tolga; Narasimhan, Sriram; Poll, Scott; Garcia, David; Wright, Stephanie

    2009-01-01

    Diagnostic algorithms (DAs) are key to enabling automated health management. These algorithms are designed to detect and isolate anomalies of either a component or the whole system based on observations received from sensors. In recent years a wide range of algorithms, both model-based and data-driven, have been developed to increase autonomy and improve system reliability and affordability. However, the lack of support to perform systematic benchmarking of these algorithms continues to create barriers for effective development and deployment of diagnostic technologies. In this paper, we present our efforts to benchmark a set of DAs on a common platform using a framework that was developed to evaluate and compare various performance metrics for diagnostic technologies. The diagnosed system is an electrical power system, namely the Advanced Diagnostics and Prognostics Testbed (ADAPT) developed and located at the NASA Ames Research Center. The paper presents the fundamentals of the benchmarking framework, the ADAPT system, description of faults and data sets, the metrics used for evaluation, and an in-depth analysis of benchmarking results obtained from testing ten diagnostic algorithms on the ADAPT electrical power system testbed.
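    The paper's evaluation framework defines its own metric set; as a generic illustration of the kind of quantities such a benchmark computes, the sketch below derives a detection rate, a false-alarm rate, and a mean detection latency from per-scenario records. The record fields are assumptions for illustration, not the ADAPT framework's actual schema.

      from dataclasses import dataclass
      from statistics import mean
      from typing import Optional

      @dataclass
      class Scenario:
          fault_injected: bool
          fault_time: Optional[float]       # seconds; None for nominal scenarios
          detection_time: Optional[float]   # None if the DA never raised an alarm

      def benchmark(scenarios):
          faulty = [s for s in scenarios if s.fault_injected]
          nominal = [s for s in scenarios if not s.fault_injected]
          detected = [s for s in faulty if s.detection_time is not None]
          return {
              "detection_rate": len(detected) / len(faulty),
              "false_alarm_rate": sum(s.detection_time is not None for s in nominal) / len(nominal),
              "mean_detection_latency_s": mean(s.detection_time - s.fault_time for s in detected),
          }

      print(benchmark([Scenario(True, 10.0, 12.5), Scenario(True, 5.0, None),
                       Scenario(False, None, None), Scenario(False, None, 30.0)]))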

  16. Primary Metrics.

    ERIC Educational Resources Information Center

    Otto, Karen; And Others

    These 55 activity cards were created to help teachers implement a unit on metric measurement. They were designed for students aged 5 to 10, but could be used with older students. Cards are color-coded in terms of activities on basic metric terms, prefixes, length, and other measures. Both individual and small-group games and ideas are included.…

  17. Mastering Metrics

    ERIC Educational Resources Information Center

    Parrot, Annette M.

    2005-01-01

    By the time students reach a middle school science course, they are expected to make measurements using the metric system. However, most are not practiced in its use, as their experience in metrics is often limited to one unit they were taught in elementary school. This lack of knowledge is not wholly the fault of formal education. Although the…

  18. Metric Education Evaluation Package.

    ERIC Educational Resources Information Center

    Kansky, Bob; And Others

    This document was developed out of a need for a complete, carefully designed set of evaluation instruments and procedures that might be applied in metric inservice programs across the nation. Components of this package were prepared in such a way as to permit local adaptation to the evaluation of a broad spectrum of metric education activities.…

  19. Numerical studies and metric development for validation of magnetohydrodynamic models on the HIT-SI experiment

    SciTech Connect

    Hansen, C.; Victor, B.; Morgan, K.; Hossack, A.; Sutherland, D.; Jarboe, T.; Nelson, B. A.; Marklin, G.

    2015-05-15

    We present application of three scalar metrics derived from the Biorthogonal Decomposition (BD) technique to evaluate the level of agreement between macroscopic plasma dynamics in different data sets. BD decomposes large data sets, as produced by distributed diagnostic arrays, into principal mode structures without assumptions on spatial or temporal structure. These metrics have been applied to validation of the Hall-MHD model using experimental data from the Helicity Injected Torus with Steady Inductive helicity injection experiment. Each metric provides a measure of correlation between mode structures extracted from experimental data and simulations for an array of 192 surface-mounted magnetic probes. Numerical validation studies have been performed using the NIMROD code, where the injectors are modeled as boundary conditions on the flux conserver, and the PSI-TET code, where the entire plasma volume is treated. Initial results from a comprehensive validation study of high performance operation with different injector frequencies are presented, illustrating application of the BD method. Using a simplified (constant, uniform density and temperature) Hall-MHD model, simulation results agree with experimental observation for two of the three defined metrics when the injectors are driven with a frequency of 14.5 kHz.
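    Biorthogonal decomposition of a probes-by-time data matrix can be computed with a singular value decomposition; the sketch below extracts dominant spatial modes from two such matrices and reports a simple cosine-overlap between matched modes. This shows only the general pattern; the three metrics used in the study are defined in the paper, and the probe signals here are synthetic stand-ins.

      import numpy as np

      def bd_modes(data, n_modes=2):
          """data: (n_probes, n_times) array; returns the dominant spatial modes."""
          centered = data - data.mean(axis=1, keepdims=True)
          U, s, Vt = np.linalg.svd(centered, full_matrices=False)
          return U[:, :n_modes], s[:n_modes]

      rng = np.random.default_rng(0)
      theta = np.linspace(0, 2 * np.pi, 192, endpoint=False)   # stand-in probe positions
      t = np.linspace(0, 1e-3, 2000)                           # 1 ms of synthetic signal
      exp = (np.outer(np.cos(theta), np.sin(2 * np.pi * 14.5e3 * t))
             + 0.5 * np.outer(np.cos(2 * theta), np.sin(2 * np.pi * 29.0e3 * t))
             + 0.05 * rng.standard_normal((192, 2000)))
      sim = exp + 0.2 * rng.standard_normal((192, 2000))       # "simulation" with extra noise

      topos_exp, _ = bd_modes(exp)
      topos_sim, _ = bd_modes(sim)

      # |cosine| overlap between matched experimental and simulated spatial modes
      overlap = np.abs(np.sum(topos_exp * topos_sim, axis=0))
      print(np.round(overlap, 3))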

  20. Aircraft Engine Gas Path Diagnostic Methods: Public Benchmarking Results

    NASA Technical Reports Server (NTRS)

    Simon, Donald L.; Borguet, Sebastien; Leonard, Olivier; Zhang, Xiaodong (Frank)

    2013-01-01

    Recent technology reviews have identified the need for objective assessments of aircraft engine health management (EHM) technologies. To help address this issue, a gas path diagnostic benchmark problem has been created and made publicly available. This software tool, referred to as the Propulsion Diagnostic Method Evaluation Strategy (ProDiMES), has been constructed based on feedback provided by the aircraft EHM community. It provides a standard benchmark problem enabling users to develop, evaluate and compare diagnostic methods. This paper will present an overview of ProDiMES along with a description of four gas path diagnostic methods developed and applied to the problem. These methods, which include analytical and empirical diagnostic techniques, will be described and associated blind-test-case metric results will be presented and compared. Lessons learned along with recommendations for improving the public benchmarking processes will also be presented and discussed.

  1. Nuclear Energy Readiness Indicator Index (NERI): A benchmarking tool for assessing nuclear capacity in developing countries

    SciTech Connect

    Saum-Manning,L.

    2008-07-13

    Declining natural resources, rising oil prices, looming climate change and the introduction of nuclear energy partnerships, such as GNEP, have reinvigorated global interest in nuclear energy. The convergence of such issues has prompted countries to move ahead quickly to deal with the challenges that lie ahead. However, developing countries, in particular, often lack the domestic infrastructure and public support needed to implement a nuclear energy program in a safe, secure, and nonproliferation-conscious environment. How might countries become ready for nuclear energy? What is needed is a framework for assessing a country's readiness for nuclear energy. This paper suggests that a Nuclear Energy Readiness Indicator (NERI) Index might serve as a meaningful basis for assessing a country's status in terms of progress toward nuclear energy utilization under appropriate conditions. The NERI Index is a benchmarking tool that measures a country's level of 'readiness' for nonproliferation-conscious nuclear energy development. NERI first identifies 8 key indicators that have been recognized by the International Atomic Energy Agency as key nonproliferation and security milestones to achieve prior to establishing a nuclear energy program. It then measures a country's progress in each of these areas on a 1-5 point scale. In doing so NERI illuminates gaps or underdeveloped areas in a country's nuclear infrastructure with a view to enable stakeholders to prioritize the allocation of resources toward programs and policies supporting international nonproliferation goals through responsible nuclear energy development. On a preliminary basis, the indicators selected include: (1) demonstrated need; (2) expressed political support; (3) participation in nonproliferation and nuclear security treaties, international terrorism conventions, and export and border control arrangements; (4) national nuclear-related legal and regulatory mechanisms; (5) nuclear infrastructure; (6) the

  2. Development of new VOC exposure metrics and their relationship to ''Sick Building Syndrome'' symptoms

    SciTech Connect

    Ten Brinke, JoAnn

    1995-08-01

    Volatile organic compounds (VOCs) are suspected to contribute significantly to "Sick Building Syndrome" (SBS), a complex of subchronic symptoms that occurs during and in general decreases away from occupancy of the building in question. A new approach takes into account individual VOC potencies, as well as the highly correlated nature of the complex VOC mixtures found indoors. The new VOC metrics are statistically significant predictors of symptom outcomes from the California Healthy Buildings Study data. Multivariate logistic regression analyses were used to test the hypothesis that a summary measure of the VOC mixture, other risk factors, and covariates for each worker will lead to better prediction of symptom outcome. VOC metrics based on animal irritancy measures and principal component analysis had the most influence in the prediction of eye, dermal, and nasal symptoms. After adjustment, a water-based paints and solvents source was found to be associated with dermal and eye irritation. The more typical VOC exposure metrics used in prior analyses were not useful in symptom prediction in the adjusted model (total VOC (TVOC), or sum of individually identified VOCs (ΣVOCi)). Also not useful were three other VOC metrics that took into account potency, but did not adjust for the highly correlated nature of the data set, or the presence of VOCs that were not measured. High TVOC values (2–7 mg m⁻³) due to the presence of liquid-process photocopiers observed in several study spaces significantly influenced symptoms. Analyses without the high TVOC values reduced, but did not eliminate, the ability of the VOC exposure metric based on irritancy and principal component analysis to explain symptom outcome.
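    As a schematic of the analysis pattern described above (potency-weighted, highly correlated VOC concentrations reduced by principal component analysis and fed into a logistic model of symptom outcome), the following scikit-learn sketch is given. The data, irritancy weights, and number of components are all assumptions for illustration, not values from the study.

      import numpy as np
      from sklearn.decomposition import PCA
      from sklearn.linear_model import LogisticRegression
      from sklearn.pipeline import make_pipeline
      from sklearn.preprocessing import StandardScaler

      rng = np.random.default_rng(1)
      voc = rng.lognormal(mean=0.0, sigma=1.0, size=(300, 12))   # 12 VOCs, 300 workers
      irritancy_weight = rng.uniform(0.1, 1.0, size=12)          # stand-in potency weights
      symptoms = rng.integers(0, 2, size=300)                    # 1 = reported symptom

      X = voc * irritancy_weight                                 # potency-weighted exposures
      model = make_pipeline(StandardScaler(), PCA(n_components=3), LogisticRegression())
      model.fit(X, symptoms)
      print("in-sample accuracy:", model.score(X, symptoms))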

  3. Developing Empirical Benchmarks of Teacher Knowledge Effect Sizes in Studies of Professional Development Effectiveness

    ERIC Educational Resources Information Center

    Phelps, Geoffrey; Jones, Nathan; Kelcey, Ben; Liu, Shuangshuang; Kisa, Zahid

    2013-01-01

    Growing interest in teaching quality and accountability has focused attention on the need for rigorous studies and evaluations of professional development (PD) programs. However, the study of PD has been hampered by a lack of suitable instruments. The authors present data from the Teacher Knowledge Assessment System (TKAS), which was designed to…

  4. Surveillance Metrics Sensitivity Study

    SciTech Connect

    Bierbaum, R; Hamada, M; Robertson, A

    2011-11-01

    In September of 2009, a Tri-Lab team was formed to develop a set of metrics relating to the NNSA nuclear weapon surveillance program. The purpose of the metrics was to develop a more quantitative and/or qualitative metric(s) describing the results of realized or non-realized surveillance activities on our confidence in reporting reliability and assessing the stockpile. As a part of this effort, a statistical sub-team investigated various techniques and developed a complementary set of statistical metrics that could serve as a foundation for characterizing aspects of meeting the surveillance program objectives. The metrics are a combination of tolerance limit calculations and power calculations, intending to answer level-of-confidence type questions with respect to the ability to detect certain undesirable behaviors (catastrophic defects, margin insufficiency defects, and deviations from a model). Note that the metrics are not intended to gauge product performance but instead the adequacy of surveillance. This report gives a short description of four metrics types that were explored and the results of a sensitivity study conducted to investigate their behavior for various inputs. The results of the sensitivity study can be used to set the risk parameters that specify the level of stockpile problem that the surveillance program should be addressing.

  5. Surveillance metrics sensitivity study.

    SciTech Connect

    Hamada, Michael S.; Bierbaum, Rene Lynn; Robertson, Alix A.

    2011-09-01

    In September of 2009, a Tri-Lab team was formed to develop a set of metrics relating to the NNSA nuclear weapon surveillance program. The purpose of the metrics was to develop a more quantitative and/or qualitative metric(s) describing the results of realized or non-realized surveillance activities on our confidence in reporting reliability and assessing the stockpile. As a part of this effort, a statistical sub-team investigated various techniques and developed a complementary set of statistical metrics that could serve as a foundation for characterizing aspects of meeting the surveillance program objectives. The metrics are a combination of tolerance limit calculations and power calculations, intending to answer level-of-confidence type questions with respect to the ability to detect certain undesirable behaviors (catastrophic defects, margin insufficiency defects, and deviations from a model). Note that the metrics are not intended to gauge product performance but instead the adequacy of surveillance. This report gives a short description of four metrics types that were explored and the results of a sensitivity study conducted to investigate their behavior for various inputs. The results of the sensitivity study can be used to set the risk parameters that specify the level of stockpile problem that the surveillance program should be addressing.
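    Both records above describe metrics built from tolerance-limit and power calculations. As a generic illustration of those two ingredients (not the Tri-Lab team's actual formulations), the sketch below computes the confidence obtained from a zero-failure sample and the power of an attributes sampling plan to detect a given defect rate.

      from scipy.stats import binom

      def confidence_zero_failures(n, p):
          """Confidence that the true defect fraction is below p after n defect-free units."""
          return 1.0 - (1.0 - p) ** n

      def power_to_detect(n, p, accept_at_most=0):
          """Probability a sample of n flags a population whose true defect rate is p,
          when the plan rejects on more than accept_at_most defectives."""
          return 1.0 - binom.cdf(accept_at_most, n, p)

      print(f"{confidence_zero_failures(22, 0.10):.1%} confidence of <10% defects after 22 clean samples")
      print(f"{power_to_detect(22, 0.10):.1%} power to catch a true 10% defect rate")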

  6. Quality Metrics in Endoscopy

    PubMed Central

    Gurudu, Suryakanth R.

    2013-01-01

    Endoscopy has evolved in the past 4 decades to become an important tool in the diagnosis and management of many digestive diseases. Greater focus on endoscopic quality has highlighted the need to ensure competency among endoscopists. A joint task force of the American College of Gastroenterology and the American Society for Gastrointestinal Endoscopy has proposed several quality metrics to establish competence and help define areas of continuous quality improvement. These metrics represent quality in endoscopy pertinent to pre-, intra-, and postprocedural periods. Quality in endoscopy is a dynamic and multidimensional process that requires continuous monitoring of several indicators and benchmarking with local and national standards. Institutions and practices should have a process in place for credentialing endoscopists and for the assessment of competence regarding individual endoscopic procedures. PMID:24711767

  7. Development and Benchmarking of a Hybrid PIC Code For Dense Plasmas and Fast Ignition

    SciTech Connect

    Witherspoon, F. Douglas; Welch, Dale R.; Thompson, John R.; MacFarlane, Joeseph J.; Phillips, Michael W.; Bruner, Nicki; Mostrom, Chris; Thoma, Carsten; Clark, R. E.; Bogatu, Nick; Kim, Jin-Soo; Galkin, Sergei; Golovkin, Igor E.; Woodruff, P. R.; Wu, Linchun; Messer, Sarah J.

    2014-05-20

    Computational Sciences, Inc. and Advanced Energy Systems Inc. joined efforts to develop new physics and numerical models for LSP in several key areas to enhance the ability of LSP to model high energy density plasmas (HEDP). This final report details those efforts. Areas addressed in this research effort include: adding radiation transport to LSP, first in 2D and then fully 3D, extending the EMHD model to 3D, implementing more advanced radiation and electrode plasma boundary conditions, and installing more efficient implicit numerical algorithms to speed complex 2-D and 3-D computations. The new capabilities allow modeling of the dominant processes in high energy density plasmas, and further assist the development and optimization of plasma jet accelerators, with particular attention to MHD instabilities and plasma/wall interaction (based on physical models for ion drag friction and ablation/erosion of the electrodes). In the first funding cycle we implemented a solver for the radiation diffusion equation. To solve this equation in 2-D, we used finite-differencing and applied the parallelized sparse-matrix solvers in the PETSc library (Argonne National Laboratory) to the resulting system of equations. A database of the necessary coefficients for materials of interest was assembled using the PROPACEOS and ATBASE codes from Prism. The model was benchmarked against Prism's 1-D radiation hydrodynamics code HELIOS, and against experimental data obtained from HyperV's separately funded plasma jet accelerator development program. Work in the second funding cycle focused on extending the radiation diffusion model to full 3-D, continuing development of the EMHD model, optimizing the direct-implicit model to speed up calculations, adding multiply ionized atoms, and improving the way boundary conditions are handled in LSP. These new LSP capabilities were then used, along with analytic calculations and Mach2 runs, to investigate plasma jet merging, plasma detachment and transport, restrike

  8. Metrication in a global environment

    NASA Technical Reports Server (NTRS)

    Aberg, J.

    1994-01-01

    A brief history about the development of the metric system of measurement is given. The need for the U.S. to implement the 'SI' metric system in the international markets, especially in the aerospace and general trade, is discussed. Development of metric implementation and experiences locally, nationally, and internationally are included.

  9. Progress in developing the ASPECT Mantle Convection Code - New Features, Benchmark Comparisons and Applications

    NASA Astrophysics Data System (ADS)

    Dannberg, Juliane; Bangerth, Wolfgang; Sobolev, Stephan

    2014-05-01

    Since there is no direct access to the deep Earth, numerical simulations are an indispensable tool for exploring processes in the Earth's mantle. Results of these models can be compared to surface observations and, combined with constraints from seismology and geochemistry, have provided insight into a broad range of geoscientific problems. In this contribution we present results obtained from a next-generation finite-element code called ASPECT (Advanced Solver for Problems in Earth's ConvecTion), which is especially suited for modeling thermo-chemical convection due to its use of many modern numerical techniques: fully adaptive meshes, accurate discretizations, a nonlinear artificial diffusion method to stabilize the advection equation, an efficient solution strategy based on a block triangular preconditioner utilizing an algebraic multigrid, parallelization of all of the steps above and finally its modular and easily extensible implementation. In particular, the latter features make it a very versatile tool applicable also to lithosphere models. The equations are implemented in the form of the Anelastic Liquid Approximation with temperature, pressure, composition and strain rate dependent material properties including associated non-linear solvers. We will compare computations with ASPECT to common benchmarks in the geodynamics community such as the Rayleigh-Taylor instability (van Keken et al., 1997) and demonstrate recently implemented features such as a melting model with temperature, pressure and composition dependent melt fraction and latent heat. Moreover, we elaborate on a number of features currently under development by the community such as free surfaces, porous flow and elasticity. In addition, we show examples of how ASPECT is applied to develop sophisticated simulations of typical geodynamic problems. These include 3D models of thermo-chemical plumes incorporating phase transitions (including melting) with the accompanying density changes, Clapeyron

  10. BENCHMARKING SUSTAINABILITY ENGINEERING EDUCATION

    EPA Science Inventory

    The goals of this project are to develop and apply a methodology for benchmarking curricula in sustainability engineering and to identify individuals active in sustainability engineering education.

  11. The Development of a Benchmarking Methodology to Assist in Managing the Enhancement of University Research Quality

    ERIC Educational Resources Information Center

    Nicholls, Miles G.

    2007-01-01

    The paper proposes a metric, the research quality index (RQI), for assessing and tracking university research quality. The RQI is a composite index that encompasses the three main areas of research activity: publications, research grants and higher degree by research activity. The public availability of such an index will also facilitate…

  12. Limitations of Community College Benchmarking and Benchmarks

    ERIC Educational Resources Information Center

    Bers, Trudy H.

    2006-01-01

    This chapter distinguishes between benchmarks and benchmarking, describes a number of data and cultural limitations to benchmarking projects, and suggests that external demands for accountability are the dominant reason for growing interest in benchmarking among community colleges.

  13. Edible Metrics.

    ERIC Educational Resources Information Center

    Mecca, Christyna E.

    1998-01-01

    Presents an exercise that introduces students to scientific measurements using only metric units. At the conclusion of the exercise, students eat the experiment. Requires dried refried beans, crackers or chips, and dried instant powder for lemonade. (DDR)

  14. Think Metric

    USGS Publications Warehouse

    U.S. Geological Survey

    1978-01-01

    The International System of Units, as the metric system is officially called, provides for a single "language" to describe weights and measures over the world. We in the United States together with the people of Brunei, Burma, and Yemen are the only ones who have not put this convenient system into effect. In the passage of the Metric Conversion Act of 1975, Congress determined that we also will adopt it, but the transition will be voluntary.

  15. Benchmarking expert system tools

    NASA Technical Reports Server (NTRS)

    Riley, Gary

    1988-01-01

    As part of its evaluation of new technologies, the Artificial Intelligence Section of the Mission Planning and Analysis Div. at NASA-Johnson has made timing tests of several expert system building tools. Among the production systems tested were Automated Reasoning Tool, several versions of OPS5, and CLIPS (C Language Integrated Production System), an expert system builder developed by the AI section. Also included in the test were a Zetalisp version of the benchmark along with four versions of the benchmark written in Knowledge Engineering Environment, an object oriented, frame based expert system tool. The benchmarks used for testing are studied.

  16. NASA Software Engineering Benchmarking Study

    NASA Technical Reports Server (NTRS)

    Rarick, Heather L.; Godfrey, Sara H.; Kelly, John C.; Crumbley, Robert T.; Wifl, Joel M.

    2013-01-01

    To identify best practices for the improvement of software engineering on projects, NASA's Offices of Chief Engineer (OCE) and Safety and Mission Assurance (OSMA) formed a team led by Heather Rarick and Sally Godfrey to conduct this benchmarking study. The primary goals of the study are to identify best practices that: Improve the management and technical development of software intensive systems; Have a track record of successful deployment by aerospace industries, universities [including research and development (R&D) laboratories], and defense services, as well as NASA's own component Centers; and Identify candidate solutions for NASA's software issues. Beginning in the late fall of 2010, focus topics were chosen and interview questions were developed, based on the NASA top software challenges. Between February 2011 and November 2011, the Benchmark Team interviewed a total of 18 organizations, consisting of five NASA Centers, five industry organizations, four defense services organizations, and four university or university R and D laboratory organizations. A software assurance representative also participated in each of the interviews to focus on assurance and software safety best practices. Interviewees provided a wealth of information on each topic area that included: software policy, software acquisition, software assurance, testing, training, maintaining rigor in small projects, metrics, and use of the Capability Maturity Model Integration (CMMI) framework, as well as a number of special topics that came up in the discussions. NASA's software engineering practices compared favorably with the external organizations in most benchmark areas, but in every topic, there were ways in which NASA could improve its practices. Compared to defense services organizations and some of the industry organizations, one of NASA's notable weaknesses involved communication with contractors regarding its policies and requirements for acquired software. One of NASA's strengths

  17. Toward Developing a New Occupational Exposure Metric Approach for Characterization of Diesel Aerosols

    PubMed Central

    Cauda, Emanuele G.; Ku, Bon Ki; Miller, Arthur L.; Barone, Teresa L.

    2015-01-01

    The extensive use of diesel-powered equipment in mines makes the exposure to diesel aerosols a serious occupational issue. The exposure metric currently used in U.S. underground noncoal mines is based on the measurement of total carbon (TC) and elemental carbon (EC) mass concentration in the air. Recent toxicological evidence suggests that the measurement of mass concentration is not sufficient to correlate ultrafine aerosol exposure with health effects. This urges the evaluation of alternative measurements. In this study, the current exposure metric and two additional metrics, the surface area and the total number concentration, were evaluated by conducting simultaneous measurements of diesel ultrafine aerosols in a laboratory setting. The results showed that the surface area and total number concentration of the particles per unit of mass varied substantially with the engine operating condition. The specific surface area (SSA) and specific number concentration (SNC) normalized with TC varied by factors of two and five, respectively. This implies that miners, whose exposure is measured only as TC, might be exposed to an unknown variable number concentration of diesel particles and commensurate particle surface area. Taken separately, mass, surface area, and number concentration did not completely characterize the aerosols. A comprehensive assessment of diesel aerosol exposure should include all of these elements, but the use of laboratory instruments in underground mines is generally impracticable. The article proposes a new approach to solve this problem. Using SSA and SNC calculated from field-type measurements, the evaluation of additional physical properties can be obtained by using the proposed approach. PMID:26361400
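    The two normalized quantities named above are straightforward to compute once mass, surface area, and number concentrations are measured simultaneously; the sketch below shows the arithmetic (specific surface area and specific number concentration per unit of total-carbon mass). The numerical values are illustrative only.

      def specific_metrics(surface_um2_per_cm3, number_per_cm3, tc_ug_per_m3):
          tc_ug_per_cm3 = tc_ug_per_m3 * 1e-6          # 1 m^3 = 1e6 cm^3
          return {
              "SSA_um2_per_ug_TC": surface_um2_per_cm3 / tc_ug_per_cm3,
              "SNC_particles_per_ug_TC": number_per_cm3 / tc_ug_per_cm3,
          }

      # Two engine operating points with the same TC but different size distributions:
      print(specific_metrics(surface_um2_per_cm3=2.0e5, number_per_cm3=1.0e6, tc_ug_per_m3=160))
      print(specific_metrics(surface_um2_per_cm3=6.0e5, number_per_cm3=5.0e6, tc_ug_per_m3=160))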

  18. Development and application of an agricultural intensity index to invertebrate and algal metrics from streams at two scales

    USGS Publications Warehouse

    Waite, Ian R.

    2013-01-01

    Research was conducted at 28-30 sites within eight study areas across the United States along a gradient of nutrient enrichment/agricultural land use between 2003 and 2007. Objectives were to test the application of an agricultural intensity index (AG-Index) and compare among various invertebrate and algal metrics to determine indicators of nutrient enrichment nationally and within three regions. The agricultural index was based on total nitrogen and phosphorus input to the watershed, percent watershed agriculture, and percent riparian agriculture. Among data sources, agriculture within the riparian zone showed significant differences among values generated from remote sensing or from higher resolution orthophotography; median values dropped significantly when estimated by orthophotography. Percent agriculture in the watershed consistently had lower correlations to invertebrate and algal metrics than the developed AG-Index across all regions. Percent agriculture showed fewer pairwise comparisons that were significant than the same comparisons using the AG-Index. Highest correlations to the AG-Index regionally were −0.75 for Ephemeroptera, Plecoptera, and Trichoptera richness (EPTR) and −0.70 for algae Observed/Expected (O/E), nationally the highest was −0.43 for EPTR vs. total nitrogen and −0.62 for algae O/E vs. AG-Index. Results suggest that analysis of metrics at national scale can often detect large differences in disturbance, but more detail and specificity are obtained by analyzing data at regional scales.
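    As an illustration of how such an index can be assembled and compared against biological metrics, the sketch below rescales the stated inputs (nutrient loads plus watershed and riparian agriculture) into a 0-100 composite per site and reports its Spearman rank correlation with a stand-in invertebrate metric. The equal weighting and synthetic data are assumptions, not the study's AG-Index formula.

      import numpy as np
      from scipy.stats import spearmanr

      def ag_index(n_input, p_input, pct_ag_watershed, pct_ag_riparian):
          parts = [np.asarray(x, dtype=float)
                   for x in (n_input, p_input, pct_ag_watershed, pct_ag_riparian)]
          scaled = [(x - x.min()) / (x.max() - x.min()) for x in parts]
          return 100 * np.mean(scaled, axis=0)          # 0-100 composite per site

      rng = np.random.default_rng(2)
      n_in, p_in = rng.uniform(0, 50, 30), rng.uniform(0, 5, 30)
      ag_w, ag_r = rng.uniform(0, 100, 30), rng.uniform(0, 100, 30)
      eptr = 40 - 0.3 * ag_w + rng.normal(0, 4, 30)     # stand-in EPT richness

      rho, pval = spearmanr(ag_index(n_in, p_in, ag_w, ag_r), eptr)
      print(f"Spearman rho = {rho:.2f} (p = {pval:.3f})")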

  19. EVA Health and Human Performance Benchmarking Study

    NASA Technical Reports Server (NTRS)

    Abercromby, A. F.; Norcross, J.; Jarvis, S. L.

    2016-01-01

    Multiple HRP Risks and Gaps require detailed characterization of human health and performance during exploration extravehicular activity (EVA) tasks; however, a rigorous and comprehensive methodology for characterizing and comparing the health and human performance implications of current and future EVA spacesuit designs does not exist. This study will identify and implement functional tasks and metrics, both objective and subjective, that are relevant to health and human performance, such as metabolic expenditure, suit fit, discomfort, suited postural stability, cognitive performance, and potentially biochemical responses for humans working inside different EVA suits doing functional tasks under the appropriate simulated reduced gravity environments. This study will provide health and human performance benchmark data for humans working in current EVA suits (EMU, Mark III, and Z2) as well as shirtsleeves using a standard set of tasks and metrics with quantified reliability. Results and methodologies developed during this test will provide benchmark data against which future EVA suits, and different suit configurations (e.g., varied pressure, mass, CG) may be reliably compared in subsequent tests. Results will also inform fitness for duty standards as well as design requirements and operations concepts for future EVA suits and other exploration systems.

  20. An evidence-based approach to benchmarking the fairness of health-sector reform in developing countries.

    PubMed

    Daniels, Norman; Flores, Walter; Pannarunothai, Supasit; Ndumbe, Peter N; Bryant, John H; Ngulube, T J; Wang, Yuankun

    2005-07-01

    The Benchmarks of Fairness instrument is an evidence-based policy tool developed in generic form in 2000 for evaluating the effects of health-system reforms on equity, efficiency and accountability. By integrating measures of these effects on the central goal of fairness, the approach fills a gap that has hampered reform efforts for more than two decades. Over the past three years, projects in developing countries on three continents have adapted the generic version of these benchmarks for use at both national and subnational levels. Interdisciplinary teams of managers, providers, academics and advocates agree on the relevant criteria for assessing components of fairness and, depending on which aspects of reform they wish to evaluate, select appropriate indicators that rely on accessible information; they also agree on scoring rules for evaluating the diverse changes in the indicators. In contrast to a comprehensive index that aggregates all measured changes into a single evaluation or rank, the pattern of changes revealed by the benchmarks is used to inform policy deliberation about which aspects of the reforms have been successfully implemented, and it also allows for improvements to be made in the reforms. This approach permits useful evidence about reform to be gathered in settings where existing information is underused and where there is a weak information infrastructure. Brief descriptions of early results from Cameroon, Ecuador, Guatemala, Thailand and Zambia demonstrate that the method can produce results that are useful for policy and reveal the variety of purposes to which the approach can be put. Collaboration across sites can yield a catalogue of indicators that will facilitate further work. PMID:16175828

  1. An evidence-based approach to benchmarking the fairness of health-sector reform in developing countries.

    PubMed Central

    Daniels, Norman; Flores, Walter; Pannarunothai, Supasit; Ndumbe, Peter N.; Bryant, John H.; Ngulube, T. J.; Wang, Yuankun

    2005-01-01

    The Benchmarks of Fairness instrument is an evidence-based policy tool developed in generic form in 2000 for evaluating the effects of health-system reforms on equity, efficiency and accountability. By integrating measures of these effects on the central goal of fairness, the approach fills a gap that has hampered reform efforts for more than two decades. Over the past three years, projects in developing countries on three continents have adapted the generic version of these benchmarks for use at both national and subnational levels. Interdisciplinary teams of managers, providers, academics and advocates agree on the relevant criteria for assessing components of fairness and, depending on which aspects of reform they wish to evaluate, select appropriate indicators that rely on accessible information; they also agree on scoring rules for evaluating the diverse changes in the indicators. In contrast to a comprehensive index that aggregates all measured changes into a single evaluation or rank, the pattern of changes revealed by the benchmarks is used to inform policy deliberation about which aspects of the reforms have been successfully implemented, and it also allows for improvements to be made in the reforms. This approach permits useful evidence about reform to be gathered in settings where existing information is underused and where there is a weak information infrastructure. Brief descriptions of early results from Cameroon, Ecuador, Guatemala, Thailand and Zambia demonstrate that the method can produce results that are useful for policy and reveal the variety of purposes to which the approach can be put. Collaboration across sites can yield a catalogue of indicators that will facilitate further work. PMID:16175828

  2. Development of Metric for Measuring the Impact of RD&D Funding on GTO's Geothermal Exploration Goals (Presentation)

    SciTech Connect

    Jenne, S.; Young, K. R.; Thorsteinsson, H.

    2013-04-01

    The Department of Energy's Geothermal Technologies Office (GTO) provides RD&D funding for geothermal exploration technologies with the goal of lowering the risks and costs of geothermal development and exploration. In 2012, NREL was tasked with developing a metric to measure the impacts of this RD&D funding on the cost and time required for exploration activities. The development of this metric included collecting cost and time data for exploration techniques, creating a baseline suite of exploration techniques to which future exploration and cost and time improvements could be compared, and developing an online tool for graphically showing potential project impacts (all available at http://en.openei.org/wiki/Gateway:Geothermal). The conference paper describes the methodology used to define the baseline exploration suite of techniques (baseline), as well as the approach that was used to create the cost and time data set that populates the baseline. The resulting product, an online tool for measuring impact, and the aggregated cost and time data are available on the Open EI website for public access (http://en.openei.org).

  3. Manned Mars Mission on-orbit operations metric development. [astronaut and robot performance in spacecraft orbital assembly

    NASA Technical Reports Server (NTRS)

    Gorin, Barney F.

    1990-01-01

    This report describes the effort made to develop a scoring system, or metric, for comparing astronaut Extra Vehicular Activity with various robotic options for the on-orbit assembly of a very large spacecraft, such as would be needed for a Manned Mars Mission. All trade studies comparing competing approaches to a specific task involve the use of some consistent and unbiased method for assigning a score, or rating factor, to each concept under consideration. The relative scores generated by the selected rating system provide the tool for deciding which of the approaches is the most desirable.

  4. Metric System.

    ERIC Educational Resources Information Center

    Del Mod System, Dover, DE.

    This autoinstructional unit deals with the identification of units of measure in the metric system and the construction of relevant conversion tables. Students in middle school or in grade ten, taking a General Science course, can handle this learning activity. It is recommended that high, middle or low level achievers can use the program.…

  5. An introduction to benchmarking in healthcare.

    PubMed

    Benson, H R

    1994-01-01

    Benchmarking--the process of establishing a standard of excellence and comparing a business function or activity, a product, or an enterprise as a whole with that standard--will be used increasingly by healthcare institutions to reduce expenses and simultaneously improve product and service quality. As a component of total quality management, benchmarking is a continuous process by which an organization can measure and compare its own processes with those of organizations that are leaders in a particular area. Benchmarking should be viewed as a part of quality management programs, not as a replacement. There are four kinds of benchmarking: internal, competitive, functional and generic. With internal benchmarking, functions within an organization are compared with each other. Competitive benchmarking partners do business in the same market and provide a direct comparison of products or services. Functional and generic benchmarking are performed with organizations which may have a specific similar function, such as payroll or purchasing, but which otherwise are in a different business. Benchmarking must be a team process because the outcome will involve changing current practices, with effects felt throughout the organization. The team should include members who have subject knowledge; communications and computer proficiency; skills as facilitators and outside contacts; and sponsorship of senior management. Benchmarking requires quantitative measurement of the subject. The process or activity that you are attempting to benchmark will determine the types of measurements used. Benchmarking metrics usually can be classified in one of four categories: productivity, quality, time and cost-related. PMID:10139084

  6. Development of a strontium chronic effects benchmark for aquatic life in freshwater.

    PubMed

    McPherson, Cathy A; Lawrence, Gary S; Elphick, James R; Chapman, Peter M

    2014-11-01

    There are no national water-quality guidelines for strontium for the protection of freshwater aquatic life in North America or elsewhere. Available data on the acute and chronic toxicity of strontium to freshwater aquatic life were compiled and reviewed. Acute toxicity was reported to occur at concentrations ranging from 75 mg/L to 15 000 mg/L. The majority of chronic effects occurred at concentrations above 11 mg/L; however, calculation of a representative benchmark was confounded by results from 4 studies indicating that chronic effects occurred at lower concentrations than all other studies, in 2 cases below background concentrations reported for US and European streams. Two of these studies, including 1 reporting effects below background concentrations, were repeated and found not to be reproducible; chronic effects occurred at considerably higher strontium concentrations than in the original studies. Studies with narrow-mouthed toad and goldfish were not repeated; both studies reported chronic effects below background concentrations, and both studies had been conducted by the authors of 1 of the 2 studies that were repeated and shown to be nonreproducible. Studies by these authors (3 of the 4 confounding studies), conducted over 30 yr ago, lacked detail in reporting of methods and results. It is thus likely that repeating the toad and goldfish studies would also have resulted in a higher strontium effects concentration. A strontium chronic effects benchmark of 10.7 mg/L that incorporates the results of additional testing summarized in the present study is proposed for freshwater environments. PMID:25051924

  7. The development and application of composite complexity models and a relative complexity metric in a software maintenance environment

    NASA Technical Reports Server (NTRS)

    Hops, J. M.; Sherif, J. S.

    1994-01-01

    A great deal of effort is now being devoted to the study, analysis, prediction, and minimization of software maintenance expected cost, long before software is delivered to users or customers. It has been estimated that, on the average, the effort spent on software maintenance is as costly as the effort spent on all other software costs. Software design methods should be the starting point to aid in alleviating the problems of software maintenance complexity and high costs. Two aspects of maintenance deserve attention: (1) protocols for locating and rectifying defects, and for ensuring that no new defects are introduced in the development phase of the software process; and (2) protocols for modification, enhancement, and upgrading. This article focuses primarily on the second aspect, the development of protocols to help increase the quality and reduce the costs associated with modifications, enhancements, and upgrades of existing software. This study developed parsimonious models and a relative complexity metric for complexity measurement of software that were used to rank the modules in the system relative to one another. Some success was achieved in using the models and the relative metric to identify maintenance-prone modules.

  8. The Development and Application of Composite Complexity Models and a Relative Complexity Metric in a Software Maintenance Environment

    NASA Astrophysics Data System (ADS)

    Hops, J. M.; Sherif, J. S.

    1994-01-01

    A great deal of effort is now being devoted to the study, analysis, prediction, and minimization of software maintenance expected cost, long before software is delivered to users or customers. It has been estimated that, on the average, the effort spent on software maintenance is as costly as the effort spent on all other software costs. Software design methods should be the starting point to aid in alleviating the problems of software maintenance complexity and high costs. Two aspects of maintenance deserve attention: (1) protocols for locating and rectifying defects, and for ensuring that no new defects are introduced in the development phase of the software process, and (2) protocols for modification, enhancement, and upgrading. This article focuses primarily on the second aspect, the development of protocols to help increase the quality and reduce the costs associated with modifications, enhancements, and upgrades of existing software. This study developed parsimonious models and a relative complexity metric for complexity measurement of software that were used to rank the modules in the system relative to one another. Some success was achieved in using the models and the relative metric to identify maintenance-prone modules.
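    Both versions of this study describe ranking modules by a composite, relative measure of complexity. The sketch below shows one common way to build such a ranking (standardize each raw measure across modules, combine with weights, sort); the measures and weights are illustrative, not the study's fitted model.

      import numpy as np

      def relative_complexity(measures, weights):
          """measures: (n_modules, n_metrics) raw values, e.g. SLOC, cyclomatic
          complexity, fan-out; returns one composite score per module."""
          z = (measures - measures.mean(axis=0)) / measures.std(axis=0)
          return z @ weights

      modules = ["parser", "scheduler", "io", "report"]
      raw = np.array([[1200, 35,  9],
                      [2400, 60, 14],
                      [ 300,  8,  3],
                      [ 800, 20,  6]], dtype=float)
      scores = relative_complexity(raw, weights=np.array([0.4, 0.4, 0.2]))
      for name, s in sorted(zip(modules, scores), key=lambda t: -t[1]):
          print(f"{name:10s} {s:+.2f}")   # most maintenance-prone modules first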

  9. Optimization of Deep Drilling Performance - Development and Benchmark Testing of Advanced Diamond Product Drill Bits & HP/HT Fluids to Significantly Improve Rates of Penetration

    SciTech Connect

    Alan Black; Arnis Judzis

    2005-09-30

    This document details the progress to date on the OPTIMIZATION OF DEEP DRILLING PERFORMANCE--DEVELOPMENT AND BENCHMARK TESTING OF ADVANCED DIAMOND PRODUCT DRILL BITS AND HP/HT FLUIDS TO SIGNIFICANTLY IMPROVE RATES OF PENETRATION contract for the year starting October 2004 through September 2005. The industry cost shared program aims to benchmark drilling rates of penetration in selected simulated deep formations and to significantly improve ROP through a team development of aggressive diamond product drill bit--fluid system technologies. Overall the objectives are as follows: Phase 1--Benchmark ''best in class'' diamond and other product drilling bits and fluids and develop concepts for a next level of deep drilling performance; Phase 2--Develop advanced smart bit-fluid prototypes and test at large scale; and Phase 3--Field trial smart bit--fluid concepts, modify as necessary and commercialize products. As of report date, TerraTek has concluded all Phase 1 testing and is planning Phase 2 development.

  10. Benchmarking--Measuring and Comparing for Continuous Improvement.

    ERIC Educational Resources Information Center

    Henczel, Sue

    2002-01-01

    Discussion of benchmarking focuses on the use of internal and external benchmarking by special librarians. Highlights include defining types of benchmarking; historical development; benefits, including efficiency, improved performance, increased competitiveness, and better decision making; problems, including inappropriate adaptation; developing a…

  11. Development and comparison of weighting metrics for probabilistic climate change projections of Mediterranean precipitation

    NASA Astrophysics Data System (ADS)

    Kaspar-Ott, Irena; Hertig, Elke; Pollinger, Felix; Ring, Christoph; Paeth, Heiko; Jacobeit, Jucundus

    2016-04-01

    Climate protection and adaptive measures require reliable estimates of future climate change. Coupled global circulation models are still the most appropriate tool. However, the climate projections of individual models differ considerably, particularly at the regional scale and with respect to certain climate variables such as precipitation. Significant uncertainties also arise on the part of climate impact research. The model differences result from unknown initial conditions, different resolutions and driving mechanisms, different model parameterizations and emission scenarios. It is very challenging to determine which model simulates proper future climate conditions. By implementing results from all important model runs in probability density functions, the exceeding probabilities with respect to certain thresholds of climate change can be determined. The aim of this study is to derive such probabilistic estimates of future precipitation changes in the Mediterranean region for the multi-model ensemble from CMIP3 and CMIP5. The Mediterranean region represents a so-called hot spot of climate change. The analyses are carried out for the meteorological seasons in eight Mediterranean sub-regions, based on the results of principal component analyses. The methodologically innovative aspect refers mainly to the comparison of different metrics to derive model weights, such as Bayesian statistics, regression models, spatial-temporal filtering, the fingerprinting method and quality criteria for the simulated large-scale circulation. The latter describes the ability of the models to simulate the North Atlantic Oscillation, the East Atlantic pattern, the East Atlantic/West Russia pattern and the Scandinavia pattern, as they are the most important large-scale atmospheric drivers for Mediterranean precipitation. The comparison of observed atmospheric patterns with the modeled patterns leads to specific model weights. They are checked for their temporal consistency in the 20th

  12. Development of a benchmark factor to detect wrinkles in bending parts

    NASA Astrophysics Data System (ADS)

    Engel, Bernd; Zehner, Bernd-Uwe; Mathes, Christian; Kuhnhen, Christopher

    2013-12-01

    The rotary draw bending process is particularly suited to bending parts with small bending radii. Because the forming zone is supported during the bending process, semi-finished products with small wall thicknesses can be bent. One typical quality characteristic is the emergence of corrugations and wrinkles at the inside arc. At present, the standard for evaluating wrinkles is insufficient: the wrinkles' distribution along the longitudinal axis of the tube is reduced to a single average value [1], and individual wrinkles are not evaluated. This lack of an adequate basis of assessment leads to coordination problems between customers and suppliers, because the geometric deviations at the inside arc cannot be evaluated quantitatively. The benchmark factor for the inside arc presented in this article is an approach to holistically evaluate these geometric deviations. Geometric deviations are classified according to the area of the geometric characteristics and the respective flank angles.
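    As a rough illustration of what a per-wrinkle evaluation of the inside arc could look like (as opposed to a single averaged value), the sketch below detects individual corrugations in a synthetic longitudinal profile and reports their heights and the steepest flank angle. The profile, thresholds, and quantities are assumptions; this is not the benchmark factor proposed in the paper.

      import numpy as np
      from scipy.signal import find_peaks

      x = np.linspace(0.0, 60.0, 600)                           # mm along the inside arc
      profile = 0.15 * np.sin(2 * np.pi * x / 6) * np.exp(-((x - 30) / 15) ** 2)

      peaks, _ = find_peaks(profile, prominence=0.01)
      valleys, _ = find_peaks(-profile, prominence=0.01)
      heights = [profile[p] - profile[valleys[np.abs(valleys - p).argmin()]] for p in peaks]
      steepest_flank = np.degrees(np.arctan(np.abs(np.gradient(profile, x)).max()))

      print(f"{len(peaks)} wrinkles, max height {max(heights):.3f} mm, "
            f"steepest flank angle {steepest_flank:.1f} deg")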

  13. Metrics for Occupations. Information Series No. 118.

    ERIC Educational Resources Information Center

    Peterson, John C.

    The metric system is discussed in this information analysis paper with regard to its history, a rationale for the United States' adoption of the metric system, a brief overview of the basic units of the metric system, examples of how the metric system will be used in different occupations, and recommendations for research and development. The…

  14. Development of a total dissolved solids (TDS) chronic effects benchmark for a northern Canadian lake.

    PubMed

    Chapman, Peter M; McPherson, Cathy A

    2016-04-01

    Laboratory chronic toxicity tests with plankton, benthos, and fish early life stages were conducted with total dissolved solids (TDS) at an ionic composition specific to Snap Lake (Northwest Territories, Canada), which receives treated effluent from the Snap Lake Diamond Mine. Snap Lake TDS composition has remained consistent from 2007 to 2014 and is expected to remain unchanged through the life of the mine: Cl (45%-47%), Ca (20%-21%), Na (10%-11%), sulfate (9%); carbonate (5%-7%), nitrate (4%), Mg (2%-3%), and minor contributions from K and fluoride. The TDS concentrations that resulted in negligible effects (i.e., 10% or 20% effect concentrations) to taxa representative of resident biota ranged from greater than 1100 to greater than 2200 mg/L, with the exception of a 21% effect concentration of 990 mg/L for 1 of 2 early life stage fish dry fertilization tests (wet fertilization results were >1480 mg/L). A conservative, site-specific, chronic effects benchmark for Snap Lake TDS of 1000 mg/L was derived, below the lowest negligible effect concentration for the most sensitive resident taxon tested, the cladoceran, Daphnia magna (>1100 mg/L). Cladocerans typically only constitute a few percent of the zooplankton community and biomass in Snap Lake; other plankton effect concentrations ranged from greater than 1330 to greater than 1510 mg/L. Chironomids, representative of the lake benthos, were not affected by greater than 1380 mg/L TDS. Early life stage tests with 3 fish species resulted in 10% to 20% effect concentrations ranging from greater than 1410 to greater than 2200 mg/L. The testing undertaken is generally applicable to northern freshwaters, and the concept can readily be adapted to other freshwaters either for TDS where ionic composition does not change or for major ionic components, where TDS composition does change. PMID:26174095

  15. PNNL Information Technology Benchmarking

    SciTech Connect

    DD Hostetler

    1999-09-08

    Benchmarking is a methodology for searching out industry best practices that lead to superior performance. It is exchanging information, not just with any organization, but with organizations known to be the best within PNNL, in industry, or in dissimilar industries with equivalent functions. It is used as a continuous improvement tool for business and technical processes, products, and services. Information technology--comprising all computer and electronic communication products and services--underpins the development and/or delivery of many PNNL products and services. This document describes the Pacific Northwest National Laboratory's (PNNL's) approach to information technology (IT) benchmarking. The purpose is to engage other organizations in the collaborative process of benchmarking in order to improve the value of IT services provided to customers. The document's intended audience consists of other US Department of Energy (DOE) national laboratories and their IT staff. Although the individual participants must define the scope of collaborative benchmarking, an outline of IT service areas for possible benchmarking is described.

  16. Development of a chronic noncancer oral reference dose and drinking water screening level for sulfolane using benchmark dose modeling.

    PubMed

    Thompson, Chad M; Gaylor, David W; Tachovsky, J Andrew; Perry, Camarie; Carakostas, Michael C; Haws, Laurie C

    2013-12-01

    Sulfolane is a widely used industrial solvent that is often used for gas treatment (sour gas sweetening; hydrogen sulfide removal from shale and coal processes, etc.), and in the manufacture of polymers and electronics, and may be found in pharmaceuticals as a residual solvent used in the manufacturing processes. Sulfolane is considered a high production volume chemical with worldwide production around 18 000-36 000 tons per year. Given that sulfolane has been detected as a contaminant in groundwater, an important potential route of exposure is tap water ingestion. Because there are currently no federal drinking water standards for sulfolane in the USA, we developed a noncancer oral reference dose (RfD) based on benchmark dose modeling, as well as a tap water screening value that is protective of ingestion. Review of the available literature suggests that sulfolane is not likely to be mutagenic, clastogenic or carcinogenic, or pose reproductive or developmental health risks except perhaps at very high exposure concentrations. RfD values derived using benchmark dose modeling were 0.01-0.04 mg kg⁻¹ per day, although modeling of developmental endpoints resulted in higher values, approximately 0.4 mg kg⁻¹ per day. The lowest, most conservative, RfD of 0.01 mg kg⁻¹ per day was based on reduced white blood cell counts in female rats. This RfD was used to develop a tap water screening level that is protective of ingestion, viz. 365 µg L⁻¹. It is anticipated that these values, along with the hazard identification and dose-response modeling described herein, should be informative for risk assessors and regulators interested in setting health-protective drinking water guideline values for sulfolane. PMID:22936336
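    As a schematic of the benchmark-dose workflow named above for a continuous endpoint, the sketch below fits a simple dose-response model, solves for the dose giving a 10% change from control (BMD10), and divides by a composite uncertainty factor. The data, model form, and factor of 300 are illustrative; an actual derivation would use the BMDL (the lower confidence limit on the BMD) and the study's modeled endpoints.

      import numpy as np
      from scipy.optimize import curve_fit, brentq

      dose = np.array([0.0, 10.0, 30.0, 100.0, 300.0])    # mg/kg per day (illustrative)
      wbc = np.array([9.0, 8.7, 8.1, 6.9, 5.2])           # white blood cell counts

      def expo(d, a, b):                                  # simple exponential decline
          return a * np.exp(-b * d)

      (a, b), _ = curve_fit(expo, dose, wbc, p0=(9.0, 0.001))
      bmr = 0.10                                          # benchmark response: 10% change
      bmd10 = brentq(lambda d: expo(d, a, b) - a * (1 - bmr), 1e-6, dose.max())
      rfd = bmd10 / 300                                   # composite uncertainty factor (assumed)
      print(f"BMD10 = {bmd10:.1f} mg/kg per day; illustrative RfD = {rfd:.3f} mg/kg per day")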

  17. Catchment controls on water temperature and the development of simple metrics to inform riparian zone management

    NASA Astrophysics Data System (ADS)

    Johnson, Matthew; Wilby, Robert

    2015-04-01

    of thermal refuge could be important in the context of future climate change, potentially maintaining populations of animals excluded from other parts of the river during hot summer months. International management strategies to mitigate rising temperatures tend to focus on the protection, enhancement or creation of riparian shade. Simple metrics derived from catchment landscape models, the heat capacity of water, and modelled solar radiation receipt, suggest that approximately 1 km of deep riparian shading is necessary to offset a 1° C rise in temperature in the monitored catchments. A similar value is likely to be obtained for similar sized rivers at similar latitudes. Trees would take 20 years to attain sufficient height to shade the necessary solar angles. However, 1 km of deep riparian shade will have substantial impacts on the hydrological and geomorphological functioning of the river, beyond simply altering the thermal regime. Consequently, successful management of rising water temperature in rivers will require catchment scale consideration, as part of an integrated management plan.
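    The roughly 1 km shading figure quoted above can be reproduced with a back-of-envelope energy balance: the reach length L whose blocked solar load offsets a temperature rise dT is L = dT * rho_c * Q / (S * w), where rho_c is the volumetric heat capacity of water, Q the discharge, S the shaded solar flux, and w the channel width. The numbers below are assumed for a small lowland river, not taken from the monitored catchments.

      rho_c = 4.18e6   # volumetric heat capacity of water, J m^-3 K^-1
      Q = 0.5          # discharge, m^3 s^-1
      S = 600.0        # solar radiation blocked by deep shade, W m^-2
      w = 4.0          # channel width, m
      dT = 1.0         # temperature offset sought, K

      L = dT * rho_c * Q / (S * w)
      print(f"reach length of deep shade needed: {L:.0f} m")   # roughly 0.9 km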

  18. Ordinal Distance Metric Learning for Image Ranking.

    PubMed

    Li, Changsheng; Liu, Qingshan; Liu, Jing; Lu, Hanqing

    2015-07-01

    Recently, distance metric learning (DML) has attracted much attention in image retrieval, but most previous methods only work for image classification and clustering tasks. In this brief, we focus on designing ordinal DML algorithms for image ranking tasks, by which the rank levels among the images can be well measured. We first present a linear ordinal Mahalanobis DML model that tries to preserve both the local geometry information and the ordinal relationship of the data. Then, we develop a nonlinear DML method by kernelizing the above model, considering of real-world image data with nonlinear structures. To further improve the ranking performance, we finally derive a multiple kernel DML approach inspired by the idea of multiple-kernel learning that performs different kernel operators on different kinds of image features. Extensive experiments on four benchmarks demonstrate the power of the proposed algorithms against some related state-of-the-art methods. PMID:25163071
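    As a toy illustration of the linear ordinal idea described above (a Mahalanobis-style distance d(x, y) = ||L(x - y)||^2 adjusted so that images with nearby rank levels end up closer than images with distant ranks), the following gradient loop applies a triplet-style hinge to ordinal violations. It is a sketch of the concept, not the authors' algorithm, and the features and ranks are synthetic.

      import numpy as np

      rng = np.random.default_rng(3)
      ranks = rng.integers(1, 6, size=120)            # ordinal relevance levels 1..5
      X = rng.standard_normal((120, 8))               # image features
      X[:, 0] += ranks                                # one feature carries the ordinal signal

      L = np.eye(8)                                   # d(x, y) = ||L (x - y)||^2
      lr, margin = 0.001, 1.0
      for _ in range(5000):
          i, j, k = rng.integers(0, 120, size=3)
          # enforce: if rank(i) is closer to rank(j) than to rank(k), then d(i,j) < d(i,k)
          if abs(int(ranks[i]) - int(ranks[j])) >= abs(int(ranks[i]) - int(ranks[k])):
              continue
          a, b = X[i] - X[j], X[i] - X[k]
          da, db = L @ a, L @ b
          if da @ da + margin > db @ db:              # ordinal constraint violated
              L -= lr * 2 * (np.outer(da, a) - np.outer(db, b))
              L *= np.sqrt(L.shape[0]) / np.linalg.norm(L)   # keep the overall scale fixed

      # fraction of fresh random ordinal triplets respected by the learned metric
      ok = tot = 0
      for _ in range(2000):
          i, j, k = rng.integers(0, 120, size=3)
          if abs(int(ranks[i]) - int(ranks[j])) >= abs(int(ranks[i]) - int(ranks[k])):
              continue
          tot += 1
          da, db = L @ (X[i] - X[j]), L @ (X[i] - X[k])
          ok += da @ da < db @ db
      print(f"ordinal triplets respected: {ok / tot:.0%}")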

  19. Engineering performance metrics

    SciTech Connect

    DeLozier, R. ); Snyder, N. )

    1993-03-31

    Implementation of a Total Quality Management (TQM) approach to engineering work required the development of a system of metrics which would serve as a meaningful management tool for evaluating effectiveness in accomplishing project objectives and in achieving improved customer satisfaction. A team effort was chartered with the goal of developing a system of engineering performance metrics which would measure customer satisfaction, quality, cost effectiveness, and timeliness. The approach to developing this system involved normal systems design phases including conceptual design, detailed design, implementation, and integration. The lessons learned from this effort will be explored in this paper. These lessons learned may provide a starting point for other large engineering organizations seeking to institute a performance measurement system for evaluating effectiveness in accomplishing project objectives and in achieving improved customer satisfaction. To facilitate this effort, a team was chartered to assist in the development of the metrics system. This team, consisting of customers and Engineering staff members, was utilized to ensure that the needs and views of the customers were considered in the development of performance measurements. The development of a system of metrics is no different than the development of any type of system. It includes the steps of defining performance measurement requirements, measurement process conceptual design, performance measurement and reporting system detailed design, and system implementation and integration.

  20. Engineering performance metrics

    NASA Astrophysics Data System (ADS)

    Delozier, R.; Snyder, N.

    1993-03-01

    Implementation of a Total Quality Management (TQM) approach to engineering work required the development of a system of metrics which would serve as a meaningful management tool for evaluating effectiveness in accomplishing project objectives and in achieving improved customer satisfaction. A team effort was chartered with the goal of developing a system of engineering performance metrics which would measure customer satisfaction, quality, cost effectiveness, and timeliness. The approach to developing this system involved normal systems design phases including conceptual design, detailed design, implementation, and integration. The lessons learned from this effort will be explored in this paper. These lessons learned may provide a starting point for other large engineering organizations seeking to institute a performance measurement system for evaluating effectiveness in accomplishing project objectives and in achieving improved customer satisfaction. To facilitate this effort, a team was chartered to assist in the development of the metrics system. This team, consisting of customers and Engineering staff members, was utilized to ensure that the needs and views of the customers were considered in the development of performance measurements. The development of a system of metrics is no different than the development of any type of system. It includes the steps of defining performance measurement requirements, measurement process conceptual design, performance measurement and reporting system detailed design, and system implementation and integration.

  1. An analytical model of the HINT performance metric

    SciTech Connect

    Snell, Q.O.; Gustafson, J.L.

    1996-10-01

    The HINT benchmark was developed to provide a broad-spectrum metric for computers and to measure performance over the full range of memory sizes and time scales. We have extended our understanding of why HINT performance curves look the way they do and can now predict the curves using an analytical model based on simple hardware specifications as input parameters. Conversely, by fitting the experimental curves with the analytical model, hardware specifications such as memory performance can be inferred to provide insight into the nature of a given computer system.

  2. Changes in Benchmarked Training.

    ERIC Educational Resources Information Center

    Bassi, Laurie J.; Cheney, Scott

    1996-01-01

    Comparisons of the training practices of large companies confirm that the delivery and financing of training is changing rapidly. Companies in the American Society for Training and Development Benchmarking Forum are delivering less training with permanent staff and more with strategic use of technology, contract staff, and external providers,…

  3. Metricize Yourself

    NASA Astrophysics Data System (ADS)

    Falbo, Maria K.

    2006-12-01

    In lab and homework, students should check whether or not their quantitative answers to physics questions make sense in the context of the problem. Unfortunately it is still the case in the US that many students don’t have a “feel” for °C, kg, cm, liters or Newtons. This problem contributes to the inability of students to check answers. It is also the case that just “going over” the tables in the text can be boring and dry. In this talk I’ll demonstrate some classroom activities that can be used throughout the year to give students a metric context in which quantitative answers can be interpreted.

  4. Virtual machine performance benchmarking.

    PubMed

    Langer, Steve G; French, Todd

    2011-10-01

    The attractions of virtual computing are many: reduced costs, reduced resources and simplified maintenance. Any one of these would be compelling for a medical imaging professional attempting to support a complex practice on limited resources in an era of ever tightened reimbursement. In particular, the ability to run multiple operating systems optimized for different tasks (computational image processing on Linux versus office tasks on Microsoft operating systems) on a single physical machine is compelling. However, there are also potential drawbacks. High performance requirements need to be carefully considered if they are to be executed in an environment where the running software has to execute through multiple layers of device drivers before reaching the real disk or network interface. Our lab has attempted to gain insight into the impact of virtualization on performance by benchmarking the following metrics on both physical and virtual platforms: local memory and disk bandwidth, network bandwidth, and integer and floating point performance. The virtual performance metrics are compared to baseline performance on "bare metal." The results are complex, and indeed somewhat surprising. PMID:21207096
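
    The following is a crude sketch of the kinds of micro-benchmarks compared between physical and virtual hosts (memory bandwidth and floating-point throughput). It is indicative only and is not the benchmarking suite used in the study.

```python
# Crude sketch of micro-benchmarks of the kind compared between "bare metal"
# and virtual machines. Numbers from a sketch like this are only indicative.

import time
import numpy as np

def memory_bandwidth_gb_s(n_bytes: int = 256 * 1024 * 1024) -> float:
    src = np.ones(n_bytes // 8, dtype=np.float64)
    t0 = time.perf_counter()
    dst = src.copy()                        # one read + one write of the buffer
    dt = time.perf_counter() - t0
    return 2 * dst.nbytes / dt / 1e9

def flops_gflop_s(n: int = 2_000_000, iters: int = 50) -> float:
    x = np.linspace(0.0, 1.0, n)
    t0 = time.perf_counter()
    acc = 0.0
    for _ in range(iters):
        acc += float(np.sum(x * x + 1.0))   # ~2 flops per element plus the reduction
    dt = time.perf_counter() - t0
    return 3 * n * iters / dt / 1e9         # approximate flop count

if __name__ == "__main__":
    print(f"memory ~{memory_bandwidth_gb_s():.1f} GB/s, fp ~{flops_gflop_s():.2f} GFLOP/s")
```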

  5. Benchmarking Professional Development Practices across Youth-Serving Organizations: Implications for Extension

    ERIC Educational Resources Information Center

    Garst, Barry A.; Baughman, Sarah; Franz, Nancy

    2014-01-01

    Examining traditional and contemporary professional development practices of youth-serving organizations can inform practices across Extension, particularly in light of the barriers that have been noted for effectively developing the professional competencies of Extension educators. With professional development systems changing quickly,…

  6. Are We Doing Ok? Developing a Generic Process to Benchmark Career Services in Educational Institutions

    ERIC Educational Resources Information Center

    McCowan, Col; McKenzie, Malcolm

    2011-01-01

    In 2007 the Career Industry Council of Australia developed the Guiding Principles for Career Development Services and Career Information Products as one part of its strategy to produce a national quality framework for career development activities in Australia. An Australian university career service undertook an assessment process against these…

  7. Geothermal Resource Reporting Metric (GRRM) Developed for the U.S. Department of Energy's Geothermal Technologies Office

    SciTech Connect

    Young, Katherine R.; Wall, Anna M.; Dobson, Patrick F.

    2015-09-02

    This paper reviews a methodology being developed for reporting geothermal resources and project progress. The goal is to provide the U.S. Department of Energy's (DOE) Geothermal Technologies Office (GTO) with a consistent and comprehensible means of evaluating the impacts of its funding programs. This framework will allow the GTO to assess the effectiveness of research, development, and deployment (RD&D) funding, prioritize funding requests, and demonstrate the value of RD&D programs to the U.S. Congress and the public. Standards and reporting codes used in other countries and energy sectors provide guidance to develop the relevant geothermal methodology, but industry feedback and our analysis suggest that the existing models have drawbacks that should be addressed. In order to formulate a comprehensive metric for use by the GTO, we analyzed existing resource assessments and reporting methodologies for the geothermal, mining, and oil and gas industries, and sought input from industry, investors, academia, national labs, and other government agencies. Using this background research as a guide, we describe a methodology for evaluating and reporting on GTO funding according to resource grade (geological, technical and socio-economic) and project progress. This methodology would allow GTO to target funding, measure impact by monitoring the progression of projects, or assess geological potential of targeted areas for development.

  8. Development and Calibration of an Item Bank for PE Metrics Assessments: Standard 1

    ERIC Educational Resources Information Center

    Zhu, Weimo; Fox, Connie; Park, Youngsik; Fisette, Jennifer L.; Dyson, Ben; Graber, Kim C.; Avery, Marybell; Franck, Marian; Placek, Judith H.; Rink, Judy; Raynes, De

    2011-01-01

    The purpose of this study was to develop and calibrate an assessment system, or bank, using the latest measurement theories and methods to promote valid and reliable student assessment in physical education. Using an anchor-test equating design, a total of 30 items or assessments were administered to 5,021 (2,568 boys and 2,453 girls) students in…
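
    The abstract does not specify the measurement model, but item banking and calibration of this kind typically rest on an item response model; the sketch below shows the generic Rasch (one-parameter logistic) item response function purely as an illustration.

```python
# Generic Rasch-type item response function, shown only as an illustration of
# item-bank calibration; this is not necessarily the model used for PE Metrics.

import math

def rasch_probability(ability: float, item_difficulty: float) -> float:
    """Probability of a correct response under the Rasch (1-PL) model."""
    return 1.0 / (1.0 + math.exp(-(ability - item_difficulty)))

if __name__ == "__main__":
    print(round(rasch_probability(ability=0.5, item_difficulty=-0.2), 3))  # ~0.668
```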

  9. Percentile-Based Journal Impact Factors: A Neglected Collection Development Metric

    ERIC Educational Resources Information Center

    Wagner, A. Ben

    2009-01-01

    Various normalization techniques to transform journal impact factors (JIFs) into a standard scale or range of values have been reported a number of times in the literature, but have seldom been part of collection development librarians' tool kits. In this paper, JIFs as reported in the Journal Citation Reports (JCR) database are converted to…
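
    One simple way to express the normalization discussed here is to convert raw JIFs within a subject category to percentile ranks, as in the hedged sketch below; the journal names and values are invented.

```python
# Hedged sketch: converting raw journal impact factors within one subject category
# to percentile ranks. Sample journals and JIF values are made up for illustration.

def jif_percentiles(jifs: dict[str, float]) -> dict[str, float]:
    """Percentile rank (0-100) of each journal's JIF within its category list."""
    ranked = sorted(jifs, key=jifs.get)
    n = len(ranked)
    return {j: 100.0 * (i + 0.5) / n for i, j in enumerate(ranked)}

if __name__ == "__main__":
    sample = {"Journal A": 1.2, "Journal B": 4.7, "Journal C": 2.9, "Journal D": 0.8}
    for journal, pct in sorted(jif_percentiles(sample).items(), key=lambda kv: -kv[1]):
        print(f"{journal}: {pct:.0f}th percentile")
```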

  10. Developing Composite Metrics of Teaching Practice for Mediator Analysis of Program Impact

    ERIC Educational Resources Information Center

    Lazarev, Val; Newman, Denis

    2014-01-01

    Efficacy studies of educational programs often involve mediator analyses aimed at testing empirically appropriate theories of action. In particular, in the studies of professional development programs, the intervention targets primarily teachers' pedagogical skills and content knowledge, while the ultimate outcome is the student achievement…

  11. Developing an Aggregate Metric of Teaching Practice for Use in Mediator Analysis

    ERIC Educational Resources Information Center

    Lazarev, Valeriy; Newman, Denis; Grossman, Pam

    2013-01-01

    Efficacy studies of educational programs often involve mediator analyses aimed at testing empirically appropriate theories of action. In particular, in the studies of professional teacher development programs, the intervention presumably targets teacher performance, while the ultimate outcome is the student achievement measured by a standardized…

  12. International Benchmarking: State and National Education Performance Standards

    ERIC Educational Resources Information Center

    Phillips, Gary W.

    2014-01-01

    This report uses international benchmarking as a common metric to examine and compare what students are expected to learn in some states with what students are expected to learn in other states. The performance standards in each state were compared with the international benchmarks used in two international assessments, and it was assumed that…

  13. Benchmarking local healthcare-associated infections: available benchmarks and interpretation challenges.

    PubMed

    El-Saed, Aiman; Balkhy, Hanan H; Weber, David J

    2013-10-01

    Growing numbers of healthcare facilities are routinely collecting standardized data on healthcare-associated infection (HAI), which can be used not only to track internal performance but also to compare local data to national and international benchmarks. Benchmarking overall (crude) HAI surveillance metrics without accounting or adjusting for potential confounders can result in misleading conclusions. Methods commonly used to provide risk-adjusted metrics include multivariate logistic regression analysis, stratification, indirect standardization, and restriction. The characteristics of recognized benchmarks worldwide, including their advantages and limitations, are described. The choice of the right benchmark for data from the Gulf Cooperation Council (GCC) states is challenging. The chosen benchmark should have similar data collection and presentation methods. Additionally, differences in surveillance environments, including regulations, should be taken into consideration when selecting such a benchmark. The GCC center for infection control has taken some steps to unify HAI surveillance systems in the region. GCC hospitals still need to overcome legislative and logistic difficulties in sharing data to create their own benchmark. The availability of a regional GCC benchmark may better enable health care workers and researchers to obtain more accurate and realistic comparisons. PMID:23999329
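
    As an illustration of one of the risk-adjustment methods named above, the sketch below computes a standardized infection ratio (SIR) by indirect standardization: observed HAIs divided by the number expected from benchmark rates applied to local device-days. The strata, rates, and counts are invented.

```python
# Minimal sketch of indirect standardization for HAI benchmarking: a standardized
# infection ratio (SIR) compares observed infections to the number expected from
# benchmark (reference-population) rates. All values below are illustrative.

def standardized_infection_ratio(observed: int,
                                 device_days_by_stratum: dict[str, float],
                                 benchmark_rate_per_1000: dict[str, float]) -> float:
    expected = sum(days * benchmark_rate_per_1000[s] / 1000.0
                   for s, days in device_days_by_stratum.items())
    return observed / expected

if __name__ == "__main__":
    days = {"medical ICU": 3200, "surgical ICU": 2100}
    bench = {"medical ICU": 1.8, "surgical ICU": 1.2}   # infections per 1000 device-days
    print(round(standardized_infection_ratio(7, days, bench), 2))  # >1 means worse than benchmark
```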

  14. Beyond Human Capital Development: Balanced Safeguards Workforce Metrics and the Next Generation Safeguards Workforce

    SciTech Connect

    Burbank, Roberta L.; Frazar, Sarah L.; Gitau, Ernest TN; Shergur, Jason M.; Scholz, Melissa A.; Undem, Halvor A.

    2014-03-28

    Since its establishment in 2008, the Next Generation Safeguards Initiative (NGSI) has achieved a number of objectives under its five pillars: concepts and approaches, policy development and outreach, international nuclear safeguards engagement, technology development, and human capital development (HCD). As a result of these efforts, safeguards has become much more visible as a critical U.S. national security interest across the U.S. Department of Energy (DOE) complex. However, limited budgets have since created challenges in a number of areas. Arguably, one of the more serious challenges involves NGSI’s ability to integrate entry-level staff into safeguards projects. Laissez fair management of this issue across the complex can lead to wasteful project implementation and endanger NGSI’s long-term sustainability. The authors provide a quantitative analysis of this problem, focusing on the demographics of the current safeguards workforce and compounding pressures to operate cost-effectively, transfer knowledge to the next generation of safeguards professionals, and sustain NGSI safeguards investments.

  15. Measuring Impact of U.S. DOE Geothermal Technologies Office Funding: Considerations for Development of a Geothermal Resource Reporting Metric

    SciTech Connect

    Young, Katherine R.; Wall, Anna M.; Dobson, Patrick F.; Bennett, Mitchell; Segneri, Brittany

    2015-04-25

    This paper reviews existing methodologies and reporting codes used to describe extracted energy resources such as coal and oil and describes a comparable proposed methodology to describe geothermal resources. The goal is to provide the U.S. Department of Energy's (DOE) Geothermal Technologies Office (GTO) with a consistent and comprehensible means of assessing the impacts of its funding programs. This framework will allow for GTO to assess the effectiveness of research, development, and deployment (RD&D) funding, prioritize funding requests, and demonstrate the value of RD&D programs to the U.S. Congress. Standards and reporting codes used in other countries and energy sectors provide guidance to inform development of a geothermal methodology, but industry feedback and our analysis suggest that the existing models have drawbacks that should be addressed. In order to formulate a comprehensive metric for use by GTO, we analyzed existing resource assessments and reporting methodologies for the geothermal, mining, and oil and gas industries, and we sought input from industry, investors, academia, national labs, and other government agencies. Using this background research as a guide, we describe a methodology for assessing and reporting on GTO funding according to resource knowledge and resource grade (or quality). This methodology would allow GTO to target funding or measure impact by progression of projects or geological potential for development.

  16. Software Quality Assurance Metrics

    NASA Technical Reports Server (NTRS)

    McRae, Kalindra A.

    2004-01-01

    Software Quality Assurance (SQA) is a planned and systematic set of activities that ensures that software life cycle processes and products conform to requirements, standards and procedures. In software development, software quality means meeting requirements and a degree of excellence and refinement of a project or product. Software Quality is a set of attributes of a software product by which its quality is described and evaluated. The set of attributes includes functionality, reliability, usability, efficiency, maintainability, and portability. Software Metrics help us understand the technical process that is used to develop a product. The process is measured to improve it and the product is measured to increase quality throughout the life cycle of software. Software Metrics are measurements of the quality of software. Software is measured to indicate the quality of the product, to assess the productivity of the people who produce the product, to assess the benefits derived from new software engineering methods and tools, to form a baseline for estimation, and to help justify requests for new tools or additional training. Any part of the software development can be measured. If Software Metrics are implemented in software development, they can save time and money and allow the organization to identify the causes of defects which have the greatest effect on software development. During the summer of 2004, I worked with Cynthia Calhoun and Frank Robinson in the Software Assurance/Risk Management department. My task was to research, collect, compile, and analyze SQA Metrics that have been used in other projects but are not currently being used by the SA team, and to report them to the Software Assurance team to see if any metrics can be implemented in their software assurance life cycle process.

  17. Testing, Requirements, and Metrics

    NASA Technical Reports Server (NTRS)

    Rosenberg, Linda; Hyatt, Larry; Hammer, Theodore F.; Huffman, Lenore; Wilson, William

    1998-01-01

    The criticality of correct, complete, testable requirements is a fundamental tenet of software engineering. Also critical is complete requirements-based testing of the final product. Modern tools for managing requirements allow new metrics to be used in support of both of these critical processes. Using these tools, potential problems with the quality of the requirements and the test plan can be identified early in the life cycle. Some of these quality factors include: ambiguous or incomplete requirements, poorly designed requirements databases, excessive or insufficient test cases, and incomplete linkage of tests to requirements. This paper discusses how metrics can be used to evaluate the quality of the requirements and test plan in order to avoid problems later. Requirements management and requirements-based testing have always been critical in the implementation of high quality software systems. Recently, automated tools have become available to support requirements management. At NASA's Goddard Space Flight Center (GSFC), automated requirements management tools are being used on several large projects. The use of these tools opens the door to innovative uses of metrics in characterizing test plan quality and assessing overall testing risks. In support of these projects, the Software Assurance Technology Center (SATC) is working to develop and apply a metrics program that utilizes the information now available through the application of requirements management tools. Metrics based on this information provide real-time insight into the testing of requirements, and these metrics assist the Project Quality Office in its testing oversight role. This paper discusses three facets of the SATC's efforts to evaluate the quality of the requirements and test plan early in the life cycle, thus preventing costly errors and time delays later.
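
    One traceability metric of the kind described, the fraction of requirements linked to at least one test case, could be computed as in the sketch below; the data structures are assumptions, not the SATC tooling.

```python
# Sketch of a requirements-to-test traceability metric: coverage fraction plus the
# list of untraced requirements. The identifiers and structures are hypothetical.

def traceability_coverage(requirements: set[str],
                          links: dict[str, set[str]]) -> tuple[float, set[str]]:
    covered = {r for r in requirements if links.get(r)}   # linked to >= 1 test case
    untraced = requirements - covered
    return len(covered) / len(requirements), untraced

if __name__ == "__main__":
    reqs = {"REQ-1", "REQ-2", "REQ-3"}
    test_links = {"REQ-1": {"TC-10", "TC-11"}, "REQ-2": {"TC-12"}}
    cov, missing = traceability_coverage(reqs, test_links)
    print(f"coverage {cov:.0%}, untraced: {sorted(missing)}")
```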

  18. Metrication: A Guide for Consumers.

    ERIC Educational Resources Information Center

    Consumer and Corporate Affairs Dept., Ottawa (Ontario).

    The widespread use of the metric system by most of the major industrial powers of the world has prompted the Canadian government to investigate and consider use of the system. This booklet was developed to aid the consuming public in Canada in gaining some knowledge of metrication and how its application would affect their present economy.…

  19. Development of an occult metric for common motor vehicle crash injuries - biomed 2013.

    PubMed

    Schoell, Samantha L; Weaver, Ashley A; Stitzel, Joel D

    2013-01-01

    Detection of occult injuries, which are not easily recognized and are life-threatening, in motor vehicle crashes (MVCs) is crucial in order to reduce fatalities. An Occult Injury Database (OID) was previously developed by the Center for Transportation Injury Research (CenTIR) using the National Automotive Sampling System Crashworthiness Data System (NASS-CDS) 1997-2001 which identified occult and non-occult head, thorax, and abdomen injuries. The objective of the current work was to develop an occult injury model based on underlying injury characteristics to derive an Occult Score for common MVC-induced injuries. A multiple logistic regression model was developed utilizing six injury parameters to generate a probability formula which assigned an Occult Score for each injury. The model was applied to a list of 240 injuries comprising the top 95 percent of injuries occurring in NASS-CDS 2000-2011. The parameters in the model included a continuous Cause MRR/year variable indicating the annual proportion of occupants sustaining a given injury whose cause of death was attributed to that injury. The categorical variables in the model were AIS 2-3 vs. 4-6, laceration, hemorrhage/hematoma, contusion, and intracranial. Results indicated that injuries with a low Cause MRR/year and AIS severity of 4-6 had an increased likelihood of being occult. In addition, the presence of a laceration, hemorrhage/hematoma, contusion, or intracranial injury also increased the likelihood of an injury being occult. The Occult Score ranges from zero to one with a threshold of 0.5 as the discriminator of an occult injury. Of the considered injuries, it was determined that 54% of head, 26% of thorax, and 23% of abdominal injuries were occult injuries. No occult injuries were identified in the face, spine, upper extremity, or lower extremity body regions. The Occult Score generated can be useful in advanced automatic crash notification research and for the detection of serious occult injuries in
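
    A hedged sketch of how a logistic-regression-based Occult Score might be computed from binary injury characteristics follows; the coefficients and intercept are placeholders, not the fitted parameters of the published model.

```python
# Hedged sketch of a logistic-regression "Occult Score". The coefficients and
# intercept below are placeholders; the published model's fitted parameters are
# not reproduced here.

import math

def occult_score(features: dict[str, float], coefficients: dict[str, float],
                 intercept: float) -> float:
    """Logistic probability in [0, 1]; scores >= 0.5 are treated as occult."""
    z = intercept + sum(coefficients[k] * v for k, v in features.items())
    return 1.0 / (1.0 + math.exp(-z))

if __name__ == "__main__":
    coeffs = {"cause_mrr_per_year": -3.0, "ais_4_to_6": 1.5, "laceration": 0.8,
              "hemorrhage_hematoma": 0.9, "contusion": 0.6, "intracranial": 1.2}
    injury = {"cause_mrr_per_year": 0.02, "ais_4_to_6": 1, "laceration": 0,
              "hemorrhage_hematoma": 1, "contusion": 0, "intracranial": 1}
    print(round(occult_score(injury, coeffs, intercept=-2.0), 2))
```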

  20. Population health metrics: crucial inputs to the development of evidence for health policy

    PubMed Central

    Mathers, Colin D; Murray, Christopher JL; Ezzati, Majid; Gakidou, Emmanuela; Salomon, Joshua A; Stein, Claudia

    2003-01-01

    Valid, reliable and comparable measures of the health states of individuals and of the health status of populations are critical components of the evidence base for health policy. We need to develop population health measurement strategies that coherently address the relationships between epidemiological measures (such as risk exposures, incidence, and mortality rates) and multi-domain measures of population health status, while ensuring validity and cross-population comparability. Studies reporting on descriptive epidemiology of major diseases, injuries and risk factors, and on the measurement of health at the population level – either for monitoring trends in health levels or inequalities or for measuring broad outcomes of health systems and social interventions – are not well-represented in traditional epidemiology journals, which tend to concentrate on causal studies and on quasi-experimental design. In particular, key methodological issues relating to the clear conceptualisation of, and the validity and comparability of measures of population health are currently not addressed coherently by any discipline, and cross-disciplinary debate is fragmented and often conducted in mutually incomprehensible language or paradigms. Population health measurement potentially bridges a range of currently disjoint fields of inquiry relating to health: biology, demography, epidemiology, health economics, and broader social science disciplines relevant to assessment of health determinants, health state valuations and health inequalities. This new journal will focus on the importance of a population based approach to measurement as a way to characterize the complexity of people's health, the diseases and risks that affect it, its distribution, and its valuation, and will attempt to provide a forum for innovative work and debate that bridge the many fields of inquiry relevant to population health in order to contribute to the development of valid and comparable methods for

  1. Quality metrics for product defectiveness at KCD

    SciTech Connect

    Grice, J.V.

    1993-07-01

    Metrics are discussed for measuring and tracking product defectiveness at AlliedSignal Inc., Kansas City Division (KCD). Three new metrics, the percent defective metric that preceded them, and several alternatives are described. The new metrics, Percent Parts Accepted, Percent Parts Accepted Trouble Free, and Defects Per Million Observations (denoted by PPA, PATF, and DPMO, respectively), were implemented for KCD-manufactured product and purchased material in November 1992. These metrics replace the percent defective metric that had been used for several years. The PPA and PATF metrics primarily measure quality performance, while DPMO measures the effects of continuous improvement activities. The new metrics measure product quality in terms of product defectiveness observed only during the inspection process. The metrics were originally developed for purchased product and were adapted to manufactured product to provide a consistent set of metrics plant-wide. The new metrics provide a meaningful tool to measure the quantity of product defectiveness in terms of the customer's requirements and expectations for quality. Many valid metrics are available and all will have deficiencies. These three metrics are among the least sensitive to problems and are easily understood. They will serve as good management tools for KCD in the foreseeable future, until new flexible data systems and reporting procedures can be implemented that can provide more detailed and accurate metric computations.
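
    The three metrics as defined in the abstract reduce to simple ratios; the sketch below computes them from assumed counts (the exact KCD counting rules, e.g. what qualifies as "trouble free", are not specified here).

```python
# Sketch of the PPA, PATF, and DPMO ratios described in the abstract, computed
# from assumed inspection counts for illustration.

def percent_parts_accepted(accepted: int, inspected: int) -> float:
    return 100.0 * accepted / inspected

def percent_parts_accepted_trouble_free(accepted_trouble_free: int, inspected: int) -> float:
    return 100.0 * accepted_trouble_free / inspected

def defects_per_million_observations(defects: int, observations: int) -> float:
    return 1e6 * defects / observations

if __name__ == "__main__":
    print(percent_parts_accepted(980, 1000))               # 98.0
    print(percent_parts_accepted_trouble_free(955, 1000))  # 95.5
    print(defects_per_million_observations(60, 48_000))    # 1250.0
```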

  2. Benchmark campaign and case study episode in central Europe for development and assessment of advanced GNSS tropospheric models and products

    NASA Astrophysics Data System (ADS)

    Douša, Jan; Dick, Galina; Kačmařík, Michal; Brožková, Radmila; Zus, Florian; Brenot, Hugues; Stoycheva, Anastasia; Möller, Gregor; Kaplon, Jan

    2016-07-01

    Initial objectives and design of the Benchmark campaign organized within the European COST Action ES1206 (2013-2017) are described in the paper. This campaign has aimed to support the development and validation of advanced Global Navigation Satellite System (GNSS) tropospheric products, in particular high-resolution and ultra-fast zenith total delays (ZTDs) and tropospheric gradients derived from a dense permanent network. A complex data set was collected for the 8-week period when several extreme heavy precipitation episodes occurred in central Europe which caused severe river floods in this area. An initial processing of data sets from GNSS products and numerical weather models (NWMs) provided independently estimated reference parameters - zenith tropospheric delays and tropospheric horizontal gradients. Their provision gave an overview of the similarities and complementarities of the products, and thus of the potential for exploiting their synergy more optimally in the future. Reference GNSS and NWM results were intercompared and visually analysed using animated maps. ZTDs from two reference GNSS solutions compared to the global ERA-Interim reanalysis showed accuracy at the 10 mm level in terms of the root mean square (rms) with a negligible overall bias; comparisons to Global Forecast System (GFS) forecasts showed accuracy at the 12 mm level with an overall bias of -5 mm; and, finally, comparisons to the mesoscale ALADIN-CZ forecast showed accuracy at the 8 mm level with a negligible total bias. The comparison of horizontal tropospheric gradients from GNSS and NWM data demonstrated a very good agreement among independent solutions, with negligible biases and an accuracy of about 0.5 mm. Visual comparisons of maps of zenith wet delays and tropospheric horizontal gradients showed very promising results for future exploitation of advanced GNSS tropospheric products in meteorological applications, such as severe weather event monitoring and weather nowcasting.
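
    The bias and root-mean-square statistics quoted above are straightforward to compute from paired ZTD series, as in the sketch below; the series here are synthetic.

```python
# Sketch of the bias / rms comparison statistics for paired GNSS vs NWM zenith
# total delay (ZTD) series. The ZTD arrays below are synthetic, for illustration.

import numpy as np

def bias_and_rms(ztd_gnss_mm: np.ndarray, ztd_nwm_mm: np.ndarray) -> tuple[float, float]:
    diff = ztd_gnss_mm - ztd_nwm_mm
    bias = float(np.mean(diff))
    rms = float(np.sqrt(np.mean(diff ** 2)))
    return bias, rms

if __name__ == "__main__":
    rng = np.random.default_rng(1)
    gnss = 2400.0 + 20.0 * rng.standard_normal(500)          # mm, synthetic ZTD series
    nwm = gnss + rng.normal(loc=-5.0, scale=10.0, size=500)  # model with a -5 mm offset
    print(bias_and_rms(gnss, nwm))                           # bias ~ +5 mm, rms ~ 11 mm
```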

  3. NAS Parallel Benchmarks Results

    NASA Technical Reports Server (NTRS)

    Subhash, Saini; Bailey, David H.; Lasinski, T. A. (Technical Monitor)

    1995-01-01

    The NAS Parallel Benchmarks (NPB) were developed in 1991 at NASA Ames Research Center to study the performance of parallel supercomputers. The eight benchmark problems are specified in a pencil-and-paper fashion, i.e., the complete details of the problem to be solved are given in a technical document, and, except for a few restrictions, benchmarkers are free to select the language constructs and implementation techniques best suited for a particular system. In this paper, we present new NPB performance results for the following systems: (a) Parallel-Vector Processors: Cray C90, Cray T90 and Fujitsu VPP500; (b) Highly Parallel Processors: Cray T3D, IBM SP2 and IBM SP-TN2 (Thin Nodes 2); (c) Symmetric Multiprocessing Processors: Convex Exemplar SPP1000, Cray J90, DEC Alpha Server 8400 5/300, and SGI Power Challenge XL. We also present sustained performance per dollar for the Class B LU, SP and BT benchmarks, and we outline NAS's future plans for the NPB.

  4. 40 CFR 141.709 - Developing the disinfection profile and benchmark.

    Code of Federal Regulations, 2014 CFR

    2014-07-01

    ... determine the total log inactivation for Giardia lamblia and viruses. If systems monitor more frequently... must monitor weekly during the period of operation. Systems must determine log inactivation for Giardia... must develop a virus profile using the same monitoring data on which the Giardia lamblia profile...

  5. 40 CFR 141.709 - Developing the disinfection profile and benchmark.

    Code of Federal Regulations, 2012 CFR

    2012-07-01

    ... determine the total log inactivation for Giardia lamblia and viruses. If systems monitor more frequently... must monitor weekly during the period of operation. Systems must determine log inactivation for Giardia... must develop a virus profile using the same monitoring data on which the Giardia lamblia profile...

  6. 40 CFR 141.709 - Developing the disinfection profile and benchmark.

    Code of Federal Regulations, 2013 CFR

    2013-07-01

    ... determine the total log inactivation for Giardia lamblia and viruses. If systems monitor more frequently... must monitor weekly during the period of operation. Systems must determine log inactivation for Giardia... must develop a virus profile using the same monitoring data on which the Giardia lamblia profile...

  7. Brain development in rodents and humans: Identifying benchmarks of maturation and vulnerability to injury across species

    PubMed Central

    Semple, Bridgette D.; Blomgren, Klas; Gimlin, Kayleen; Ferriero, Donna M.; Noble-Haeusslein, Linda J.

    2013-01-01

    Hypoxic-ischemic and traumatic brain injuries are leading causes of long-term mortality and disability in infants and children. Although several preclinical models using rodents of different ages have been developed, species differences in the timing of key brain maturation events can render comparisons of vulnerability and regenerative capacities difficult to interpret. Traditional models of developmental brain injury have utilized rodents at postnatal day 7–10 as being roughly equivalent to a term human infant, based historically on the measurement of post-mortem brain weights during the 1970s. Here we will examine fundamental brain development processes that occur in both rodents and humans, to delineate a comparable time course of postnatal brain development across species. We consider the timing of neurogenesis, synaptogenesis, gliogenesis, oligodendrocyte maturation and age-dependent behaviors that coincide with developmentally regulated molecular and biochemical changes. In general, while the time scale is considerably different, the sequence of key events in brain maturation is largely consistent between humans and rodents. Further, there are distinct parallels in regional vulnerability as well as functional consequences in response to brain injuries. With a focus on developmental hypoxic-ischemic encephalopathy and traumatic brain injury, this review offers guidelines for researchers when considering the most appropriate rodent age for the developmental stage or process of interest to approximate human brain development. PMID:23583307

  8. The Development of the Children's Services Statistical Neighbour Benchmarking Model. Final Report

    ERIC Educational Resources Information Center

    Benton, Tom; Chamberlain, Tamsin; Wilson, Rebekah; Teeman, David

    2007-01-01

    In April 2006, the Department for Education and Skills (DfES) commissioned the National Foundation for Educational Research (NFER) to conduct an independent external review in order to develop a single "statistical neighbour" model. This single model aimed to combine the key elements of the different models currently available and be relevant to…

  9. THE NEW ENGLAND AIR QUALITY FORECASTING PILOT PROGRAM: DEVELOPMENT OF AN EVALUATION PROTOCOL AND PERFORMANCE BENCHMARK

    EPA Science Inventory

    The National Oceanic and Atmospheric Administration recently sponsored the New England Forecasting Pilot Program to serve as a "test bed" for chemical forecasting by providing all of the elements of a National Air Quality Forecasting System, including the development and implemen...

  10. Development of aquatic toxicity benchmarks for oil products using species sensitivity distributions

    EPA Science Inventory

    Determining the sensitivity of a diversity of species to spilled oil and chemically dispersed oil continues to be a significant challenge in spill response and impact assessment. We used standardized tests from the literature to develop species sensitivity distributions (SSDs) of...
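
    A species sensitivity distribution is commonly fit as a log-normal over species toxicity values, from which a hazardous concentration such as the HC5 is read off; the sketch below illustrates this generic approach with invented LC50 values and is not the specific method used in the study.

```python
# Hedged sketch of a generic species sensitivity distribution (SSD) fit: log-normal
# over species toxicity values, with the HC5 (concentration protective of 95% of
# species) taken from the fitted distribution. Toxicity values are invented.

import numpy as np
from scipy import stats

def hc5_from_toxicity(values_mg_l: list[float]) -> float:
    log_vals = np.log10(values_mg_l)
    mu, sigma = float(np.mean(log_vals)), float(np.std(log_vals, ddof=1))
    return float(10 ** stats.norm.ppf(0.05, loc=mu, scale=sigma))

if __name__ == "__main__":
    lc50s = [0.4, 0.9, 1.5, 2.2, 3.8, 5.0, 7.5, 12.0]   # mg/L, hypothetical species LC50s
    print(round(hc5_from_toxicity(lc50s), 3))
```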